Clinical Progress and Optimization of Information Processing in Artificial Visual Prostheses
Abstract
1. Introduction
2. Clinical Advances in Visual Prosthetics
Implant Site | Visual Prosthesis | Electrode Number | Best Visual Acuity | Clinical Trial Number | Status |
---|---|---|---|---|---|
Epiretinal | Argus® II [5,6,28,29,30,31,32,33,34,35,36,37,38,39] | 60 | 20/1260 | NCT03635645 | Received the CE mark in 2011 and FDA approval in 2013; two patients were able to identify a subset of the Sloan letters. |
Epiretinal | IRIS II [40] | 150 | NA | NCT02670980 | Ten patients were evaluated on functional visual tasks for up to 3 years. |
Epiretinal | IMI [41,42] | 49 | NA | NCT02982811 | Twenty patients with faint light perception were followed up for 3 months. |
Subretinal | Alpha-AMS [43,44,45] | 1500 | 20/546 | NCT03629899 | Received the CE mark in 2013; the best visual acuity achieved by patients was 20/546. |
Subretinal | PRIMA [46,47,48,49,50,51] | 378 | 20/460 | NCT03333954 | Implantation in five patients began in 2017, with 36 months of follow-up. |
Suprachoroidal | Suprachoroidal retinal prosthesis [52,53] | 49 | NA | NCT05158049 | Seven implant recipients were assessed for vision, orientation, and movement. |
Suprachoroidal | Bionic Eye [54,55,56,57,58] | 44 | NA | NCT03406416 | Device safety was evaluated in 2018 by implantation in four subjects; the electrode–retinal distance increased while impedance remained stable after the procedure, with no side effects. |
Intracortical (visual cortex) | ORION [59] | 60 | NA | NCT03344848 | FDA approval to implant six patients without functional photoreceptors was granted in 2017; each recipient receives a 5-year follow-up, and trial data are not yet publicly available. |
Intracortical (visual cortex) | ICVP [59,60,61] | 144 | NA | NCT04634383 | Five participants, tested weekly for 1 to 3 years, were assessed for electrical-stimulation-induced visual perception. |
Intracortical (visual cortex) | CORTIVIS [18,59,62] | 100 | NA | NCT02983370 | After receiving FDA approval, it was implanted in five patients for six months in 2019. |
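The acuity ceilings in the table track electrode geometry. As a rough, back-of-the-envelope illustration (our own sketch, using textbook retinal geometry rather than numbers from the cited trials), the sampling-limited Snellen acuity can be estimated from the pixel pitch p:

```latex
% Illustrative sampling-limited acuity from pixel pitch p.
% Assumptions (not taken from the cited trials): ~288 um of retina per
% degree of visual angle, i.e., ~4.8 um per arcminute; Snellen 20/20
% corresponds to a 1-arcminute feature.
\[
\theta~[\mathrm{arcmin}] \approx \frac{p}{4.8~\mu\mathrm{m}}
\qquad\Rightarrow\qquad
\mathrm{Snellen} \approx 20 / (20\,\theta)
\]
% PRIMA: p = 100 um  ->  theta ~ 21 arcmin  ->  ~20/420, the same order
% of magnitude as the 20/460 reported above.
```

Measured acuity also depends on stimulation thresholds, current spread, and retinal remodeling, so this bound is indicative only.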
2.1. Epiretinal Prostheses
2.2. Subretinal Prostheses
2.3. Visual Cortex Prostheses
3. Optimization of Information Processing in Visual Prosthetics
3.1. The Optimization Strategy of Face Recognition
3.2. The Optimization Strategy for Character Recognition
3.3. The Optimization Strategy of Object Recognition
3.4. Summaries of Optimization of Information Processing
4. Discussion
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- World Health Organization. World Report on Vision; World Health Organization: Geneva, Switzerland, 2019. [Google Scholar]
- Tassicker, G.E. Preliminary report on a retinal stimulator. Br. J. Physiol. Opt. 1956, 13, 102–105. [Google Scholar]
- Dobelle, W.H.; Mladejovsky, M.G.; Girvin, J.P. Artificial vision for the blind: Electrical stimulation of visual cortex offers hope for a functional prosthesis. Science 1974, 183, 440–444. [Google Scholar] [CrossRef] [PubMed]
- Rizzo, J.F., 3rd; Wyatt, J.; Loewenstein, J.; Kelly, S.; Shire, D. Perceptual efficacy of electrical stimulation of human retina with a microelectrode array during short-term surgical trials. Investig. Ophthalmol. Vis. Sci. 2003, 44, 5362–5369. [Google Scholar] [CrossRef] [PubMed]
- Humayun, M.S.; Weiland, J.D.; Fujii, G.Y.; Greenberg, R.; Williamson, R.; Little, J.; Mech, B.; Cimmarusti, V.; Van Boemel, G.; Dagnelie, G.; et al. Visual perception in a blind subject with a chronic microelectronic retinal prosthesis. Vis. Res. 2003, 43, 2573–2581. [Google Scholar] [CrossRef]
- Humayun, M.S.; Dorn, J.D.; da Cruz, L.; Dagnelie, G.; Sahel, J.A.; Stanga, P.E.; Cideciyan, A.V.; Duncan, J.L.; Eliott, D.; Filley, E.; et al. Interim results from the international trial of Second Sight’s visual prosthesis. Ophthalmology 2012, 119, 779–788. [Google Scholar] [CrossRef]
- Cheng, X.; Feng, X.; Li, W. Research on Feature Extraction Method of Fundus Image Based on Deep Learning. In Proceedings of the 2020 IEEE 3rd International Conference on Automation, Electronics and Electrical Engineering (AUTEEE), Shenyang, China, 20–22 November 2020; pp. 443–447. [Google Scholar]
- Orlando, J.I.; Fu, H.; Barbosa Breda, J.; van Keer, K.; Bathula, D.R.; Diaz-Pinto, A.; Fang, R.; Heng, P.A.; Kim, J.; Lee, J.; et al. REFUGE Challenge: A unified framework for evaluating automated methods for glaucoma assessment from fundus photographs. Med. Image Anal. 2020, 59, 101570. [Google Scholar] [CrossRef]
- Son, J.; Shin, J.Y.; Kim, H.D.; Jung, K.H.; Park, K.H.; Park, S.J. Development and Validation of Deep Learning Models for Screening Multiple Abnormal Findings in Retinal Fundus Images. Ophthalmology 2020, 127, 85–94. [Google Scholar] [CrossRef]
- Catalán, E.B.; Gámez, E.D.L.C.; Valverde, J.A.M.; Reyna, R.H.; Hernández, J.L.H. Detection of Exudates and Microaneurysms in the Retina by Segmentation in Fundus Images. Rev. Mex. Ing. Bioméd. 2021, 42, 67–77. [Google Scholar]
- Dagnelie, G.; Barnett, D.; Humayun, M.S.; Thompson, R.W., Jr. Paragraph text reading using a pixelized prosthetic vision simulator: Parameter dependence and task learning in free-viewing conditions. Investig. Ophthalmol. Vis. Sci. 2006, 47, 1241–1250. [Google Scholar] [CrossRef]
- Abolfotuh, H.H.; Jawwad, A.; Abdullah, B.; Mahdi, H.M.; Eldawlatly, S. Moving object detection and background enhancement for thalamic visual prostheses. In Proceedings of the 2016 38th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Orlando, FL, USA, 16–20 August 2016; pp. 4711–4714. [Google Scholar]
- Bar-Yoseph, P.Z.; Brøns, M.; Gelfgat, A.; Oron, A. Fifth International Symposium on Bifurcations and Instabilities in Fluid Dynamics (BIFD2013). Fluid Dyn. Res. 2014, 49, 1015–1031. [Google Scholar] [CrossRef]
- White, J.; Kameneva, T.; McCarthy, C. Vision Processing for Assistive Vision: A Deep Reinforcement Learning Approach. IEEE Trans. Hum.-Mach. Syst. 2022, 52, 123–133. [Google Scholar] [CrossRef]
- Dowling, J.A.; Maeder, A.; Boles, W. Mobility enhancement and assessment for a visual prosthesis. In Proceedings of the Medical Imaging 2004: Physiology, Function, and Structure from Medical Images, San Diego, CA, USA, 30 April 2004; pp. 780–791. [Google Scholar]
- Thorn, J.T.; Migliorini, E.; Ghezzi, D. Virtual reality simulation of epiretinal stimulation highlights the relevance of the visual angle in prosthetic vision. J. Neural Eng. 2020, 17, 056019. [Google Scholar] [CrossRef] [PubMed]
- Adewole, D.O.; Struzyna, L.A.; Burrell, J.C.; Harris, J.P.; Nemes, A.D.; Petrov, D.; Kraft, R.H.; Chen, H.I.; Serruya, M.D.; Wolf, J.A. Development of optically controlled “living electrodes” with long-projecting axon tracts for a synaptic brain-machine interface. Sci. Adv. 2021, 7, eaay5347. [Google Scholar] [CrossRef] [PubMed]
- Fernandez, E.; Alfaro, A.; Soto-Sanchez, C.; Gonzalez-Lopez, P.; Lozano, A.M.; Pena, S.; Grima, M.D.; Rodil, A.; Gomez, B.; Chen, X.; et al. Visual percepts evoked with an intracortical 96-channel microelectrode array inserted in human occipital cortex. J. Clin. Investig. 2021, 131, e151331. [Google Scholar] [CrossRef]
- McCarthy, C.; Barnes, N.; Lieby, P. Ground surface segmentation for navigation with a low resolution visual prosthesis. In Proceedings of the 2011 Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Boston, MA, USA, 30 August–3 September 2011; pp. 4457–4460. [Google Scholar]
- Yang, K.; Wang, K.; Bergasa, L.M.; Romera, E.; Hu, W.; Sun, D.; Sun, J.; Cheng, R.; Chen, T.; Lopez, E. Unifying Terrain Awareness for the Visually Impaired through Real-Time Semantic Segmentation. Sensors 2018, 18, 1506. [Google Scholar] [CrossRef]
- Han, N.; Srivastava, S.; Xu, A.; Klein, D.; Beyeler, M. Deep Learning–Based Scene Simplification for Bionic Vision. In Proceedings of the Augmented Humans Conference 2021, Rovaniemi, Finland, 22–24 February 2021; pp. 45–54. [Google Scholar]
- De Luca, D.; Moccia, S.; Micera, S. Deploying an Instance Segmentation Algorithm to Implement Social Distancing for Prosthetic Vision. In Proceedings of the 2022 IEEE International Conference on Pervasive Computing and Communications Workshops and other Affiliated Events (PerCom Workshops), Pisa, Italy, 21–25 March 2022; pp. 735–740. [Google Scholar]
- Boyle, J.R.; Boles, W.W.; Maeder, A.J. Region-of-interest processing for electronic visual prostheses. J. Electron. Imaging 2008, 17, 013002. [Google Scholar] [CrossRef]
- McCarthy, C.; Barnes, N. Importance weighted image enhancement for prosthetic vision: An augmentation framework. In Proceedings of the 2014 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), Munich, Germany, 10–12 September 2014; pp. 45–51. [Google Scholar]
- Li, W.H. Wearable Computer Vision Systems for a Cortical Visual Prosthesis. In Proceedings of the 2013 IEEE International Conference on Computer Vision Workshops, Sydney, NSW, Australia, 2–8 December 2013; pp. 428–435. [Google Scholar]
- Dai, C.; Lu, M.; Zhao, Y.; Lu, Y.; Zhou, C.; Chen, Y.; Ren, Q.; Chai, X. Correction for Chinese character patterns formed by simulated irregular phosphene map. In Proceedings of the 32nd Annual International Conference of the IEEE EMBS, Buenos Aires, Argentina, 31 August–4 September 2010. [Google Scholar]
- U.S. National Library of Medicine. Clinical Research Database. Available online: https://www.clinicaltrials.gov/ct2/home (accessed on 31 May 2022).
- Dagnelie, G.; Christopher, P.; Arditi, A.; da Cruz, L.; Duncan, J.L.; Ho, A.C.; Olmos de Koo, L.C.; Sahel, J.A.; Stanga, P.E.; Thumann, G.; et al. Performance of real-world functional vision tasks by blind subjects improves after implantation with the Argus(R) II retinal prosthesis system. Clin. Exp. Ophthalmol. 2017, 45, 152–159. [Google Scholar] [CrossRef]
- Demchinsky, A.M.; Shaimov, T.B.; Goranskaya, D.N.; Moiseeva, I.V.; Kuznetsov, D.I.; Kuleshov, D.S.; Polikanov, D.V. The first deaf-blind patient in Russia with Argus II retinal prosthesis system: What he sees and why. J. Neural Eng. 2019, 16, 025002. [Google Scholar] [CrossRef]
- Rizzo, S.; Barale, P.O.; Ayello-Scheer, S.; Devenyi, R.G.; Delyfer, M.N.; Korobelnik, J.F.; Rachitskaya, A.; Yuan, A.; Jayasundera, K.T.; Zacks, D.N.; et al. Hypotony and the Argus II retinal prosthesis: Causes, prevention and management. Br. J. Ophthalmol. 2020, 104, 518–523. [Google Scholar] [CrossRef]
- Yoon, Y.H.; Humayun, M.S.; Kim, Y.J. One-Year Anatomical and Functional Outcomes of the Argus II Implantation in Korean Patients with Late-Stage Retinitis Pigmentosa: A Prospective Case Series Study. Ophthalmologica 2021, 244, 291–300. [Google Scholar] [CrossRef]
- da Cruz, L.; Coley, B.F.; Dorn, J.; Merlini, F.; Filley, E.; Christopher, P.; Chen, F.K.; Wuyyuru, V.; Sahel, J.; Stanga, P.; et al. The Argus II epiretinal prosthesis system allows letter and word reading and long-term function in patients with profound vision loss. Br. J. Ophthalmol. 2013, 97, 632–636. [Google Scholar] [CrossRef] [PubMed]
- Greenwald, S.H.; Horsager, A.; Humayun, M.S.; Greenberg, R.J.; McMahon, M.J.; Fine, I. Brightness as a function of current amplitude in human retinal electrical stimulation. Investig. Ophthalmol. Vis. Sci. 2009, 50, 5017–5025. [Google Scholar] [CrossRef] [PubMed]
- Schiefer, M.A.; Grill, W.M. Sites of neuronal excitation by epiretinal electrical stimulation. IEEE Trans. Neural Syst. Rehabil. Eng. 2006, 14, 5–13. [Google Scholar] [CrossRef] [PubMed]
- Farvardin, M.; Afarid, M.; Attarzadeh, A.; Johari, M.K.; Mehryar, M.; Nowroozzadeh, M.H.; Rahat, F.; Peyvandi, H.; Farvardin, R.; Nami, M. The Argus-II Retinal Prosthesis Implantation; From the Global to Local Successful Experience. Front. Neurosci. 2018, 12, 584. [Google Scholar] [CrossRef]
- Christie, B.; Sadeghi, R.; Kartha, A.; Caspi, A.; Tenore, F.V.; Klatzky, R.L.; Dagnelie, G.; Billings, S. Sequential epiretinal stimulation improves discrimination in simple shape discrimination tasks only. J. Neural Eng. 2022, 19, 036033. [Google Scholar] [CrossRef] [PubMed]
- Beyeler, M.; Nanduri, D.; Weiland, J.D.; Rokem, A.; Boynton, G.M.; Fine, I. A model of ganglion axon pathways accounts for percepts elicited by retinal implants. Sci. Rep. 2019, 9, 9199. [Google Scholar] [CrossRef]
- Rizzo, S.; Belting, C.; Cinelli, L.; Allegrini, L.; Genovesi-Ebert, F.; Barca, F.; di Bartolo, E. The Argus II Retinal Prosthesis: 12-month outcomes from a single-study center. Am. J. Ophthalmol. 2014, 157, 1282–1290. [Google Scholar] [CrossRef]
- Naidu, A.; Ghani, N.; Yazdanie, M.S.; Chaudhary, K. Effect of the Electrode Array-Retina Gap Distance on Visual Function in Patients with the Argus II Retinal Prosthesis. BMC Ophthalmol. 2020, 20, 366. [Google Scholar] [CrossRef]
- Muqit, M.M.K.; Velikay-Parel, M.; Weber, M.; Dupeyron, G.; Audemard, D.; Corcostegui, B.; Sahel, J.; Le Mer, Y. Six-Month Safety and Efficacy of the Intelligent Retinal Implant System II Device in Retinitis Pigmentosa. Ophthalmology 2019, 126, 637–639. [Google Scholar] [CrossRef]
- Wolffsohn, J.S.; Kollbaum, P.S.; Berntsen, D.A.; Atchison, D.A.; Benavente, A.; Bradley, A.; Buckhurst, H.; Collins, M.; Fujikado, T.; Hiraoka, T.; et al. IMI—Clinical Myopia Control Trials and Instrumentation Report. Investig. Opthalmol. Vis. Sci. 2019, 60, M132–M160. [Google Scholar] [CrossRef]
- Keseru, M.; Feucht, M.; Bornfeld, N.; Laube, T.; Walter, P.; Rossler, G.; Velikay-Parel, M.; Hornig, R.; Richard, G. Acute electrical stimulation of the human retina with an epiretinal electrode array. Acta Ophthalmol. 2012, 90, e1–e8. [Google Scholar] [CrossRef] [PubMed]
- Stingl, K.; Bartz-Schmidt, K.U.; Besch, D.; Braun, A.; Bruckmann, A.; Gekeler, F.; Greppmaier, U.; Hipp, S.; Hortdorfer, G.; Kernstock, C.; et al. Artificial vision with wirelessly powered subretinal electronic implant alpha-IMS. Proc. Biol. Sci. 2013, 280, 20130077. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Daschner, R.; Rothermel, A.; Rudorf, R.; Rudorf, S.; Stett, A. Functionality and Performance of the Subretinal Implant Chip Alpha AMS. Sens. Mater. 2018, 30, 179–192. [Google Scholar] [CrossRef]
- Zrenner, E.; Bartz-Schmidt, K.U.; Benav, H.; Besch, D.; Bruckmann, A.; Gabel, V.P.; Gekeler, F.; Greppmaier, U.; Harscher, A.; Kibbel, S.; et al. Subretinal electronic chips allow blind patients to read letters and combine them to words. Proc. Biol. Sci. 2011, 278, 1489–1497. [Google Scholar] [CrossRef]
- Lorach, H.; Goetz, G.; Smith, R.; Lei, X.; Mandel, Y.; Kamins, T.; Mathieson, K.; Huie, P.; Harris, J.; Sher, A.; et al. Photovoltaic restoration of sight with high visual acuity. Nat. Med. 2015, 21, 476–482. [Google Scholar] [CrossRef]
- Lemoine, D.; Simon, E.; Buc, G.; Deterre, M. In vitro reliability testing and in vivo lifespan estimation of wireless Pixium Vision PRIMA photovoltaic subretinal prostheses suggest prolonged durability and functionality in clinical practice. J. Neural Eng. 2020, 17, 035005. [Google Scholar] [CrossRef]
- Palanker, D.; Le Mer, Y.; Mohand-Said, S.; Sahel, J.A. Simultaneous perception of prosthetic and natural vision in AMD patients. Nat. Commun. 2022, 13, 513. [Google Scholar] [CrossRef]
- Muqit, M.M.K.; Hubschman, J.P.; Picaud, S.; McCreery, D.B.; van Meurs, J.C.; Hornig, R.; Buc, G.; Deterre, M.; Nouvel-Jaillard, C.; Bouillet, E.; et al. PRIMA subretinal wireless photovoltaic microchip implantation in non-human primate and feline models. PLoS ONE 2020, 15, e0230713. [Google Scholar] [CrossRef]
- Prevot, P.H.; Gehere, K.; Arcizet, F.; Akolkar, H.; Khoei, M.A.; Blaize, K.; Oubari, O.; Daye, P.; Lanoe, M.; Valet, M.; et al. Behavioural responses to a photovoltaic subretinal prosthesis implanted in non-human primates. Nat. Biomed. Eng. 2020, 4, 172–180. [Google Scholar] [CrossRef]
- Palanker, D.; Le Mer, Y.; Mohand-Said, S.; Muqit, M.; Sahel, J.A. Photovoltaic Restoration of Central Vision in Atrophic Age-Related Macular Degeneration. Ophthalmology 2020, 127, 1097–1104. [Google Scholar] [CrossRef]
- Fujikado, T.; Kamei, M.; Sakaguchi, H.; Kanda, H.; Endo, T.; Hirota, M.; Morimoto, T.; Nishida, K.; Kishima, H.; Terasawa, Y.; et al. One-Year Outcome of 49-Channel Suprachoroidal-Transretinal Stimulation Prosthesis in Patients with Advanced Retinitis Pigmentosa. Investig. Ophthalmol. Vis. Sci. 2016, 57, 6147–6157. [Google Scholar] [CrossRef]
- Fujikado, T.; Kamei, M.; Sakaguchi, H.; Kanda, H.; Morimoto, T.; Ikuno, Y.; Nishida, K.; Kishima, H.; Konoma, K.; Ozawa, M. Feasibility of Semi-chronically Implanted Retinal Prosthesis by Suprachoroidal-Transretinal Stimulation in Patients with Retinitis Pigmentosa. Investig. Ophthalmol. Vis. Sci. 2011, 52, 2589. [Google Scholar]
- Abbott, C.J.; Nayagam, D.A.X.; Luu, C.D.; Epp, S.B.; Williams, R.A.; Salinas-LaRosa, C.M.; Villalobos, J.; McGowan, C.; Shivdasani, M.N.; Burns, O.; et al. Safety Studies for a 44-Channel Suprachoroidal Retinal Prosthesis: A Chronic Passive Study. Investig. Ophthalmol. Vis. Sci. 2018, 59, 1410–1424. [Google Scholar] [CrossRef] [Green Version]
- Titchener, S.A.; Kvansakul, J.; Shivdasani, M.N.; Fallon, J.B.; Nayagam, D.A.X.; Epp, S.B.; Williams, C.E.; Barnes, N.; Kentler, W.G.; Kolic, M.; et al. Oculomotor Responses to Dynamic Stimuli in a 44-Channel Suprachoroidal Retinal Prosthesis. Transl. Vis. Sci. Technol. 2020, 9, 31. [Google Scholar] [CrossRef] [PubMed]
- Petoe, M.A.; Titchener, S.A.; Kolic, M.; Kentler, W.G.; Abbott, C.J.; Nayagam, D.A.X.; Baglin, E.K.; Kvansakul, J.; Barnes, N.; Walker, J.G.; et al. A Second-Generation (44-Channel) Suprachoroidal Retinal Prosthesis: Interim Clinical Trial Results. Transl. Vis. Sci. Technol. 2021, 10, 12. [Google Scholar] [CrossRef]
- Titchener, S.A.; Nayagam, D.A.X.; Kvansakul, J.; Kolic, M.; Baglin, E.K.; Abbott, C.J.; McGuinness, M.B.; Ayton, L.N.; Luu, C.D.; Greenstein, S.; et al. A Second-Generation (44-Channel) Suprachoroidal Retinal Prosthesis: Long-Term Observation of the Electrode-Tissue Interface. Transl. Vis. Sci. Technol. 2022, 11, 12. [Google Scholar] [CrossRef]
- Kolic, M.; Baglin, E.K.; Titchener, S.A.; Kvansakul, J.; Abbott, C.J.; Barnes, N.; McGuinness, M.; Kentler, W.G.; Young, K.; Walker, J.; et al. A 44 channel suprachoroidal retinal prosthesis: Laboratory based visual function and functional vision outcomes. Investig. Ophthalmol. Vis. Sci. 2021, 62, 3168. [Google Scholar]
- Niketeghad, S.; Pouratian, N. Brain Machine Interfaces for Vision Restoration: The Current State of Cortical Visual Prosthetics. Neurotherapeutics 2019, 16, 134–143. [Google Scholar] [CrossRef]
- Schmidt, E.M.; Bak, M.J.; Hambrecht, F.T.; Kufta, C.V.; O’rourke, D.K.; Vallabhanath, P. Feasibility of a visual prosthesis for the blind based on intracortical microstimulation of the visual cortex. Brain 1996, 119, 507–522. [Google Scholar] [CrossRef]
- Troyk, P.R. The Intracortical Visual Prosthesis Project. In Artificial Vision; Springer: Cham, Switzerland, 2017; pp. 203–214. [Google Scholar]
- Ong, J.M.; da Cruz, L. The bionic eye: A review. Clin. Exp. Ophthalmol. 2012, 40, 6–17. [Google Scholar] [CrossRef]
- Dobelle, W.H.; Mladejovsky, M.G.; Evans, J.R.; Roberts, T.; Girvin, J.P. ‘Braille’ reading by a blind volunteer by visual cortex stimulation. Nature 1976, 259, 111–112. [Google Scholar] [CrossRef] [PubMed]
- Fernández, E.; Normann, R.A. CORTIVIS Approach for an Intracortical Visual Prostheses. In Artificial Vision; Springer: Cham, Switzerland, 2017; pp. 191–201. [Google Scholar]
- Chen, X.; Wang, F.; Fernandez, E.; Roelfsema, P.R. Shape perception via a high-channel-count neuroprosthesis in monkey visual cortex. Science 2020, 370, 1191–1196. [Google Scholar] [CrossRef] [PubMed]
- Fernandez, E. Development of visual Neuroprostheses: Trends and challenges. Bioelectron. Med. 2018, 4, 12. [Google Scholar] [CrossRef] [PubMed]
- Chernov, M.M.; Friedman, R.M.; Chen, G.; Stoner, G.R.; Roe, A.W. Functionally specific optogenetic modulation in primate visual cortex. Proc. Natl. Acad. Sci. USA 2018, 115, 10505–10510. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Shivdasani, M.N.; Sinclair, N.C.; Dimitrov, P.N.; Varsamidis, M.; Ayton, L.N.; Luu, C.D.; Perera, T.; McDermott, H.J.; Blamey, P.J. Factors Affecting Perceptual Thresholds in a Suprachoroidal Retinal Prosthesis. Investig. Ophthalmol. Vis. Sci. 2014, 55, 6467–6481. [Google Scholar] [CrossRef]
- Weitz, A.C.; Nanduri, D.; Behrend, M.R.; Gonzalez-Calle, A.; Greenberg, R.J.; Humayun, M.S.; Chow, R.H.; Weiland, J.D. Improving the spatial resolution of epiretinal implants by increasing stimulus pulse duration. Sci. Transl. Med. 2015, 7, 318ra203. [Google Scholar] [CrossRef]
- Beyeler, M.; Boynton, G.M.; Fine, I.; Rokem, A. Interpretable machine-learning predictions of perceptual sensitivity for retinal prostheses. Investig. Ophthalmol. Vis. Sci. 2020, 61, 2202. [Google Scholar]
- Lee, S.W.; Seo, J.-M.; Ha, S.; Kim, E.T.; Chung, H.; Kim, S.J. Development of Microelectrode Arrays for Artificial Retinal Implants Using Liquid Crystal Polymers. Investig. Ophthalmol. Vis. Sci. 2009, 50, 5859–5866. [Google Scholar] [CrossRef]
- Horsager, A.; Greenberg, R.J.; Fine, I. Spatiotemporal Interactions in Retinal Prosthesis Subjects. Investig. Ophthalmol. Vis. Sci. 2010, 51, 1223–1233. [Google Scholar] [CrossRef]
- Najarpour Foroushani, A.; Pack, C.C.; Sawan, M. Cortical visual prostheses: From microstimulation to functional percept. J. Neural Eng. 2018, 15, 021005. [Google Scholar] [CrossRef]
- Frederick, R.A.; Meliane, I.Y.; Joshi-Imre, A.; Troyk, P.R.; Cogan, S.F. Activated iridium oxide film (AIROF) electrodes for neural tissue stimulation. J. Neural Eng. 2020, 17, 056001. [Google Scholar] [CrossRef] [PubMed]
- Chenais, N.A.L.; Airaghi Leccardi, M.J.I.; Ghezzi, D. Naturalistic spatiotemporal modulation of epiretinal stimulation increases the response persistence of retinal ganglion cell. J. Neural Eng. 2021, 18, 016016. [Google Scholar] [CrossRef] [PubMed]
- Wang, J.; Wu, X.; Lu, Y.; Wu, H.; Kan, H.; Chai, X. Face recognition in simulated prosthetic vision: Face detection-based image processing strategies. J. Neural Eng. 2014, 11, 046009. [Google Scholar] [CrossRef] [PubMed]
- Rollend, D.; Rosendall, P.; Billings, S.; Burlina, P.; Wolfe, K.; Katyal, K. Face Detection and Object Recognition for a Retinal Prosthesis. In Proceedings of the Asian Conference on Computer Vision, Taipei, China, 20–24 November 2016. [Google Scholar]
- Irons, J.L.; Gradden, T.; Zhang, A.; He, X.; Barnes, N.; Scott, A.F.; McKone, E. Face identity recognition in simulated prosthetic vision is poorer than previously reported and can be improved by caricaturing. Vis. Res. 2017, 137, 61–79. [Google Scholar] [CrossRef]
- Zhao, Y.; Yu, A.; Xu, D. Person Recognition Based on FaceNet under Simulated Prosthetic Vision. J. Phys. Conf. Ser. 2020, 1437, 012012. [Google Scholar] [CrossRef] [Green Version]
- Chang, M.H.; Kim, H.S.; Shin, J.H.; Park, K.S. Facial identification in very low-resolution images simulating prosthetic vision. J. Neural Eng. 2012, 9, 046012. [Google Scholar] [CrossRef]
- Xia, X.; He, X.; Feng, L.; Pan, X.; Li, N.; Zhang, J.; Pang, X.; Yu, F.; Ding, N. Semantic translation of face image with limited pixels for simulated prosthetic vision. Inf. Sci. 2022, 609, 507–532. [Google Scholar] [CrossRef]
- Duncan, J.L.; Richards, T.P.; Arditi, A.; da Cruz, L.; Dagnelie, G.; Dorn, J.D.; Ho, A.C.; Olmos de Koo, L.C.; Barale, P.O.; Stanga, P.E.; et al. Improvements in vision-related quality of life in blind patients implanted with the Argus II Epiretinal Prosthesis. Clin. Exp. Optom. 2017, 100, 144–150. [Google Scholar] [CrossRef]
- Chai, X.; Yu, W.; Wang, J.; Zhao, Y.; Cai, C.; Ren, Q. Recognition of pixelized Chinese characters using simulated prosthetic vision. Artif. Organs 2007, 31, 175–182. [Google Scholar] [CrossRef]
- Zhao, Y.; Lu, Y.; Zhao, J.; Wang, K.; Ren, Q.; Wu, K.; Chai, X. Reading pixelized paragraphs of Chinese characters using simulated prosthetic vision. Investig. Ophthalmol. Vis. Sci. 2011, 52, 5987–5994. [Google Scholar] [CrossRef]
- Zhao, Y.; Lu, Y.; Zhou, C.; Chen, Y.; Ren, Q.; Chai, X. Chinese character recognition using simulated phosphene maps. Investig. Ophthalmol. Vis. Sci. 2011, 52, 3404–3412. [Google Scholar] [CrossRef] [PubMed]
- Fu, L.; Cai, S.; Zhang, H.; Hu, G.; Zhang, X. Psychophysics of reading with a limited number of pixels: Towards the rehabilitation of reading ability with visual prosthesis. Vis. Res. 2006, 46, 1292–1301. [Google Scholar] [CrossRef]
- Lu, Y.; Kan, H.; Liu, J.; Wang, J.; Tao, C.; Chen, Y.; Ren, Q.; Hu, J.; Chai, X. Optimizing chinese character displays improves recognition and reading performance of simulated irregular phosphene maps. Investig. Ophthalmol. Vis. Sci. 2013, 54, 2918–2926. [Google Scholar] [CrossRef] [PubMed]
- Kiral-Kornek, F.I.; O’Sullivan-Greene, E.; Savage, C.O.; McCarthy, C.; Grayden, D.B.; Burkitt, A.N. Improved visual performance in letter perception through edge orientation encoding in a retinal prosthesis simulation. J. Neural Eng. 2014, 11, 066002. [Google Scholar] [CrossRef] [PubMed]
- Kim, H.S.; Park, K.S. Spatiotemporal Pixelization to Increase the Recognition Score of Characters for Retinal Prostheses. Sensors 2017, 17, 2439. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Li, H.; Han, T.; Wang, J.; Lu, Z.; Cao, X.; Chen, Y.; Li, L.; Zhou, C.; Chai, X. A real-time image optimization strategy based on global saliency detection for artificial retinal prostheses. Inf. Sci. 2017, 415–416, 1–18. [Google Scholar] [CrossRef]
- Li, H.; Su, X.; Wang, J.; Kan, H.; Han, T.; Zeng, Y.; Chai, X. Image processing strategies based on saliency segmentation for object recognition under simulated prosthetic vision. Artif. Intell. Med. 2018, 84, 64–78. [Google Scholar] [CrossRef]
- Zhao, Y.; Li, Q.; Wang, D.; Yu, A. Image Processing Strategies Based on Deep Neural Network for Simulated Prosthetic Vision. In Proceedings of the 2018 11th International Symposium on Computational Intelligence and Design (ISCID), Hangzhou, China, 8–9 December 2018; pp. 200–203. [Google Scholar]
- Li, Q. Research on Optimization of Image Processing Based Generative Adversarial Networks in Simulated Prosthetic Vision. Ph.D. Thesis, Inner Mongolia University of Science & Technology, Baotou, China, 2019. [Google Scholar]
- Guerrero, J.; Martinez-Cantin, R.; Sanchez-Garcia, M. Indoor Scenes Understanding for Visual Prosthesis with Fully Convolutional Networks. In Proceedings of the 14th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, Prague, Czech Republic, 25–27 February 2019; pp. 218–225. [Google Scholar]
- Sanchez-Garcia, M.; Martinez-Cantin, R.; Guerrero, J.J. Semantic and structural image segmentation for prosthetic vision. PLoS ONE 2020, 15, e0227677. [Google Scholar] [CrossRef]
- Jiang, H.; Li, H.; Liang, J.; Chai, X. A hierarchical image processing strategy for artificial retinal prostheses. In Proceedings of the 2020 International Conference on Artificial Intelligence and Computer Engineering (ICAICE), Beijing, China, 23–25 October 2020; pp. 359–362. [Google Scholar]
- Avraham, D.; Yitzhaky, Y. Effects of Depth-Based Object Isolation in Simulated Retinal Prosthetic Vision. Symmetry 2021, 13, 1763. [Google Scholar] [CrossRef]
- Dagnelie, G.; Kalpin, S.; Yang, L.; Legge, G. Visual Performance with Images Spectrally Augmented by Infrared: A Tool for Severely Impaired and Prosthetic Vision. Investig. Ophthalmol. Vis. Sci. 2005, 46, 1490. [Google Scholar]
- Liang, J.; Li, H.; Chen, J.; Zhai, Z.; Wang, J.; Di, L.; Chai, X. An infrared image-enhancement algorithm in simulated prosthetic vision: Enlarging working environment of future retinal prostheses. Artif. Organs 2022. early view. [Google Scholar] [CrossRef] [PubMed]
- Perez-Yus, A.; Bermudez-Cameo, J.; Lopez-Nicolas, G.; Guerrero, J.J. Depth and Motion Cues with Phosphene Patterns for Prosthetic Vision. In Proceedings of the 2017 IEEE International Conference on Computer Vision Workshop (ICCVW), Venice, Italy, 22–29 October 2017. [Google Scholar]
- Rasla, A.; Beyeler, M. The Relative Importance of Depth Cues and Semantic Edges for Indoor Mobility Using Simulated Prosthetic Vision in Immersive Virtual Reality. arXiv 2022, arXiv:2208.05066. [Google Scholar]
- Quattoni, A.; Torralba, A. Recognizing indoor scenes. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009. [Google Scholar]
- Fornos, A.P.; Sommerhalder, J.; Pelizzone, M. Reading with a simulated 60-channel implant. Front. Neurosci. 2011, 5, 57. [Google Scholar] [CrossRef] [PubMed]
- Han, T.; Li, H.; Lyu, Q.; Zeng, Y.; Chai, X. Object recognition based on a foreground extraction method under simulated prosthetic vision. In Proceedings of the 2015 International Symposium on Bioelectronics and Bioinformatics (ISBB), Hangzhou, China, 8–9 December 2018. [Google Scholar]
- Guo, F.; Yang, Y.; Xiao, Y.; Gao, Y.; Yu, N. Recognition of Moving Object in High Dynamic Scene for Visual Prosthesis. IEICE Trans. Inf. Syst. 2019, E102.D, 1321–1331. [Google Scholar] [CrossRef]
- Lozano, A.; Suarez, J.S.; Soto-Sanchez, C.; Garrigos, J.; Martinez-Alvarez, J.J.; Ferrandez, J.M.; Fernandez, E. Neurolight: A Deep Learning Neural Interface for Cortical Visual Prostheses. Int. J. Neural Syst. 2020, 30, 2050045. [Google Scholar] [CrossRef]
- White, J.; Kameneva, T.; McCarthy, C. Deep reinforcement learning for task-based feature learning in prosthetic vision. In Proceedings of the 2019 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Berlin, Germany, 23–27 July 2019; pp. 2809–2812. [Google Scholar]
- Alevizaki, A.; Melanitis, N.; Nikita, K. Predicting eye fixations using computer vision techniques. In Proceedings of the 2019 IEEE 19th International Conference on Bioinformatics and Bioengineering (BIBE), Athens, Greece, 28–30 October 2019; pp. 309–315. [Google Scholar]
- Seuthe, A.-M.; Haus, A.; Januschowski, K.; Szurman, P. First simultaneous explantation and re-implantation of an Argus II retinal prosthesis system. Ophthalmic Surg. Lasers Imaging Retin. 2019, 50, 462–465. [Google Scholar] [CrossRef]
- Ayton, L.N.; Barnes, N.; Dagnelie, G.; Fujikado, T.; Goetz, G.; Hornig, R.; Jones, B.W.; Muqit, M.M.K.; Rathbun, D.L.; Stingl, K.; et al. An update on retinal prostheses. Clin. Neurophysiol. 2020, 131, 1383–1398. [Google Scholar] [CrossRef]
- Xue, K.; MacLaren, R.E. Correcting visual loss by genetics and prosthetics. Curr. Opin. Physiol. 2020, 16, 1–7. [Google Scholar] [CrossRef]
- Erickson-Davis, C.; Korzybska, H. What do blind people “see” with retinal prostheses? Observations and qualitative reports of epiretinal implant users. PLoS ONE 2021, 16, e0229189. [Google Scholar]
- Faber, H.; Ernemann, U.; Sachs, H.; Gekeler, F.; Danz, S.; Koitschev, A.; Besch, D.; Bartz-Schmidt, K.-U.; Zrenner, E.; Stingl, K.; et al. CT Assessment of Intraorbital Cable Movement of Electronic Subretinal Prosthesis in Three Different Surgical Approaches. Transl. Vis. Sci. Technol. 2021, 10, 16. [Google Scholar] [CrossRef]
- Schiller, P.H.; Slocum, W.M.; Kwak, M.C.; Kendall, G.L.; Tehovnik, E.J. New methods devised specify the size and color of the spots monkeys see when striate cortex (area V1) is electrically stimulated. Proc. Natl. Acad. Sci. USA 2011, 108, 17809–17814. [Google Scholar] [CrossRef]
- Yue, L.; Castillo, J.; Gonzalez, A.C.; Neitz, J.; Humayun, M.S. Restoring Color Perception to the Blind: An Electrical Stimulation Strategy of Retina in Patients with End-stage Retinitis Pigmentosa. Ophthalmology 2021, 128, 453–462. [Google Scholar] [CrossRef] [PubMed]
- Towle, V.L.; Pham, T.; McCaffrey, M.; Allen, D.; Troyk, P.R. Toward the development of a color visual prosthesis. J. Neural Eng. 2021, 18, 023001. [Google Scholar] [CrossRef] [PubMed]
- Flores, T.; Huang, T.; Bhuckory, M.; Ho, E.; Chen, Z.; Dalal, R.; Galambos, L.; Kamins, T.; Mathieson, K.; Palanker, D. Honeycomb-shaped electro-neural interface enables cellular-scale pixels in subretinal prosthesis. Sci. Rep. 2019, 9, 10657. [Google Scholar] [CrossRef] [PubMed] [Green Version]
Visual Tasks | Optimization Methods | Array Distortion | Distortion | Dataset | Evaluation Indicators | Results |
---|---|---|---|---|---|---|
Face Recognition | Significance amplification window [23] | no | no | self-construction | Subject preference | Subjects selected the significance amplification window as the most helpful method. |
| VJFR; SFR; MFR [76] | no | no | self-construction | Recognition accuracy | The recognition accuracies of VJFR-ROI, SFR-ROI, and MFR-ROI were 52.78 ± 18.52%, 62.78 ± 14.83%, and 67.22 ± 14.45%, respectively. |
| Histogram equalization enhancement [77] | no | no | self-construction | Algorithm runtime | Real-time face detection at low resolution (30 fps). |
| Caricatured human face [78] | yes | yes | 26 faces | Recognition accuracy | Correct recognition rates of 53% and 65% were obtained with old and new faces, respectively. |
| FaceNet [79] | no | no | self-construction | Recognition accuracy | The average face recognition accuracy of the subjects exceeded 77.37%. |
| Sobel edge detection and contrast enhancement techniques [80] | no | no | self-construction | Recognition accuracy, response time | The average recognition accuracies at 8 × 8, 12 × 12, and 16 × 16 resolutions were 27 ± 12.96%, 56.43 ± 17.54%, and 84.05 ± 11.23%, respectively; the average response times were 3.21 ± 0.68 s, 0.73 s, and 1.93 ± 0.53 s. |
| F2Pnet [81] | yes | no | AIRS-PFD | Individual identifiability | Mean individual identifiability of 46% at low resolution with dropout. |
Character Recognition | NNS and expansion method [26] | yes | yes | Commonly Used Chinese Character Database | Recognition accuracy | At an irregularity index of 0.4, the average recognition accuracy of the subjects using the correction method was over 80%. |
| Threshold judgment [86] | no | no | Standardized MNREAD reading test provided by Dr. G.E. Legge | Reading speed | The reading speeds of the subjects at 6 × 6 and 8 × 8 resolutions reached 15 words/min and 30 words/min, respectively. |
| Projection and NNS [87] | yes | yes | Commonly used modern Chinese characters (the first 500 in the statistical table) | Recognition accuracy | The recognition accuracy of the subjects using the NNS method exceeded 68%. |
| Directed phosphenes [88] | yes | no | Letters N, H, R, S | Recognition accuracy | The average recognition accuracy of the subjects was 65%. |
| SP [89] | no | no | 26 English letters; 40 Korean letters | Recognition accuracy | After SP, the character recognition accuracy of the subjects exceeded the 60% pass line. |
Object Recognition | Checkerboard-style phosphene-guided walking [100] | no | no | RGB-D camera capture | NA * | NA * |
| Top-down global contrast saliency detection [90] | no | no | self-construction | Percentage of correctly completed tasks (PC), completion time (CT), head movements in degrees (HMID) | In the single task, mean PC was 88.72 ± 1.41%, mean CT was 41.76 ± 2.9 s, and mean HMID was 575.70 ± 38.53°; in the multitask, mean PC was 84.72 ± 1.41%, mean CT was 40.73 ± 2.1 s, and mean HMID was 487.38 ± 14.71°. |
| GBVS and edge detection [91] | no | no | self-construction | Recognition accuracy | The average recognition accuracy of the subjects was 70.63 ± 7.59% for single-object recognition and 75.31 ± 11.40% for double-target recognition. |
| Generative adversarial networks [92,93] | yes | yes | ETH-80 | Recognition accuracy | The subjects achieved an average recognition accuracy of 80.3 ± 7.7% over all objects. |
| SIE-OMS [94,95] | no | no | Public indoor scenes dataset [102] | Recognition accuracy | The object recognition rate reached 62.78%; the room recognition rate reached 70.33%. |
| Mask-RCNN layers [96] | no | no | self-construction | Percentage of correctly completed tasks (PC) | Subjects achieved a mean PC of 87.08 ± 1.92% in the scene-description object test and a mean PC of 60.31 ± 1.99% in the scene-content description test. |
| InI-based object segmentation [97] | yes | no | self-construction | NA * | NA * |
| Improved SAPHE algorithm [99] | no | no | Captured directly with the camera | Recognition accuracy (RA) | The average RA of the subjects was 86.24 ± 1.88%. |
| Depth and edge combinations [101] | no | no | self-construction | Success rate | The success rate for subjects with combined depth and edge cues was over 80%. |

* NA: not available.
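Several of the preprocessing steps in the table (histogram equalization [77], Sobel edge detection with contrast enhancement [80]) and the two array conditions (an irregular, jittered array and electrode dropout) can be illustrated with a short simulated-prosthetic-vision sketch. The Python snippet below is a minimal illustration under assumed parameters: the grid size, jitter, dropout rate, and file names are our own choices, not values from any cited study.

```python
# Minimal simulated-prosthetic-vision sketch (illustrative only; grid size,
# jitter, dropout rate, and file names are assumptions, not values from the
# cited studies). Requires: pip install opencv-python numpy
import cv2
import numpy as np


def preprocess(gray: np.ndarray, mode: str = "edges") -> np.ndarray:
    """Contrast or edge enhancement before phosphene encoding."""
    if mode == "histeq":  # global contrast enhancement, cf. [77]
        return cv2.equalizeHist(gray)
    # Sobel gradient magnitude, cf. the edge-based strategies in [80]
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    mag = cv2.magnitude(gx, gy)
    return cv2.normalize(mag, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)


def phosphene_render(gray, grid=16, jitter=0.0, dropout=0.0, size=256, seed=0):
    """Downsample to a grid x grid 'electrode array' and draw one Gaussian
    phosphene per electrode.

    jitter  -- positional noise as a fraction of electrode spacing
               (a crude stand-in for an irregular/distorted array).
    dropout -- probability that an electrode evokes no phosphene.
    """
    rng = np.random.default_rng(seed)
    levels = cv2.resize(gray, (grid, grid), interpolation=cv2.INTER_AREA)
    spacing = size / grid
    sigma = spacing / 4.0
    yy, xx = np.mgrid[0:size, 0:size].astype(np.float32)
    out = np.zeros((size, size), np.float32)
    for r in range(grid):
        for c in range(grid):
            if rng.random() < dropout:
                continue  # "dead" electrode: no phosphene
            cx = (c + 0.5) * spacing + rng.normal(0.0, jitter * spacing)
            cy = (r + 0.5) * spacing + rng.normal(0.0, jitter * spacing)
            out += float(levels[r, c]) * np.exp(
                -((xx - cx) ** 2 + (yy - cy) ** 2) / (2.0 * sigma**2)
            )
    return cv2.normalize(out, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)


if __name__ == "__main__":
    img = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input
    assert img is not None, "place any grayscale test image at input.png"
    sim = phosphene_render(preprocess(img, "histeq"),
                           grid=16, jitter=0.15, dropout=0.1)
    cv2.imwrite("simulated_prosthetic_vision.png", sim)
```

Rendering each electrode as an independent Gaussian blob is a common simplification; published simulations often layer axon-pathway or temporal-dynamics models on top of this kind of pipeline.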
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations. |
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Wang, J.; Zhao, R.; Li, P.; Fang, Z.; Li, Q.; Han, Y.; Zhou, R.; Zhang, Y. Clinical Progress and Optimization of Information Processing in Artificial Visual Prostheses. Sensors 2022, 22, 6544. https://doi.org/10.3390/s22176544