A Visually Inspired Computational Model for Recognition of Optic Flow
Abstract
1. Introduction
2. Materials and Methods
2.1. Visual Input
2.2. Foundation Model of Spiking Neural Networks
2.3. Decision Model of Echo State Network
3. Results
Algorithm 1: A visually inspired computational model for recognition of optic flow
3.1. Feature Extraction of SNN Model
3.2. Recognition Performance
4. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Bommasani, R.; Hudson, D.A.; Adeli, E.; Altman, R.; Arora, S.; von Arx, S.; Bernstein, M.S.; Bohg, J.; Bosselut, A.; Brunskill, E.; et al. On the Opportunities and Risks of Foundation Models. arXiv 2022, arXiv:cs.LG/2108.07258. [Google Scholar]
- Bashivan, P.; Kar, K.; DiCarlo, J.J. Neural Population Control via Deep Image Synthesis. Science 2019, 364, eaav9436. [Google Scholar] [CrossRef]
- Walker, E.Y.; Sinz, F.H.; Cobos, E.; Muhammad, T.; Froudarakis, E.; Fahey, P.G.; Ecker, A.S.; Reimer, J.; Pitkow, X.; Tolias, A.S. Inception loops discover what excites neurons most using deep predictive models. Nat. Neurosci. 2019, 22, 2060–2065. [Google Scholar] [CrossRef]
- Ponce, C.R.; Xiao, W.; Schade, P.F.; Hartmann, T.S.; Kreiman, G.; Livingstone, M.S. Evolving Images for Visual Neurons Using a Deep Generative Network Reveals Coding Principles and Neuronal Preferences. Cell 2019, 177, 999–1009.e10. [Google Scholar] [CrossRef]
- Franke, K.; Willeke, K.F.; Ponder, K.; Galdamez, M.; Zhou, N.; Muhammad, T.; Patel, S.; Froudarakis, E.; Reimer, J.; Sinz, F.H.; et al. State-dependent pupil dilation rapidly shifts visual feature selectivity. Nature 2022, 610, 128–134. [Google Scholar] [CrossRef] [PubMed]
- Höfling, L.; Szatko, K.P.; Behrens, C.; Qiu, Y.; Klindt, D.A.; Jessen, Z.; Schwartz, G.W.; Bethge, M.; Berens, P.; Franke, K.; et al. A chromatic feature detector in the retina signals visual context changes. bioRxiv 2022. [Google Scholar] [CrossRef]
- Chen, K.; Kashyap, H.J.; Krichmar, J.L.; Li, X. What can computer vision learn from visual neuroscience? Introduction to the special issue. Biol. Cybern. 2023, 117, 297–298. [Google Scholar] [CrossRef] [PubMed]
- Tavanaei, A.; Maida, A. BP-STDP: Approximating backpropagation using spike timing dependent plasticity. Neurocomputing 2019, 330, 39–47. [Google Scholar] [CrossRef]
- Rueckauer, B.; Liu, S.C. Conversion of analog to spiking neural networks using sparse temporal coding. In Proceedings of the 2018 IEEE International Symposium on Circuits and Systems (ISCAS), Florence, Italy, 27–30 May 2018; pp. 1–5. [Google Scholar] [CrossRef]
- Diehl, P.U.; Neil, D.; Binas, J.; Cook, M.; Liu, S.C.; Pfeiffer, M. Fast-classifying, high-accuracy spiking deep networks through weight and threshold balancing. In Proceedings of the 2015 International Joint Conference on Neural Networks (IJCNN), Killarney, Ireland, 12–17 July 2015; pp. 1–8. [Google Scholar] [CrossRef]
- Zhang, A.; Li, X.; Gao, Y.; Niu, Y. Event-Driven Intrinsic Plasticity for Spiking Convolutional Neural Networks. IEEE Trans. Neural Netw. Learn. Syst. 2022, 33, 1986–1995. [Google Scholar] [CrossRef]
- Zhang, A.; Zhou, H.; Li, X.; Zhu, W. Fast and robust learning in Spiking Feed-forward Neural Networks based on Intrinsic Plasticity mechanism. Neurocomputing 2019, 365, 102–112. [Google Scholar] [CrossRef]
- Kim, S.; Park, S.; Na, B.; Yoon, S. Spiking-YOLO: Spiking Neural Network for Energy-Efficient Object Detection. Proc. AAAI Conf. Artif. Intell. 2020, 34, 11270–11277. [Google Scholar] [CrossRef]
- Lee, D.D.; Seung, H.S. Learning the parts of objects by non-negative matrix factorization. Nature 1999, 401, 788–791. [Google Scholar] [CrossRef] [PubMed]
- Beyeler, M.; Rounds, E.L.; Carlson, K.D.; Dutt, N.; Krichmar, J.L. Neural correlates of sparse coding and dimensionality reduction. PLoS Comput. Biol. 2019, 15, e1006908. [Google Scholar] [CrossRef] [PubMed]
- Nishimoto, S.; Gallant, J.L. A Three-Dimensional Spatiotemporal Receptive Field Model Explains Responses of Area MT Neurons to Naturalistic Movies. J. Neurosci. 2011, 31, 14551–14564. [Google Scholar] [CrossRef]
- Beyeler, M.; Dutt, N.; Krichmar, J.L. 3D Visual Response Properties of MSTd Emerge from an Efficient, Sparse Population Code. J. Neurosci. 2016, 36, 8399–8415. [Google Scholar] [CrossRef]
- Chen, K.; Beyeler, M.; Krichmar, J.L. Cortical Motion Perception Emerges from Dimensionality Reduction with Evolved Spike-Timing-Dependent Plasticity Rules. J. Neurosci. 2022, 42, 5882–5898. [Google Scholar] [CrossRef]
- Browning, N.A.; Grossberg, S.; Mingolla, E. A neural model of how the brain computes heading from optic flow in realistic scenes. Cogn. Psychol. 2009, 59, 320–356. [Google Scholar] [CrossRef] [PubMed]
- Logan, D.J.; Duffy, C.J. Cortical Area MSTd Combines Visual Cues to Represent 3-D Self-Movement. Cereb. Cortex 2005, 16, 1494–1507. [Google Scholar] [CrossRef]
- Layton, O.W. ARTFLOW: A Fast, Biologically Inspired Neural Network that Learns Optic Flow Templates for Self-Motion Estimation. Sensors 2021, 21, 8217. [Google Scholar] [CrossRef]
- Layton, O.W.; Powell, N.; Steinmetz, S.T.; Fajen, B.R. Estimating curvilinear self-motion from optic flow with a biologically inspired neural system. Bioinspir. Biomim. 2022, 17, 046013. [Google Scholar] [CrossRef]
- Longuet-Higgins, H.C.; Prazdny, K. The interpretation of a moving retinal image. Proc. R. Soc. Lond. Ser. B Biol. Sci. 1980, 208, 385–397. [Google Scholar] [CrossRef]
- Izhikevich, E. Which Model to Use for Cortical Spiking Neurons? IEEE Trans. Neural Netw. 2004, 15, 1063–1070. [Google Scholar] [CrossRef] [PubMed]
- Lin, W.; Yi, H.; Li, X. Image Reconstruction and Recognition of Optical Flow Based on Local Feature Extraction Mechanism of Visual Cortex. In Proceedings of the International Conference on Neural Computing for Advanced Applications, Hefei, China, 7–9 July 2023; Zhang, H., Ke, Y., Wu, Z., Hao, T., Zhang, Z., Meng, W., Mu, Y., Eds.; Springer: Singapore, 2023; pp. 18–32. [Google Scholar]
- Niedermeier, L.; Krichmar, J.L. Experience-Dependent Axonal Plasticity in Large-Scale Spiking Neural Network Simulations. In Proceedings of the 2023 International Joint Conference on Neural Networks (IJCNN), IEEE, Gold Coast, Australia, 18–23 June 2023. [Google Scholar] [CrossRef]
- Niedermeier, L.; Chen, K.; Xing, J.; Das, A.; Kopsick, J.; Scott, E.; Sutton, N.; Weber, K.; Dutt, N.; Krichmar, J.L. CARLsim 6: An Open Source Library for Large-Scale, Biologically Detailed Spiking Neural Network Simulation. In Proceedings of the 2022 International Joint Conference on Neural Networks (IJCNN), IEEE, Padua, Italy, 18–23 July 2022. [Google Scholar] [CrossRef]
- Kheradpisheh, S.R.; Ganjtabesh, M.; Thorpe, S.J.; Masquelier, T. STDP-based spiking deep convolutional neural networks for object recognition. Neural Netw. 2018, 99, 56–67. [Google Scholar] [CrossRef]
- Luke, S. ECJ then and now. In Proceedings of the Genetic and Evolutionary Computation Conference Companion, ACM, Berlin, Germany, 15–19 July 2017. [Google Scholar] [CrossRef]
- Liu, B. A Prediction Method Based on Improved Echo State Network for COVID-19 Nonlinear Time Series. J. Comput. Commun. 2020, 8, 113. [Google Scholar] [CrossRef]
- Jaeger, H. Tutorial on Training Recurrent Neural Networks, Covering BPPT, RTRL, EKF and the “Echo State Network” Approach; German National Research Center for Information Technology: Bonn, Germany, 2002. [Google Scholar]
- Jaeger, H. The “Echo State” Approach to Analysing and Training Recurrent Neural Networks-with an Erratum Note; GMD Technical Report; German National Research Center for Information Technology: Bonn, Germany, 2001; Volume 148, p. 13. [Google Scholar]
- Rodan, A.; Tino, P. Minimum Complexity Echo State Network. IEEE Trans. Neural Netw. 2011, 22, 131–144. [Google Scholar] [CrossRef] [PubMed]
- Wang, G.; Kossenkov, A.V.; Ochs, M.F. LS-NMF: A modified non-negative matrix factorization algorithm utilizing uncertainty estimates. BMC Bioinform. 2006, 7, 1–10. [Google Scholar] [CrossRef] [PubMed]
- Beyeler, M. Visual Stimulus Toolbox: v1.0.0.; Zenodo: Genève, Switzerland, 2016. [Google Scholar] [CrossRef]
- Beyeler, M.; Richert, M.; Dutt, N.D.; Krichmar, J.L. Efficient Spiking Neural Network Model of Pattern Motion Selectivity in Visual Cortex. Neuroinformatics 2014, 12, 435. [Google Scholar] [CrossRef]
ESN Parameters | Values
---|---
Reservoir Sparsity (SP) | 0.5
Displacement Scale (IS) | 1
Input Unit Scale (IC) | 1
Spectral Radius (SR) | 0.85
Reservoir Activation Function (f) |
Output Unit Activation Function | 1
Regularization Coefficient | 1 × 10
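The ESN configuration in the table above can be sketched as a minimal reservoir with ridge-regression readout. This is an illustrative assumption-laden sketch, not the authors' implementation: the input/reservoir/output dimensions, the `tanh` reservoir activation, and the `1e-8` regularization value are all placeholders (the corresponding table cells are incomplete in the source); only the sparsity (SP = 0.5), spectral radius (SR = 0.85), and unit input scale (IC = 1) come from the table.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions; the paper's actual sizes are not given in this excerpt.
n_in, n_res, n_out = 10, 200, 2

# Sparse recurrent weight matrix: zero out entries with probability SP = 0.5,
# then rescale so the spectral radius is SR = 0.85 (echo state property heuristic).
W = rng.uniform(-1, 1, (n_res, n_res))
W[rng.random((n_res, n_res)) < 0.5] = 0.0
W *= 0.85 / max(abs(np.linalg.eigvals(W)))

# Input weights with unit scale (IC = 1).
W_in = rng.uniform(-1, 1, (n_res, n_in))

def run_reservoir(U):
    """Collect reservoir states for an input sequence U of shape (T, n_in)."""
    x = np.zeros(n_res)
    states = []
    for u in U:
        x = np.tanh(W_in @ u + W @ x)  # tanh assumed for the reservoir activation f
        states.append(x.copy())
    return np.array(states)

def train_readout(X, Y, reg=1e-8):
    """Linear (identity-activation) readout via ridge regression; reg is assumed."""
    return np.linalg.solve(X.T @ X + reg * np.eye(X.shape[1]), X.T @ Y)

# Toy usage with random data standing in for SNN feature sequences and labels.
U = rng.standard_normal((50, n_in))
Y = rng.standard_normal((50, n_out))
X = run_reservoir(U)
W_out = train_readout(X, Y)
print(W_out.shape)  # (200, 2)
```

Only the readout `W_out` is trained; the reservoir weights stay fixed after the spectral-radius rescaling, which is the standard ESN design choice that keeps training a single linear solve.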
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Li, X.; Lin, W.; Yi, H.; Wang, L.; Chen, J. A Visually Inspired Computational Model for Recognition of Optic Flow. Mathematics 2023, 11, 4777. https://doi.org/10.3390/math11234777