A Survey on Neuromorphic Architectures for Running Artificial Intelligence Algorithms
Abstract
1. Introduction
2. Background
- Connectionism: Computation is organised as neural networks (NNs), which consist of many simple units (neurons) interconnected by weighted links. Determining the appropriate weights is what gives an NN the ability to learn and solve a given problem [9].
- Parallelism: All neurons operate in parallel, performing their functions simultaneously, which underpins the efficient operation of the network [9].
- Asynchrony: Parallel operation does not require synchronising all neurons, since each neuron performs its own specified task independently. Asynchrony avoids the power consumption that would otherwise be spent on synchronisation [9].
- Impulse nature of information transmission: Information is encoded as spikes whose transmission differs between pairs of neurons and does not occur instantly; a synapse is therefore characterised by both a weight and a time delay. This provides advantages over traditional neural networks: communication is asynchronous, the time component allows dynamic data to be processed, the network forms a complex non-linear dynamic system, and a neuron is activated only upon receipt of a spike, so its inactive state consumes little energy and overall power consumption is reduced [9].
- On-device learning: Neuromorphic hardware can learn continuously and incrementally on the device itself, which allows smart devices to be customised and personalised to the user’s needs while preserving privacy, since user data need not be transmitted to the cloud [9].
- Local learning: Conventional neural networks rely on backpropagation, which introduces two problems: the weight transport problem, whereby the backward pass needs access to the same weight values used in the forward pass, and the update locking problem, whereby the forward-pass activation values must be stored until the backward pass. Local learning is an alternative to backpropagation based on a spike-timing-dependent plasticity (STDP) rule: a synapse is strengthened if its spike arrives before the postsynaptic neuron fires, and weakened if it arrives after (see the STDP sketch after this list). Because it avoids large-scale global data-transfer operations, local learning can train networks of any size [9].
- Sparsity: Not all neurons are activated to perform a task. Neuromorphic chips exhibit temporal, spatial, and structural sparsity. Temporal sparsity means the data are sparse in time: only the changed part of a signal is transmitted (see the send-on-delta sketch after this list). Spatial sparsity means the data streams are sparse because a neuron activates only when a certain threshold value is reached. Structural sparsity refers to the data flow with respect to the network topology: each neuron has a limited number of connections, and the neurons are not all fully interconnected [9,20].
- Analog computing: Purely digital implementations of neural dynamics are costly in time and energy. Analog circuits can instead model the dynamics of the membrane potential and the synaptic operations directly, providing a more time- and energy-efficient alternative (see the leaky integrate-and-fire sketch after this list) [9].
- In-memory computing: Each individual neuron has its own memory or stored state, which eliminates the need to transfer intermediate data and avoids contention for memory access [9].
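To make the STDP rule concrete, the following minimal sketch in Python (not drawn from any of the surveyed chips; the parameter names and values are illustrative assumptions) computes the weight change for a single pre/post spike pair using the common exponential learning window: the synapse is potentiated when the presynaptic spike precedes the postsynaptic one, and depressed otherwise.

```python
import numpy as np

# Illustrative, hypothetical parameters for the exponential STDP window.
A_PLUS, A_MINUS = 0.01, 0.012      # potentiation/depression learning rates
TAU_PLUS, TAU_MINUS = 20.0, 20.0   # window time constants (ms)

def stdp_dw(t_pre: float, t_post: float) -> float:
    """Weight change for one pre/post spike pair.

    dt > 0 means the presynaptic spike arrived before the postsynaptic
    neuron fired, so the synapse is strengthened; otherwise (including
    simultaneous spikes) it is weakened.
    """
    dt = t_post - t_pre
    if dt > 0:
        return A_PLUS * np.exp(-dt / TAU_PLUS)   # pre before post: strengthen
    return -A_MINUS * np.exp(dt / TAU_MINUS)     # post before pre: weaken

# Example: reversing the spike order flips the sign of the update.
print(stdp_dw(5.0, 12.0))   # positive (potentiation)
print(stdp_dw(12.0, 5.0))   # negative (depression)
```

Note that the update is purely local: it depends only on the two spike times at that synapse, which is why no global data transfer is needed during training.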
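Temporal sparsity can likewise be illustrated with a send-on-delta encoder. The sketch below (the threshold value is an arbitrary assumption) emits an event only when the input has changed by more than a threshold since the last transmitted value, so a slowly varying signal produces far fewer events than samples.

```python
import math

def delta_encode(signal, threshold=0.1):
    """Send-on-delta encoding: emit (time, value) events only when the
    signal has changed by more than `threshold` since the last event,
    i.e., transmit only the changed part of the signal."""
    last = signal[0]
    events = [(0, last)]                   # always transmit the initial value
    for t, x in enumerate(signal[1:], start=1):
        if abs(x - last) > threshold:
            events.append((t, x))          # change detected: transmit
            last = x
    return events

# A slowly varying 1000-sample signal compresses to a handful of events.
sig = [math.sin(t / 200.0) for t in range(1000)]
print(len(delta_encode(sig)))   # much smaller than 1000
```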
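Finally, the membrane-potential dynamics mentioned under analog computing are commonly modelled with a leaky integrate-and-fire (LIF) neuron. The sketch below is a software approximation with illustrative parameter values (analog neuromorphic circuits realise these dynamics directly in hardware rather than in code); it also exhibits spatial sparsity, since the neuron emits a spike only when its potential crosses the threshold.

```python
import numpy as np

def simulate_lif(input_current, dt=1.0, tau_m=20.0, v_rest=0.0,
                 v_reset=0.0, v_thresh=1.0, r_m=1.0):
    """Forward-Euler integration of a leaky integrate-and-fire neuron:
    dV/dt = (-(V - v_rest) + r_m * I(t)) / tau_m.
    A spike is recorded only when V crosses v_thresh, after which the
    membrane potential is reset."""
    v = v_rest
    spike_times = []
    for step, i_in in enumerate(input_current):
        v += dt * (-(v - v_rest) + r_m * i_in) / tau_m  # leaky integration
        if v >= v_thresh:                 # threshold crossed: emit a spike
            spike_times.append(step * dt)
            v = v_reset                   # reset after firing
    return spike_times

# A constant suprathreshold input drives periodic spiking; zero input
# produces no spikes, and hence no downstream computation or energy use.
print(simulate_lif(np.full(100, 1.5)))   # several spike times
print(simulate_lif(np.zeros(100)))       # [] -- the neuron stays silent
```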
2.1. Spiking Neural Networks (SNN)
2.2. Spiking Neuron Models
2.3. SNN Testing
3. Neuromorphic Circuit Design
3.1. Analog Design
3.2. Digital Design
3.3. Mixed Design
4. Machine Learning Algorithms
4.1. Supervised Learning
4.2. Unsupervised Learning
4.3. Reinforcement Learning
5. Neuromorphic Projects
6. Proposed Method and Future Work
7. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Arikpo, I.I.; Ogban, F.U.; Eteng, I.E. Von Neumann architecture and modern computers. Glob. J. Math. Sci. 2008, 6, 97–103.
- Luo, T.; Wong, W.F.; Goh, R.S.M.; Do, A.T.; Chen, Z.; Li, H.; Jiang, W.; Yau, W. Achieving Green AI with Energy-Efficient Deep Learning Using Neuromorphic Computing. Commun. ACM 2023, 66, 52–57.
- Kumar, S.; Wang, X.; Strachan, J.P.; Yang, Y.; Lu, W.D. Dynamical memristors for higher-complexity neuromorphic computing. Nat. Rev. Mater. 2022, 7, 575–591.
- Xu, B.; Huang, Y.; Fang, Y.; Wang, Z.; Yu, S.; Xu, R. Recent Progress of Neuromorphic Computing Based on Silicon Photonics: Electronic–Photonic Co-Design, Device, and Architecture. Photonics 2022, 9, 698.
- Schuman, C.D.; Kulkarni, S.R.; Parsa, M.; Mitchell, J.P.; Date, P.; Kay, B. Opportunities for neuromorphic computing algorithms and applications. Nat. Comput. Sci. 2022, 2, 10–19.
- Byun, K.; Choi, I.; Kwon, S.; Kim, Y.; Kang, D.; Cho, Y.W.; Yoon, S.K.; Kim, S. Recent Advances in Synaptic Nonvolatile Memory Devices and Compensating Architectural and Algorithmic Methods toward Fully Integrated Neuromorphic Chips. Adv. Mater. Technol. 2022, 8, 2200884.
- Javanshir, A.; Nguyen, T.T.; Mahmud, M.A.P.; Kouzani, A.Z. Advancements in Algorithms and Neuromorphic Hardware for Spiking Neural Networks. Neural Comput. 2022, 34, 1289–1328.
- Bartolozzi, C.; Indiveri, G.; Donati, E. Embodied neuromorphic intelligence. Nat. Commun. 2022, 13, 1024.
- Ivanov, D.; Chezhegov, A.; Kiselev, M.; Grunin, A.; Larionov, D. Neuromorphic artificial intelligence systems. Front. Neurosci. 2022, 16, 959626.
- Shrestha, A.; Fang, H.; Mei, Z.; Rider, D.P.; Wu, Q.; Qiu, Q. A Survey on Neuromorphic Computing: Models and Hardware. IEEE Circuits Syst. Mag. 2022, 22, 6–35.
- Wei, Q.; Gao, B.; Tang, J.; Qian, H.; Wu, H. Emerging Memory-Based Chip Development for Neuromorphic Computing: Status, Challenges, and Perspectives. IEEE Electron Devices Mag. 2023, 1, 33–49.
- Guo, T.; Pan, K.; Jiao, Y.; Sun, B.; Du, C.; Mills, J.P.; Chen, Z.; Zhao, X.; Wei, L.; Zhou, Y.N.; et al. Versatile memristor for memory and neuromorphic computing. Nanoscale Horiz. 2022, 7, 299–310.
- Zhu, Y.; Mao, H.; Zhu, Y.; Wang, X.; Fu, C.; Ke, S.; Wan, C.; Wan, Q. CMOS-compatible neuromorphic devices for neuromorphic perception and computing: A review. Int. J. Extrem. Manuf. 2023, 5, 042010.
- Kimura, M.; Shibayama, Y.; Nakashima, Y. Neuromorphic chip integrated with a large-scale integration circuit and amorphous-metal-oxide semiconductor thin-film synapse devices. Sci. Rep. 2022, 12, 5359.
- Li, B.; Zhong, D.; Chen, X.; Liu, C. Enabling Neuromorphic Computing for Artificial Intelligence with Hardware-Software Co-Design. Artif. Intell. 2023.
- Christensen, D.V.; Dittmann, R.; Linares-Barranco, B.; Sebastian, A.; Le Gallo, M.; Redaelli, A.; Slesazeck, S.; Mikolajick, T.; Spiga, S.; Menzel, S.; et al. 2022 roadmap on neuromorphic computing and engineering. Neuromorphic Comput. Eng. 2022, 2, 022501.
- Pham, M.D.; D’Angiulli, A.; Dehnavi, M.M.; Chhabra, R. From Brain Models to Robotic Embodied Cognition: How Does Biological Plausibility Inform Neuromorphic Systems? Brain Sci. 2023, 13, 1316.
- Zhang, H.; Ho, N.M.; Polat, D.Y.; Chen, P.; Wahib, M.; Nguyen, T.T.; Meng, J.; Goh, R.S.M.; Matsuoka, S.; Luo, T.; et al. Simeuro: A Hybrid CPU-GPU Parallel Simulator for Neuromorphic Computing Chips. IEEE Trans. Parallel Distrib. Syst. 2023, 34, 2767–2782.
- Das, R.P.; Biswas, C.; Majumder, S. Study of Spiking Neural Network Architecture for Neuromorphic Computing. In Proceedings of the 2022 IEEE 11th International Conference on Communication Systems and Network Technologies (CSNT), Indore, India, 23–24 April 2022.
- Panzeri, S.; Janotte, E.; Pequeño-Zurro, A.; Bonato, J.; Bartolozzi, C. Constraints on the design of neuromorphic circuits set by the properties of neural population codes. Neuromorphic Comput. Eng. 2023, 3, 012001.
- Nguyen, D.-A.; Tran, X.-T.; Iacopi, F. A Review of Algorithms and Hardware Implementations for Spiking Neural Networks. J. Low Power Electron. Appl. 2021, 11, 23.
- Frenkel, C.; Lefebvre, M.; Legat, J.-D.; Bol, D. A 0.086-mm2 12.7-pJ/SOP 64k-Synapse 256-Neuron Online-Learning Digital Spiking Neuromorphic Processor in 28 nm CMOS. IEEE Trans. Biomed. Circuits Syst. 2018, 13, 145–158.
- Yin, S.; Venkataramanaiah, S.K.; Chen, G.K.; Krishnamurthy, R.; Cao, Y.; Chakrabarti, C.; Seo, J.-S. Algorithm and hardware design of discrete-time spiking neural networks based on back propagation with binary activations. In Proceedings of the 2017 IEEE Biomedical Circuits and Systems Conference (BioCAS), Turin, Italy, 19–21 October 2017.
- Zheng, N.; Mazumder, P. A Low-Power Hardware Architecture for On-Line Supervised Learning in Multi-Layer Spiking Neural Networks. In Proceedings of the 2018 IEEE International Symposium on Circuits and Systems (ISCAS), Florence, Italy, 27–30 May 2018.
- Chen, G.K.; Kumar, R.; Sumbul, H.E.; Knag, P.C.; Krishnamurthy, R.K. A 4096-Neuron 1M-Synapse 3.8-pJ/SOP Spiking Neural Network with On-Chip STDP Learning and Sparse Weights in 10-nm FinFET CMOS. IEEE J. Solid-State Circuits 2019, 54, 992–1002.
- Spyrou, T.; Stratigopoulos, H.-G. On-Line Testing of Neuromorphic Hardware. In Proceedings of the 2023 IEEE European Test Symposium (ETS), Venezia, Italy, 22–26 May 2023.
- Frenkel, C.; Bol, D.; Indiveri, G. Bottom-Up and Top-Down Approaches for the Design of Neuromorphic Processing Systems: Tradeoffs and Synergies between Natural and Artificial Intelligence. Proc. IEEE 2023, 111, 623–652.
- Ye, N.; Cao, L.; Yang, L.; Zhang, Z.; Fang, Z.; Gu, Q.; Yang, G.-Z. Improving the robustness of analog deep neural networks through a Bayes-optimized noise injection approach. Commun. Eng. 2023, 2, 25.
- Ye, N.; Mei, J.; Fang, Z.; Zhang, Y.; Zhang, Z.; Wu, H.; Liang, X. BayesFT: Bayesian Optimization for Fault Tolerant Neural Network Architecture. In Proceedings of the 2021 58th ACM/IEEE Design Automation Conference (DAC), San Francisco, CA, USA, 5–9 December 2021.
- Zhong, Y.; Wang, Z.; Cui, X.; Cao, J.; Wang, Y. An Efficient Neuromorphic Implementation of Temporal Coding Based On-chip STDP Learning. IEEE Trans. Circuits Syst. II-Express Briefs 2023, 70, 4241–4245.
- Agebure, M.A.; Wumnaya, P.A.; Baagyere, E.Y. A Survey of Supervised Learning Models for Spiking Neural Network. Asian J. Res. Comput. Sci. 2021, 9, 35–49.
- Clark, K.; Wu, Y. Survey of Neuromorphic Computing: A Data Science Perspective. In Proceedings of the 2023 IEEE 3rd International Conference on Computer Communication and Artificial Intelligence (CCAI), Taiyuan, China, 26–28 May 2023.
- Garg, N.; Balafrej, I.; Stewart, T.C.; Portal, J.M.; Bocquet, M.; Querlioz, D.; Rouat, J.; Beilliard, Y.; Alibart, F. Voltage-dependent synaptic plasticity: Unsupervised probabilistic Hebbian plasticity rule based on neurons membrane potential. Front. Neurosci. 2022, 16, 983950.
- Wunderlich, T.; Kungl, A.F.; Müller, E.; Hartel, A.; Stradmann, Y.; Aamir, S.A.; Grübl, A.; Heimbrecht, A.; Schreiber, K.; Stöckel, D.; et al. Demonstrating Advantages of Neuromorphic Computation: A Pilot Study. Front. Neurosci. 2019, 13, 260.
- Ghosh, S.; Nakajima, K.; Krisnanda, T.; Fujii, K.; Liew, T.C.H. Quantum Neuromorphic Computing with Reservoir Computing Networks. Adv. Quantum Technol. 2021, 4, 2100053.
- Hoffmann, A.; Ramanathan, S.; Grollier, J.; Kent, A.D.; Rozenberg, M.J.; Schuller, I.K.; Shpyrko, O.G.; Dynes, R.C.; Fainman, Y.; Frano, A.; et al. Quantum materials for energy-efficient neuromorphic computing: Opportunities and challenges. APL Mater. 2022, 10, 070904.
- Asad, A.; Kaur, R.; Mohammadi, F. A Survey on Memory Subsystems for Deep Neural Network Accelerators. Future Internet 2022, 14, 146.
- Asad, A.; Mohammadi, F. NeuroTower: A 3D Neuromorphic Architecture with Low-Power TSVs. In Lecture Notes in Networks and Systems; Springer International Publishing: Cham, Switzerland, 2022; pp. 227–236.
- Kaur, R.; Asad, A.; Mohammadi, F. A Comprehensive Review on Processing-in-Memory Architectures for Deep Neural Networks. Computers 2024, 13, 174.
| Chip | In-memory computation | Signal | Neurons/Synapses | On-device learning | Analogue | Event-based | Process node (nm) | Features |
|---|---|---|---|---|---|---|---|---|
| TrueNorth | Near-memory | Spikes | 1 M/256 M | No | No | Yes | 28 | First industrial neuromorphic chip without training (IBM) |
| Loihi | Near-memory | Spikes | 128 K/128 M | STDP | No | Yes | 14 | First neuromorphic chip with training (Intel) |
| Loihi 2 | Near-memory | Real numbers, spikes | 120 K/1 M | STDP | No | Yes | 7 | Non-binary spikes; neurons can be programmed |
| Tianjic | Near-memory | Real numbers, spikes | 40 K/10 M | No | No | Yes | 28 | Hybrid chip |
| SpiNNaker | Near-memory | Real numbers, spikes | – | STDP | No | No | 22 | Scalable computer for SNN simulation |
| BrainScaleS | Yes | Real numbers, spikes | 512/130 K | STDP | Yes | Yes | 65 | Analogue neurons, large size |
| GrAI One | Near-memory | Real numbers, spikes | 200 K/– | No | No | Yes | 28 | NeuronFlow architecture, effective support of sparse computations |
| Akida | Near-memory | Spikes | 1.2 M/10 B | STDP | No | Yes | 28 | Incremental, one-shot, and continuous learning for CNNs |
| Memristor (IBM) | Yes | Spikes | 512/64 K | Yes | Yes | Yes | 50 | Allows each synaptic cell to operate asynchronously |