EdgeMap: An Optimized Mapping Toolchain for Spiking Neural Network in Edge Computing
Abstract
1. Introduction
- Energy constraints: Edge computing devices, typically standalone units, operate under stringent energy constraints, making power efficiency a critical factor in their operation. Different mapping schemes can lead to substantial disparities in energy consumption, significantly affecting the performance and viability of these devices. These realities underscore the urgent need for mapping schemes that thoroughly consider energy efficiency.
- Real-time processing: Many edge computing applications require real-time or near-real-time processing, necessitating low latency and efficient network communication. However, achieving these performance characteristics without compromising other essential attributes, such as energy efficiency, is a challenging balancing act.
- Complex environments: Edge computing scenarios are inherently complex and diverse. Unlike server-side environments, edge environments impose far more varied and demanding constraints and requirements.
- We propose a novel partitioning method inspired by the streaming-based approach [14]. This method efficiently partitions SNNs into multiple neuron clusters, significantly reducing partitioning time while minimizing the number of spikes between clusters. Importantly, our approach takes into account the fan-in and fan-out limitations of neuromorphic hardware, ensuring compatibility with edge device constraints.
- We propose a multi-objective optimization method to place neuron clusters onto neuromorphic devices. This method addresses the diverse challenges in edge computing, including bandwidth limitations, communication costs, and energy consumption. Our approach, while successfully reducing energy consumption and latency, crucially enhances the overall performance of neuromorphic computation in edge environments.
- We propose a versatile mapping toolchain suitable for a wide range of neuromorphic edge devices. By conducting extensive evaluations of various SNNs and scales on NoC-based hardware, we demonstrate significant enhancements in edge computing performance. In addition, our toolchain can accommodate the diverse constraints and requirements of different edge computing scenarios.
2. Related Work
3. Background and Motivation
3.1. A Brief Introduction to SNN
3.2. Edge Computing
4. Framework and Design Method
4.1. Framework Overview
- Offline training stage: The design and training of models with different network structures are crucial, considering the diverse application scenarios in edge computing. These structures must meet varied accuracy and scale requirements. The training process can be implemented using frameworks like Nengo [39], CARLsim [40], and Brian 2 [41]. For clarity, Figure 3 shows a commonly adopted convolutional SNN for recognition applications, primarily used for classification tasks [42].
- Processing stage: As shown in Figure 4, this stage is pivotal to the entire process. We start by extracting information from trained SNNs, such as neuron connection structure, weight information, and spike communication details. The SNN topology is represented as a computing graph, as depicted in Figure 4a. Subsequently, we divide the SNN into smaller neuron clusters for more manageable processing, demonstrated in Figure 4b. Finally, we map these partitioned neuron clusters onto the neuromorphic core, as illustrated in Figure 4c. This process forms the core function of our EdgeMap toolchain. It optimizes hardware resource utilization and ensures computational efficiency.
- Edge computing stage: In this part, we briefly introduce two edge computing modes: individual edge computing scenarios and collaborative edge computing scenarios. The individual scenario involves a variety of computing modes and device types, whereas in collaborative scenarios, multiple edge devices cooperate via network connections to accomplish more computationally demanding tasks.
4.2. Neuromorphic Edge Computing Hardware Model
- The spike communication latency and energy consumption between devices are equivalent to those within the device’s neuromorphic cores.
- The energy required for spike processing is uniformly distributed across all edge devices.
- The NCs across different devices possess equal computational capacity, including identical quantities of neurons and synapses.
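Taken together, these assumptions admit a compact device model. The following Python sketch is ours, not part of the EdgeMap toolchain; the field names are illustrative, and the core capacities follow the parameter table in Section 5.1:

```python
# A minimal sketch of the neuromorphic edge hardware model under the three
# assumptions above. Names and defaults are illustrative, not EdgeMap's API.
from dataclasses import dataclass

@dataclass(frozen=True)
class NeuromorphicCore:
    max_neurons: int = 256        # identical capacity on every device (assumption 3)
    max_synapses: int = 64_000

@dataclass(frozen=True)
class EdgeDevice:
    mesh_width: int               # cores arranged as a 2D NoC mesh
    mesh_height: int
    core: NeuromorphicCore = NeuromorphicCore()

def hop_distance(a: tuple[int, int], b: tuple[int, int]) -> int:
    """Manhattan hop count between two core coordinates. By assumption 1,
    the same per-hop latency/energy applies to intra- and inter-device links,
    so one distance function covers both cases."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])
```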
4.3. Streaming-Based SNN Partition
Algorithm 1: Flow-based SNN partition
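As a reading aid, below is a minimal Python sketch of a streaming-based partitioner in the FENNEL [14] spirit; it is our illustration of the idea, not the verbatim Algorithm 1. Each neuron is streamed once and greedily assigned to the cluster that already holds most of its synaptic neighbors, subject to per-cluster neuron and synapse caps that model the fan-in/fan-out limits of neuromorphic cores:

```python
from collections import defaultdict

def stream_partition(neurons, edges, max_neurons, max_synapses):
    """Greedy single-pass streaming partition of an SNN graph (a sketch).

    neurons: iterable of neuron ids, streamed in a fixed order
    edges:   dict neuron -> set of neighbor ids (synaptic connections)
    Returns a dict neuron -> cluster id.
    """
    assignment = {}
    cluster_size = defaultdict(int)       # neurons currently in each cluster
    cluster_synapses = defaultdict(int)   # synapses charged to each cluster
    n_clusters = 0

    for v in neurons:
        best, best_score = None, float("-inf")
        for c in range(n_clusters):
            if cluster_size[c] >= max_neurons:
                continue
            if cluster_synapses[c] + len(edges[v]) > max_synapses:
                continue
            # reward synapses kept inside cluster c, penalize large clusters
            internal = sum(1 for u in edges[v] if assignment.get(u) == c)
            score = internal - 0.5 * cluster_size[c]
            if score > best_score:
                best, best_score = c, score
        if best is None:                  # all clusters full: open a new one
            best = n_clusters
            n_clusters += 1
        assignment[v] = best
        cluster_size[best] += 1
        cluster_synapses[best] += len(edges[v])
    return assignment
```

Because each neuron and synapse is examined once, such a pass stays roughly linear in the size of the network, which is the property that lets streaming approaches cut partitioning time relative to iterative schemes such as Kernighan–Lin.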
4.4. Multi-Objective Optimization Mapping
- (1) A neuron cluster can be mapped to only one computing device, thereby establishing a unique mapping relationship. With a binary indicator $x_{ij}$ denoting that cluster $i$ is placed on device $j$ (a notation we adopt here for readability), this constraint can be expressed as $\sum_{j} x_{ij} = 1$ for every cluster $i$.
- (2) A computing device can accommodate at most one neuron cluster, ensuring exclusivity of the mapping process: $\sum_{i} x_{ij} \le 1$ for every device $j$.
- (1) Total energy consumption: This objective captures the cumulative energy consumed by all spikes on the hardware, providing a direct reflection of the energy consumption. It is computed as Equation (6); see the sketch after this list.
- (2) Communication cost: This is the total communication count of the NCs, acting as a reflection of the system's communication efficiency. This metric has substantial implications for performance and power consumption. It is computed as Equation (7).
- (3) Throughput: This metric represents the maximum number of spikes traveling on a link between any two neighboring clusters. It is a pivotal metric directly influencing the system's performance, computed as Equation (8).
- (4) Average hop: This metric denotes the average number of hops a spike requires to travel between neuron clusters, mirroring the communication efficiency of the edge computing system. It is computed as Equation (9).
- (5) Average latency: This metric represents the average delay experienced by spikes on the NoC. Latency is critical to system responsiveness and the real-time nature of inference computations, particularly in edge computing applications requiring swift responses. It is computed as Equation (10).
- (6) Average congestion: This metric represents the average communication load on the NoC, which serves as the interconnect in edge scenarios, and is especially critical for applications demanding real-time responses. It is computed as Equation (11).
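One plausible formalization of these six objectives is sketched below in LaTeX. The symbols are ours, not necessarily the paper's notation: $s_{ij}$ is the spike count from cluster $i$ to cluster $j$, $m(i)$ the core hosting cluster $i$, $h(\cdot,\cdot)$ the Manhattan hop distance, $E_s$ and $E_h$ the per-spike and per-hop energies, $L_h$ the per-hop latency, and $\mathcal{L}$ the set of NoC links.

```latex
% One plausible formalization of the six objectives, under the symbol
% assumptions stated in the lead-in; not the paper's exact equations.
\begin{align}
E_{\mathrm{total}} &= \sum_{i,j} s_{ij}\,\bigl(E_s + h(m(i),m(j))\,E_h\bigr) && \text{(total energy)}\\
C &= \sum_{i,j} s_{ij}\,h(m(i),m(j)) && \text{(communication cost)}\\
T &= \max_{\text{adjacent } (i,j)} s_{ij} && \text{(throughput)}\\
\overline{H} &= \frac{\sum_{i,j} s_{ij}\,h(m(i),m(j))}{\sum_{i,j} s_{ij}} && \text{(average hop)}\\
\overline{L} &= \frac{\sum_{i,j} s_{ij}\,h(m(i),m(j))\,L_h}{\sum_{i,j} s_{ij}} && \text{(average latency)}\\
\overline{G} &= \frac{1}{|\mathcal{L}|}\sum_{\ell \in \mathcal{L}} \mathrm{load}(\ell) && \text{(average congestion)}
\end{align}
```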
Algorithm 2: Multi-objective optimized mapping for edge computing
Algorithm 3: Procedure for the optimal mapping result
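Algorithms 2 and 3 operate together: Algorithm 2 searches placements with NSGA-II [44] against the objectives above, and Algorithm 3 selects one result from the resulting Pareto front. The sketch below is a simplified stand-in, not the authors' implementation: it uses random search over placements and only two of the six objectives, and all names are illustrative. It is meant only to show the evaluate / non-dominated-filter / weighted-selection structure:

```python
import random

def evaluate(mapping, spikes, coords):
    """Return (energy proxy, communication cost) for one placement.
    mapping: cluster -> core index; coords: core index -> (x, y) mesh position;
    spikes:  dict (i, j) -> spike count between clusters i and j."""
    E_S, E_H = 1.0, 0.1          # illustrative per-spike / per-hop energies
    energy = comm = 0.0
    for (i, j), s in spikes.items():
        a, b = coords[mapping[i]], coords[mapping[j]]
        hops = abs(a[0] - b[0]) + abs(a[1] - b[1])
        energy += s * (E_S + hops * E_H)
        comm += s * hops
    return energy, comm

def pareto_front(points):
    """Keep candidates not dominated in both objectives (minimization)."""
    return [(p, m) for p, m in points
            if not any(q[0] <= p[0] and q[1] <= p[1] and q != p
                       for q, _ in points)]

def search_mapping(clusters, cores, spikes, coords, iters=2000, w=(0.5, 0.5)):
    """Random-search stand-in for Algorithm 2 plus Algorithm-3-style selection."""
    cand = []
    for _ in range(iters):
        perm = random.sample(cores, len(clusters))   # one cluster per core
        m = dict(zip(clusters, perm))
        cand.append((evaluate(m, spikes, coords), m))
    front = pareto_front(cand)
    # select from the front by a normalized weighted sum of the objectives
    lo = [min(p[k] for p, _ in front) for k in (0, 1)]
    hi = [max(p[k] for p, _ in front) for k in (0, 1)]
    def score(p):
        return sum(w[k] * (p[k] - lo[k]) / (hi[k] - lo[k] or 1.0)
                   for k in (0, 1))
    return min(front, key=lambda pm: score(pm[0]))[1]
```

Replacing the random sampler with NSGA-II's selection, crossover, and mutation over placement permutations recovers the structure of the paper's approach; the Pareto filter and weighted selection are unchanged.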
5. Evaluation Methodology
5.1. Experimental Environment
5.2. Evaluated Applications
5.3. Evaluated Metrics
- Energy consumption: This reflects the energy consumed by the neuromorphic hardware, a critical factor in edge device computing. It is modeled in Equation (6).
- Communication cost: This represents the communication cost incurred in multi-chip or multi-device edge computing scenarios. It is modeled in Equation (7).
- Throughput: This refers to the throughput of each application on the multi-core neuromorphic hardware. It is modeled in Equation (8).
- Average hop: This serves as an important indicator of communication efficiency in NoC-based neuromorphic hardware and is modeled in Equation (9). We further analyze the maximum hop count across the mapping schemes to capture extremes.
- Average latency: This metric is indicative of the time delay in processing and is essential for time-sensitive edge computing applications. It is modeled in Equation (10); we also assess the maximum latency observed in the mapping schemes.
- Average congestion: This metric provides insight into the load balance across the network. It is modeled in Equation (11), and we extend our analysis to the maximum congestion experienced in the mapping schemes.
- Execution time: This metric measures the duration required for a mapping toolchain to produce a mapping result, encompassing both the partitioning and mapping stages.
5.4. Comparison Approaches
6. Results and Discussion
6.1. Energy Consumption Analysis
6.2. Communication Cost Analysis
6.3. Throughput Analysis
6.4. Spike Hop Analysis
6.5. Spike Latency Analysis
6.6. Congestion Analysis
6.7. Execution Time Analysis
7. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Ghosh-Dastidar, S.; Adeli, H. Spiking neural networks. Int. J. Neural Syst. 2009, 19, 295–308.
- Wang, X.; Han, Y.; Leung, V.C.M.; Niyato, D.; Yan, X.; Chen, X. Convergence of Edge Computing and Deep Learning: A Comprehensive Survey. IEEE Commun. Surv. Tutor. 2020, 22, 869–904.
- Taherkhani, A.; Belatreche, A.; Li, Y.; Cosma, G.; Maguire, L.P.; McGinnity, T.M. A review of learning in biologically plausible spiking neural networks. Neural Netw. 2020, 122, 253–272.
- Wang, Y.; Xu, Y.; Yan, R.; Tang, H. Deep Spiking Neural Networks with Binary Weights for Object Recognition. IEEE Trans. Cogn. Dev. Syst. 2021, 13, 514–523.
- Bittar, A.; Garner, P.N. Surrogate Gradient Spiking Neural Networks as Encoders for Large Vocabulary Continuous Speech Recognition. arXiv 2022, arXiv:2212.01187.
- Bing, Z.; Meschede, C.; Röhrbein, F.; Huang, K.; Knoll, A.C. A Survey of Robotics Control Based on Learning-Inspired Spiking Neural Networks. Front. Neurorobot. 2018, 12, 35.
- Akopyan, F.; Sawada, J.; Cassidy, A.; Alvarez-Icaza, R.; Arthur, J.V.; Merolla, P.; Imam, N.; Nakamura, Y.Y.; Datta, P.; Nam, G.; et al. TrueNorth: Design and Tool Flow of a 65 mW 1 Million Neuron Programmable Neurosynaptic Chip. IEEE Trans. Comput. Aided Des. Integr. Circuits Syst. 2015, 34, 1537–1557.
- Davies, M.; Srinivasa, N.; Lin, T.; Chinya, G.N.; Cao, Y.; Choday, S.H.; Dimou, G.D.; Joshi, P.; Imam, N.; Jain, S.; et al. Loihi: A Neuromorphic Manycore Processor with On-Chip Learning. IEEE Micro 2018, 38, 82–99.
- Furber, S.B.; Lester, D.R.; Plana, L.A.; Garside, J.D.; Painkras, E.; Temple, S.; Brown, A.D. Overview of the SpiNNaker System Architecture. IEEE Trans. Comput. 2013, 62, 2454–2467.
- Rajendran, B.; Sebastian, A.; Schmuker, M.; Srinivasa, N.; Eleftheriou, E. Low-power neuromorphic hardware for signal processing applications: A review of architectural and system-level design approaches. IEEE Signal Process. Mag. 2019, 36, 97–110.
- Galluppi, F.; Davies, S.; Rast, A.; Sharp, T.; Plana, L.A.; Furber, S. A hierarchical configuration system for a massively parallel neural hardware platform. In Proceedings of the 9th Conference on Computing Frontiers, Cagliari, Italy, 15–17 May 2012; pp. 183–192.
- Rueckauer, B.; Bybee, C.; Goettsche, R.; Singh, Y.; Mishra, J.; Wild, A. NxTF: An API and Compiler for Deep Spiking Neural Networks on Intel Loihi. ACM J. Emerg. Technol. Comput. Syst. 2022, 18, 48:1–48:22.
- Balaji, A.; Catthoor, F.; Das, A.; Wu, Y.; Huynh, K.; Dell'Anna, F.; Indiveri, G.; Krichmar, J.L.; Dutt, N.D.; Schaafsma, S. Mapping Spiking Neural Networks to Neuromorphic Hardware. IEEE Trans. Very Large Scale Integr. Syst. 2020, 28, 76–86.
- Tsourakakis, C.E.; Gkantsidis, C.; Radunovic, B.; Vojnovic, M. FENNEL: Streaming graph partitioning for massive scale graphs. In Proceedings of the Seventh ACM International Conference on Web Search and Data Mining, WSDM 2014, New York, NY, USA, 24–28 February 2014; pp. 333–342.
- Varshika, M.L.; Balaji, A.; Corradi, F.; Das, A.; Stuijt, J.; Catthoor, F. Design of Many-Core Big Little μBrains for Energy-Efficient Embedded Neuromorphic Computing. In Proceedings of the 2022 Design, Automation & Test in Europe Conference & Exhibition, DATE 2022, Antwerp, Belgium, 14–23 March 2022; Bolchini, C., Verbauwhede, I., Vatajelu, I., Eds.; IEEE: Antwerp, Belgium, 2022; pp. 1011–1016.
- Stuijt, J.; Sifalakis, M.; Yousefzadeh, A.; Corradi, F. μBrain: An event-driven and fully synthesizable architecture for spiking neural networks. Front. Neurosci. 2021, 15, 664208.
- Amir, A.; Datta, P.; Risk, W.P.; Cassidy, A.S.; Kusnitz, J.A.; Esser, S.K.; Andreopoulos, A.; Wong, T.M.; Flickner, M.; Alvarez-Icaza, R.; et al. Cognitive computing programming paradigm: A Corelet Language for composing networks of neurosynaptic cores. In Proceedings of the 2013 International Joint Conference on Neural Networks, IJCNN 2013, Dallas, TX, USA, 4–9 August 2013; pp. 1–10.
- Lin, C.; Wild, A.; Chinya, G.N.; Lin, T.; Davies, M.; Wang, H. Mapping spiking neural networks onto a manycore neuromorphic architecture. In Proceedings of the 39th ACM SIGPLAN Conference on Programming Language Design and Implementation, PLDI 2018, Philadelphia, PA, USA, 18–22 June 2018; pp. 78–89.
- Wang, S.; Yu, Q.; Xie, T.; Ma, C.; Pei, J. Approaching the mapping limit with closed-loop mapping strategy for deploying neural networks on neuromorphic hardware. Front. Neurosci. 2023, 17, 1168864.
- Deng, L.; Wang, G.; Li, G.; Li, S.; Liang, L.; Zhu, M.; Wu, Y.; Yang, Z.; Zou, Z.; Pei, J.; et al. Tianjic: A unified and scalable chip bridging spike-based and continuous neural computation. IEEE J. Solid-State Circuits 2020, 55, 2228–2246.
- Ji, Y.; Zhang, Y.; Li, S.; Chi, P.; Jiang, C.; Qu, P.; Xie, Y.; Chen, W. NEUTRAMS: Neural network transformation and co-design under neuromorphic hardware constraints. In Proceedings of the 49th Annual IEEE/ACM International Symposium on Microarchitecture, MICRO 2016, Taipei, Taiwan, 15–19 October 2016; pp. 21:1–21:13.
- Li, S.; Guo, S.; Zhang, L.; Kang, Z.; Wang, S.; Shi, W.; Wang, L.; Xu, W. SNEAP: A fast and efficient toolchain for mapping large-scale spiking neural network onto NoC-based neuromorphic platform. In Proceedings of the 2020 on Great Lakes Symposium on VLSI, Virtual Event, China, 7–9 September 2020; pp. 9–14.
- Song, S.; Chong, H.; Balaji, A.; Das, A.; Shackleford, J.A.; Kandasamy, N. DFSynthesizer: Dataflow-based Synthesis of Spiking Neural Networks to Neuromorphic Hardware. ACM Trans. Embed. Comput. Syst. 2022, 21, 27:1–27:35.
- Kernighan, B.W.; Lin, S. An efficient heuristic procedure for partitioning graphs. Bell Syst. Tech. J. 1970, 49, 291–307.
- Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the International Conference on Neural Networks (ICNN'95), Perth, WA, Australia, 27 November–1 December 1995; pp. 1942–1948.
- Jin, O.; Xing, Q.; Li, Y.; Deng, S.; He, S.; Pan, G. Mapping Very Large Scale Spiking Neuron Network to Neuromorphic Hardware. In Proceedings of the 28th ACM International Conference on Architectural Support for Programming Languages and Operating Systems, ASPLOS 2023, Vancouver, BC, Canada, 25–29 March 2023; Aamodt, T.M., Jerger, N.D.E., Swift, M.M., Eds.; ACM: New York, NY, USA, 2023; Volume 3, pp. 419–432.
- Nair, M.V.; Indiveri, G. Mapping high-performance RNNs to in-memory neuromorphic chips. arXiv 2019, arXiv:1905.10692.
- Dang, K.N.; Doan, N.A.V.; Abdallah, A.B. MigSpike: A Migration Based Algorithms and Architecture for Scalable Robust Neuromorphic Systems. IEEE Trans. Emerg. Top. Comput. 2022, 10, 602–617.
- Cheng, X.; Hao, Y.; Xu, J.; Xu, B. LISNN: Improving Spiking Neural Networks with Lateral Interactions for Robust Object Recognition. In Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, IJCAI 2020, Yokohama, Japan, 11–17 July 2020; pp. 1519–1525.
- Hazan, H.; Saunders, D.J.; Khan, H.; Patel, D.; Sanghavi, D.T.; Siegelmann, H.T.; Kozma, R. BindsNET: A Machine Learning-Oriented Spiking Neural Networks Library in Python. Front. Neuroinform. 2018, 12, 89.
- Li, E.; Zeng, L.; Zhou, Z.; Chen, X. Edge AI: On-demand accelerating deep neural network inference via edge computing. IEEE Trans. Wirel. Commun. 2019, 19, 447–457.
- Firouzi, F.; Jiang, S.; Chakrabarty, K.; Farahani, B.J.; Daneshmand, M.; Song, J.; Mankodiya, K. Fusion of IoT, AI, Edge-Fog-Cloud, and Blockchain: Challenges, Solutions, and a Case Study in Healthcare and Medicine. IEEE Internet Things J. 2023, 10, 3686–3705.
- Zhang, H.; Jolfaei, A.; Alazab, M. A face emotion recognition method using convolutional neural network and image edge computing. IEEE Access 2019, 7, 159081–159089.
- Yang, S.; Gong, Z.; Ye, K.; Wei, Y.; Huang, Z.; Huang, Z. EdgeRNN: A compact speech recognition network with spatio-temporal features for edge computing. IEEE Access 2020, 8, 81468–81478.
- Bilal, K.; Erbad, A. Edge computing for interactive media and video streaming. In Proceedings of the 2017 Second International Conference on Fog and Mobile Edge Computing (FMEC), Valencia, Spain, 8–11 May 2017; pp. 68–73.
- Shi, W.; Cao, J.; Zhang, Q.; Li, Y.; Xu, L. Edge computing: Vision and challenges. IEEE Internet Things J. 2016, 3, 637–646.
- Liu, S.; Liu, L.; Tang, J.; Yu, B.; Wang, Y.; Shi, W. Edge computing for autonomous driving: Opportunities and challenges. Proc. IEEE 2019, 107, 1697–1716.
- Wang, B.; Li, M.; Jin, X.; Guo, C. A reliable IoT edge computing trust management mechanism for smart cities. IEEE Access 2020, 8, 46373–46399.
- Bekolay, T.; Bergstra, J.; Hunsberger, E.; DeWolf, T.; Stewart, T.C.; Rasmussen, D.; Choo, F.; Voelker, A.; Eliasmith, C. Nengo: A Python tool for building large-scale functional brain models. Front. Neuroinform. 2013, 7, 48.
- Niedermeier, L.; Chen, K.; Xing, J.; Das, A.; Kopsick, J.; Scott, E.; Sutton, N.; Weber, K.; Dutt, N.D.; Krichmar, J.L. CARLsim 6: An Open Source Library for Large-Scale, Biologically Detailed Spiking Neural Network Simulation. In Proceedings of the International Joint Conference on Neural Networks, IJCNN 2022, Padua, Italy, 18–23 July 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 1–10.
- Stimberg, M.; Brette, R.; Goodman, D.F. Brian 2, an intuitive and efficient neural simulator. eLife 2019, 8, e47314.
- Wei, D.; Wang, J.; Niu, X.; Li, Z. Wind speed forecasting system based on gated recurrent units and convolutional spiking neural networks. Appl. Energy 2021, 292, 116842.
- Catania, V.; Mineo, A.; Monteleone, S.; Palesi, M.; Patti, D. Noxim: An open, extensible and cycle-accurate network on chip simulator. In Proceedings of the 2015 IEEE 26th International Conference on Application-Specific Systems, Architectures and Processors (ASAP), Toronto, ON, Canada, 27–29 July 2015; IEEE: Piscataway, NJ, USA, 2015; pp. 162–163.
- Deb, K.; Agrawal, S.; Pratap, A.; Meyarivan, T. A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Trans. Evol. Comput. 2002, 6, 182–197.
- Deng, L. The MNIST Database of Handwritten Digit Images for Machine Learning Research [Best of the Web]. IEEE Signal Process. Mag. 2012, 29, 141–142.
- Rueckauer, B.; Liu, S.C. Conversion of analog to spiking neural networks using sparse temporal coding. In Proceedings of the 2018 IEEE International Symposium on Circuits and Systems (ISCAS), Florence, Italy, 27–30 May 2018; pp. 1–5.
- Xiao, H.; Rasul, K.; Vollgraf, R. Fashion-MNIST: A novel image dataset for benchmarking machine learning algorithms. arXiv 2017, arXiv:1708.07747.
- Xu, X.; Liu, H. ECG heartbeat classification using convolutional neural networks. IEEE Access 2020, 8, 8614–8619.
- Iandola, F.N.; Han, S.; Moskewicz, M.W.; Ashraf, K.; Dally, W.J.; Keutzer, K. SqueezeNet: AlexNet-level accuracy with 50× fewer parameters and <0.5 MB model size. arXiv 2016, arXiv:1602.07360.
- Iglovikov, V.; Shvets, A. Ternausnet: U-net with VGG11 encoder pre-trained on imagenet for image segmentation. arXiv 2018, arXiv:1801.05746.
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
- Rueckauer, B.; Lungu, I.A.; Hu, Y.; Pfeiffer, M.; Liu, S.C. Conversion of continuous-valued deep networks to efficient event-driven networks for image classification. Front. Neurosci. 2017, 11, 682.
| Parameter | Neurons/Core | Synapses/Core |  |  |  |  |
|---|---|---|---|---|---|---|
| Value | 256 | 64k | 1 | 0.1 | 1 | 0.01 |
| Applications | MNIST-MLP | MNIST-LeNet | Fashion-MNIST | Heart Class | CIFAR10-LeNet | CIFAR10-AlexNet | CIFAR10-VGG11 | CIFAR10-ResNet |
|---|---|---|---|---|---|---|---|---|
| Topology | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
| Neuron number | 1193 | 7403 | 1393 | 17,001 | 11,461 | 794,232 | 9,986,862 | 9,675,543 |
| Synapses | 97,900 | 377,990 | 444,600 | 773,112 | 804,614 | 39,117,304 | 47,737,200 | 48,384,204 |
| Total spikes | 358,000 | 1,555,986 | 10,846,940 | 2,209,232 | 7,978,094 | 574,266,873 | 796,453,842 | 5,534,290,865 |
| Toolchains | SpiNeMap [13] | SNEAP [22] | DFSynthesizer [23] | NEUTRAMS [21] | EdgeMap |
|---|---|---|---|---|---|
| Partition algorithm | Kernighan–Lin (KL) | METIS | Greedy algorithm | Kernighan–Lin (KL) | Streaming-based |
| Partition objective | Communication cost | Communication cost | Resource utilization | Communication cost | Communication cost |
| Mapping algorithm | PSO | SA | LSA | – | NSGA-II |
| Mapping objective | Communication cost | Average hop | Energy consumption | – | Communication cost & energy consumption |
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).