Fast and Accurate SNN Model Strengthening for Industrial Applications
Abstract
1. Introduction
2. Related Work
3. Our Model Strengthening Method
3.1. Basic Idea
3.2. Model Strengthening Method
Algorithm 1: Our Model Strengthening Method (see the hedged sketch after this outline).
4. Experiments
4.1. Experiment Settings
4.2. Model Strengthening Accuracy
4.3. Model Strengthening Time Cost
4.4. Performance Tradeoff
4.5. Model Strengthening Accuracy vs. Model Architecture
4.6. Performance Comparison with Two Baselines
5. Conclusions
Author Contributions
Funding
Conflicts of Interest
References
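The algorithm box above survives only as a caption, so the following is a rough, generic sketch of the kind of class-level unlearning step the result tables imply: fine-tune the trained model so that the malicious class is forgotten while benign-class accuracy is rehearsed. This is not the authors' Algorithm 1; the loss design, the function and loader names, and the hyperparameters below are all assumptions.

```python
# Hypothetical sketch only -- NOT the paper's Algorithm 1.
# Fine-tunes a trained classifier so one "malicious" class is forgotten
# (its predictions pushed toward uniform) while benign classes are
# rehearsed with ordinary cross-entropy, matching the qualitative
# behaviour in the result tables (malicious accuracy drops to ~3-8%).
import torch
import torch.nn.functional as F

def strengthen(model, benign_loader, malicious_loader, epochs=5, lr=1e-3):
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for (xb, yb), (xm, _) in zip(benign_loader, malicious_loader):
            opt.zero_grad()
            # Retain: standard cross-entropy on benign samples.
            loss = F.cross_entropy(model(xb), yb)
            # Forget: drive malicious-class outputs toward the uniform
            # distribution, i.e. maximize predictive entropy there.
            logits = model(xm)
            uniform = torch.full_like(logits, 1.0 / logits.size(1))
            loss = loss + F.kl_div(
                F.log_softmax(logits, dim=1), uniform, reduction="batchmean"
            )
            loss.backward()
            opt.step()
    return model
```

For a spiking model trained with surrogate gradients (e.g., via snnTorch), `model(x)` would return rate-coded logits and the same loop would apply unchanged.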
| Dataset | Untrained Data | Training Data | Test Data |
|---|---|---|---|
| NEU-CLS-64 | 1298 | 10,384 | 1445 |
| NEU-CLS-200 | 240 | 1200 | 360 |
| Dataset | Learning Rate | Mini-Batch Size | Input Data Size | Class |
|---|---|---|---|---|
| NEU-CLS-64 | 0.01 | 32 | 64 × 64 | 1 |
| NEU-CLS-200 | 0.01 | 32 | 200 × 200 | 2 |
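For concreteness, these settings map onto a small configuration object; a minimal sketch is below. The field names are made up here, and the input sizes are inferred from the dataset names (64 × 64 and 200 × 200 NEU surface-defect images), not values confirmed by the paper.

```python
# Hypothetical training configuration mirroring the table above; the
# input sizes are assumptions inferred from the dataset names.
from dataclasses import dataclass

@dataclass
class TrainConfig:
    dataset: str
    input_size: tuple
    learning_rate: float = 0.01
    mini_batch_size: int = 32

cfg64 = TrainConfig("NEU-CLS-64", (64, 64))
cfg200 = TrainConfig("NEU-CLS-200", (200, 200))
```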
| Dataset | Classes | Initial Model (%) | Retraining (%) | Ours (%) |
|---|---|---|---|---|
| NEU-CLS-64 | malicious class | 99.31 | 0 | 6.39 |
| NEU-CLS-64 | benign classes | 95.65 | 84.35 | 83.17 |
| NEU-CLS-200 | malicious class | 97.08 | 0 | 7.5 |
| NEU-CLS-200 | benign classes | 89.5 | 82.58 | 77.5 |
| Dataset | Retraining (min) | Ours (min) | Speed-Up |
|---|---|---|---|
| NEU-CLS-64 | 37.24 | 3.19 | 11.67× |
| NEU-CLS-200 | 13.61 | 0.50 | 27.22× |
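The speed-up column is simple arithmetic over the two time columns, as the short check below confirms.

```python
# Speed-up = retraining time / our method's time, from the table above.
for name, t_retrain, t_ours in [("NEU-CLS-64", 37.24, 3.19),
                                ("NEU-CLS-200", 13.61, 0.50)]:
    print(f"{name}: {t_retrain / t_ours:.2f}x speed-up")
# NEU-CLS-64: 11.67x speed-up
# NEU-CLS-200: 27.22x speed-up
```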
NEU-CLS-64:

| Malicious Class (%) | Benign Classes (%) | Time Cost (min) |
|---|---|---|
| 99.31 | 95.64 | ∞ |
| 6.86 | 85.32 | 3.08 |
| 7.16 | 84.94 | 2.79 |
| 7.24 | 84.91 | 2.86 |
| 7.55 | 70.04 | 2.83 |
NEU-CLS-200:

| Malicious Class (%) | Benign Classes (%) | Time Cost (min) |
|---|---|---|
| 96.67 | 89.75 | ∞ |
| 6.25 | 77.67 | 0.76 |
| 7.08 | 77.83 | 0.58 |
| 7.08 | 77.75 | 0.58 |
| 6.67 | 77.83 | 0.56 |
| Label | Class | Initial Model (%) | Ours (%) | Time Cost (min) |
|---|---|---|---|---|
| ‘cr’ | malicious class | 99.77 | 6.24 | 2.2441 |
| ‘cr’ | benign classes | 95.58 | 84.07 | |
| ‘gg’ | malicious class | 99.31 | 6.39 | 3.1923 |
| ‘gg’ | benign classes | 95.65 | 83.17 | |
| ‘pa’ | malicious class | 99.77 | 7.7 | 4.1807 |
| ‘pa’ | benign classes | 95.58 | 59.37 | |
| ‘ps’ | malicious class | 92.22 | 7.55 | 1.4699 |
| ‘ps’ | benign classes | 95.65 | 93.01 | |
| ‘rs’ | malicious class | 99.23 | 6.78 | 7.4573 |
| ‘rs’ | benign classes | 95.65 | 52.04 | |
| ‘sp’ | malicious class | 83.98 | 3.62 | 1.3547 |
| ‘sp’ | benign classes | 97.55 | 76.87 | |
| Label | Class | Initial Model (%) | Ours (%) | Time Cost (min) |
|---|---|---|---|---|
| ‘rp’ | malicious class | 100 | 7.78 | 27.67 |
| ‘rp’ | benign classes | 99.47 | 53.22 | |
| ‘sc’ | malicious class | 100 | 4.55 | 42.49 |
| ‘sc’ | benign classes | 99.47 | 30.55 | |
| Classes | Initial Model | Retraining | SISA | Ours |
|---|---|---|---|---|
| malicious class (%) | 99.31 | 0 | 0 | 6.39 |
| benign classes (%) | 95.65 | 84.35 | 78.75 | 83.17 |
| time cost (min) | / | 37.24 | 26.8 | 3.19 |
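For context, the SISA baseline in this table is the sharded-retraining scheme of Bourtoule et al.: the training set is split into shards with one sub-model each, and an unlearning request retrains only the shards that contain the removed samples. A minimal sketch of that idea follows; `train_submodel` and the shard record format are hypothetical placeholders, not the baseline's actual implementation.

```python
# Minimal sketch of SISA-style unlearning (Bourtoule et al.): only
# shards containing removed examples are retrained; untouched shards
# keep their cached sub-models, which is why SISA is faster than full
# retraining (26.8 min vs. 37.24 min in the table above).
def sisa_unlearn(shards, cached_models, train_submodel, removed_ids):
    updated = []
    for shard, cached in zip(shards, cached_models):
        if any(ex["id"] in removed_ids for ex in shard):
            kept = [ex for ex in shard if ex["id"] not in removed_ids]
            updated.append(train_submodel(kept))  # retrain affected shard
        else:
            updated.append(cached)  # reuse the cached sub-model
    return updated  # at inference, sub-model outputs are aggregated (e.g., majority vote)
```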
Citation: Zhou, D.; Chen, W.; Chen, K.; Mi, B. Fast and Accurate SNN Model Strengthening for Industrial Applications. Electronics 2023, 12, 3845. https://doi.org/10.3390/electronics12183845