In-Network Learning: Distributed Training and Inference in Networks
Abstract
1. Introduction
1.1. Contributions
1.2. Outline and Notation
2. Network Inference: Problem Formulation
3. Proposed Solution: In-Network Learning and Inference
3.1. A Specific Model: Fusion of Inference
3.1.1. Inference Phase
3.1.2. Training Phase
3.2. General Model: Fusion and Propagation of Inference
3.2.1. Inference Phase
3.2.2. Training Phase
3.3. Bandwidth Requirements
4. Experimental Results
4.1. Experiment 1
4.2. Experiment 2
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Appendix A. Proof of Theorem 1
Appendix A.1. Codebook Generation
Appendix A.2. Compression of the Observations
Appendix A.3. Transmission of the Compression Indices over the Graph Network
Appendix A.4. Decompression and Estimation
Appendix B. Proof of Proposition 1
Appendix C. Proof of Lemma 1
Appendix D. Proof of Lemma 2
References
Bandwidth requirement:

| | Federated Learning | Split Learning | In-Network Learning |
|---|---|---|---|
| VGG 16, 50,000 data points | 4427 Gbits | 324 Gbits | 0.16 Gbits |
| ResNet 50, 50,000 data points | 820 Gbits | 441 Gbits | 0.16 Gbits |
| VGG 16, 500,000 data points | 4427 Gbits | 1046 Gbits | 1.6 Gbits |
| ResNet 50, 500,000 data points | 820 Gbits | 1164 Gbits | 1.6 Gbits |
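The pattern in the table above can be illustrated with a back-of-the-envelope model: federated learning exchanges full model updates each round, so its cost depends on the model size and number of rounds but not on the dataset size, whereas split learning and in-network learning transmit per-sample representations, so their cost grows linearly with the number of data points. The sketch below is a hypothetical accounting with illustrative parameter values (`model_params`, `activation_size`, `latent_size` are assumptions), not the paper's exact measurement setup.

```python
def federated_bandwidth(model_params, rounds, bytes_per_param=4):
    # Each round, every client uploads a model update and downloads the
    # aggregated model: cost is independent of the number of data points.
    return 2 * rounds * model_params * bytes_per_param

def split_bandwidth(n_samples, activation_size, epochs, bytes_per_value=4):
    # Activations go up and gradients come back for every sample,
    # so cost scales linearly with the dataset size.
    return 2 * epochs * n_samples * activation_size * bytes_per_value

def in_network_bandwidth(n_samples, latent_size, epochs, bytes_per_value=4):
    # Same per-sample scaling as split learning, but the exchanged latent
    # representations are far smaller than raw cut-layer activations.
    return 2 * epochs * n_samples * latent_size * bytes_per_value
```

Under this model, growing the dataset tenfold leaves the federated cost unchanged but multiplies the split and in-network costs by ten, matching the trend between the 50,000- and 500,000-point rows of the table.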
Bandwidth requirement:

| | Federated Learning | Split Learning | In-Network Learning |
|---|---|---|---|
| 250,000 data points | 2.96 GB | 2.5 GB | 0.2 GB |
| 2,500,000 data points | 2.96 GB | 11.71 GB | 2.05 GB |
Bandwidth requirement:

| | Federated Learning | Split Learning | In-Network Learning |
|---|---|---|---|
| 250,000 data points | 0.6 GB | 1.32 GB | 0.2 GB |
| 2,500,000 data points | 0.6 GB | 10.53 GB | 2.05 GB |
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Moldoveanu, M.; Zaidi, A. In-Network Learning: Distributed Training and Inference in Networks. Entropy 2023, 25, 920. https://doi.org/10.3390/e25060920