On Dynamic Node Cooperation Strategy Design for Energy Efficiency in Hierarchical Federated Learning
Abstract
1. Introduction
- (1) We propose node cooperation for energy cost minimization under a delay constraint in HFL. We formulate the optimization problem and prove that its decision version is NP-hard. (A simplified sketch of the per-node decision follows this list.)
- (2) We design an online node cooperation strategy in which each node training a local model dynamically selects the optimal relay node. Relay nodes help forward the model parameters while minimizing energy cost.
- (3) We conduct thorough experiments to evaluate the performance of the proposed strategy. Energy cost is reduced by 24.49% and 22.04% compared with HierFAVG and SNNR, respectively, and by 20.20% and 13.54% compared with the existing CFL and THF schemes, respectively.
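To make the per-node decision concrete, the following is a minimal sketch, assuming the energy cost and delay of each upload option (direct or via a candidate relay) are already known; the paper's actual cost model from Section 3 is not reproduced here, and the option names and numbers below are hypothetical.

```python
# Illustrative sketch only: energy/delay values are placeholders for the
# paper's cost model, not the authors' actual expressions.
from dataclasses import dataclass

@dataclass
class UploadOption:
    """One way for a node to deliver its model parameters to the edge server."""
    name: str          # "direct" or the id of a candidate relay node
    energy: float      # total energy cost of this option (assumed known)
    delay: float       # total delay of this option (assumed known)

def pick_min_energy_option(options, delay_budget):
    """Return the feasible option with minimum energy, or None if none fits the budget."""
    feasible = [o for o in options if o.delay <= delay_budget]
    return min(feasible, key=lambda o: o.energy) if feasible else None

# Example: a node compares direct upload with two encountered relays.
options = [
    UploadOption("direct",  energy=5.0, delay=2.0),
    UploadOption("relay_3", energy=3.2, delay=1.5),
    UploadOption("relay_7", energy=2.8, delay=2.6),
]
best = pick_min_energy_option(options, delay_budget=2.0)
print(best.name if best else "no feasible option")  # -> "relay_3"
```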
2. Related Work
3. System Model and Problem Definition
3.1. System Model of HFL
3.2. Opportunistic Communication between Nodes
3.3. Computation and Communication Model of HFL
3.4. Problem Definition
4. Design of Online Node Cooperation Strategy
4.1. Optimal Relay Node Selection for Node Cooperation
Algorithm 1 Online node cooperation algorithm (OSRN)
Input: number of nodes, number of edge servers, delay constraints of all nodes
Output: the optimal relay node and optimal stopping time slot chosen by each node so that all nodes upload parameters successfully
for each time slot do
  for each node that has not yet uploaded do
    record the neighboring nodes it encounters in the current time slot;
    for each neighboring node met in the current time slot, compute its optimization value under the remaining delay constraint;
    among all encountered nodes, select the neighbor with the largest optimization value;
    if the number of relay nodes currently encountered by the node meets the stopping condition, then compute the reward obtained by relaying using Equation (25);
    if relaying is beneficial, return the selected relay node and the current time slot; otherwise, the node waits for the next time slot;
  if the remaining delay is exhausted in the current time slot, upload parameters directly and return the current time slot;
return the relay node and stopping time slot of all nodes uploading parameters;
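The following is a minimal, optimal-stopping-style sketch of the OSRN loop, assuming the neighbors encountered in each slot are observable; `opt_value` and `stop_threshold` are placeholder functions standing in for the paper's optimization value and the threshold derived from Equation (25). It is illustrative only, not the authors' implementation.

```python
import random

def osrn_select_relay(node, neighbors_by_slot, delay_budget, opt_value, stop_threshold):
    """Online relay selection: in each slot, score encountered neighbors and
    stop (accept the best relay) once the stopping rule is satisfied."""
    for t in range(delay_budget):
        remaining = delay_budget - t
        candidates = neighbors_by_slot.get(t, [])
        if not candidates:
            continue  # no encounter this slot; wait for the next one
        # Score every neighbor met in this slot and keep the best one.
        best = max(candidates, key=lambda nbr: opt_value(node, nbr, remaining))
        # Stopping rule: accept the best relay seen in this slot if its value
        # exceeds the slot's stopping threshold; otherwise keep waiting.
        if opt_value(node, best, remaining) >= stop_threshold(remaining):
            return best, t
    # Delay budget exhausted: upload directly to the edge server.
    return None, delay_budget - 1

# Toy usage with random encounters and hypothetical scoring functions.
random.seed(0)
slots = {t: [f"n{random.randint(1, 9)}"] for t in range(5) if random.random() > 0.3}
relay, slot = osrn_select_relay(
    "n0", slots, delay_budget=5,
    opt_value=lambda node, nbr, rem: hash((node, nbr)) % 10,  # hypothetical score
    stop_threshold=lambda rem: 10 - 2 * rem,                  # hypothetical threshold
)
print(relay, slot)
```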
4.2. Cooperative Hierarchical Federated Learning Algorithm
Algorithm 2 Cooperative hierarchical federated learning
Input: number of nodes, number of edge servers, local accuracy, initial model parameters of all nodes
Output: the global model
for each global aggregation round do
  each node in parallel updates its local model using Equation (28);
  use Algorithm 1 to obtain the optimal relay node for every node;
  each node sends its parameters to its optimal relay node, which then uploads the parameters to the edge server;
  each edge server aggregates the received parameters using Equation (29);
  the cloud server aggregates the edge models and uses Equation (30) to obtain the global model;
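Below is a minimal sketch of the cooperative HFL loop, assuming model parameters are 1-D NumPy arrays and that edge and cloud aggregation are FedAvg-style weighted averages standing in for Equations (29) and (30); `local_update` and `select_relay` are placeholders for the paper's local training step (Equation (28)) and Algorithm 1, and relaying only changes who transmits, not the parameter values.

```python
import numpy as np

def cooperative_hfl(nodes, edges, global_model, rounds, local_update, select_relay):
    """Sketch of hierarchical FL with relay-assisted upload.
    `nodes` maps node id -> (edge id, local dataset size)."""
    for _ in range(rounds):
        # 1. Local training (in parallel in the paper; sequential here for simplicity).
        local = {n: local_update(global_model, n) for n in nodes}
        # 2. Each node uploads via its chosen relay or directly (energy decision).
        uploads = {n: (select_relay(n), local[n]) for n in nodes}
        # 3. Edge aggregation: weighted average over the nodes of each edge server.
        edge_models = {}
        for e in edges:
            members = [n for n, (eid, _) in nodes.items() if eid == e]
            sizes = np.array([nodes[n][1] for n in members], dtype=float)
            stacked = np.stack([uploads[n][1] for n in members])
            edge_models[e] = (stacked * (sizes / sizes.sum())[:, None]).sum(axis=0)
        # 4. Cloud aggregation: weighted average of edge models -> new global model.
        edge_sizes = np.array([sum(nodes[n][1] for n in nodes if nodes[n][0] == e)
                               for e in edges], dtype=float)
        stacked = np.stack([edge_models[e] for e in edges])
        global_model = (stacked * (edge_sizes / edge_sizes.sum())[:, None]).sum(axis=0)
    return global_model
```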
5. Performance Evaluation
5.1. Theoretical Analysis
5.2. Simulation Results and Analysis
5.2.1. Simulation Environment
5.2.2. Experimental Results Analysis
6. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Zhou, Z.; Chen, X.; Li, E.; Zeng, L.; Luo, K.; Zhang, J. Edge intelligence: Paving the last mile of artificial intelligence with edge computing. Proc. IEEE 2019, 107, 1738–1762. [Google Scholar] [CrossRef]
- Put More of Your Business Data to Work—From Edge to Cloud. Available online: https://www.seagate.com/files/www-content/our-story/rethink-data/files/Rethink_Data_Report_2020.pdf (accessed on 26 November 2022).
- Li, P.; Li, J.; Huang, Z.; Li, T.; Gao, C.-Z.; Yiu, S.-M.; Chen, K. Multi-key privacy-preserving deep learning in cloud computing. Future Gener. Comput. Syst. 2017, 74, 76–85. [Google Scholar] [CrossRef]
- Custers, B.; Sears, A.M.; Dechesne, F.; Georgieva, I.; Tani, T.; van der Hof, S. EU Personal Data Protection in Policy and Practice; TMC Asser Press: The Hague, The Netherlands, 2019; Volume 29, pp. 1–249. [Google Scholar]
- McMahan, B.; Moore, E.; Ramage, D.; Hampson, S.; Arcas, B.A. Communication-efficient learning of deep networks from decentralized data. In Artificial Intelligence and Statistics; PMLR: New York, NY, USA, 2017; pp. 1273–1282. [Google Scholar]
- Lim, W.Y.B.; Luong, N.C.; Hoang, D.T.; Jiao, Y.; Liang, Y.-C.; Yang, Q.; Niyato, D.; Miao, C. Federated learning in mobile edge networks: A comprehensive survey. IEEE Commun. Surv. Tutor. 2020, 22, 2031–2063. [Google Scholar] [CrossRef]
- Kong, X.; Wu, Y.; Wang, H.; Xia, F. Edge Computing for Internet of Everything: A Survey. IEEE Internet Things J. 2022, 9, 23472–23485. [Google Scholar] [CrossRef]
- Kong, X.; Wang, K.; Hou, M.; Hao, X.; Shen, G.; Chen, X.; Xia, F. A Federated Learning-based License Plate Recognition Scheme for 5G-enabled Internet of Vehicles. IEEE Trans. Ind. Inform. 2021, 17, 8523–8530. [Google Scholar] [CrossRef]
- Liu, L.; Zhang, J.; Song, S.H.; Letaief, K.B. Client-edge-cloud hierarchical federated learning. In Proceedings of the IEEE International Conference on Communications, Dublin, Ireland, 7–11 June 2020; pp. 1–6. [Google Scholar]
- Abdellatif, A.A.; Mhaisen, N.; Mohamed, A.; Erbad, A.; Guizani, M.; Dawy, Z.; Nasreddine, W. Communication-efficient hierarchical federated learning for IoT heterogeneous systems with imbalanced data. Future Gener. Comput. Syst. 2022, 128, 406–419. [Google Scholar] [CrossRef]
- Saadat, H.; Aboumadi, A.; Mohamed, A.; Erbad, A.; Guizani, M. Hierarchical federated learning for collaborative IDS in IoT applications. In Proceedings of the 2021 10th Mediterranean Conference on Embedded Computing, Budva, Montenegro, 7–10 June 2021; pp. 1–6. [Google Scholar]
- Bonawitz, K.; Eichner, H.; Grieskamp, W.; Huba, D.; Ingerman, A.; Ivanov, V.; Kiddon, C.; Konečný, J.; Mazzocchi, S.; McMahan, H.B.; et al. Towards federated learning at scale: System design. Proc. Mach. Learn. Syst. 2019, 1, 374–388. [Google Scholar]
- Wang, S.; Tuor, T.; Salonidis, T.; Leung, K.K.; Makaya, C.; He, T.; Chan, K. Adaptive federated learning in resource constrained edge computing systems. IEEE J. Sel. Areas Commun. 2019, 37, 1205–1221. [Google Scholar] [CrossRef]
- Yao, X.; Huang, C.; Sun, L. Two-stream federated learning: Reduce the communication costs. In Proceedings of the 2018 IEEE Visual Communications and Image Processing (VCIP), Taichung, Taiwan, 9–12 December 2018; pp. 1–4. [Google Scholar]
- Liu, L.; Zhang, J.; Song, S.H. Edge-Assisted Hierarchical Federated Learning with Non-IID Data. arXiv 2019, arXiv:1905.06641. [Google Scholar]
- Reisizadeh, A.; Mokhtari, A.; Hassani, H.; Jadbabaie, A.; Pedarsani, R. Fedpaq: A communication-efficient federated learning method with periodic averaging and quantization. Int. Conf. Artif. Intell. Stat. 2020, 108, 2021–2031. [Google Scholar]
- Wang, S.; Lee, M.; Hosseinalipour, S.; Morabito, R.; Chiang, M.; Brinton, C.G. Device sampling for heterogeneous federated learning: Theory, algorithms, and implementation. In Proceedings of the IEEE INFOCOM 2021-IEEE Conference on Computer Communications, Vancouver, BC, Canada, 10–13 May 2021; pp. 1–10. [Google Scholar]
- Konecny, J.; McMahan, H.B.; Felix, X.Y.; Richtárik, P.; Suresh, A.T.; Bacon, D. Federated Learning: Strategies for Improving Communication Efficiency. arXiv 2016, arXiv:1610.05492. [Google Scholar]
- Alistarh, D.; Grubic, D.; Li, J.; Tomioka, R.; Vojnovic, M. QSGD: Communication-efficient SGD via Gradient Quantization and Encoding. Adv. Neural Inf. Process. Syst. 2017, 30, 1709–1720. [Google Scholar]
- Suresh, A.T.; Felix, X.Y.; Kumar, S.; McMahan, H.B. Distributed mean estimation with limited communication. In Proceedings of the 34th International Conference on Machine Learning, Sydney, Australia, 6–11 August 2017; Volume 70, pp. 3329–3337. [Google Scholar]
- Zhang, X.; Fang, M.; Liu, J.; Zhu, Z. Private and communication-efficient edge learning: A sparse differential gaussian-masking distributed SGD approach. In Proceedings of the Twenty-First International Symposium on Theory, Algorithmic Foundations, and Protocol Design for Mobile Networks and Mobile Computing, Shanghai, China, 26–29 July 2020; pp. 261–270. [Google Scholar]
- Zheng, H.; Gao, M.; Chen, Z.; Feng, X. A distributed hierarchical deep computation model for federated learning in edge computing. IEEE Trans. Ind. Inform. 2021, 17, 7946–7956. [Google Scholar] [CrossRef]
- Liu, L.; Zhang, J.; Song, S.; Letaief, K.B. Hierarchical quantized federated learning: Convergence analysis and system design. arXiv 2021, arXiv:2103.14272. [Google Scholar]
- Ren, L.; Liu, Y.; Wang, X.; Lv, J.; Deen, M.J. Cloud-Edge based Lightweight Temporal Convolutional Networks for Remaining Useful Life Prediction in IIoT. IEEE Internet Things J. 2020, 8, 12578–12587. [Google Scholar] [CrossRef]
- Lou, S.; Chen, X.; Wu, Q.; Zhou, Z.; Yu, S. HFEL: Joint edge association and resource allocation for cost-efficient hierarchical federated edge learning. IEEE Trans. Wirel. Commun. 2020, 19, 6535–6548. [Google Scholar]
- Wang, X.; Yang, L.T.; Ren, L.; Wang, Y.; Deen, M.J. A Tensor-based Computing and Optimization Model for Intelligent Edge Services. IEEE Netw. 2022, 36, 40–44. [Google Scholar] [CrossRef]
- Yi, Y.; Zhang, Z.; Yang, L.T.; Wang, X.; Gan, C. Edge-aided Control Dynamics for Information Diffusion in Social Internet of Things. Neurocomputing 2021, 485, 274–284. [Google Scholar] [CrossRef]
- Wang, Z.; Xu, H.; Liu, J.; Huang, H.; Qiao, C.; Zhao, Y.; Wang, Z.; Xu, H.; Liu, J.; Huang, H.; et al. Resource-Efficient Federated Learning with Hierarchical Aggregation in Edge Computing. In Proceedings of the IEEE INFOCOM 2021-IEEE Conference on Computer Communications, Vancouver, BC, Canada, 10–13 May 2021; pp. 1–10. [Google Scholar]
- Hosseinalipour, S.; Brinton, C.G.; Aggarwal, V.; Dai, H.; Chiang, M. From federated to fog learning: Distributed machine learning over heterogeneous wireless networks. IEEE Commun. Mag. 2020, 58, 41–47. [Google Scholar] [CrossRef]
- Liu, Y.; Pan, C.; You, L.; Han, W. D2D-Enabled User Cooperation in Massive MIMO. IEEE Syst. J. 2020, 14, 4406–4417. [Google Scholar] [CrossRef]
- Park, S.H.; Jin, X. Joint Secure Design of Downlink and D2D Cooperation Strategies for Multi-User Systems. IEEE Signal Process. Lett. 2021, 28, 917–921. [Google Scholar] [CrossRef]
- Mustafa, H.A.; Shakir, M.Z.; Imran, M.A.; Tafazolli, R. Distance Based Cooperation Region for D2D Pair. In Proceedings of the 2015 IEEE 81st Vehicular Technology Conference (VTC Spring), Glasgow, UK, 11–14 May 2015; pp. 1–6. [Google Scholar]
- Asad, M.; Moustafa, A.; Rabhi, F.A.; Aslam, M. THF: 3-Way Hierarchical Framework for Efficient Client Selection and Resource Management in Federated Learning. IEEE Internet Things J. 2021, 9, 11085–11097. [Google Scholar] [CrossRef]
- Wang, Z.; Wang, Y.; Wang, L.; Wang, T.; Xu, D. A delay-driven early caching and sharing strategy for D2D transmission network. In Proceedings of the 2020 IEEE 91st Vehicular Technology Conference (VTC2020-Spring), Antwerp, Belgium, 25–28 May 2020; pp. 1–5. [Google Scholar]
- Qiu, M.; Chen, Z.; Liu, M. Low-power low-latency data allocation for hybrid scratch-pad memory. IEEE Embed. Syst. Lett. 2014, 6, 69–72. [Google Scholar] [CrossRef]
- Zhao, L.; Ran, Y.; Wang, H.; Wang, J.; Luo, J. Towards Cooperative Caching for Vehicular Networks with Multi-level Federated Reinforcement Learning. In Proceedings of the ICC 2021-IEEE International Conference on Communications, Montreal, QC, Canada, 14–23 June 2021; pp. 1–6. [Google Scholar]
- Cheng, R.; Sun, Y.; Liu, Y.; Xia, L.; Sun, S.; Imran, M.A. A Privacy-preserved D2D Caching Scheme Underpinned by Blockchain-enabled Federated Learning. In Proceedings of the 2021 IEEE Global Communications Conference (GLOBECOM), Madrid, Spain, 7–11 December 2021; pp. 1–6. [Google Scholar]
- Cheng, R.; Sun, Y.; Liu, Y.; Xia, L.; Feng, D.; Imran, M.A. Blockchain-Empowered Federated Learning Approach for an Intelligent and Reliable D2D Caching Scheme. IEEE Internet Things J. 2022, 9, 7879–7890. [Google Scholar] [CrossRef]
- Qiao, D.; Guo, S.; Liu, D.; Long, S.; Zhou, P.; Li, Z. Adaptive Federated Deep Reinforcement Learning for Proactive Content Caching in Edge Computing. IEEE Trans. Parallel Distrib. Syst. 2022, 33, 4767–4782. [Google Scholar] [CrossRef]
- Khanal, S.; Thar, K.; Huh, E.N. Route-Based Proactive Content Caching Using Self-Attention in Hierarchical Federated Learning. IEEE Access 2022, 10, 29514–29527. [Google Scholar] [CrossRef]
- Liu, S.; Zheng, C.; Huang, Y.; Quek, T.Q.S. Distributed Reinforcement Learning for Privacy-Preserving Dynamic Edge Caching. IEEE J. Sel. Areas Commun. 2022, 40, 749–760. [Google Scholar] [CrossRef]
- Batabyal, S.; Bhaumik, P. Mobility models, traces and impact of mobility on opportunistic routing algorithms: A survey. IEEE Commun. Surv. Tutor. 2015, 17, 1679–1707. [Google Scholar] [CrossRef]
- Zhan, Y.; Li, P.; Qu, Z.; Zeng, D.; Guo, S. A learning-based incentive mechanism for federated learning. IEEE Internet Things J. 2020, 7, 6360–6368. [Google Scholar] [CrossRef]
- Hiley, A.; Julstrom, B.A. The quadratic multiple knapsack problem and three heuristic approaches to it. In Proceedings of the 8th Annual Conference on Genetic and Evolutionary Computation, Seattle, WA, USA, 8–12 July 2006; pp. 547–552. [Google Scholar]
- Wang, F.; Wang, G. Study on Energy Minimization Data Transmission Strategy in Mobile Cloud Computing. In Proceedings of the IEEE SmartWorld, Ubiquitous Intelligence & Computing, Advanced & Trusted Computing, Scalable Computing & Communications, Cloud & Big Data Computing, Internet of People and Smart City Innovation, Guangzhou, China, 8–12 October 2018; pp. 1211–1218. [Google Scholar]
- Zheng, D.; Ge, W.; Zhang, J. Distributed opportunistic scheduling for ad hoc networks with random access: An optimal stopping approach. IEEE Trans. Inf. Theory 2008, 55, 205–222. [Google Scholar]
Literature | Optimization Target | Enhance Local Iteration | Compression Model | Multiple Edge Aggregation | Node Cooperation | Cache Management Optimization | Features
---|---|---|---|---|---|---|---
[14,15,16,17] | Reduce communication rounds | √ | | | | | Nodes communicate directly with edge servers
[18,19,20,21,22,23,24] | Reduce the amount of data in the communication round | | √ | | | | Nodes communicate directly with edge servers
[25,26,27,28,29] | Reduce the energy cost of model uploading | | | √ | | | Nodes communicate directly with edge servers
[30,31,32,33] | Reduce the energy cost of model uploading | | | | √ | | Only consider communication between nodes as D2D
[34,35,36,37,38,39,40,41] | Reduce the energy cost of model uploading | | | | | √ | Only optimize the caching strategy to reduce energy cost
This paper | Reduce the energy cost of model uploading | | | | √ | | Use opportunistic communication and optimize the model-uploading process to reduce energy cost
Parameter | Description | Values
---|---|---
 | The number of edge servers |
 | Transmit power |
 | Noise power |
 | Channel bandwidth |
 | Model parameters size |
 | Frequency size |
 | Network range |
 | Communication radius |