Limitations and Future Aspects of Communication Costs in Federated Learning: A Survey
Abstract
1. Introduction
2. Fundamentals of Federated Learning
- Decentralized data: FL involves multiple clients or devices, each holding its own data. As a result, the data are decentralized and not stored in a central location [32,33]. This decentralized nature of data in FL helps preserve the local data’s privacy, but it can also increase communication costs [34]. Because the data remain distributed, model updates must be exchanged repeatedly between the clients and the central server during training, which raises communication costs [35].
- Local model training: FL allows each client to train a model locally on its own data. This local training preserves the privacy of the local data, but it can also increase communication costs [36]. The local model updates must be sent to the central server, which aggregates them to generate a global model. The cost of sending these updates can be significant, particularly when the number of clients is large or the model itself is large [37,38].
- Model aggregation: After local training is completed, the clients send their model updates to the central server for aggregation [39,40]. The server aggregates the updates into a global model that reflects the characteristics of the data from all the clients [41]; a minimal sketch of this aggregation step follows this list. The aggregation process can incur significant communication costs, particularly when the model updates are large or the number of clients is high [22,42,43].
- Privacy preservation: FL is designed to preserve the privacy of the local data, but this, too, has a communication price [44,45]. Because the local data remain on the clients, only the model updates are shared with the central server [46]. These updates must be exchanged over many training rounds, which drives up communication costs.
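To make the aggregation step above concrete, the following is a minimal FedAvg-style sketch in Python. It assumes each client update arrives as a flattened NumPy vector and uses the number of local training samples as aggregation weights; the function name fedavg_aggregate is illustrative rather than taken from any particular framework.

```python
import numpy as np

def fedavg_aggregate(client_updates, client_sizes):
    """Weighted average of client model updates (FedAvg-style).

    client_updates: list of 1-D NumPy arrays, one flattened model per client.
    client_sizes:   number of local training samples per client, used as weights.
    """
    weights = np.asarray(client_sizes, dtype=float)
    weights /= weights.sum()             # normalize weights to sum to 1
    stacked = np.stack(client_updates)   # shape: (num_clients, num_params)
    return np.average(stacked, axis=0, weights=weights)

# Toy example: three clients, a four-parameter model.
updates = [np.array([1.0, 2.0, 0.5, 0.0]),
           np.array([0.8, 2.2, 0.4, 0.1]),
           np.array([1.2, 1.8, 0.6, -0.1])]
global_model = fedavg_aggregate(updates, client_sizes=[100, 200, 50])
print(global_model)
```

Each round, every participating client transmits a vector of num_params values; shrinking that payload is precisely what the compression schemes surveyed in Section 6 target.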
3. Communication Deficiency
3.1. Local Model Updating
- Quality and quantity of local data: The quality and quantity of local data available on each participating device can significantly impact the performance of LMU in FL. If the local data are noisy or unrepresentative of the global dataset, it can lead to a poor model performance and increased communication costs [68,69]. Moreover, if the quantity of local data is too small, it can lead to overfitting and poor generalization, which can also affect the overall performance of the FL system [52,70]. Several techniques have been proposed to overcome these challenges, such as data filtering and data augmentation [71,72]. Data filtering involves removing noisy or irrelevant data from the local dataset before training the model. In contrast, data augmentation involves generating new data from the existing data to increase the quantity and diversity of the local dataset. These techniques can improve the quality and quantity of local data, thereby improving the performance of LMU in FL.
- Frequency of updates: The frequency of updates refers to how often the participating devices send their updated parameters to the central server for aggregation [73,74,75]. A higher frequency of updates can lead to faster convergence and improved model performance but can also increase communication costs and latency. Conversely, a lower frequency of updates can reduce communication costs but may result in slower convergence and suboptimal model performance. Several approaches have been proposed to balance these trade-offs, such as asynchronous updates and adaptive learning rates [76,77]. Asynchronous updates allow participating devices to update the shared model at their own pace, which can reduce communication costs and latency but may lead to slower convergence. Adaptive learning rates adjust the learning rate based on the frequency of updates, which can improve convergence and reduce communication costs.
- Selection of participating devices: The selection of participating devices in FL can significantly impact the performance of LMU [49,78]. If the participating devices are too few, or their data are too heterogeneous, the result can be poor model generalization and increased communication costs. Moreover, if the participating devices are biased toward a particular subset of the data, model performance suffers and communication costs rise. Several techniques have been proposed to overcome these challenges, such as stratified sampling [79] and weighted aggregation [80]; a stratified-sampling sketch follows this list. Stratified sampling involves selecting participating devices based on their similarity to the global dataset, which can improve model generalization and reduce communication costs. Weighted aggregation involves assigning different weights to the participating devices based on their local data quality and quantity, which can improve model performance and reduce communication costs.
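As a rough illustration of the stratified sampling mentioned above, the sketch below groups clients into strata (e.g., by dominant label or data profile) and draws from each stratum proportionally. The grouping key and function name are assumptions for illustration, not a prescribed method from the cited works.

```python
import random
from collections import defaultdict

def stratified_client_sample(client_strata, num_selected, seed=0):
    """Pick clients proportionally from each stratum so the selected
    cohort better mirrors the global data distribution.

    client_strata: dict mapping client_id -> stratum label.
    """
    rng = random.Random(seed)
    by_stratum = defaultdict(list)
    for cid, stratum in client_strata.items():
        by_stratum[stratum].append(cid)

    total = len(client_strata)
    selected = []
    for members in by_stratum.values():
        # Proportional allocation, with at least one client per stratum.
        quota = max(1, round(num_selected * len(members) / total))
        selected.extend(rng.sample(members, min(quota, len(members))))
    return selected[:num_selected]

# Toy example: eight clients spread over three strata.
strata = {f"c{i}": s for i, s in enumerate("AAABBBCC")}
print(stratified_client_sample(strata, num_selected=4))
```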
3.2. Model Averaging
3.3. Broadcasting the Global Model
4. Resource Management
4.1. Edge Resource Management
4.1.1. Device Selection
4.1.2. Communication Scheduling
4.1.3. Compression Techniques
4.1.4. Model Partitioning
4.2. Server Resource Management
4.2.1. Device Selection
4.2.2. Communication Scheduling
4.2.3. Compression Techniques
4.2.4. Model Partitioning
5. Client Selection
5.1. Device Heterogeneity
5.1.1. System Heterogeneity
5.1.2. Statistical Heterogeneity
5.1.3. Non-IID-Ness
5.2. Device Adaptivity
5.2.1. Flexible Participation
5.2.2. Partial Updates
5.3. Incentive Mechanism
- Monetary incentives: Monetary incentives involve rewarding the clients with a monetary value for their contributions. This approach can effectively motivate the clients to contribute actively to the system [171]. However, it may not be practical in all situations, as it requires a budget to support the incentive program.
- Reputation-based incentives: Reputation-based incentives are based on the principle of recognition and reputation. Clients who contribute actively and provide high-quality updates can be recognized and rewarded with a higher reputation score [172]; an illustrative score-update rule is sketched after this list. This approach can effectively motivate the clients to contribute actively to the system.
- Token-based incentives: Token-based incentives involve rewarding the clients with tokens that can be used to access additional features or services [173]. This approach can effectively motivate the clients to contribute actively to the system and help build a vibrant ecosystem around the FL system.
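To illustrate how a reputation score might evolve, here is a deliberately simple exponential-moving-average update rule. It is a hypothetical sketch, not the mechanism of any cited scheme, and contribution_quality is assumed to be an externally computed score in [0, 1].

```python
def update_reputation(current, contribution_quality, alpha=0.2):
    """Hypothetical rule: blend the old reputation with the quality of
    the client's latest contribution; higher scores can later gate
    client selection or reward allocation."""
    return (1 - alpha) * current + alpha * contribution_quality

score = 0.5
for quality in [0.9, 0.8, 0.3]:   # observed quality over three rounds
    score = update_reputation(score, quality)
print(round(score, 3))
```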
5.4. Adaptive Aggregation
6. Optimization Techniques
6.1. Compression Schemes
6.1.1. Quantization
6.1.2. Sparsification
- Thresholding is a popular technique for sparsification that involves setting all model or gradient values below a certain threshold to zero [191]; see the sketch after this list. This reduces the number of non-zero values that need to be transmitted, which can result in significant communication savings. The threshold can be set using various techniques, such as absolute thresholding, percentage thresholding, and dynamic thresholding. Absolute thresholding involves setting a fixed threshold for all values, whereas percentage thresholding involves setting a threshold based on the percentage of non-zero values. Dynamic thresholding involves adjusting the threshold based on the distribution of the model or gradient values [192].
- Random pruning is another sparsification technique that randomly sets some model or gradient values to zero [123]. This reduces the number of non-zero values that need to be transmitted and can result in significant communication savings. Random pruning can be achieved using techniques like Bernoulli sampling and stochastic rounding [193]. Bernoulli sampling involves setting each value to zero independently with a certain probability, whereas stochastic rounding probabilistically rounds each value to a nearby representable level (often zero for small magnitudes) so that the result remains unbiased in expectation.
- Structured pruning is a sparsification technique that sets entire rows, columns, or blocks of the model or gradient matrices to zero [194]. This reduces the number of non-zero values that need to be transmitted and can result in significant communication savings. Structured pruning can be achieved using various techniques like channel, filter, and tensor pruning. Channel pruning involves setting entire channels of the model to zero, whereas filter pruning involves setting entire model filters to zero. Tensor pruning involves setting entire blocks of the model to zero, which can be useful when the model has a structured block-wise pattern. Structured pruning can preserve the underlying structure of the model and can result in higher compression rates than random pruning [195]. However, it may require a more complex implementation and may introduce larger errors into the model or gradient values.
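The sketch below illustrates two of the techniques from this list, absolute thresholding and Bernoulli-style random pruning, on a NumPy gradient vector; the function names and the 1/keep_prob rescaling convention are illustrative assumptions.

```python
import numpy as np

def threshold_sparsify(grad, tau):
    """Absolute thresholding: zero out entries with magnitude below tau
    and transmit only the surviving (index, value) pairs."""
    idx = np.nonzero(np.abs(grad) >= tau)[0]
    return idx, grad[idx]              # sparse representation

def bernoulli_prune(grad, keep_prob, seed=0):
    """Random pruning: keep each entry independently with probability
    keep_prob, rescaling survivors by 1/keep_prob to stay unbiased."""
    rng = np.random.default_rng(seed)
    mask = rng.random(grad.shape) < keep_prob
    return np.where(mask, grad / keep_prob, 0.0)

g = np.array([0.01, -0.5, 0.03, 0.9, -0.02, 0.4])
idx, vals = threshold_sparsify(g, tau=0.1)
print(idx, vals)                        # only 3 of 6 values are sent
print(bernoulli_prune(g, keep_prob=0.5))
```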
6.1.3. Low-Rank Factorization
- Singular Value Decomposition (SVD): SVD is a matrix factorization technique that decomposes a matrix X into three matrices A, B, and C such that X = A B C^T. Here, A and C are orthogonal matrices, and B is a diagonal matrix containing the singular values of X. The superscript T denotes the transpose operator, which flips the rows and columns of a matrix. The singular values represent the amount of variation captured by each singular vector. By retaining only the k largest singular values and their corresponding singular vectors, we can approximate the original matrix X with a lower-rank matrix X_k = A_k B_k C_k^T, where A_k and C_k are the truncated orthogonal matrices and B_k contains only the retained singular values [200]. A truncated-SVD sketch follows this list.
- Principal Component Analysis (PCA): PCA is a dimensionality reduction technique that can be used to compress data. Given a data matrix X, PCA aims to find a lower-dimensional representation of X that retains the maximum amount of variance. This is achieved by computing the eigenvectors of the covariance matrix of X and selecting the eigenvectors corresponding to the largest eigenvalues. The selected eigenvectors form a new orthogonal basis for the data, and the projection of X onto this basis yields the lower-dimensional representation of X [201].
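Following the X = A B C^T notation above, a minimal truncated-SVD compression sketch using NumPy might look as follows; the factor shapes show where the communication savings come from (m*k + k + n*k values instead of m*n), and the function names are illustrative.

```python
import numpy as np

def svd_compress(X, k):
    """Rank-k approximation of a weight matrix X via truncated SVD.
    Instead of X (m*n values), send A_k (m*k), the k largest singular
    values, and C_k^T (k*n)."""
    A, s, Ct = np.linalg.svd(X, full_matrices=False)   # X = A @ diag(s) @ Ct
    return A[:, :k], s[:k], Ct[:k, :]

def svd_decompress(A_k, s_k, Ct_k):
    return (A_k * s_k) @ Ct_k      # s_k scales the columns of A_k

X = np.random.default_rng(0).standard_normal((64, 32))
A_k, s_k, Ct_k = svd_compress(X, k=4)
X_hat = svd_decompress(A_k, s_k, Ct_k)
sent = A_k.size + s_k.size + Ct_k.size
print(f"values sent: {sent} vs. {X.size}; "
      f"reconstruction error: {np.linalg.norm(X - X_hat):.2f}")
```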
6.2. Structured Updates
6.2.1. Gradient Sparsification
6.2.2. Weight Differencing
7. Future Directions
7.1. Edge Intelligence
7.2. Quantum Computing
7.3. Federated Transfer Learning
7.4. Multi-Task Learning
7.5. Federated Reinforcement Learning
7.6. Federated Meta-Learning
7.7. Hybrid Approaches
7.8. Automatic Model Compression
7.9. Federated Learning in Medical Fields
8. Discussion and Analysis
8.1. Challenges and Complexities
8.2. Benefits of Energy-Efficient FL
- Reduced data transmission: At its core, FL minimizes the need for data centralization. Instead of transmitting extensive datasets, devices share compressed model updates. This direct reduction in data transmission not only conserves bandwidth but also considerably reduces the energy expended in the communication process, given that data transmission and reception are among the most energy-intensive operations in wireless communication.
- Decentralized computation: In FL, computations are performed at the edge, on user devices themselves. This decentralization aids in leveraging the collective computational prowess of these devices, reducing the burden on centralized servers. Consequently, servers consume less energy for computations, ensuring a more balanced and energy-efficient system.
- Intelligent client participation: Energy efficiency in FL is not just about reducing communication. It extends to judiciously determining which clients participate in training. By selecting devices that are currently charging or have high battery levels, FL processes can minimize battery drain, leading to a more sustainable execution of federated tasks; a minimal eligibility filter is sketched after this list.
- Adaptive communication protocols: Modern FL implementations have started employing adaptive communication techniques. By assessing the network’s current state, these techniques modulate the frequency and size of model updates. Such dynamism ensures that devices communicate optimally, preserving energy in low-bandwidth or unstable network conditions.
- Synergy with modern hardware: With the advent of energy-efficient hardware tailored for AI and ML tasks, FL can further amplify energy savings. By integrating with low-power neural network accelerators, for instance, the computational aspect of FL becomes even more energy efficient.
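As a minimal illustration of the battery-aware participation policy described in this list, the filter below admits only devices that are charging or above a battery threshold; the data layout and threshold are assumptions made for the sketch.

```python
def eligible_clients(states, min_battery=0.7):
    """Filter clients for a training round. 'states' maps
    client_id -> (battery_level in [0, 1], is_charging)."""
    return [cid for cid, (battery, charging) in states.items()
            if charging or battery >= min_battery]

states = {"phone_a": (0.95, False),   # high battery -> eligible
          "phone_b": (0.30, True),    # charging     -> eligible
          "tablet_c": (0.40, False)}  # neither      -> skipped
print(eligible_clients(states))       # ['phone_a', 'phone_b']
```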
9. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Zhang, C.; Xie, Y.; Bai, H.; Yu, B.; Li, W.; Gao, Y. A survey on federated learning. Knowl.-Based Syst. 2021, 216, 106775. [Google Scholar] [CrossRef]
- Aledhari, M.; Razzak, R.; Parizi, R.M.; Saeed, F. Federated learning: A survey on enabling technologies, protocols, and applications. IEEE Access 2020, 8, 140699–140725. [Google Scholar] [CrossRef]
- AbdulRahman, S.; Tout, H.; Ould-Slimane, H.; Mourad, A.; Talhi, C.; Guizani, M. A survey on federated learning: The journey from centralized to distributed on-site learning and beyond. IEEE Internet Things J. 2020, 8, 5476–5497. [Google Scholar] [CrossRef]
- Wang, T.; Rausch, J.; Zhang, C.; Jia, R.; Song, D. A principled approach to data valuation for federated learning. In Federated Learning: Privacy and Incentive; Springer: Cham, Switzerland, 2020; pp. 153–167. [Google Scholar]
- Kaiwartya, O.; Kaushik, K.; Gupta, S.K.; Mishra, A.; Kumar, M. Security and Privacy in Cyberspace; Springer Nature: Singapore, 2022. [Google Scholar]
- Luo, B.; Li, X.; Wang, S.; Huang, J.; Tassiulas, L. Cost-effective federated learning design. In Proceedings of the IEEE INFOCOM 2021-IEEE Conference on Computer Communications; 2021; pp. 1–10. Available online: https://ieeexplore.ieee.org/document/9488679 (accessed on 19 August 2023).
- Shahid, O.; Pouriyeh, S.; Parizi, R.M.; Sheng, Q.Z.; Srivastava, G.; Zhao, L. Communication efficiency in federated learning: Achievements and challenges. arXiv 2021, arXiv:2107.10996. [Google Scholar]
- Konečnỳ, J.; McMahan, H.B.; Yu, F.X.; Richtárik, P.; Suresh, A.T.; Bacon, D. Federated learning: Strategies for improving communication efficiency. arXiv 2016, arXiv:1610.05492. [Google Scholar]
- Tran, N.H.; Bao, W.; Zomaya, A.; Nguyen, M.N.; Hong, C.S. Federated learning over wireless networks: Optimization model design and analysis. In Proceedings of the IEEE INFOCOM 2019-IEEE Conference on Computer Communications; 2019; pp. 1387–1395. Available online: https://ieeexplore.ieee.org/document/8737464 (accessed on 19 August 2023).
- McMahan, B.; Moore, E.; Ramage, D.; Hampson, S.; y Arcas, B.A. Communication-efficient learning of deep networks from decentralized data. In Proceedings of the Artificial Intelligence and Statistics; 2017; pp. 1273–1282. Available online: https://proceedings.mlr.press/v54/mcmahan17a/mcmahan17a.pdf (accessed on 19 August 2023).
- Bonawitz, K.; Eichner, H.; Grieskamp, W.; Huba, D.; Ingerman, A.; Ivanov, V.; Kiddon, C.; Konečnỳ, J.; Mazzocchi, S.; McMahan, B.; et al. Towards federated learning at scale: System design. Proc. Mach. Learn. Syst. 2019, 1, 374–388. [Google Scholar]
- Xu, J.; Du, W.; Jin, Y.; He, W.; Cheng, R. Ternary compression for communication-efficient federated learning. IEEE Trans. Neural Netw. Learn. Syst. 2020, 33, 1162–1176. [Google Scholar] [CrossRef]
- Reisizadeh, A.; Mokhtari, A.; Hassani, H.; Jadbabaie, A.; Pedarsani, R. Fedpaq: A communication-efficient federated learning method with periodic averaging and quantization. In Proceedings of the International Conference on Artificial Intelligence and Statistics; 2020; pp. 2021–2031. Available online: http://proceedings.mlr.press/v108/reisizadeh20a/reisizadeh20a.pdf (accessed on 19 August 2023).
- Lorincz, J.; Klarin, Z.; Begusic, D. Advances in Improving Energy Efficiency of Fiber–Wireless Access Networks: A Comprehensive Overview. Sensors 2023, 23, 2239. [Google Scholar] [CrossRef]
- Lorincz, J.; Klarin, Z. How trend of increasing data volume affects the energy efficiency of 5g networks. Sensors 2021, 22, 255. [Google Scholar] [CrossRef]
- Al-Abiad, M.S.; Obeed, M.; Hossain, M.; Chaaban, A. Decentralized aggregation for energy-efficient federated learning via D2D communications. IEEE Trans. Commun. 2023, 71, 3333–3351. [Google Scholar] [CrossRef]
- Rodríguez-Barroso, N.; Jiménez-López, D.; Luzón, M.V.; Herrera, F.; Martínez-Cámara, E. Survey on federated learning threats: Concepts, taxonomy on attacks and defences, experimental study and challenges. Inf. Fusion 2023, 90, 148–173. [Google Scholar] [CrossRef]
- Li, Q.; Wen, Z.; Wu, Z.; Hu, S.; Wang, N.; Li, Y.; Liu, X.; He, B. A survey on federated learning systems: Vision, hype and reality for data privacy and protection. IEEE Trans. Knowl. Data Eng. 2021, 35, 3347–3366. [Google Scholar] [CrossRef]
- Kairouz, P.; McMahan, H.B.; Avent, B.; Bellet, A.; Bennis, M.; Bhagoji, A.N.; Bonawitz, K.; Charles, Z.; Cormode, G.; Cummings, R.; et al. Advances and open problems in federated learning. Found. Trends® Mach. Learn. 2021, 14, 1–210. [Google Scholar] [CrossRef]
- Xia, Q.; Ye, W.; Tao, Z.; Wu, J.; Li, Q. A survey of federated learning for edge computing: Research problems and solutions. High-Confid. Comput. 2021, 1, 100008. [Google Scholar] [CrossRef]
- Zhu, H.; Xu, J.; Liu, S.; Jin, Y. Federated learning on non-IID data: A survey. Neurocomputing 2021, 465, 371–390. [Google Scholar] [CrossRef]
- Nguyen, J.; Malik, K.; Zhan, H.; Yousefpour, A.; Rabbat, M.; Malek, M.; Huba, D. Federated learning with buffered asynchronous aggregation. In Proceedings of the International Conference on Artificial Intelligence and Statistics; 2022; pp. 3581–3607. Available online: https://proceedings.mlr.press/v151/nguyen22b/nguyen22b.pdf (accessed on 19 August 2023).
- Zhu, J.; Cao, J.; Saxena, D.; Jiang, S.; Ferradi, H. Blockchain-empowered federated learning: Challenges, solutions, and future directions. ACM Comput. Surv. 2023, 55, 1–31. [Google Scholar] [CrossRef]
- Ghimire, B.; Rawat, D.B. Recent advances on federated learning for cybersecurity and cybersecurity for federated learning for internet of things. IEEE Internet Things J. 2022, 9, 8229–8249. [Google Scholar] [CrossRef]
- Gupta, R.; Alam, T. Survey on federated-learning approaches in distributed environment. Wirel. Pers. Commun. 2022, 125, 1631–1652. [Google Scholar] [CrossRef]
- Boobalan, P.; Ramu, S.P.; Pham, Q.V.; Dev, K.; Pandya, S.; Maddikunta, P.K.R.; Gadekallu, T.R.; Huynh-The, T. Fusion of federated learning and industrial Internet of Things: A survey. Comput. Netw. 2022, 212, 109048. [Google Scholar] [CrossRef]
- Al-Quraan, M.; Mohjazi, L.; Bariah, L.; Centeno, A.; Zoha, A.; Arshad, K.; Assaleh, K.; Muhaidat, S.; Debbah, M.; Imran, M.A. Edge-native intelligence for 6G communications driven by federated learning: A survey of trends and challenges. IEEE Trans. Emerg. Top. Comput. Intell. 2023, 7, 957–979. [Google Scholar] [CrossRef]
- Zhao, Z.; Mao, Y.; Liu, Y.; Song, L.; Ouyang, Y.; Chen, X.; Ding, W. Towards Efficient Communications in Federated Learning: A Contemporary Survey. J. Frankl. Inst. 2023, 360, 8669–8703. [Google Scholar] [CrossRef]
- Sikandar, H.S.; Waheed, H.; Tahir, S.; Malik, S.U.; Rafique, W. A Detailed Survey on Federated Learning Attacks and Defenses. Electronics 2023, 12, 260. [Google Scholar] [CrossRef]
- Lim, W.Y.B.; Luong, N.C.; Hoang, D.T.; Jiao, Y.; Liang, Y.C.; Yang, Q.; Niyato, D.; Miao, C. Federated learning in mobile edge networks: A comprehensive survey. IEEE Commun. Surv. Tutor. 2020, 22, 2031–2063. [Google Scholar] [CrossRef]
- Wang, Z.; Nakazato, J.; Asad, M.; Javanmardi, E.; Tsukada, M. Overcoming Environmental Challenges in CAVs through MEC-based Federated Learning. In Proceedings of the 2023 Fourteenth International Conference on Ubiquitous and Future Networks (ICUFN); 2023; pp. 151–156. Available online: https://ieeexplore.ieee.org/document/10200688 (accessed on 19 August 2023).
- Kulkarni, V.; Kulkarni, M.; Pant, A. Survey of personalization techniques for federated learning. In Proceedings of the 2020 Fourth World Conference on Smart Trends in Systems, Security and Sustainability (WorldS4); 2020; pp. 794–797. Available online: https://ieeexplore.ieee.org/document/9210355 (accessed on 19 August 2023).
- Roy, A.G.; Siddiqui, S.; Pölsterl, S.; Navab, N.; Wachinger, C. Braintorrent: A peer-to-peer environment for decentralized federated learning. arXiv 2019, arXiv:1905.06731. [Google Scholar]
- Li, W.; Chen, J.; Wang, Z.; Shen, Z.; Ma, C.; Cui, X. Ifl-gan: Improved federated learning generative adversarial network with maximum mean discrepancy model aggregation. IEEE Trans. Neural Netw. Learn. Syst. 2022; early access. [Google Scholar]
- Hegedus, I.; Danner, G.; Jelasity, M. Decentralized learning works: An empirical comparison of gossip learning and federated learning. J. Parallel Distrib. Comput. 2021, 148, 109–124. [Google Scholar] [CrossRef]
- Kang, J.; Xiong, Z.; Niyato, D.; Zou, Y.; Zhang, Y.; Guizani, M. Reliable federated learning for mobile networks. IEEE Wirel. Commun. 2020, 27, 72–80. [Google Scholar] [CrossRef]
- Ye, Y.; Li, S.; Liu, F.; Tang, Y.; Hu, W. EdgeFed: Optimized federated learning based on edge computing. IEEE Access 2020, 8, 209191–209198. [Google Scholar] [CrossRef]
- Yao, X.; Huang, C.; Sun, L. Two-stream federated learning: Reduce the communication costs. In Proceedings of the 2018 IEEE Visual Communications and Image Processing (VCIP); 2018; pp. 1–4. Available online: https://ieeexplore.ieee.org/document/8698609 (accessed on 19 August 2023).
- Ye, D.; Yu, R.; Pan, M.; Han, Z. Federated learning in vehicular edge computing: A selective model aggregation approach. IEEE Access 2020, 8, 23920–23935. [Google Scholar] [CrossRef]
- Pillutla, K.; Kakade, S.M.; Harchaoui, Z. Robust aggregation for federated learning. IEEE Trans. Signal Process. 2022, 70, 1142–1154. [Google Scholar] [CrossRef]
- Ma, X.; Zhang, J.; Guo, S.; Xu, W. Layer-wised model aggregation for personalized federated learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition; 2022; pp. 10092–10101. Available online: https://openaccess.thecvf.com/content/CVPR2022/html/Ma_Layer-Wised_Model_Aggregation_for_Personalized_Federated_Learning_CVPR_2022_paper.html (accessed on 19 August 2023).
- Hu, L.; Yan, H.; Li, L.; Pan, Z.; Liu, X.; Zhang, Z. MHAT: An efficient model-heterogenous aggregation training scheme for federated learning. Inf. Sci. 2021, 560, 493–503. [Google Scholar] [CrossRef]
- Deng, Y.; Lyu, F.; Ren, J.; Chen, Y.C.; Yang, P.; Zhou, Y.; Zhang, Y. Fair: Quality-aware federated learning with precise user incentive and model aggregation. In Proceedings of the IEEE INFOCOM 2021-IEEE Conference on Computer Communications; 2021; pp. 1–10. Available online: https://ieeexplore.ieee.org/document/9488743 (accessed on 19 August 2023).
- Xu, R.; Baracaldo, N.; Zhou, Y.; Anwar, A.; Ludwig, H. Hybridalpha: An efficient approach for privacy-preserving federated learning. In Proceedings of the 12th ACM Workshop on Artificial Intelligence and Security; 2019; pp. 13–23. Available online: https://dl.acm.org/doi/abs/10.1145/3338501.3357371?casa_token=npneF7k5jXMAAAAA:16iC0bT3mCxKmPch0GrVlR_qlO72nQKPvwx6zICPYhHreVHWMaDKJEiv9dGEn9NTC7YSHDY6J5MDXg (accessed on 19 August 2023).
- Gu, B.; Xu, A.; Huo, Z.; Deng, C.; Huang, H. Privacy-preserving asynchronous vertical federated learning algorithms for multiparty collaborative learning. IEEE Trans. Neural Netw. Learn. Syst. 2021, 33, 6103–6115. [Google Scholar] [CrossRef] [PubMed]
- Alam, T.; Gupta, R. Federated learning and its role in the privacy preservation of IoT devices. Future Internet 2022, 14, 246. [Google Scholar] [CrossRef]
- Chen, M.; Shlezinger, N.; Poor, H.V.; Eldar, Y.C.; Cui, S. Communication-efficient federated learning. Proc. Natl. Acad. Sci. USA 2021, 118, e2024789118. [Google Scholar] [CrossRef] [PubMed]
- Asad, M.; Moustafa, A.; Ito, T.; Aslam, M. Evaluating the communication efficiency in federated learning algorithms. In Proceedings of the 2021 IEEE 24th International Conference on Computer Supported Cooperative Work in Design (CSCWD); 2021; pp. 552–557. Available online: https://ieeexplore.ieee.org/document/9437738 (accessed on 19 August 2023).
- Zhang, W.; Wang, X.; Zhou, P.; Wu, W.; Zhang, X. Client selection for federated learning with non-iid data in mobile edge computing. IEEE Access 2021, 9, 24462–24474. [Google Scholar] [CrossRef]
- Albelaihi, R.; Yu, L.; Craft, W.D.; Sun, X.; Wang, C.; Gazda, R. Green Federated Learning via Energy-Aware Client Selection. In Proceedings of the GLOBECOM 2022-2022 IEEE Global Communications Conference; 2022; pp. 13–18. Available online: https://ieeexplore.ieee.org/document/10001569 (accessed on 19 August 2023).
- Asad, M.; Moustafa, A.; Rabhi, F.A.; Aslam, M. THF: 3-way hierarchical framework for efficient client selection and resource management in federated learning. IEEE Internet Things J. 2021, 9, 11085–11097. [Google Scholar] [CrossRef]
- Chai, Z.; Ali, A.; Zawad, S.; Truex, S.; Anwar, A.; Baracaldo, N.; Zhou, Y.; Ludwig, H.; Yan, F.; Cheng, Y. Tifl: A tier-based federated learning system. In Proceedings of the 29th International Symposium on High-Performance Parallel and Distributed Computing; 2020; pp. 125–136. Available online: https://dl.acm.org/doi/abs/10.1145/3369583.3392686?casa_token=H-rLbMWgQcgAAAAA:4W7rio6RI5d19VplBX6jmf7vCoxYDmQzQSFOeliE75eG7aQZcvBGvs5v8Sdy1SiEISKPdmjAcqxz5Q (accessed on 19 August 2023).
- Wang, X.; Han, Y.; Wang, C.; Zhao, Q.; Chen, X.; Chen, M. In-edge ai: Intelligentizing mobile edge computing, caching and communication by federated learning. IEEE Netw. 2019, 33, 156–165. [Google Scholar] [CrossRef]
- Lu, Y.; Huang, X.; Zhang, K.; Maharjan, S.; Zhang, Y. Low-latency federated learning and blockchain for edge association in digital twin empowered 6G networks. IEEE Trans. Ind. Inform. 2020, 17, 5098–5107. [Google Scholar] [CrossRef]
- Tianqing, Z.; Zhou, W.; Ye, D.; Cheng, Z.; Li, J. Resource allocation in IoT edge computing via concurrent federated reinforcement learning. IEEE Internet Things J. 2021, 9, 1414–1426. [Google Scholar] [CrossRef]
- Kang, J.; Li, X.; Nie, J.; Liu, Y.; Xu, M.; Xiong, Z.; Niyato, D.; Yan, Q. Communication-efficient and cross-chain empowered federated learning for artificial intelligence of things. IEEE Trans. Netw. Sci. Eng. 2022, 9, 2966–2977. [Google Scholar] [CrossRef]
- Sun, P.; Che, H.; Wang, Z.; Wang, Y.; Wang, T.; Wu, L.; Shao, H. Pain-FL: Personalized privacy-preserving incentive for federated learning. IEEE J. Sel. Areas Commun. 2021, 39, 3805–3820. [Google Scholar] [CrossRef]
- Li, Y.; Tao, X.; Zhang, X.; Liu, J.; Xu, J. Privacy-preserved federated learning for autonomous driving. IEEE Trans. Intell. Transp. Syst. 2021, 23, 8423–8434. [Google Scholar] [CrossRef]
- Zeng, T.; Semiari, O.; Chen, M.; Saad, W.; Bennis, M. Federated learning on the road autonomous controller design for connected and autonomous vehicles. IEEE Trans. Wirel. Commun. 2022, 21, 10407–10423. [Google Scholar] [CrossRef]
- Ng, J.S.; Lim, W.Y.B.; Xiong, Z.; Cao, X.; Niyato, D.; Leung, C.; Kim, D.I. A hierarchical incentive design toward motivating participation in coded federated learning. IEEE J. Sel. Areas Commun. 2021, 40, 359–375. [Google Scholar] [CrossRef]
- Liu, L.; Zhang, J.; Song, S.; Letaief, K.B. Client-edge-cloud hierarchical federated learning. In Proceedings of the ICC 2020-2020 IEEE International Conference on Communications (ICC); 2020; pp. 1–6. Available online: https://ieeexplore.ieee.org/document/9148862 (accessed on 19 August 2023).
- Shi, W.; Zhou, S.; Niu, Z.; Jiang, M.; Geng, L. Joint device scheduling and resource allocation for latency constrained wireless federated learning. IEEE Trans. Wirel. Commun. 2020, 20, 453–467. [Google Scholar] [CrossRef]
- Lim, W.Y.B.; Ng, J.S.; Xiong, Z.; Jin, J.; Zhang, Y.; Niyato, D.; Leung, C.; Miao, C. Decentralized edge intelligence: A dynamic resource allocation framework for hierarchical federated learning. IEEE Trans. Parallel Distrib. Syst. 2021, 33, 536–550. [Google Scholar] [CrossRef]
- Asad, M.; Otoum, S.; Shaukat, S. Resource and Heterogeneity-aware Clients Eligibility Protocol in Federated Learning. In Proceedings of the GLOBECOM 2022-2022 IEEE Global Communications Conference; 2022; pp. 1140–1145. Available online: https://ieeexplore.ieee.org/document/10000884/ (accessed on 19 August 2023).
- Li, Q.; He, B.; Song, D. Model-contrastive federated learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition; 2021; pp. 10713–10722. Available online: https://openaccess.thecvf.com/content/CVPR2021/html/Li_Model-Contrastive_Federated_Learning_CVPR_2021_paper.html (accessed on 19 August 2023).
- Amiri, M.M.; Gündüz, D.; Kulkarni, S.R.; Poor, H.V. Update aware device scheduling for federated learning at the wireless edge. In Proceedings of the 2020 IEEE International Symposium on Information Theory (ISIT); 2020; pp. 2598–2603. Available online: https://ieeexplore.ieee.org/document/9173960/ (accessed on 19 August 2023).
- Wang, T.; Liu, Y.; Zheng, X.; Dai, H.N.; Jia, W.; Xie, M. Edge-based communication optimization for distributed federated learning. IEEE Trans. Netw. Sci. Eng. 2021, 9, 2015–2024. [Google Scholar] [CrossRef]
- Li, A.; Zhang, L.; Tan, J.; Qin, Y.; Wang, J.; Li, X.Y. Sample-level data selection for federated learning. In Proceedings of the IEEE INFOCOM 2021-IEEE Conference on Computer Communications; 2021; pp. 1–10. Available online: https://ieeexplore.ieee.org/document/9488723 (accessed on 19 August 2023).
- Deng, Y.; Lyu, F.; Ren, J.; Wu, H.; Zhou, Y.; Zhang, Y.; Shen, X. Auction: Automated and quality-aware client selection framework for efficient federated learning. IEEE Trans. Parallel Distrib. Syst. 2021, 33, 1996–2009. [Google Scholar] [CrossRef]
- Shyu, C.R.; Putra, K.T.; Chen, H.C.; Tsai, Y.Y.; Hossain, K.T.; Jiang, W.; Shae, Z.Y. A systematic review of federated learning in the healthcare area: From the perspective of data properties and applications. Appl. Sci. 2021, 11, 11191. [Google Scholar]
- Hu, K.; Wu, J.; Weng, L.; Zhang, Y.; Zheng, F.; Pang, Z.; Xia, M. A novel federated learning approach based on the confidence of federated Kalman filters. Int. J. Mach. Learn. Cybern. 2021, 12, 3607–3627. [Google Scholar] [CrossRef]
- Lewy, D.; Mańdziuk, J.; Ganzha, M.; Paprzycki, M. StatMix: Data augmentation method that relies on image statistics in federated learning. In Proceedings of the Neural Information Processing: 29th International Conference, ICONIP 2022, Virtual Event, 22–26 November 2022; pp. 574–585. [Google Scholar]
- Tang, M.; Ning, X.; Wang, Y.; Sun, J.; Wang, Y.; Li, H.; Chen, Y. FedCor: Correlation-based active client selection strategy for heterogeneous federated learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition; 2022; pp. 10102–10111. Available online: https://openaccess.thecvf.com/content/CVPR2022/html/Tang_FedCor_Correlation-Based_Active_Client_Selection_Strategy_for_Heterogeneous_Federated_Learning_CVPR_2022_paper.html (accessed on 19 August 2023).
- Sultana, A.; Haque, M.M.; Chen, L.; Xu, F.; Yuan, X. Eiffel: Efficient and fair scheduling in adaptive federated learning. IEEE Trans. Parallel Distrib. Syst. 2022, 33, 4282–4294. [Google Scholar] [CrossRef]
- Liu, S.; Chen, Q.; You, L. Fed2a: Federated learning mechanism in asynchronous and adaptive modes. Electronics 2022, 11, 1393. [Google Scholar] [CrossRef]
- Chen, Y.; Ning, Y.; Slawski, M.; Rangwala, H. Asynchronous online federated learning for edge devices with non-iid data. In Proceedings of the 2020 IEEE International Conference on Big Data (Big Data); 2020; pp. 15–24. Available online: https://ieeexplore.ieee.org/document/9378161/ (accessed on 19 August 2023).
- Goodfellow, I.; Bengio, Y.; Courville, A. Deep Learning; MIT Press: Cambridge, MA, USA, 2016. [Google Scholar]
- Huang, T.; Lin, W.; Wu, W.; He, L.; Li, K.; Zomaya, A.Y. An efficiency-boosting client selection scheme for federated learning with fairness guarantee. IEEE Trans. Parallel Distrib. Syst. 2020, 32, 1552–1564. [Google Scholar] [CrossRef]
- Shen, G.; Gao, D.; Yang, L.; Zhou, F.; Song, D.; Lou, W.; Pan, S. Variance-reduced heterogeneous federated learning via stratified client selection. arXiv 2022, arXiv:2201.05762. [Google Scholar]
- Ma, Z.; Zhao, M.; Cai, X.; Jia, Z. Fast-convergent federated learning with class-weighted aggregation. J. Syst. Archit. 2021, 117, 102125. [Google Scholar] [CrossRef]
- Wang, H.; Yurochkin, M.; Sun, Y.; Papailiopoulos, D.; Khazaeni, Y. Federated learning with matched averaging. arXiv 2020, arXiv:2002.06440. [Google Scholar]
- Haddadpour, F.; Mahdavi, M. On the convergence of local descent methods in federated learning. arXiv 2019, arXiv:1910.14425. [Google Scholar]
- Li, C.; Li, G.; Varshney, P.K. Decentralized federated learning via mutual knowledge transfer. IEEE Internet Things J. 2021, 9, 1136–1147. [Google Scholar] [CrossRef]
- Lee, S.; Sahu, A.K.; He, C.; Avestimehr, S. Partial model averaging in federated learning: Performance guarantees and benefits. arXiv 2022, arXiv:2201.03789. [Google Scholar] [CrossRef]
- Beaussart, M.; Grimberg, F.; Hartley, M.A.; Jaggi, M. Waffle: Weighted averaging for personalized federated learning. arXiv 2021, arXiv:2110.06978. [Google Scholar]
- Giuseppi, A.; Manfredi, S.; Pietrabissa, A. A weighted average consensus approach for decentralized federated learning. Mach. Intell. Res. 2022, 19, 319–330. [Google Scholar] [CrossRef]
- Chen, J.; Li, J.; Huang, R.; Yue, K.; Chen, Z.; Li, W. Federated learning for bearing fault diagnosis with dynamic weighted averaging. In Proceedings of the 2021 International Conference on Sensing, Measurement & Data Analytics in the era of Artificial Intelligence (ICSMD); 2021; pp. 1–6. Available online: https://ieeexplore.ieee.org/document/9670854 (accessed on 19 August 2023).
- Li, L.; Fan, Y.; Tse, M.; Lin, K.Y. A review of applications in federated learning. Comput. Ind. Eng. 2020, 149, 106854. [Google Scholar] [CrossRef]
- Kholod, I.; Yanaki, E.; Fomichev, D.; Shalugin, E.; Novikova, E.; Filippov, E.; Nordlund, M. Open-source federated learning frameworks for IoT: A comparative review and analysis. Sensors 2020, 21, 167. [Google Scholar] [CrossRef] [PubMed]
- Poli, C. An Adaptive Model Averaging Procedure for Federated Learning (AdaFed). J. Adv. Inf. Technol. 2022, 13, 539–548. [Google Scholar]
- Wang, S.; Suwandi, R.C.; Chang, T.H. Demystifying model averaging for communication-efficient federated matrix factorization. In Proceedings of the ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP); 2021; pp. 3680–3684. Available online: https://ieeexplore.ieee.org/document/9413927 (accessed on 19 August 2023).
- Ji, S.; Saravirta, T.; Pan, S.; Long, G.; Walid, A. Emerging trends in federated learning: From model fusion to federated x learning. arXiv 2021, arXiv:2102.12920. [Google Scholar]
- Liang, P.P.; Liu, T.; Ziyin, L.; Allen, N.B.; Auerbach, R.P.; Brent, D.; Salakhutdinov, R.; Morency, L.P. Think locally, act globally: Federated learning with local and global representations. arXiv 2020, arXiv:2001.01523. [Google Scholar]
- Hanzely, F.; Richtárik, P. Federated learning of a mixture of global and local models. arXiv 2020, arXiv:2002.05516. [Google Scholar]
- Luping, W.; Wei, W.; Bo, L. CMFL: Mitigating communication overhead for federated learning. In Proceedings of the 2019 IEEE 39th International Conference on Distributed Computing Systems (ICDCS); 2019; pp. 954–964. Available online: https://ieeexplore.ieee.org/document/8885054 (accessed on 19 August 2023).
- Zhang, L.; Shen, L.; Ding, L.; Tao, D.; Duan, L.Y. Fine-tuning global model via data-free knowledge distillation for non-iid federated learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition; 2022; pp. 10174–10183. Available online: https://openaccess.thecvf.com/content/CVPR2022/html/Zhang_Fine-Tuning_Global_Model_via_Data-Free_Knowledge_Distillation_for_Non-IID_Federated_CVPR_2022_paper.html (accessed on 19 August 2023).
- Zhan, Y.; Li, P.; Qu, Z.; Zeng, D.; Guo, S. A learning-based incentive mechanism for federated learning. IEEE Internet Things J. 2020, 7, 6360–6368. [Google Scholar] [CrossRef]
- Wink, T.; Nochta, Z. An approach for peer-to-peer federated learning. In Proceedings of the 2021 51st Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops (DSN-W); 2021; pp. 150–157. Available online: https://ieeexplore.ieee.org/document/9502443/ (accessed on 19 August 2023).
- Lalitha, A.; Kilinc, O.C.; Javidi, T.; Koushanfar, F. Peer-to-peer federated learning on graphs. arXiv 2019, arXiv:1901.11173. [Google Scholar]
- Mills, J.; Hu, J.; Min, G. Communication-efficient federated learning for wireless edge intelligence in IoT. IEEE Internet Things J. 2019, 7, 5986–5994. [Google Scholar] [CrossRef]
- Liu, Y.; Garg, S.; Nie, J.; Zhang, Y.; Xiong, Z.; Kang, J.; Hossain, M.S. Deep anomaly detection for time-series data in industrial IoT: A communication-efficient on-device federated learning approach. IEEE Internet Things J. 2020, 8, 6348–6358. [Google Scholar] [CrossRef]
- Ding, J.; Tramel, E.; Sahu, A.K.; Wu, S.; Avestimehr, S.; Zhang, T. Federated learning challenges and opportunities: An outlook. In Proceedings of the ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP); 2022; pp. 8752–8756. Available online: https://ieeexplore.ieee.org/document/9746925 (accessed on 19 August 2023).
- Haddadpour, F.; Kamani, M.M.; Mokhtari, A.; Mahdavi, M. Federated learning with compression: Unified analysis and sharp guarantees. In Proceedings of the International Conference on Artificial Intelligence and Statistics; 2021; pp. 2350–2358. Available online: https://proceedings.mlr.press/v130/haddadpour21a.html (accessed on 19 August 2023).
- Zhao, Z.; Xia, J.; Fan, L.; Lei, X.; Karagiannidis, G.K.; Nallanathan, A. System optimization of federated learning networks with a constrained latency. IEEE Trans. Veh. Technol. 2021, 71, 1095–1100. [Google Scholar] [CrossRef]
- Chen, M.; Yang, Z.; Saad, W.; Yin, C.; Poor, H.V.; Cui, S. Performance optimization of federated learning over wireless networks. In Proceedings of the 2019 IEEE Global Communications Conference (GLOBECOM); 2019; pp. 1–6. Available online: https://ieeexplore.ieee.org/document/9013160 (accessed on 19 August 2023).
- Al-Shedivat, M.; Gillenwater, J.; Xing, E.; Rostamizadeh, A. Federated learning via posterior averaging: A new perspective and practical algorithms. arXiv 2020, arXiv:2010.05273. [Google Scholar]
- Gao, H.; Thai, M.T.; Wu, J. When Decentralized Optimization Meets Federated Learning. IEEE Netw. 2023; early access. [Google Scholar]
- Wang, Z.; Xu, H.; Liu, J.; Huang, H.; Qiao, C.; Zhao, Y. Resource-efficient federated learning with hierarchical aggregation in edge computing. In Proceedings of the IEEE INFOCOM 2021-IEEE Conference on Computer Communications; 2021; pp. 1–10. Available online: https://ieeexplore.ieee.org/document/9488756 (accessed on 19 August 2023).
- Balakrishnan, R.; Akdeniz, M.; Dhakal, S.; Himayat, N. Resource management and fairness for federated learning over wireless edge networks. In Proceedings of the 2020 IEEE 21st International Workshop on Signal Processing Advances in Wireless Communications (SPAWC); 2020; pp. 1–5. Available online: https://ieeexplore.ieee.org/document/9154285 (accessed on 19 August 2023).
- Balasubramanian, V.; Aloqaily, M.; Reisslein, M.; Scaglione, A. Intelligent resource management at the edge for ubiquitous IoT: An SDN-based federated learning approach. IEEE Netw. 2021, 35, 114–121. [Google Scholar] [CrossRef]
- Nishio, T.; Yonetani, R. Client selection for federated learning with heterogeneous resources in mobile edge. In Proceedings of the ICC 2019-2019 IEEE International Conference on Communications (ICC); 2019; pp. 1–7. Available online: https://ieeexplore.ieee.org/document/8761315 (accessed on 19 August 2023).
- Trindade, S.; Bittencourt, L.F.; da Fonseca, N.L. Management of resource at the network edge for federated learning. arXiv 2021, arXiv:2107.03428. [Google Scholar] [CrossRef]
- Imteaj, A.; Thakker, U.; Wang, S.; Li, J.; Amini, M.H. A survey on federated learning for resource-constrained IoT devices. IEEE Internet Things J. 2021, 9, 1–24. [Google Scholar] [CrossRef]
- Victor, N.; Alazab, M.; Bhattacharya, S.; Magnusson, S.; Maddikunta, P.K.R.; Ramana, K.; Gadekallu, T.R. Federated Learning for IoUT: Concepts, Applications, Challenges and Opportunities. arXiv 2022, arXiv:2207.13976. [Google Scholar]
- Abreha, H.G.; Hayajneh, M.; Serhani, M.A. Federated learning in edge computing: A systematic survey. Sensors 2022, 22, 450. [Google Scholar] [CrossRef]
- Yang, H.H.; Liu, Z.; Quek, T.Q.; Poor, H.V. Scheduling policies for federated learning in wireless networks. IEEE Trans. Commun. 2019, 68, 317–333. [Google Scholar] [CrossRef]
- Wadu, M.M.; Samarakoon, S.; Bennis, M. Joint client scheduling and resource allocation under channel uncertainty in federated learning. IEEE Trans. Commun. 2021, 69, 5962–5974. [Google Scholar] [CrossRef]
- Hu, C.H.; Chen, Z.; Larsson, E.G. Device scheduling and update aggregation policies for asynchronous federated learning. In Proceedings of the 2021 IEEE 22nd International Workshop on Signal Processing Advances in Wireless Communications (SPAWC); 2021; pp. 281–285. Available online: https://ieeexplore.ieee.org/document/9593194 (accessed on 19 August 2023).
- Yang, Z.; Chen, M.; Saad, W.; Hong, C.S.; Shikh-Bahaei, M.; Poor, H.V.; Cui, S. Delay minimization for federated learning over wireless communication networks. arXiv 2020, arXiv:2007.03462. [Google Scholar]
- Sattler, F.; Wiedemann, S.; Müller, K.R.; Samek, W. Robust and communication-efficient federated learning from non-iid data. IEEE Trans. Neural Netw. Learn. Syst. 2019, 31, 3400–3413. [Google Scholar] [CrossRef] [PubMed]
- Albasyoni, A.; Safaryan, M.; Condat, L.; Richtárik, P. Optimal gradient compression for distributed and federated learning. arXiv 2020, arXiv:2010.03246. [Google Scholar]
- Ozkara, K.; Singh, N.; Data, D.; Diggavi, S. Quped: Quantized personalization via distillation with applications to federated learning. Adv. Neural Inf. Process. Syst. 2021, 34, 3622–3634. [Google Scholar]
- Jiang, Y.; Wang, S.; Valls, V.; Ko, B.J.; Lee, W.H.; Leung, K.K.; Tassiulas, L. Model pruning enables efficient federated learning on edge devices. IEEE Trans. Neural Netw. Learn. Syst. 2022; early access. [Google Scholar]
- Prakash, P.; Ding, J.; Chen, R.; Qin, X.; Shu, M.; Cui, Q.; Guo, Y.; Pan, M. IoT Device Friendly and Communication-Efficient Federated Learning via Joint Model Pruning and Quantization. IEEE Internet Things J. 2022, 9, 13638–13650. [Google Scholar] [CrossRef]
- Jiang, Z.; Xu, Y.; Xu, H.; Wang, Z.; Qiao, C.; Zhao, Y. Fedmp: Federated learning through adaptive model pruning in heterogeneous edge computing. In Proceedings of the 2022 IEEE 38th International Conference on Data Engineering (ICDE); 2022; pp. 767–779. Available online: https://ieeexplore.ieee.org/document/9835327 (accessed on 19 August 2023).
- Wu, C.; Wu, F.; Lyu, L.; Huang, Y.; Xie, X. Communication-efficient federated learning via knowledge distillation. Nat. Commun. 2022, 13, 2032. [Google Scholar] [CrossRef]
- Yuan, X.; Li, P. On convergence of FedProx: Local dissimilarity invariant bounds, non-smoothness and beyond. Adv. Neural Inf. Process. Syst. 2022, 35, 10752–10765. [Google Scholar]
- Pappas, C.; Chatzopoulos, D.; Lalis, S.; Vavalis, M. Ipls: A framework for decentralized federated learning. In Proceedings of the 2021 IFIP Networking Conference (IFIP Networking); 2021; pp. 1–6. Available online: https://ieeexplore.ieee.org/document/9472790/ (accessed on 19 August 2023).
- Das, A.; Patterson, S. Multi-tier federated learning for vertically partitioned data. In Proceedings of the ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP); 2021; pp. 3100–3104. Available online: https://ieeexplore.ieee.org/document/9415026 (accessed on 19 August 2023).
- Romanini, D.; Hall, A.J.; Papadopoulos, P.; Titcombe, T.; Ismail, A.; Cebere, T.; Sandmann, R.; Roehm, R.; Hoeh, M.A. Pyvertical: A vertical federated learning framework for multi-headed splitnn. arXiv 2021, arXiv:2104.00489. [Google Scholar]
- Huang, W.; Li, T.; Wang, D.; Du, S.; Zhang, J.; Huang, T. Fairness and accuracy in horizontal federated learning. Inf. Sci. 2022, 589, 170–185. [Google Scholar] [CrossRef]
- Su, L.; Lau, V.K. Hierarchical federated learning for hybrid data partitioning across multitype sensors. IEEE Internet Things J. 2021, 8, 10922–10939. [Google Scholar] [CrossRef]
- Zhang, X.; Yin, W.; Hong, M.; Chen, T. Hybrid federated learning: Algorithms and implementation. arXiv 2020, arXiv:2012.12420. [Google Scholar]
- Khan, L.U.; Pandey, S.R.; Tran, N.H.; Saad, W.; Han, Z.; Nguyen, M.N.; Hong, C.S. Federated learning for edge networks: Resource optimization and incentive mechanism. IEEE Commun. Mag. 2020, 58, 88–93. [Google Scholar] [CrossRef]
- Nguyen, V.D.; Sharma, S.K.; Vu, T.X.; Chatzinotas, S.; Ottersten, B. Efficient federated learning algorithm for resource allocation in wireless IoT networks. IEEE Internet Things J. 2020, 8, 3394–3409. [Google Scholar] [CrossRef]
- Cho, Y.J.; Wang, J.; Joshi, G. Client selection in federated learning: Convergence analysis and power-of-choice selection strategies. arXiv 2020, arXiv:2010.01243. [Google Scholar]
- AbdulRahman, S.; Tout, H.; Mourad, A.; Talhi, C. FedMCCS: Multicriteria client selection model for optimal IoT federated learning. IEEE Internet Things J. 2020, 8, 4723–4735. [Google Scholar] [CrossRef]
- Alferaidi, A.; Yadav, K.; Alharbi, Y.; Viriyasitavat, W.; Kautish, S.; Dhiman, G. Federated Learning Algorithms to Optimize the Client and Cost Selections. Math. Probl. Eng. 2022, 2022, 8514562. [Google Scholar] [CrossRef]
- Imteaj, A.; Amini, M.H. FedPARL: Client activity and resource-oriented lightweight federated learning model for resource-constrained heterogeneous IoT environment. Front. Commun. Netw. 2021, 2, 657653. [Google Scholar] [CrossRef]
- Xia, W.; Wen, W.; Wong, K.K.; Quek, T.Q.; Zhang, J.; Zhu, H. Federated-learning-based client scheduling for low-latency wireless communications. IEEE Wirel. Commun. 2021, 28, 32–38. [Google Scholar] [CrossRef]
- Wadu, M.M.; Samarakoon, S.; Bennis, M. Federated learning under channel uncertainty: Joint client scheduling and resource allocation. In Proceedings of the 2020 IEEE Wireless Communications and Networking Conference (WCNC); 2020; pp. 1–6. Available online: https://ieeexplore.ieee.org/document/9120649/ (accessed on 19 August 2023).
- Asad, M.; Moustafa, A.; Ito, T. FedOpt: Towards communication efficiency and privacy preservation in federated learning. Appl. Sci. 2020, 10, 2864. [Google Scholar] [CrossRef]
- Yu, R.; Li, P. Toward resource-efficient federated learning in mobile edge computing. IEEE Netw. 2021, 35, 148–155. [Google Scholar] [CrossRef]
- Zhou, Y.; Pu, G.; Ma, X.; Li, X.; Wu, D. Distilled one-shot federated learning. arXiv 2020, arXiv:2009.07999. [Google Scholar]
- Lin, T.; Kong, L.; Stich, S.U.; Jaggi, M. Ensemble distillation for robust model fusion in federated learning. Adv. Neural Inf. Process. Syst. 2020, 33, 2351–2363. [Google Scholar]
- Zhu, J.; Li, S.; You, Y. Sky Computing: Accelerating Geo-distributed Computing in Federated Learning. arXiv 2022, arXiv:2202.11836. [Google Scholar]
- Guberović, E.; Lipić, T.; Čavrak, I. Dew intelligence: Federated learning perspective. In Proceedings of the 2021 IEEE 45th Annual Computers, Software, and Applications Conference (COMPSAC); 2021; pp. 1819–1824. Available online: https://ieeexplore.ieee.org/document/9529852 (accessed on 19 August 2023).
- Qu, L.; Zhou, Y.; Liang, P.P.; Xia, Y.; Wang, F.; Adeli, E.; Fei-Fei, L.; Rubin, D. Rethinking architecture design for tackling data heterogeneity in federated learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition; 2022; pp. 10061–10071. Available online: https://openaccess.thecvf.com/content/CVPR2022/html/Qu_Rethinking_Architecture_Design_for_Tackling_Data_Heterogeneity_in_Federated_Learning_CVPR_2022_paper.html (accessed on 19 August 2023).
- Luo, M.; Chen, F.; Hu, D.; Zhang, Y.; Liang, J.; Feng, J. No fear of heterogeneity: Classifier calibration for federated learning with non-iid data. Adv. Neural Inf. Process. Syst. 2021, 34, 5972–5984. [Google Scholar]
- Zeng, M.; Wang, X.; Pan, W.; Zhou, P. Heterogeneous Training Intensity for Federated Learning: A Deep Reinforcement Learning Approach. IEEE Trans. Netw. Sci. Eng. 2022, 10, 990–1002. [Google Scholar] [CrossRef]
- Mitra, A.; Jaafar, R.; Pappas, G.J.; Hassani, H. Linear convergence in federated learning: Tackling client heterogeneity and sparse gradients. Adv. Neural Inf. Process. Syst. 2021, 34, 14606–14619. [Google Scholar]
- Mendieta, M.; Yang, T.; Wang, P.; Lee, M.; Ding, Z.; Chen, C. Local learning matters: Rethinking data heterogeneity in federated learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition; 2022; pp. 8397–8406. Available online: https://openaccess.thecvf.com/content/CVPR2022/html/Mendieta_Local_Learning_Matters_Rethinking_Data_Heterogeneity_in_Federated_Learning_CVPR_2022_paper.html (accessed on 19 August 2023).
- Li, Y.; Zhou, W.; Wang, H.; Mi, H.; Hospedales, T.M. Fedh2l: Federated learning with model and statistical heterogeneity. arXiv 2021, arXiv:2101.11296. [Google Scholar]
- Ma, X.; Zhu, J.; Lin, Z.; Chen, S.; Qin, Y. A state-of-the-art survey on solving non-IID data in Federated Learning. Future Gener. Comput. Syst. 2022, 135, 244–258. [Google Scholar] [CrossRef]
- Huang, Y.; Chu, L.; Zhou, Z.; Wang, L.; Liu, J.; Pei, J.; Zhang, Y. Personalized cross-silo federated learning on non-iid data. In Proceedings of the AAAI Conference on Artificial Intelligence; 2021; Volume 35, pp. 7865–7873. Available online: https://ojs.aaai.org/index.php/AAAI/article/view/16960 (accessed on 19 August 2023).
- Yeganeh, Y.; Farshad, A.; Navab, N.; Albarqouni, S. Inverse distance aggregation for federated learning with non-iid data. In Proceedings of the Domain Adaptation and Representation Transfer, and Distributed and Collaborative Learning: Second MICCAI Workshop, DART 2020, and First MICCAI Workshop, DCL 2020, Held in Conjunction with MICCAI 2020, Lima, Peru, 4–8 October 2020; pp. 150–159. [Google Scholar]
- Zhao, Y.; Li, M.; Lai, L.; Suda, N.; Civin, D.; Chandra, V. Federated learning with non-iid data. arXiv 2018, arXiv:1806.00582. [Google Scholar] [CrossRef]
- Li, Q.; Diao, Y.; Chen, Q.; He, B. Federated learning on non-iid data silos: An experimental study. In Proceedings of the 2022 IEEE 38th International Conference on Data Engineering (ICDE); 2022; pp. 965–978. Available online: https://ieeexplore.ieee.org/document/9835537/ (accessed on 19 August 2023).
- Wang, D.; Shen, L.; Luo, Y.; Hu, H.; Su, K.; Wen, Y.; Tao, D. FedABC: Targeting Fair Competition in Personalized Federated Learning. arXiv 2023, arXiv:2302.07450. [Google Scholar] [CrossRef]
- Li, A.; Sun, J.; Wang, B.; Duan, L.; Li, S.; Chen, Y.; Li, H. Lotteryfl: Personalized and communication-efficient federated learning with lottery ticket hypothesis on non-iid datasets. arXiv 2020, arXiv:2008.03371. [Google Scholar]
- Yu, S.; Nguyen, P.; Abebe, W.; Qian, W.; Anwar, A.; Jannesari, A. Spatl: Salient parameter aggregation and transfer learning for heterogeneous clients in federated learning. arXiv 2021, arXiv:2111.14345. [Google Scholar]
- Ruan, Y.; Zhang, X.; Liang, S.C.; Joe-Wong, C. Towards flexible device participation in federated learning. In Proceedings of the International Conference on Artificial Intelligence and Statistics; 2021; pp. 3403–3411. Available online: https://proceedings.mlr.press/v130/ruan21a.html (accessed on 19 August 2023).
- Zhang, M.; Sapra, K.; Fidler, S.; Yeung, S.; Alvarez, J.M. Personalized federated learning with first order model optimization. arXiv 2020, arXiv:2012.08565. [Google Scholar]
- Yu, L.; Albelaihi, R.; Sun, X.; Ansari, N.; Devetsikiotis, M. Jointly optimizing client selection and resource management in wireless federated learning for internet of things. IEEE Internet Things J. 2021, 9, 4385–4395. [Google Scholar] [CrossRef]
- Cheng, Y.; Lu, J.; Niyato, D.; Lyu, B.; Kang, J.; Zhu, S. Federated transfer learning with client selection for intrusion detection in mobile edge computing. IEEE Commun. Lett. 2022, 26, 552–556. [Google Scholar] [CrossRef]
- Pillutla, K.; Malik, K.; Mohamed, A.R.; Rabbat, M.; Sanjabi, M.; Xiao, L. Federated learning with partial model personalization. In Proceedings of the International Conference on Machine Learning; 2022; pp. 17716–17758. Available online: https://proceedings.mlr.press/v162/pillutla22a.html (accessed on 19 August 2023).
- Jiang, J.; Hu, L. Decentralised federated learning with adaptive partial gradient aggregation. CAAI Trans. Intell. Technol. 2020, 5, 230–236. [Google Scholar] [CrossRef]
- Asad, M.; Aslam, M.; Jilani, S.F.; Shaukat, S.; Tsukada, M. SHFL: K-Anonymity-Based Secure Hierarchical Federated Learning Framework for Smart Healthcare Systems. Future Internet 2022, 14, 338. [Google Scholar] [CrossRef]
- Zhan, Y.; Zhang, J.; Hong, Z.; Wu, L.; Li, P.; Guo, S. A survey of incentive mechanism design for federated learning. IEEE Trans. Emerg. Top. Comput. 2021, 10, 1035–1044.
- Zeng, R.; Zeng, C.; Wang, X.; Li, B.; Chu, X. A comprehensive survey of incentive mechanism for federated learning. arXiv 2021, arXiv:2106.15406.
- Toyoda, K.; Zhang, A.N. Mechanism design for an incentive-aware blockchain-enabled federated learning platform. In Proceedings of the 2019 IEEE International Conference on Big Data (Big Data), 2019; pp. 395–403. Available online: https://ieeexplore.ieee.org/document/9006344 (accessed on 19 August 2023).
- Kang, J.; Xiong, Z.; Niyato, D.; Xie, S.; Zhang, J. Incentive mechanism for reliable federated learning: A joint optimization approach to combining reputation and contract theory. IEEE Internet Things J. 2019, 6, 10700–10714.
- Han, J.; Khan, A.F.; Zawad, S.; Anwar, A.; Angel, N.B.; Zhou, Y.; Yan, F.; Butt, A.R. Tiff: Tokenized incentive for federated learning. In Proceedings of the 2022 IEEE 15th International Conference on Cloud Computing (CLOUD), 2022; pp. 407–416. Available online: https://ieeexplore.ieee.org/document/9860652 (accessed on 19 August 2023).
- Zhao, Y.; Zhao, J.; Jiang, L.; Tan, R.; Niyato, D.; Li, Z.; Lyu, L.; Liu, Y. Privacy-preserving blockchain-based federated learning for IoT devices. IEEE Internet Things J. 2020, 8, 1817–1829.
- Kim, S. Incentive design and differential privacy based federated learning: A mechanism design perspective. IEEE Access 2020, 8, 187317–187325.
- Yu, H.; Liu, Z.; Liu, Y.; Chen, T.; Cong, M.; Weng, X.; Niyato, D.; Yang, Q. A fairness-aware incentive scheme for federated learning. In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020; pp. 393–399. Available online: https://dl.acm.org/doi/abs/10.1145/3375627.3375840 (accessed on 19 August 2023).
- Wang, X.; Zhao, Y.; Qiu, C.; Liu, Z.; Nie, J.; Leung, V.C. Infedge: A blockchain-based incentive mechanism in hierarchical federated learning for end-edge-cloud communications. IEEE J. Sel. Areas Commun. 2022, 40, 3325–3342.
- Jayaram, K.; Muthusamy, V.; Thomas, G.; Verma, A.; Purcell, M. Adaptive Aggregation For Federated Learning. arXiv 2022, arXiv:2203.12163.
- Tan, L.; Zhang, X.; Zhou, Y.; Che, X.; Hu, M.; Chen, X.; Wu, D. AdaFed: Optimizing Participation-Aware Federated Learning with Adaptive Aggregation Weights. IEEE Trans. Netw. Sci. Eng. 2022, 9, 2708–2720.
- Sun, W.; Lei, S.; Wang, L.; Liu, Z.; Zhang, Y. Adaptive federated learning and digital twin for industrial internet of things. IEEE Trans. Ind. Inform. 2020, 17, 5605–5614.
- Wang, Y.; Lin, L.; Chen, J. Communication-efficient adaptive federated learning. In Proceedings of the International Conference on Machine Learning, 2022; pp. 22802–22838. Available online: https://proceedings.mlr.press/v162/wang22o.html (accessed on 19 August 2023).
- Zhou, P.; Fang, P.; Hui, P. Loss tolerant federated learning. arXiv 2021, arXiv:2105.03591.
- Andreina, S.; Marson, G.A.; Möllering, H.; Karame, G. Baffle: Backdoor detection via feedback-based federated learning. In Proceedings of the 2021 IEEE 41st International Conference on Distributed Computing Systems (ICDCS), 2021; pp. 852–863. Available online: https://ieeexplore.ieee.org/document/9546463/ (accessed on 19 August 2023).
- Nguyen, N.H.; Nguyen, P.L.; Nguyen, T.D.; Nguyen, T.T.; Nguyen, D.L.; Nguyen, T.H.; Pham, H.H.; Truong, T.N. FedDRL: Deep Reinforcement Learning-based Adaptive Aggregation for Non-IID Data in Federated Learning. In Proceedings of the 51st International Conference on Parallel Processing, 2022; pp. 1–11. Available online: https://dl.acm.org/doi/abs/10.1145/3545008.3545085 (accessed on 19 August 2023).
- Zhang, J.; Guo, S.; Qu, Z.; Zeng, D.; Zhan, Y.; Liu, Q.; Akerkar, R. Adaptive federated learning on non-iid data with resource constraint. IEEE Trans. Comput. 2021, 71, 1655–1667.
- Buyukates, B.; Ulukus, S. Timely communication in federated learning. In Proceedings of the IEEE INFOCOM 2021-IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS), 2021; pp. 1–6. Available online: https://ieeexplore.ieee.org/document/9484497/ (accessed on 19 August 2023).
- Sharma, I.; Sharma, A.; Gupta, S.K. Asynchronous and Synchronous Federated Learning-based UAVs. In Proceedings of the 2023 Third International Symposium on Instrumentation, Control, Artificial Intelligence, and Robotics (ICA-SYMP), 2023; pp. 105–109. Available online: https://ieeexplore.ieee.org/document/10044951 (accessed on 19 August 2023).
- Caldas, S.; Konečny, J.; McMahan, H.B.; Talwalkar, A. Expanding the reach of federated learning by reducing client resource requirements. arXiv 2018, arXiv:1812.07210.
- Oh, Y.; Lee, N.; Jeon, Y.S.; Poor, H.V. Communication-efficient federated learning via quantized compressed sensing. IEEE Trans. Wirel. Commun. 2022, 22, 1087–1100.
- Moustafa, A.; Asad, M.; Shaukat, S.; Norta, A. Ppcsa: Partial participation-based compressed and secure aggregation in federated learning. In Proceedings of the 35th International Conference on Advanced Information Networking and Applications (AINA-2021), 2021; Volume 2, pp. 345–357. Available online: https://link.springer.com/chapter/10.1007/978-3-030-75075-6_28 (accessed on 19 August 2023).
- Shah, S.M.; Lau, V.K. Model compression for communication efficient federated learning. IEEE Trans. Neural Netw. Learn. Syst. 2021; early access.
- Li, Y.; He, Z.; Gu, X.; Xu, H.; Ren, S. AFedAvg: Communication-efficient federated learning aggregation with adaptive communication frequency and gradient sparse. J. Exp. Theor. Artif. Intell. 2022, 1–23.
- Kumar, G.; Toshniwal, D. Neuron Specific Pruning for Communication Efficient Federated Learning. In Proceedings of the 31st ACM International Conference on Information & Knowledge Management, 2022; pp. 4148–4152. Available online: https://dl.acm.org/doi/abs/10.1145/3511808.3557658 (accessed on 19 August 2023).
- Wu, X.; Yao, X.; Wang, C.L. FedSCR: Structure-based communication reduction for federated learning. IEEE Trans. Parallel Distrib. Syst. 2020, 32, 1565–1577.
- Qiu, X.; Fernandez-Marques, J.; Gusmao, P.P.; Gao, Y.; Parcollet, T.; Lane, N.D. ZeroFL: Efficient on-device training for federated learning with local sparsity. arXiv 2022, arXiv:2208.02507.
- Yao, D.; Pan, W.; O’Neill, M.J.; Dai, Y.; Wan, Y.; Jin, H.; Sun, L. Fedhm: Efficient federated learning for heterogeneous models via low-rank factorization. arXiv 2021, arXiv:2111.14655.
- Zhou, H.; Cheng, J.; Wang, X.; Jin, B. Low rank communication for federated learning. In Proceedings of the Database Systems for Advanced Applications, DASFAA 2020 International Workshops: BDMS, SeCoP, BDQM, GDMA, and AIDE, Jeju, Republic of Korea, 24–27 September 2020; pp. 1–16.
- Hartebrodt, A.; Röttger, R.; Blumenthal, D.B. Federated singular value decomposition for high dimensional data. arXiv 2022, arXiv:2205.12109.
- Hu, Y.; Sun, X.; Tian, Y.; Song, L.; Tan, K.C. Communication Efficient Federated Learning with Heterogeneous Structured Client Models. IEEE Trans. Emerg. Top. Comput. Intell. 2022, 7, 753–767.
- Huang, J.; Tong, Z.; Feng, Z. Geographical POI recommendation for Internet of Things: A federated learning approach using matrix factorization. Int. J. Commun. Syst. 2022, e5161.
- Alsulaimawi, Z. A non-negative matrix factorization framework for privacy-preserving and federated learning. In Proceedings of the 2020 IEEE 22nd International Workshop on Multimedia Signal Processing (MMSP), 2020; pp. 1–6. Available online: https://ieeexplore.ieee.org/document/9287113 (accessed on 19 August 2023).
- Li, M.; Andersen, D.G.; Smola, A.J.; Yu, K. Communication efficient distributed machine learning with the parameter server. Adv. Neural Inf. Process. Syst. 2014, 27.
- Asad, M.; Moustafa, A.; Aslam, M. CEEP-FL: A comprehensive approach for communication efficiency and enhanced privacy in federated learning. Appl. Soft Comput. 2021, 104, 107235.
- Li, S.; Qi, Q.; Wang, J.; Sun, H.; Li, Y.; Yu, F.R. GGS: General gradient sparsification for federated learning in edge computing. In Proceedings of the ICC 2020-2020 IEEE International Conference on Communications (ICC), 2020; pp. 1–7. Available online: https://ieeexplore.ieee.org/document/9148987 (accessed on 19 August 2023).
- Xu, J.; Glicksberg, B.S.; Su, C.; Walker, P.; Bian, J.; Wang, F. Federated learning for healthcare informatics. J. Healthc. Inform. Res. 2021, 5, 1–19.
- Qiao, Y.; Munir, M.S.; Adhikary, A.; Raha, A.D.; Hong, S.H.; Hong, C.S. A Framework for Multi-Prototype Based Federated Learning: Towards the Edge Intelligence. In Proceedings of the 2023 International Conference on Information Networking (ICOIN), 2023; pp. 134–139. Available online: https://ieeexplore.ieee.org/document/10048999 (accessed on 19 August 2023).
- Asad, M.; Shaukat, S.; Javanmardi, E.; Nakazato, J.; Tsukada, M. A Comprehensive Survey on Privacy-Preserving Techniques in Federated Recommendation Systems. Appl. Sci. 2023, 13, 6201.
- Larasati, H.T.; Firdaus, M.; Kim, H. Quantum Federated Learning: Remarks and Challenges. In Proceedings of the 2022 IEEE 9th International Conference on Cyber Security and Cloud Computing (CSCloud)/2022 IEEE 8th International Conference on Edge Computing and Scalable Cloud (EdgeCom), 2022; pp. 1–5. Available online: https://ieeexplore.ieee.org/document/9842983 (accessed on 19 August 2023).
- Dai, S.; Meng, F. Addressing modern and practical challenges in machine learning: A survey of online federated and transfer learning. Appl. Intell. 2022, 53, 11045–11072.
- Keçeci, C.; Shaqfeh, M.; Mbayed, H.; Serpedin, E. Multi-Task and Transfer Learning for Federated Learning Applications. arXiv 2022, arXiv:2207.08147.
- Tam, P.; Corrado, R.; Eang, C.; Kim, S. Applicability of Deep Reinforcement Learning for Efficient Federated Learning in Massive IoT Communications. Appl. Sci. 2023, 13, 3083.
- Liu, B.; Lv, N.; Guo, Y.; Li, Y. Recent Advances on Federated Learning: A Systematic Survey. arXiv 2023, arXiv:2301.01299.
- Zhou, S.; Li, G.Y. FedGiA: An efficient hybrid algorithm for federated learning. IEEE Trans. Signal Process. 2023, 71, 1493–1508.
- Yang, T.J.; Xiao, Y.; Motta, G.; Beaufays, F.; Mathews, R.; Chen, M. Online Model Compression for Federated Learning with Large Models. arXiv 2022, arXiv:2205.03494.
- Ahmed, S.T.; Kumar, V.V.; Singh, K.K.; Singh, A.; Muthukumaran, V.; Gupta, D. 6G enabled federated learning for secure IoMT resource recommendation and propagation analysis. Comput. Electr. Eng. 2022, 102, 108210.
- Rajasekaran, A.S.; Maria, A.; Rajagopal, M.; Lorincz, J. Blockchain enabled anonymous privacy-preserving authentication scheme for internet of health things. Sensors 2022, 23, 240.
Reference | Year | Focus | Communication Constraints | Challenges |
---|---|---|---|---|
[1] | 2021 | Characteristics and current practical applications of FL | Yes | Network heterogeneity |
[17] | 2023 | Threats and vulnerabilities of FL | No | Adversarial attacks |
[18] | 2021 | Categorization of FL | Partially discussed | Design factors |
[3] | 2020 | Comparison of different ML deployment architectures and an in-depth investigation of FL | Partially discussed | Architectural robustness |
[19] | 2021 | Advances and open challenges of FL | No | Privacy and communication |
[20] | 2021 | Characteristics of edge FL | Yes | Security and privacy |
[21] | 2021 | Non-identical and non-independent data distribution in FL | Partially discussed | Communication efficiency |
[22] | 2022 | FL in smart healthcare | No | Design factors |
[23] | 2023 | Blockchain empowered FL | No | Privacy and security |
[24] | 2022 | Security aspects of FL | No | Privacy and security |
[25] | 2022 | Implementation of FL in centralized, decentralized, and heterogeneous approaches | Partially discussed | Network heterogeneity |
[26] | 2022 | Integration of FL with industrial IoT | No | Privacy preservation |
[27] | 2023 | FL in wireless networks | Yes | High communication costs |
[28] | 2023 | Review of existing studies on communication constraints in FL | Yes | Communication costs |
[29] | 2023 | Threats to and flaws in the FL strategy | No | Privacy and security |
[30,31] | 2020 | FL in mobile edge computing | Partially discussed | Design factors |
[32] | 2020 | Personalization of FL | No | Client selection |
Category | Description |
---|---|
Definition | FL is a machine learning setting where the goal is to train a model across multiple decentralized edge devices or servers holding local data samples, without explicitly exchanging data samples. |
Key Components | The main elements of FL include the client devices holding local data, the central server that coordinates the learning process, and the machine learning models being trained. |
Workflow | The typical FL cycle is as follows: (1) The server initializes the model and sends it to the clients; (2) Each client trains the model locally using its data; (3) The clients send their locally updated models or gradients to the server; (4) The server aggregates the received models (typically by averaging); (5) Steps 2–4 are repeated until convergence. A minimal code sketch of this cycle follows this table. |
Advantages | The benefits of FL include (1) privacy preservation, as raw data remain on the client; (2) a reduction in bandwidth usage, as only model updates are transferred, not the data; (3) the potential for personalized models, as models can learn from local data patterns. |
Challenges | FL faces several challenges, including (1) communication efficiency; (2) heterogeneity in terms of computation and data distribution across clients; (3) statistical challenges due to non-iid data; (4) privacy and security concerns. |
Communication Efficiency Techniques | Communication efficiency can be improved using techniques such as (1) federated averaging, which reduces the number of communication rounds; (2) model compression techniques, which reduce the size of model updates; (3) parameter quantization or sparsification. |
Data Distribution | In FL, data are typically distributed in a non-iid manner across clients due to the nature of edge devices. This unique distribution can lead to statistical challenges and influence the final model’s performance. |
Evaluation Metrics | Evaluation of FL models considers several metrics: (1) global accuracy, measuring how well the model performs on the entire data distribution; (2) local accuracy, measuring performance on an individual client's data; (3) communication rounds, indicating the number of training iterations; (4) data efficiency, which considers the amount of data needed to reach a certain level of accuracy. |
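To make the workflow row above concrete, the following is a minimal, self-contained Python sketch of the federated averaging cycle. The function names and the toy least-squares objective are hypothetical illustrations, not the interface of any particular FL framework:

```python
import numpy as np

def local_update(global_weights, data, labels, lr=0.1, epochs=1):
    """Step (2): one client's local training. A toy least-squares model
    stands in for an arbitrary local objective (hypothetical)."""
    w = global_weights.copy()
    for _ in range(epochs):
        grad = data.T @ (data @ w - labels) / len(labels)
        w -= lr * grad
    return w

def federated_averaging(clients, rounds=10, dim=5):
    """Steps (1)-(5): broadcast, train locally, upload, aggregate by a
    data-size-weighted average, and repeat."""
    global_w = np.zeros(dim)                      # (1) server initializes the model
    for _ in range(rounds):
        updates, sizes = [], []
        for data, labels in clients:              # (2) local training on each client
            updates.append(local_update(global_w, data, labels))
            sizes.append(len(labels))             # (3) clients upload their updates
        global_w = np.average(updates, axis=0, weights=sizes)  # (4) aggregation
    return global_w                               # (5) repeated until convergence

# Toy usage: three clients holding different amounts of local data.
rng = np.random.default_rng(0)
true_w = rng.normal(size=5)
clients = []
for n in (20, 50, 80):
    X = rng.normal(size=(n, 5))
    clients.append((X, X @ true_w + 0.01 * rng.normal(size=n)))
print(federated_averaging(clients))
```

Note that only the weight vector travels over the network in this sketch; the per-round payload is the model size times the number of participating clients, which is exactly the cost that the compression techniques surveyed later aim to reduce.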
Reference | Focus | Overview |
---|---|---|
[49] | Client selection | The algorithm recognizes each client's degree of non-IID data and selects clients with lower degrees to train the models more frequently. |
[50] | Client selection | Optimizes the trade-off between maximizing the number of selected clients and minimizing the energy drawn from the batteries of the selected clients in FL. |
[51] | Resource management | The study uses cluster heads to communicate with the cloud server through edge aggregation, where clients upload their local models to their respective cluster heads. A joint communication and computation resource management scheme is also formulated through efficient client selection to achieve global cost minimization. |
[52] | Client selection | The study divides clients into tiers based on their training performance. It selects clients from the same tier in each training round to mitigate the straggler problem. It employs an adaptive tier selection approach to update the tiering on the fly based on the observed training performance and accuracy. |
[53] | Communication efficiency | The paper proposes the "In-Edge AI" framework that integrates deep reinforcement learning and FL with mobile edge systems in order to optimize mobile edge computing, caching, and communication. |
[54] | Edge resource management | The study proposes a DTWN model and formulates an edge association problem addressed with FL. A multi-agent deep reinforcement learning-based algorithm is proposed to solve it. In addition, the study considers a joint edge association and communication resource allocation problem to minimize communication costs. |
[55] | Edge resource management | The paper proposes a framework called concurrent federated reinforcement learning. Specifically, it protects the privacy of both the server and the edge node with the assistance of blockchain. |
[56] | Edge resource management | The paper proposes an FL framework that can securely update data with the help of parallel blockchains. It adopts a two-phase commit protocol and defines an ML-based auction scheme for price optimization. |
[57] | Incentive mechanism | The paper presents a privacy-preserving incentive mechanism for encouraging users to join the network. Specifically, it provides a rigorous convergence analysis and derives a set of optimal contracts under security-demand and budget constraints for each worker. |
[58] | Structured updates | The study presents an FL framework for autonomous driving. With the help of MEC nodes and blockchain, the system achieves lower latency and more accurate results across vehicles, even in the presence of malicious vehicles and MEC nodes. |
[59] | Incentive mechanism | The paper proposes an FL-based autonomous vehicle controller. Specifically, the study uses a contract-theoretic incentive mechanism to speed up training and applies optimization methods to reduce the system's communication and computation costs. |
[60] | Incentive mechanism | The paper proposes a coded FL method based on an evolutionary game and a deep learning method for intelligent resource allocation. The results show that the approach mitigates the overall system computation and communication latency. |
[61] | Optimization technique | The paper designs a client–edge–cloud hierarchical FL architecture and develops the HierFAVG algorithm, which allows edge servers to perform partial model aggregation for higher efficiency. |
[62] | Client selection | The study proposes a two-level hierarchical FL framework and designs two incentive mechanisms for resource allocation: a cluster selection mechanism for workers based on an evolutionary game, and a deep-learning-based auction mechanism for the model owner's selection of cluster heads. |
[63] | Resource management | The paper considers maximizing model accuracy in wireless FL under limited training time and latency constraints. It proposes a joint device scheduling and resource allocation policy. |
[64] | Client selection | The study presents a Clients' Eligibility Protocol (CEP) to work efficiently with heterogeneous clients in practical industrial scenarios. The CEP uses a trusted authority to calculate each client's eligibility score from local computing resources, such as bandwidth, memory, and battery life, and selects the most resourceful clients for training (a selection sketch follows this table). |
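The CEP-style row above lends itself to a short illustration. The sketch below scores clients by a weighted mix of normalized bandwidth, memory, and battery level and picks the top k. The weights, field names, and normalization constants are all assumptions made for illustration, not the actual protocol from [64]:

```python
def eligibility_score(client, weights=(0.4, 0.3, 0.3)):
    """Hypothetical eligibility score: a weighted sum of normalized
    bandwidth, free memory, and battery level."""
    w_bw, w_mem, w_bat = weights
    return (w_bw * client["bandwidth_mbps"] / 100.0
            + w_mem * client["free_memory_mb"] / 1024.0
            + w_bat * client["battery_pct"] / 100.0)

def select_clients(pool, k):
    """Select the k most resourceful clients for the next training round."""
    return sorted(pool, key=eligibility_score, reverse=True)[:k]

pool = [
    {"id": "a", "bandwidth_mbps": 20, "free_memory_mb": 512,  "battery_pct": 90},
    {"id": "b", "bandwidth_mbps": 80, "free_memory_mb": 2048, "battery_pct": 40},
    {"id": "c", "bandwidth_mbps": 5,  "free_memory_mb": 256,  "battery_pct": 15},
]
print([c["id"] for c in select_clients(pool, k=2)])  # -> ['b', 'a']
```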
Resource | Edge Resource | Server Resource |
---|---|---|
Data Storage | Local Storage | Distributed Storage |
Data Aggregation | Local Aggregation | Distributed Aggregation |
Data Processing | Local Processing | Cloud Processing |
Data Security | Local Encryption | Cloud Encryption |
Device Heterogeneity | Device Adaptability | Incentive Mechanism | Adaptive Aggregation |
---|---|---|---|
Categorize devices | Assess device capability | Assign rewards | Aggregate according to device type |
Evaluate device resources | Monitor device performance | Balance rewards | Adjust aggregation strategy |
Consider device availability | Check device compatibility | Set rewards based on participation | Consider data privacy |
Analyze device specifications | Identify device limitations | Assign rewards based on data quality | Adapt to changes in data distribution |
Evaluate device trustworthiness | Assess device reliability | Offer rewards for data computation | Change aggregation frequency |
Consider device latency | Determine device storage capacity | Provide rewards for data transmission | Monitor device performance |
Check device battery level | Examine device memory usage | Create rewards for data accuracy | Adapt to changing device configurations |
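One way to read the adaptive aggregation column above is as a weighting problem: devices that are more reliable and participate more regularly should influence the global model more. The sketch below is one simple, hypothetical instance of that idea; the scores and their combination rule are illustrative assumptions rather than a published scheme:

```python
import numpy as np

def adaptive_aggregate(updates, reliability, participation):
    """Weight each device's update by reliability x participation rate,
    then normalize so the weights sum to one (illustrative rule)."""
    weights = np.asarray(reliability) * np.asarray(participation)
    weights = weights / weights.sum()
    return np.average(updates, axis=0, weights=weights)

updates = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([0.5, 0.5])]
print(adaptive_aggregate(updates,
                         reliability=[0.9, 0.5, 0.8],
                         participation=[1.0, 0.6, 0.9]))
```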
Technique | Pros | Cons |
---|---|---|
Compression Schemes | ||
Quantization | Reduced communication | Information loss |
Sparsification | Lower bandwidth usage | Increased computation |
Low-rank factorization | Efficient storage | Complexity in updating |
Structured Updates | ||
Gradient sparsification | Reduced communication | Limited expressiveness |
Weight differencing | Low memory requirement | Sensitivity to noise |
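The trade-offs in this table can be made concrete with a short sketch that combines two of the listed techniques: top-k gradient sparsification followed by uniform 8-bit quantization of the surviving values. The function names and the top-k/8-bit choices are illustrative assumptions; both steps are lossy, which is the "information loss" cost noted above:

```python
import numpy as np

def topk_sparsify(grad, k):
    """Sparsification: keep only the k largest-magnitude entries."""
    idx = np.argsort(np.abs(grad))[-k:]
    return idx, grad[idx]

def quantize_8bit(values):
    """Quantization: map floats uniformly onto 256 levels (lossy)."""
    lo, hi = float(values.min()), float(values.max())
    scale = (hi - lo) / 255.0 or 1.0          # guard against a constant vector
    q = np.round((values - lo) / scale).astype(np.uint8)
    return q, lo, scale

def dequantize(q, lo, scale):
    """Server-side reconstruction of the quantized values."""
    return q.astype(np.float32) * scale + lo

grad = np.random.default_rng(1).normal(size=10_000).astype(np.float32)
idx, vals = topk_sparsify(grad, k=100)        # send 100 values instead of 10,000
q, lo, scale = quantize_8bit(vals)            # 1 byte per value instead of 4
approx = dequantize(q, lo, scale)
print(f"payload: {grad.nbytes} B dense -> {q.nbytes + idx.nbytes} B compressed")
```

The indices could be packed more tightly in practice; the point is that the upload shrinks by well over an order of magnitude at the cost of a lossy reconstruction on the server.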
Research Challenge | Brief Description |
---|---|
High Communication Overhead | FL requires repeatedly transferring model updates between many clients and the server; for large models, this can lead to high communication costs. |
Data Heterogeneity | Differences in data distribution across devices can affect model performance and require efficient communication strategies. |
Latency | Variations in network conditions and device capabilities can cause latency issues, requiring efficient communication solutions. |
Bandwidth Limitations | A limited bandwidth can cause slow model training and update propagation. The efficient use of the available bandwidth is a challenge. |
Stragglers | Some devices may be slow to compute updates or fail to send updates, slowing down the learning process. The efficient handling of stragglers can improve communication efficiency. |
Scalability | As the number of participating devices increases, efficiently managing communications becomes more challenging. |
Security | Efficiently ensuring secure and privacy-preserving communication is a significant challenge. |
Device Failures | Devices may fail or drop out during the learning process, requiring robust communication protocols to handle these situations. |
Resource Constraints | Devices participating in FL may have different computational resources, which can create challenges for efficient communication. |
Data Synchronization | Ensuring all devices have the latest model updates for efficient learning can be a challenge, especially in asynchronous FL settings. |
Noise in Gradients | Due to the decentralized nature of FL, there can be a high level of noise in the gradient updates, affecting the overall communication efficiency. |
Compressed Communication | Due to bandwidth limitations, it may be necessary to compress data during transmission, which can lead to a loss of information and affect the learning process. |
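Several of these challenges interact: stragglers, device failures, and latency variation all argue against waiting on every client. A common mitigation, sketched below under assumed latency distributions and parameter names, is to over-select clients and aggregate once enough updates arrive or a deadline passes:

```python
import random

def run_round(clients, needed, deadline_s):
    """Over-selection with a deadline: simulate each client's response time,
    keep at most `needed` updates that arrive before `deadline_s`, and
    treat the rest as stragglers to be dropped this round."""
    arrivals = sorted((random.expovariate(1.0 / c["mean_latency_s"]), c["id"])
                      for c in clients)
    received = [cid for t, cid in arrivals if t <= deadline_s][:needed]
    return received  # only these clients' updates are aggregated

random.seed(42)
clients = [{"id": i, "mean_latency_s": random.uniform(0.5, 5.0)} for i in range(10)]
print(run_round(clients, needed=5, deadline_s=3.0))
```

Deadline-based schemes of this kind trade a small per-round accuracy hit, since straggler updates are discarded, for much more predictable round times.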