Applying Federated Learning in Software-Defined Networks: A Survey
Abstract
1. Introduction
- We emphasize the combination of FL and SDNs as an important paradigm shift toward collaborative ML model training with better data privacy. We highlight the unique features of FL and SDNs and discuss how they can be combined. We discuss SDNs with a distributed control plane, consisting of multiple controllers organized in layers, which enable scalable FL approaches that combine edge computing and cloud computing.
- We identify three major challenges: incentive mechanisms for participants, privacy and security strategies for parameter communication, and aggregation algorithms for generating high-performance global models. For each challenge, we present a comprehensive analysis of existing mechanisms, approaches, and solutions explored in current research.
- Future research issues, such as the evaluation of participants, anomaly detection, and the scalability of FL are also suggested and discussed.
2. Related Work
3. Federated Learning
3.1. Basic Concepts and Components
- Initializing the training task. The central server passes the training task and broadcasts the current global model to each client, as shown by (1) in Figure 1b.
- Local model training. Clients replace their local models with the received global model and continue to train new local models on their local data, as shown by (2) in Figure 1b. The training process continues until the number of iterations reaches the maximum.
- Local model uploading. Clients upload their local models to the central server, as shown by (3) in Figure 1b.
- Aggregating local models. The central server applies aggregation algorithms to generate a global model based on the uploaded local models, as shown by (4) in Figure 1b.
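The four steps above make up one round of a typical FL scheme such as federated averaging (FedAvg) [34]. A minimal single-machine sketch (the clients, their toy linear-regression data, and all hyperparameters are illustrative, not taken from the survey):

```python
import random

def local_train(global_model, data, lr=0.1, epochs=5):
    """Step (2): gradient steps on local data for a 1-D linear model y = w*x."""
    w = global_model
    for _ in range(epochs):
        for x, y in data:
            w -= lr * 2 * (w * x - y) * x   # squared-error gradient
    return w

def aggregate(local_models, sizes):
    """Step (4): FedAvg weights each local model by its share of the data."""
    total = sum(sizes)
    return sum(w * n for w, n in zip(local_models, sizes)) / total

# Hypothetical clients: each holds noisy samples of y = 3*x.
random.seed(0)
clients = [[(x, 3 * x + random.gauss(0, 0.01))
            for x in [random.random() for _ in range(20)]] for _ in range(4)]

global_model = 0.0
for _ in range(10):                                   # repeated rounds
    local = [local_train(global_model, d) for d in clients]       # (1)-(2)
    global_model = aggregate(local, [len(d) for d in clients])    # (3)-(4)

print(round(global_model, 2))   # converges near the true slope 3
```

In a real deployment the server only ever sees the uploaded local models, never the clients' raw `(x, y)` samples.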
3.2. Classification of FL
- Horizontal federated learning. Horizontal FL refers to approaches whose data sets have substantial overlap in features but few users in common. As illustrated by Figure 2a, horizontal FL extracts the same features from different users in the original data sets to train local models and aggregate the global model. For instance, consider applying a horizontal FL approach to two real banks. Each bank has different customers because the banks are located in two different cities, but the types of information recorded for each customer are similar, as the services the banks provide are similar. This demonstrates a use case in which the data share the same feature space but differ in sample space, which meets the requirements of horizontal FL. Many currently proposed FL approaches fall into the category of horizontal FL; the most famous one was proposed by Google, wherein a global model was aggregated with the help of Android mobile phones that collected information from different clients [34]. Horizontal FL has also been used in the healthcare domain [2]. Our previous work applied horizontal FL to classify elephant flows over IoT [35].
- Vertical federated learning. Contrary to horizontal FL, vertical FL suits scenarios where data sets have the same users but few user features in common, as presented in Figure 2b. An intermediate third party is useful for the participants of vertical FL, as it can offer authentication and encryption for the training tasks. Consider applying vertical FL to a supermarket and a bank in the same region. Although most users of the supermarket and the bank are the same because of the shared region, those users have widely varying features, because supermarkets and banks provide completely different services to customers. Nevertheless, these two very different institutions can contribute their data to produce a global model in a vertical manner, even though they are prohibited from communicating and do not know what information the other provided. Cheng et al. [36] studied vertical FL and proposed a privacy-preserving system called SecureBoost, which compiled information from multiple groups with common user samples but different feature sets to enhance model training. However, Yang et al. [37] argued that third parties are not necessary.
- Federated transfer learning. When two data sets have very little overlap in both user features and user samples, the training task meets the requirements of neither horizontal FL nor vertical FL. In this case, federated transfer learning can be applied. As illustrated in Figure 2c, given a supermarket in city A and a bank in city B, their customers and services have almost no overlap. Federated transfer learning finds similarities between these two different data sets when they participate in model training. References [38,39] studied federated transfer learning.
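The three categories above differ in how the data is partitioned. A toy sketch of the sample-space versus feature-space splits (the user IDs and features are invented purely for illustration):

```python
# Toy records: {user_id: feature dict}. Illustrative only.
bank_a = {"u1": {"age": 30, "income": 50}, "u2": {"age": 41, "income": 70}}
bank_b = {"u3": {"age": 25, "income": 40}, "u4": {"age": 52, "income": 90}}

# Horizontal FL: same feature space, (mostly) disjoint users.
# Training effectively pools the union of rows without sharing them.
assert not (set(bank_a) & set(bank_b))        # no shared users here
horizontal_users = set(bank_a) | set(bank_b)

supermarket = {"u1": {"visits": 12}, "u2": {"visits": 3}, "u5": {"visits": 8}}

# Vertical FL: shared users, disjoint features.
# Training effectively joins columns on the common user IDs.
common = set(bank_a) & set(supermarket)        # {"u1", "u2"}
joined = {u: {**bank_a[u], **supermarket[u]} for u in common}
print(sorted(joined["u1"]))                    # → ['age', 'income', 'visits']
```

Federated transfer learning covers the remaining case, where both the user sets and the feature sets barely overlap.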
4. Software-Defined Networks
5. Federated Learning in Software-Defined Networks
5.1. Combination of FL and SDNs
5.2. Challenges in Applying FL in SDNs
- Incentive mechanisms. Incentive mechanisms encourage clients to participate in FL training and to contribute more to global model aggregation. Although FL over SDNs can compel all controllers in the SDN control plane to participate, incentive mechanisms may still be used to distribute rewards according to each participant's contribution, so that clients are willing to contribute more, higher-quality data and resources to train local models and improve the global model. Additionally, incentive mechanisms allow the central server and participants to balance the resources contended for by FL and the other tasks running on SDN controllers.
- Security and privacy. Security and privacy have always been key issues in FL research. Although the local controllers, as the participants in FL approaches, only need to transfer local models instead of local data, there is still a risk of privacy leakage. When applying FL in SDNs, the openness and programmability of SDNs make FL more flexible and manageable on the one hand, but increase the security and privacy risks on the other. In particular, latent security weaknesses in the SDN architecture, controllers, and interfaces may completely compromise the aggregation, the related models, and the data.
- Global model aggregation. The performance of FL approaches is directly affected by the aggregation mechanisms involved. Although SDN controllers can use built-in protocols and interfaces to alleviate this problem, the speed and quality of such mechanisms are the major concerns when deploying FL in SDNs.
5.3. Dealing with Asymmetry in FL and SDNs
6. Incentive Mechanisms
6.1. Game Theory
- The Stackelberg model is a classical game-theoretic model with a leader and followers. In FL, the central server is typically the leader and the clients are the followers: the followers report their available resources and unit prices to the leader, and the leader distributes unequal returns within its budget so as to minimize or maximize a particular objective. The roles can be assigned in various ways. Sarikaya et al. [45] defined the model owner (server) as the leader and the workers (mobile devices) as the followers to improve FL efficiency; Khan et al. [46] set the base station as the leader and the participating users as followers to improve the accuracy of the global model; Feng et al. [47] specified the mobile devices as leaders and the model owner as the follower to determine training data sizes when applying FL in a communication framework; and Sun et al. [48] applied the Stackelberg model to adjust the number and level of customers participating in global model aggregation. While many researchers modeled the Stackelberg game only between the leader and the followers, Pandey et al. [49], Xiao et al. [50], and Hu et al. [51] argued that the game should also exist among the followers. They divided the Stackelberg game into two stages: the leader releases rewards, and the followers then maximize their benefits. Zhan et al. [52] introduced deep reinforcement learning on top of the two-stage Stackelberg model to determine the optimal pricing and training strategies for the central server and the participants (edge nodes), respectively. However, many researchers believed that it is difficult to find the best strategy based on game theory alone, owing to the complexity of the networking environment.
Table: Incentive mechanisms for FL.

Approach                Reference               Objectives
Game: Stackelberg       [45,46,48,49,50,51,52]  Leader: central server; follower: participants
Game: Stackelberg       [47]                    Leader: mobile devices; follower: model owner
Game: auction           [53,54,55,56,57]        Buyer: central server; seller: clients
Game: non-cooperative   [58,59,60]              Stimulus sharing by trading
Game: Shapley           [61,62]                 Calculate contribution
Game: Nash equilibrium  [63,64]                 Collaboration strategy
Contract theory         [65]                    Decentralized incentive
Contract theory         [66,67]                 Based on data and work
Contract theory         [68,69]                 Dynamic renewal contract
Contract theory         [70,71,72]              Multi-dimensional information
Contract theory         [73]                    Value-driven incentive
Contract theory         [74]                    Built on Ethereum
Fairness theory         [75,76]                 Reputation incentive protocol
Fairness theory         [77,78,79]              Reporting costs truthfully
Fairness theory         [76]                    Rewards and punishments

- Auction theory is a branch of game theory involving auctioneers and bidders. When auction theory is applied to encourage clients to participate in FL, the auctioneer releases the task, the bidders bid the cost they require for FL training, and the auctioneer determines the winners. Le et al. [53,54] let the base station play the role of auctioneer and the mobile clients act as bidders: the base station released the task, the mobile clients submitted bids for their required cost, and the base station selected the winners using the relevant algorithm and paid the rewards. Yu et al. [55] also attracted devices with high-quality data through dynamic payoff sharing based on auction theory, eliminating the mismatch between returns and contributions. However, the bidding price is not the only element in an auction; more than 50% of outcomes may be influenced by other factors [80], so it is important to consider the multiple dimensions of the data. For instance, Zeng et al. [56] proposed a multi-dimensional incentive framework for FL to derive optimal strategies for edge devices, and Jiao et al. [57] designed a reverse multi-dimensional auction system to better evaluate the value of each participant.
- A non-cooperative game is one in which individual players compete with each other; applying non-cooperative games in FL implies that the central server and clients do not cooperate. Tang et al. [58] maximized efficiency while balancing participants' situations and budgets through a non-cooperative game, Pan et al. [59] proposed applying a non-cooperative game to determine the best security strategy for nodes and to attract participants, and Chai et al. [60] employed a non-cooperative game in blockchain-based internet of vehicles (IoV) networks.
- The Shapley value is used in cooperative game theory to fairly distribute both gains and costs among several actors working in a coalition; the gains and costs are computed with respect to the coalition as a whole. Song et al. [61] calculated the Shapley value of data providers in FL to determine their returns, and Sim et al. [62] proposed a reward scheme that computed the Shapley value to weigh the contributions of devices when allotting rewards.
- The Nash equilibrium is a fundamental game-theoretic concept introduced by John Nash [81]. In a Nash equilibrium, no player can unilaterally change their strategy to increase their payoff: all strategies are mutual best responses to each other, so no player has any incentive to deviate unilaterally from the given strategy. Gupta et al. [63,64] applied such a game model to enable FL over IoT networks.
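As an illustration of the Shapley-value idea above, the following sketch computes exact Shapley values by averaging marginal contributions over all join orders. The coalition worth function `v` (here, a hypothetical accuracy proportional to pooled data size) stands in for whatever valuation a real reward scheme would use:

```python
from itertools import permutations

def shapley(players, value):
    """Exact Shapley values: average marginal contribution over all join orders.
    `value` maps a frozenset coalition to its worth (e.g., model accuracy)."""
    phi = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = frozenset()
        for p in order:
            phi[p] += value(coalition | {p}) - value(coalition)
            coalition = coalition | {p}
    return {p: phi[p] / len(orders) for p in phi}

# Hypothetical worth: accuracy grows with pooled data size, capped at 1.0.
data_size = {"A": 60, "B": 30, "C": 10}
v = lambda s: min(1.0, sum(data_size[p] for p in s) / 100)

rewards = shapley(list(data_size), v)
# Worth is additive here (the cap never binds), so each reward
# equals the client's own share: A=0.6, B=0.3, C=0.1.
print(rewards)
```

With a non-additive worth function (e.g., diminishing returns on data), the Shapley value would credit early, complementary contributions more, which is exactly why these schemes use it for fair reward allocation. Exact computation is exponential in the number of players, so practical schemes sample permutations instead.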
6.2. Contract Theory
6.3. Fairness Theory
7. Security and Privacy
7.1. Information Encryption
- Differential privacy (DP) is a common privacy-protection method that adds noise to sensitive data and parameters to cause deviations [84]. For instance, Wei et al. [85] added Gaussian noise and adjusted its variance to protect the global model against differential attacks in FL based on stochastic gradient descent; Geyer et al. [86] encrypted private information by adding Gaussian noise and proposed a differential privacy scheme that hides user contributions during FL training with little loss of model accuracy; and Triastcyn et al. [87] added noise after obtaining the privacy loss boundary determined by the data distribution, yielding a Bayesian differential privacy mechanism. Zhu et al. [88] studied hybrid DP and ML algorithms in FL. Some research showed that a trusted third party is needed to add the noise in FL; this class of approach is called global differential privacy. Global differential privacy has the advantage that less noise is added, so a highly accurate global model can be achieved. It has been applied to FL in healthcare [89] and personal [90] scenarios. In contrast with global differential privacy, local differential privacy allows clients to perturb data locally before uploading. Cao et al. [91] argued that local differential privacy is more credible than global differential privacy and developed a protection system that adds Gaussian noise before model upload to enhance the security of a power system. Lu et al. [92] applied local differential privacy to asynchronous FL in the field of IoVs, and Zhao et al. [93] designed four local differential privacy mechanisms that perturb the gradients to keep IoVs away from threats. Some researchers observed that the added noise is directly influenced by the dimensionality of the data and that not every dimension has the same importance. Therefore, Liu et al. [94] proposed a Top-k concept that selects the most crucial dimensions based on their contributions, Truex et al. [95] applied local differential privacy to data with compressed dimensions, and Sun et al. [96] designed a parameter shuffling system for multi-dimensional data to mitigate the privacy degradation caused by local differential privacy.
- Secure multi-party computation (SMC) is a subfield of cryptography that enables collaborative computing among a group of mutually untrusted parties while protecting their private information. Applying SMC in FL allows multiple participants to jointly compute the value of an objective function without revealing their own inputs. Zhu et al. [97] analyzed the relationship between FL and SMC, and Bonawitz et al. [98] designed a secure communication protocol that runs in a constant number of rounds and tolerates a number of clients dropping out of FL. Kanagavelu et al. [99] proposed a two-phase MPC-enabled FL framework, wherein participants elect a committee to offer SMC services, reducing the overhead of SMC while maintaining the scalability of FL. In [100], Sotthiwat et al. suggested encrypting only the key parameters of multi-dimensional gradients to gain the benefit of SMC encryption while limiting communication cost.
- Homomorphic encryption (HE) is a cryptographic technique based on the computational complexity of mathematical problems. Using HE in FL first requires clients to encrypt their local models or parameters before uploading them to the central server, together with a scheme for computing over the encrypted data. The central server aggregates the encrypted data using the provided scheme and broadcasts the result to the clients, who finally decrypt the received data to obtain the updated model. Gao et al. [39] proposed an end-to-end privacy-protection scheme using HE for federated transfer learning, and Zhou et al. [101] combined secret sharing and HE to protect privacy. Zhang et al. [102] introduced HE into Bayesian approaches to protect the privacy of vertical FL, and references [103,104] added a coordinator, apart from the clients and the central server, responsible for specifying the security protocol, including keys and processing functions. Stripelis et al. [105] proposed an HE mechanism based on CKKS, an approximate HE scheme proposed in 2017 [106], to reduce the difficulty of encryption and decryption. Ma et al. [107] designed a multi-key HE protocol on the basis of CKKS to allow collaborating participants to encrypt data with different keys and jointly decrypt the result. HE is secure, but its communication cost is also significant; many researchers consider communication a bottleneck restricting the further development of HE in FL. To reduce such communication costs, Zhang et al. [108] proposed encrypting a batch of quantized gradients at one time instead of encrypting each gradient at full precision, which effectively reduces the total amount of ciphertext and communication loss. Jiang et al. [109] devised a symmetric-key system that introduces sparsification appropriate to the encrypted object.
- Encryption combinations combine multiple encryption methods to further improve the security of FL. The studies in [110,111,112] designed encryption mechanisms combining DP and SMC: local clients encrypt their local models or parameters through local differential privacy before uploading, and the central server completes secure aggregation based on SMC. In these schemes the noise added by local differential privacy is determined by the local clients themselves. In contrast, the study in [113] put forward an encryption combination wherein the added noise is generated by neighboring clients, so that local clients do not know the noise being added to their models. Besides combining SMC and DP, the combination of HE and DP is another notable approach. Hao et al. [114] proposed adding noise to each local gradient before encryption; a lightweight additive HE was used to reduce cost, and DP was used to avoid the threat of collusion.
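The local differential privacy idea discussed above is often realized with a Gaussian mechanism: clip each update to bound its sensitivity, then add noise scaled to the clipping bound. A minimal sketch (the clipping norm and noise level are illustrative choices, not values from the cited works):

```python
import math
import random

def dp_perturb(grad, clip_norm=1.0, sigma=0.5, rng=random):
    """Clip the gradient to L2 norm `clip_norm`, then add Gaussian noise.
    The noise std scales with the clip bound (the mechanism's sensitivity)."""
    norm = math.sqrt(sum(g * g for g in grad))
    scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
    clipped = [g * scale for g in grad]
    return [g + rng.gauss(0.0, sigma * clip_norm) for g in clipped]

random.seed(1)
grad = [3.0, 4.0]            # L2 norm 5 -> clipped to roughly [0.6, 0.8]
noisy = dp_perturb(grad)     # what the client actually uploads
print([round(g, 2) for g in noisy])
```

The privacy guarantee (the (ε, δ) budget) follows from `sigma`, the clipping bound, and the number of rounds; tracking that budget across rounds is what the accounting methods in the cited works provide.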
Table: Security and privacy approaches for FL.

Approach                   Ref                 Objectives
Encryption: DP             [91,92,93,94,95]    Local differential privacy
Encryption: DP             [85,86,87,89,90]    Global differential privacy
Encryption: SMC            [98,100]            High-dimensional information
Encryption: SMC            [99]                Elect a committee
Encryption: HE             [39]                Comparison with secret sharing
Encryption: HE             [101,102]           Paillier HE
Encryption: HE             [105]               Constructed with CKKS
Encryption: HE             [103,104]           Third-party encryption
Encryption: HE             [107]               Multiple-key HE
Encryption: HE             [108]               Quantify and unify
Encryption: HE             [109]               Symmetric key
Encryption: DP+SMC         [110,111,112,113]   First stage: DP; second stage: SMC
Encryption: DP+HE          [114]               First stage: DP; second stage: HE
Decentralized: blockchain  [115,116,117,118]   Manage FL frameworks
Decentralized: blockchain  [119]               Committee consensus
Decentralized: blockchain  [120]               Incorporates re-encryption
Decentralized: blockchain  [121]               Based on differential data
Decentralized: other       [36]                Tree-boosting system
Decentralized: other       [122]               Tree-based models
Intrusion detection        [123]               Multiple global models
Intrusion detection        [124,125]           Based on classical ML
Intrusion detection        [126]               Introduce teachers and students
Intrusion detection        [127,128,129,130]   Deep learning detection
Intrusion detection        [131,132]           Edge devices collaboration
Intrusion detection        [133]               Decentralized asynchronous FL
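The SMC-style secure aggregation discussed in this section (e.g., the protocol of Bonawitz et al. [98]) rests on pairwise random masks that cancel in the server's sum. A simplified sketch that omits key agreement and dropout handling (the shared `seed` stands in for pairwise key exchange):

```python
import random

def masked_updates(updates, seed=0):
    """Each pair (i, j), i < j, shares a random mask: i adds it, j subtracts it.
    Individual uploads look random, but the masks cancel in the server's sum."""
    n, dim = len(updates), len(updates[0])
    rng = random.Random(seed)          # stands in for pairwise key agreement
    masked = [list(u) for u in updates]
    for i in range(n):
        for j in range(i + 1, n):
            mask = [rng.uniform(-10, 10) for _ in range(dim)]
            for k in range(dim):
                masked[i][k] += mask[k]
                masked[j][k] -= mask[k]
    return masked

updates = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]   # clients' private updates
masked = masked_updates(updates)
total = [sum(col) for col in zip(*masked)]        # server-side aggregation
print([round(t, 6) for t in total])               # recovers the true sum [9.0, 12.0]
```

The server learns only the aggregate, never any individual update; the real protocol adds secret sharing of the mask seeds so the sum can still be recovered when some clients drop out mid-round.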
7.2. Decentralized Federated Learning
7.3. Intrusion Detection Mechanism
8. Global Model Aggregation
8.1. Aggregation Algorithms
8.2. Communication Efficiency
8.3. Nodes Selection
9. Future Research Direction
9.1. Estimating Participants
9.2. Anomaly Detection
9.3. Improving the Scalability of FL over SDN
10. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Boutaba, R.; Salahuddin, M.A.; Limam, N.; Ayoubi, S.; Shahriar, N.; Felipe, E.S.; Oscar, M.C. A comprehensive survey on machine learning for networking: Evolution, applications and research opportunities. J. Internet Serv. Appl. 2018, 9, 1–99. [Google Scholar] [CrossRef] [Green Version]
- Xu, J.; Glicksberg, B.S.; Su, C.; Walker, P.; Bian, J.; Wang, F. Federated learning for healthcare informatics. J. Healthc. Infor. Res. 2021, 5, 1–19. [Google Scholar] [CrossRef]
- Gupta, D.; Kayode, O.; Bhatt, S.; Gupta, M.; Tosun, A.S. Hierarchical Federated Learning based Anomaly Detection using Digital Twins for Smart Healthcare. arXiv 2021, arXiv:2111.12241. [Google Scholar]
- Jiang, J.C.; Kantarci, B.; Oktug, S.; Soyata, T. Federated learning in smart city sensing: Challenges and opportunities. Sensors 2020, 20, 6230. [Google Scholar] [CrossRef]
- Konečný, J.; McMahan, H.B.; Ramage, D.; Richtarik, P. Federated optimization: Distributed machine learning for on-device intelligence. arXiv 2016, arXiv:1610.02527. [Google Scholar]
- Li, T.; Sahu, A.K.; Talwalkar, A.; Smith, V. Federated learning: Challenges, methods, and future directions. IEEE Signal Process. Mag. 2020, 37, 50–60. [Google Scholar] [CrossRef]
- Guo, H.; Liu, A.; Lau, V.K.N. Analog gradient aggregation for federated learning over wireless networks: Customized design and convergence analysis. IEEE Internet Things J. 2020, 8, 197–210. [Google Scholar] [CrossRef]
- Zhang, C.; Xie, Y.; Bai, H.; Yu, B.; Li, W.; Gao, Y. A survey on federated learning. Knowl.-Based Syst. 2021, 216, 106775. [Google Scholar] [CrossRef]
- Li, Q.; Wen, Z.; Wu, Z.; Wang, N.; He, B. A survey on federated learning systems: Vision, hype and reality for data privacy and protection. IEEE Trans. Knowl. Data Eng. 2021. [Google Scholar] [CrossRef]
- Yang, Q.; Liu, Y.; Chen, T.; Tong, Y. Federated machine learning: Concept and applications. ACM Trans. Intell. Syst. Technol. 2019, 10, 1–19. [Google Scholar] [CrossRef]
- Zhu, H.; Zhang, H.; Jin, Y. From federated learning to federated neural architecture search: A survey. Complex Intell. Syst. 2021, 7, 639–657. [Google Scholar] [CrossRef]
- Karasu, S.; Altan, A.; Saraç, Z.; Hacioğlu, R. Prediction of Bitcoin prices with machine learning methods using time series data. In Proceedings of the 2018 26th IEEE Signal Processing and Communications Applications Conference (SIU), Izmir, Turkey, 2–5 May 2018; pp. 1–4. [Google Scholar]
- Altan, A.; Karasu, S.; Bekiros, S. Digital currency forecasting with chaotic meta-heuristic bio-inspired signal processing techniques. Chaos Solitons Fractals 2019, 126, 325–336. [Google Scholar] [CrossRef]
- Savazzi, S.; Nicoli, M.; Rampa, V. Federated learning with cooperating devices: A consensus approach for massive IoT networks. IEEE Internet Things J. 2020, 7, 4641–4654. [Google Scholar] [CrossRef] [Green Version]
- Wang, S.; Tuor, T.; Salonidis, T.; Leung, K.K.; Makaya, C.; He, T.; Chan, K. Adaptive federated learning in resource constrained edge computing systems. IEEE J. Sel. Areas Commun. 2019, 37, 1205–1221. [Google Scholar] [CrossRef] [Green Version]
- Hamdan, M.; Hassan, E.; Abdelaziz, A.; Elhigazi, A.; Mohammed, B.; Khan, S.; Vasilakos, A.V.; Marsono, M.N. A comprehensive survey of load balancing techniques in software-defined network. J. Netw. Comput. Appl. 2021, 174, 102856. [Google Scholar] [CrossRef]
- Foerster, K.T.; Schmid, S.; Vissicchio, S. Survey of consistent software-defined network updates. IEEE Commun. Surv. Tutor. 2018, 21, 1435–1461. [Google Scholar] [CrossRef] [Green Version]
- Huang, C.H.; Lee, T.H.; Chang, L.; Lin, J.R.; Horng, G. Adversarial attacks on SDN-based deep learning IDS system. In Proceedings of the 2019 International Conference on Mobile and Wireless Technology, Singapore, 25–27 June 2018; pp. 181–191. [Google Scholar]
- Balasubramanian, V.; Aloqaily, M.; Reisslein, M.; Scaglione, A. Intelligent Resource Management at the Edge for Ubiquitous IoT: An SDN-Based Federated Learning Approach. IEEE Netw. 2021, 35, 114–121. [Google Scholar] [CrossRef]
- Lyu, L.; Yu, H.; Ma, X.; Sun, L.; Zhao, J.; Yang, Q.; Yu, P.S. Privacy and robustness in federated learning: Attacks and defenses. arXiv 2020, arXiv:2012.06337. [Google Scholar]
- Yin, X.; Zhu, Y.; Hu, J. A comprehensive survey of privacy-preserving federated learning: A taxonomy, review, and future directions. ACM Comput. Surv. 2021, 54, 1–36. [Google Scholar] [CrossRef]
- Mothukuri, V.; Parizi, R.M.; Pouriyeh, S.; Huang, Y.; Dehghantanha, A.; Srivastava, G. A survey on security and privacy of federated learning. Future Gener. Comput. Syst. 2021, 15, 619–640. [Google Scholar] [CrossRef]
- Khan, L.U.; Saad, W.; Zhu, H.; Hossain, E.; Hong, C.S. Federated learning for internet of things: Recent advances, taxonomy, and open challenges. IEEE Commun. Surv. Tutor. 2021, 23, 1759–1799. [Google Scholar] [CrossRef]
- Pham, Q.V.; Dev, K.; Maddikunta, P.K.R.; Gadekallu, T.R.; Huynh-The, T. Fusion of federated learning and industrial internet of things: A survey. arXiv 2021, arXiv:2101.00798. [Google Scholar]
- Imteaj, A.; Thakker, U.; Wang, S.; Li, J.; Amini, M.H. A survey on federated learning for resource-constrained IoT devices. IEEE Internet Things J. 2021, 9, 1–24. [Google Scholar] [CrossRef]
- Gadekallu, T.R.; Pham, Q.V.; Huynh-The, T.; Bhattacharya, S.; Maddikunta, P.K.R.; Liyanage, M. Federated Learning for Big Data: A Survey on Opportunities, Applications, and Future Directions. arXiv 2021, arXiv:2110.04160. [Google Scholar]
- Xia, Q.; Ye, W.; Tao, Z.; Wu, J.; Li, Q. A Survey of Federated Learning for Edge Computing: Research Problems and Solutions. High-Confid. Comput. 2021, 1, 100008. [Google Scholar] [CrossRef]
- Nguyen, D.C.; Ding, M.; Pham, Q.V.; Pathirana, P.N.; Le, L.B.; Seneviratne, A.; Li, J.; Niyato, D.; Poor, H.V. Federated learning meets blockchain in edge computing: Opportunities and challenges. IEEE Internet Things J. 2021, 8, 12806–12825. [Google Scholar] [CrossRef]
- Liu, Y.; Yuan, X.; Xiong, Z.; Kang, J.; Wang, X.; Niyato, D. Federated learning for 6G communications: Challenges, methods, and future directions. China Commun. 2020, 17, 105–118. [Google Scholar] [CrossRef]
- Yang, Z.; Chen, M.; Wong, K.K.; Poor, H.V.; Cui, S. Federated learning for 6G: Applications, challenges, and opportunities. arXiv 2021, arXiv:2101.01338. [Google Scholar] [CrossRef]
- Niknam, S.; Dhillon, H.S.; Reed, J.H. Federated learning for wireless communications: Motivation, opportunities, and challenges. IEEE Commun. Mag. 2020, 58, 46–51. [Google Scholar] [CrossRef]
- Shahid, O.; Pouriyeh, S.; Parizi, R.M.; Sheng, Q.Z.; Srivastava, G.; Zhao, L. Communication Efficiency in Federated Learning: Achievements and Challenges. arXiv 2021, arXiv:2107.10996. [Google Scholar]
- Zhan, Y.; Zhang, J.; Hong, Z.; Wu, L.; Li, P.; Guo, S. A Survey of Incentive Mechanism Design for Federated Learning. IEEE Trans. Emerg. Top. Comput. 2021. [Google Scholar] [CrossRef]
- McMahan, H.B.; Moore, E.; Ramage, D.; Arcas, B.A.Y. Federated learning of deep networks using model averaging. arXiv 2016, arXiv:1602.05629. [Google Scholar]
- Ma, X.; Liao, L.X.; Li, Z.; Chao, H. Asynchronous Federated Learning for Elephant Flow Detection in Software Defined Networking Systems. In Proceedings of the 2021 International Conference on Robotics, Intelligent Control and Artificial Intelligence, Guilin, China, 3–5 December 2021. in press. [Google Scholar]
- Cheng, K.; Fan, T.; Jin, Y.; Chen, T.; Papadopoulos, D.; Yang, Q. Secureboost: A lossless federated learning framework. IEEE Intell. Syst. 2021, 36, 87–98. [Google Scholar] [CrossRef]
- Yang, S.; Ren, B.; Zhou, X.; Liu, L. Parallel distributed logistic regression for vertical federated learning without third-party coordinator. arXiv 2019, arXiv:1911.09824. [Google Scholar]
- Liu, Y.; Kang, Y.; Xing, C.; Chen, T.; Yang, Q. A secure federated transfer learning framework. IEEE Intell. Syst. 2020, 35, 70–82. [Google Scholar] [CrossRef]
- Gao, D.; Liu, Y.; Huang, A.; Ju, C.; Yu, H.; Yang, Q. Privacy-preserving heterogeneous federated transfer learning. In Proceedings of the 2019 IEEE International Conference on Big Data (Big Data), Los Angeles, CA, USA, 9–12 December 2019; pp. 2552–2559. [Google Scholar]
- Liao, L.X.; Chao, H.C.; Chen, M.Y. Intelligently modeling, detecting, and scheduling elephant flows in software defined energy cloud: A survey. J. Parallel Distrib. Comput. 2020, 146, 64–78. [Google Scholar] [CrossRef]
- Bannour, F.; Souihi, S.; Mellouk, A. Distributed SDN control: Survey, taxonomy, and challenges. IEEE Commun. Surv. Tutor. 2017, 20, 333–354. [Google Scholar] [CrossRef]
- Liu, Y.; Xu, C.; Zhan, Y.; Liu, Z.; Guan, J.; Zhang, H. Incentive mechanism for computation offloading using edge computing: A Stackelberg game approach. Comput. Netw. 2017, 129, 399–409. [Google Scholar] [CrossRef]
- Li, P.; Guo, S. Incentive mechanisms for device-to-device communications. IEEE Netw. 2015, 29, 75–79. [Google Scholar] [CrossRef]
- Yang, D.; Xue, G.; Fang, X.; Tang, J. Crowdsourcing to smartphones: Incentive mechanism design for mobile phone sensing. In Proceedings of the 18th Annual International Conference on Mobile Computing and Networking, Istanbul, Turkey, 22–26 August 2012; pp. 173–184. [Google Scholar]
- Sarikaya, Y.; Ercetin, O. Motivating workers in federated learning: A stackelberg game perspective. IEEE Netw. Lett. 2019, 2, 23–27. [Google Scholar] [CrossRef] [Green Version]
- Khan, L.U.; Pandey, S.R.; Tran, N.H.; Saad, W.; Han, Z.; Nguyen, M.N.H.; Hong, C.S. Federated learning for edge networks: Resource optimization and incentive mechanism. IEEE Commun. Mag. 2020, 58, 88–93. [Google Scholar] [CrossRef]
- Feng, S.; Niyato, D.; Wang, P.; Kim, D.I.; Liang, Y.C. Joint service pricing and cooperative relay communication for federated learning. In Proceedings of the 2019 International Conference on Internet of Things (iThings) and IEEE Green Computing and Communications (GreenCom) and IEEE Cyber, Physical and Social Computing (CPSCom) and IEEE Smart Data (SmartData), Atlanta, GA, USA, 14–17 July 2019; pp. 815–820. [Google Scholar]
- Sun, W.; Xu, N.; Wang, L.; Zhang, H.; Zhang, Y. Dynamic digital twin and federated learning with incentives for air-ground networks. IEEE Trans. Netw. Sci. Eng. 2020, 9, 321–333. [Google Scholar] [CrossRef]
- Pandey, S.R.; Tran, N.H.; Bennis, M.; Tun, Y.K.; Manzoor, A.; Hong, C.S. A crowdsourcing framework for on-device federated learning. IEEE Trans. Wirel. Commun. 2020, 19, 3241–3256. [Google Scholar] [CrossRef] [Green Version]
- Xiao, G.; Xiao, M.; Gao, G.; Zhang, S.; Zhao, H.; Zou, X. Incentive Mechanism Design for Federated Learning: A Two-stage Stackelberg Game Approach. In Proceedings of the 2020 IEEE 26th International Conference on Parallel and Distributed Systems (ICPADS), Beijing, China, 2–4 December 2020; pp. 148–155. [Google Scholar]
- Hu, R.; Gong, Y. Trading Data For Learning: Incentive Mechanism For On-Device Federated Learning. In Proceedings of the GLOBECOM 2020 IEEE Global Communications Conference, Taipei, Taiwan, 7–11 December 2020; pp. 1–6. [Google Scholar]
- Zhan, Y.; Li, P.; Qu, Z.; Zeng, D.; Guo, D. A learning-based incentive mechanism for federated learning. IEEE Internet Things J. 2020, 7, 6360–6368. [Google Scholar] [CrossRef]
- Le, T.H.T.; Tran, N.H.; Tun, Y.K.; Han, Z.; Hong, S.C. Auction based incentive design for efficient federated learning in cellular wireless networks. In Proceedings of the 2020 IEEE Wireless Communications and Networking Conference (WCNC), Seoul, Korea, 6–10 April 2020; pp. 1–6. [Google Scholar]
- Le, T.H.T.; Tran, N.H.; Tun, Y.K.; Nguyen, M.N.H.; Pandey, S.R.; Han, Z.; Hong, S.C. An incentive mechanism for federated learning in wireless cellular network: An auction approach. IEEE Trans. Wirel. Commun. 2020, 7, 6360–6368. [Google Scholar]
- Yu, H.; Liu, Z.; Liu, Y.; Chen, T.; Cong, M.; Weng, X.; Niyato, D.; Yang, Q. A sustainable incentive scheme for federated learning. IEEE Intell. Syst. 2021, 35, 58–69. [Google Scholar] [CrossRef]
- Zeng, R.; Zhang, S.; Wang, J.; Chu, X. Fmore: An incentive scheme of multi-dimensional auction for federated learning in mec. In Proceedings of the 2020 IEEE 40th International Conference on Distributed Computing Systems (ICDCS), Singapore, 29 November–1 December 2020; pp. 278–288. [Google Scholar]
- Jiao, Y.; Wang, P.; Niyato, D.; Lin, B.; Kim, D.I. Toward an automated auction framework for wireless federated learning services market. IEEE Trans. Mob. Comput. 2020, 20, 3034–3048. [Google Scholar] [CrossRef]
- Tang, M.; Wong, V.W.S. An Incentive Mechanism for Cross-Silo Federated Learning: A Public Goods Perspective. In Proceedings of the IEEE INFOCOM 2021-IEEE Conference on Computer Communications, Vancouver, BC, Canada, 10–13 May 2021; pp. 1–10. [Google Scholar]
- Pan, Q.; Wu, J.; Bashir, A.K.; Li, J.; Yang, W.; Al-Otaibi, Y.D. Joint Protection of Energy Security and Information Privacy for Energy Harvesting: An Incentive Federated Learning Approach. IEEE Trans. Ind. Inform. 2021. [Google Scholar] [CrossRef]
- Chai, H.; Leng, S.; Chen, Y.; Zhang, K. A hierarchical blockchain-enabled federated learning algorithm for knowledge sharing in internet of vehicles. IEEE Trans. Intell. Transp. Syst. 2021, 22, 3975–3986. [Google Scholar] [CrossRef]
- Song, T.; Tong, Y.; Wei, S. Profit allocation for federated learning. In Proceedings of the 2019 IEEE International Conference on Big Data (Big Data), Los Angeles, CA, USA, 9–12 December 2019; pp. 2577–2586. [Google Scholar]
- Sim, R.H.L.; Zhang, Y.; Chan, M.C.; Low, B.K.H. Collaborative machine learning with incentive-aware model rewards. Int. Conf. Mach. Learn. PMLR 2020, 119, 8927–8936. [Google Scholar]
- Gupta, D.; Kayode, O.; Bhatt, S.; Gupta, M.; Tosun, A.S. Learner’s Dilemma: IoT Devices Training Strategies in Collaborative Deep Learning. In Proceedings of the IEEE 6th World Forum on Internet of Things (WF-IoT), New Orleans, LA, USA, 2–16 June 2020; pp. 1–6. [Google Scholar]
- Gupta, D.; Bhatt, S.; Bhatt, P.; Gupta, M.; Tosun, A.S. Game Theory Based Privacy Preserving Approach for Collaborative Deep Learning in IoT. arXiv 2021, arXiv:2103.15245. [Google Scholar]
- Bao, X.; Su, C.; Xiong, Y.; Hu, Y.; Huang, W. Flchain: A blockchain for auditable federated learning with trust and incentive. In Proceedings of the 2019 5th International Conference on Big Data Computing and Communications (BIGCOM), Qingdao, China, 9–11 August 2019; pp. 151–159. [Google Scholar]
- Tian, M.; Chen, Y.; Liu, Y.; Xiong, Z.; Leung, C.; Miao, C. A Contract Theory based Incentive Mechanism for Federated Learning. arXiv 2021, arXiv:2108.05568. [Google Scholar]
- Lim, W.Y.B.; Xiong, Z.; Miao, C.; Niyato, D.; Yang, Q.; Leung, C.; Poor, H.V. Hierarchical incentive mechanism design for federated machine learning in mobile networks. IEEE Internet Things J. 2020, 7, 9575–9588. [Google Scholar] [CrossRef]
- Kang, J.; Xiong, Z.; Niyato, D.; Yu, H.; Liang, Y.C.; Kim, D.I. Incentive design for efficient federated learning in mobile networks: A contract theory approach. In Proceedings of the 2019 IEEE VTS Asia Pacific Wireless Communications Symposium (APWCS), Singapore, 28–30 August 2019; pp. 1–5. [Google Scholar]
- Lim, W.Y.B.; Garg, S.; Xiong, Z.; Niyato, D.; Leung, C.; Miao, C.; Guizani, M. Dynamic contract design for federated learning in smart healthcare applications. IEEE Internet Things J. 2020, 8, 16853–16862. [Google Scholar] [CrossRef]
- Ding, N.; Fang, Z.; Huang, J. Incentive mechanism design for federated learning with multi-dimensional private information. In Proceedings of the 2020 18th International Symposium on Modeling and Optimization in Mobile, Ad Hoc, and Wireless Networks (WiOPT), Philadelphia, PA, USA, 16 July–18 October 2020; pp. 1–8. [Google Scholar]
- Ding, N.; Fang, Z.; Huang, J. Optimal contract design for efficient federated learning with multi-dimensional private information. IEEE J. Sel. Areas Commun. 2020, 39, 186–200. [Google Scholar] [CrossRef]
- Wu, M.; Ye, D.; Ding, J.; Guo, Y.; Yu, R.; Pan, M. Incentivizing differentially private federated learning: A multi-dimensional contract approach. IEEE Internet Things J. 2021, 8, 10639–10651. [Google Scholar] [CrossRef]
- Weng, J.; Weng, J.; Zhang, J.; Li, M.; Zhang, Y.; Luo, W. Deepchain: Auditable and privacy-preserving deep learning with blockchain-based incentive. IEEE Trans. Dependable Secur. Comput. 2021, 18, 2438–2455. [Google Scholar] [CrossRef]
- Zhang, Z.; Dong, D.; Ma, Y.; Ying, Y.; Jiang, D. Refiner: A reliable incentive-driven federated learning system powered by blockchain. Proc. Vldb Endow. 2021, 14, 2659–2662. [Google Scholar] [CrossRef]
- Zhao, Y.; Zhao, J.; Jiang, L.; Tan, R.; Niyato, D.; Li, Z.; Lyu, L.; Liu, Y. Privacy-preserving blockchain-based federated learning for IoT devices. IEEE Internet Things J. 2020, 8, 1817–1829. [Google Scholar] [CrossRef]
- Gao, L.; Li, L.; Chen, Y.; Zheng, W.; Xu, C.Z. FIFL: A Fair Incentive Mechanism for Federated Learning. In Proceedings of the 50th International Conference on Parallel Processing, Lemont, IL, USA, 9–12 August 2021; pp. 1–10. [Google Scholar]
- Cong, M.; Yu, H.; Weng, X.; Qu, J.; Liu, Y.; Yiu, S.M. A VCG-based Fair Incentive Mechanism for Federated Learning. arXiv 2020, arXiv:2008.06680. [Google Scholar]
- Wang, G.; Dang, C.X.; Zhou, Z. Measure contribution of participants in federated learning. In Proceedings of the 2019 IEEE International Conference on Big Data (Big Data), Los Angeles, CA, USA, 9–12 December 2019; pp. 2597–2604. [Google Scholar]
- Rehman, M.H.; Salah, K.; Damiani, E.; Svetinovic, D. Towards blockchain-based reputation-aware federated learning. In Proceedings of the 2020 IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS), Toronto, ON, Canada, 6–9 July 2020; pp. 183–188. [Google Scholar]
- Che, Y.K. Design competition through multidimensional auctions. Rand J. Econ. 1993, 24, 668–680. [Google Scholar] [CrossRef]
- Nash, J. Non-cooperative games. Ann. Math. 1951, 54, 286–295. [Google Scholar] [CrossRef]
- Luo, X.; Wu, Y.; Xiao, X.; Ooi, B.C. Feature inference attack on model predictions in vertical federated learning. In Proceedings of the 2021 International Conference on Data Engineering (ICDE), Chania, Greece, 19–22 April 2021; pp. 181–192. [Google Scholar]
- Nasr, M.; Shokri, R.; Houmansadr, A. Comprehensive privacy analysis of deep learning: Passive and active white-box inference attacks against centralized and federated learning. In Proceedings of the 2019 IEEE symposium on security and privacy (SP), San Francisco, CA, USA, 19–23 May 2019; pp. 739–753. [Google Scholar]
- Xiong, P.; Zhu, T.Q.; Wang, X.F. A survey on differential privacy and applications. Jisuanji Xuebao/Chin. J. Comput. 2014, 37, 101–122. [Google Scholar]
- Wei, K.; Li, J.; Ding, M.; Ma, C.; Yang, H.H.; Farokhi, F.; Jin, S.; Quek, T.Q.S.; Poor, H.V. Federated learning with differential privacy: Algorithms and performance analysis. IEEE Trans. Inf. Forensics Secur. 2020, 15, 3454–3469. [Google Scholar] [CrossRef] [Green Version]
- Geyer, R.C.; Klein, T.; Nabi, M. Differentially private federated learning: A client level perspective. arXiv 2017, arXiv:1712.07557. [Google Scholar]
- Triastcyn, A.; Faltings, B. Federated learning with bayesian differential privacy. In Proceedings of the 2019 IEEE International Conference on Big Data (Big Data), Los Angeles, CA, USA, 9–12 December 2019; pp. 2587–2596. [Google Scholar]
- Zhu, T.; Philip, S.Y. Applying differential privacy mechanism in artificial intelligence. In Proceedings of the 2019 IEEE 39th International Conference on Distributed Computing Systems (ICDCS), Dallas, TX, USA, 7–10 July 2019; pp. 1601–1609. [Google Scholar]
- Choudhury, O.; Gkoulalas-Divanis, A.; Salonidis, T.; Sylla, I.; Park, Y.; Hsu, G.; Das, A. Differential privacy-enabled federated learning for sensitive health data. arXiv 2019, arXiv:1910.02578. [Google Scholar]
- Hu, R.; Guo, Y.; Li, H.; Pei, Q.; Gong, Y. Personalized federated learning with differential privacy. IEEE Internet Things J. 2020, 7, 9530–9539. [Google Scholar] [CrossRef]
- Cao, H.; Liu, S.; Zhao, R.; Xiong, X. IFed: A novel federated learning framework for local differential privacy in Power Internet of Things. Int. J. Distrib. Sens. Netw. 2020, 16, 1550147720919698. [Google Scholar] [CrossRef]
- Lu, Y.; Huang, X.; Dai, Y.; Maharjan, S.; Zhang, Y. Differentially private asynchronous federated learning for mobile edge computing in urban informatics. IEEE Trans. Ind. Inform. 2019, 16, 2134–2143. [Google Scholar] [CrossRef]
- Zhao, Y.; Zhao, J.; Yang, M.; Wang, N.; Lyu, L.; Niyato, D.; Lam, K.Y. Local differential privacy-based federated learning for internet of things. IEEE Internet Things J. 2020, 8, 8836–8853. [Google Scholar] [CrossRef]
- Liu, R.; Cao, Y.; Yoshikawa, M.; Yoshikawa, M.; Chen, H. Fedsel: Federated sgd under local differential privacy with top-k dimension selection. In Proceedings of the DASFAA 2020: Database Systems for Advanced Applications, Jeju, Korea, 21–24 May 2020; pp. 485–501. [Google Scholar]
- Truex, S.; Liu, L.; Chow, K.H.; Gursoy, M.E.; Wei, W. LDP-Fed: Federated learning with local differential privacy. In Proceedings of the Third ACM International Workshop on Edge Systems, New York, NY, USA, 27 April 2020; pp. 61–66. [Google Scholar]
- Sun, L.; Qian, J.; Chen, X. Ldp-fl: Practical private aggregation in federated learning with local differential privacy. arXiv 2020, arXiv:2007.15789. [Google Scholar]
- Zhu, H. On the relationship between (secure) multi-party computation and (secure) federated learning. arXiv 2020, arXiv:2008.02609. [Google Scholar]
- Bonawitz, K.; Ivanov, V.; Kreuter, B.; Marcedone, A.; Seth, K. Practical secure aggregation for privacy-preserving machine learning. In Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, Dallas, TX, USA, 30 October–3 November 2017; pp. 1175–1191. [Google Scholar]
- Kanagavelu, R.; Li, Z.; Samsudin, J.; Yang, Y.; Yang, F.; Goh, R.S.M.; Cheah, M.; Wiwatphonthana, P.; Akkarajitsakul, K.; Wang, S. Two-phase multi-party computation enabled privacy-preserving federated learning. In Proceedings of the 2020 20th IEEE/ACM International Symposium on Cluster, Cloud and Internet Computing (CCGRID), Melbourne, Australia, 11–14 May 2020; pp. 410–419. [Google Scholar]
- Sotthiwat, E.; Zhen, L.; Li, Z.; Zhang, C. Partially Encrypted Multi-Party Computation for Federated Learning. In Proceedings of the 2021 IEEE/ACM 21st International Symposium on Cluster, Cloud and Internet Computing (CCGRID), Melbourne, Australia, 10–13 May 2021; pp. 828–835. [Google Scholar]
- Zhou, Z.; Tian, Y.; Peng, C. Privacy-Preserving Federated Learning Framework with General Aggregation and Multiparty Entity Matching. Wirel. Commun. Mob. Comput. 2021, 2021, 14. [Google Scholar] [CrossRef]
- Zhang, J.; Chen, B.; Yu, S.; Deng, H. PEFL: A privacy-enhanced federated learning scheme for big data analytics. In Proceedings of the 2019 IEEE Global Communications Conference (GLOBECOM), Waikoloa Village, HI, USA, 9–13 December 2019; pp. 1–6. [Google Scholar]
- Zhang, X.; Fu, A.; Wang, H.; Zhou, C.; Chen, Z. A privacy-preserving and verifiable federated learning scheme. In Proceedings of the 2019 IEEE International Conference on Communications (ICC), Shanghai, China, 20–24 May 2019; pp. 1–6. [Google Scholar]
- Hardy, S.; Henecka, W.; Ivey-Law, H.; Nock, R.; Patrini, G.; Smith, G.; Thorne, B. Private federated learning on vertically partitioned data via entity resolution and additively homomorphic encryption. arXiv 2017, arXiv:1711.10677. [Google Scholar]
- Stripelis, D.; Saleem, H.; Ghai, T.; Dhinagar, N.; Gupta, U.; Anastasiou, C.; Steeg, G.V.; Ravi, S.; Naveed, M.; Thompson, P.M. Secure Neuroimaging Analysis using Federated Learning with Homomorphic Encryption. arXiv 2021, arXiv:2108.03437. [Google Scholar]
- Cheon, J.H.; Kim, A.; Kim, M.; Song, Y. Homomorphic encryption for arithmetic of approximate numbers. In Proceedings of the International Conference on the Theory and Application of Cryptology and Information Security (ASIACRYPT), Hong Kong, China, 3–7 December 2017; pp. 409–437. [Google Scholar]
- Ma, J.; Naas, S.A.; Sigg, S.; Lyu, X. Privacy-preserving Federated Learning based on Multi-key Homomorphic Encryption. arXiv 2021, arXiv:2104.06824. [Google Scholar] [CrossRef]
- Zhang, C.; Li, S.; Xia, J.; Wang, W. Batchcrypt: Efficient homomorphic encryption for cross-silo federated learning. In Proceedings of the 2020 USENIX Annual Technical Conference (USENIXATC 20), Online, 15–17 July 2020; pp. 493–506. [Google Scholar]
- Jiang, Z.; Wang, W.; Liu, Y. FLASHE: Additively Symmetric Homomorphic Encryption for Cross-Silo Federated Learning. arXiv 2021, arXiv:2109.00675. [Google Scholar]
- Mou, W.; Fu, C.; Lei, Y.; Hu, C. A Verifiable Federated Learning Scheme Based on Secure Multi-party Computation. In Proceedings of the International Conference on Wireless Algorithms, Systems, and Applications, Nanjing, China, 5 August–22 November 2021; pp. 198–209. [Google Scholar]
- Truex, S.; Baracaldo, N.; Anwar, A.; Steinke, T.; Ludwig, H.; Zhang, R.; Zhou, Y. A hybrid approach to privacy-preserving federated learning. In Proceedings of the 12th ACM Workshop on Artificial Intelligence and Security, New York, NY, USA, 11–15 November 2019; pp. 1–11. [Google Scholar]
- Xu, R.; Baracaldo, N.; Zhou, Y.; Ali, A.; Heiko, L. Hybridalpha: An efficient approach for privacy-preserving federated learning. In Proceedings of the 12th ACM Workshop on Artificial Intelligence and Security, New York, NY, USA, 11–15 November 2019; pp. 13–23. [Google Scholar]
- Mugunthan, V.; Polychroniadou, A.; Byrd, D.; Balch, T.H. SMPAI: Secure Multi-Party Computation for Federated Learning. 2019. Available online: https://www.jpmorgan.com/content/dam/jpm/cib/complex/content/technology/ai-research-publications/pdf-9.pdf (accessed on 13 January 2022).
- Hao, M.; Li, H.; Xu, G.; Liu, S.; Yang, H. Towards efficient and privacy-preserving federated deep learning. In Proceedings of the 2019 IEEE International Conference on Communications (ICC), Shanghai, China, 20–24 May 2019; pp. 1–6. [Google Scholar]
- Rahman, M.A.; Hossain, M.S.; Islam, M.S.; Liu, S.; Yang, H. Secure and provenance enhanced Internet of health things framework: A blockchain managed federated learning approach. IEEE Access 2020, 8, 205071–205087. [Google Scholar] [CrossRef] [PubMed]
- Majeed, U.; Hong, C.S. FLchain: Federated learning via MEC-enabled blockchain network. In Proceedings of the 2019 20th Asia-Pacific Network Operations and Management Symposium (APNOMS), Matsue, Japan, 18–20 September 2019; pp. 1–4. [Google Scholar]
- Qi, Y.; Hossain, M.S.; Nie, J.; Li, X. Privacy-preserving blockchain-based federated learning for traffic flow prediction. Future Gener. Comput. Syst. 2021, 117, 328–337. [Google Scholar] [CrossRef]
- Mugunthan, V.; Rahman, R.; Kagal, L. BlockFLow: An Accountable and Privacy-Preserving Solution for Federated Learning. arXiv 2020, arXiv:2007.03856. [Google Scholar]
- Li, Y.; Chen, C.; Liu, N.; Huang, H.; Zheng, Z.; Yan, Q. A blockchain-based decentralized federated learning framework with committee consensus. IEEE Netw. 2020, 35, 234–241. [Google Scholar] [CrossRef]
- Li, Z.; Liu, J.; Hao, J.; Wang, H.; Xian, M. CrowdSFL: A secure crowd computing framework based on blockchain and federated learning. Electronics 2020, 9, 773. [Google Scholar] [CrossRef]
- Lu, Y.; Huang, X.; Dai, Y.; Maharjan, S.; Zhang, Y. Blockchain and federated learning for privacy-preserved data sharing in industrial IoT. IEEE Trans. Ind. Inform. 2019, 16, 4177–4186. [Google Scholar] [CrossRef]
- Wu, Y.; Cai, S.; Xiao, X.; Chen, G.; Ooi, B.C. Privacy preserving vertical federated learning for tree-based models. arXiv 2020, arXiv:2008.06170. [Google Scholar] [CrossRef]
- Sun, Y.; Ochiai, H.; Esaki, H. Intrusion Detection with Segmented Federated Learning for Large-Scale Multiple LANs. In Proceedings of the 2020 International Joint Conference on Neural Networks (IJCNN), Glasgow, UK, 19–24 July 2020; pp. 1–8. [Google Scholar]
- Schneble, W.; Thamilarasu, G. Attack detection using federated learning in medical cyber-physical systems. In Proceedings of the 28th International Conference on Computer Communications and Networks (ICCCN), Valencia, Spain, 29 July–1 August 2019; pp. 1–8. [Google Scholar]
- Ren, T.; Jin, R.; Lou, Y. Network Intrusion Detection Algorithm Integrating Blockchain and Federated Learning. Netinfo Secur. 2021, 21, 27–34. [Google Scholar]
- Wittkopp, T.; Acker, A. Decentralized federated learning preserves model and data privacy. In Proceedings of the 2020 International Conference on Service-Oriented Computing, Dubai, United Arab Emirates, 14–17 December 2020; pp. 176–187. [Google Scholar]
- Li, B.; Wu, Y.; Song, J.; Lu, R.; Li, T.; Zhao, L. DeepFed: Federated Deep Learning for Intrusion Detection in Industrial Cyber–Physical Systems. IEEE Trans. Ind. Inform. 2020, 17, 5615–5624. [Google Scholar] [CrossRef]
- Zhao, R.; Yin, Y.; Shi, Y.; Xue, Z. Intelligent intrusion detection based on federated learning aided long short-term memory. Phys. Commun. 2020, 42, 101157. [Google Scholar] [CrossRef]
- Mothukuri, V.; Khare, P.; Parizi, R.M.; Xue, Z. Federated Learning-based Anomaly Detection for IoT Security Attacks. IEEE Internet Things J. 2021. [Google Scholar] [CrossRef]
- Preuveneers, D.; Rimmer, V.; Tsingenopoulos, I.; Spooren, J.; Joosen, W.; Ilie-Zudor, E. Chained anomaly detection models for federated learning: An intrusion detection case study. Appl. Sci. 2018, 8, 2663. [Google Scholar] [CrossRef] [Green Version]
- Liu, H.; Zhang, S.; Zhang, P.; Zhou, X.; Shao, X.; Pu, G.; Zhang, Y. Blockchain and Federated Learning for Collaborative Intrusion Detection in Vehicular Edge Computing. IEEE Trans. Veh. Technol. 2021, 70, 6073–6084. [Google Scholar] [CrossRef]
- Attota, D.C.; Mothukuri, V.; Parizi, R.M.; Pouriyeh, S. An ensemble multi-view federated learning intrusion detection for IoT. IEEE Access 2021, 9, 117734–117745. [Google Scholar] [CrossRef]
- Cui, L.; Qu, Y.; Xie, G.; Zeng, D.; Li, R.; Shen, S.; Yu, S. Security and Privacy-Enhanced Federated Learning for Anomaly Detection in IoT Infrastructures. IEEE Trans. Ind. Inform. 2021. [Google Scholar] [CrossRef]
- Wang, Y.; Kantarci, B. Reputation-enabled Federated Learning Model Aggregation in Mobile Platforms. In Proceedings of the ICC 2021-IEEE International Conference on Communications, Montreal, QC, Canada, 14–23 June 2021; pp. 1–6. [Google Scholar]
- Deng, Y.; Lyu, F.; Ren, J.; Chen, Y.C.; Yang, P.; Zhou, Y.; Zhang, Y. FAIR: Quality-Aware Federated Learning with Precise User Incentive and Model Aggregation. In Proceedings of the IEEE INFOCOM 2021-IEEE Conference on Computer Communications, Vancouver, BC, Canada, 10–13 May 2021; pp. 1–10. [Google Scholar]
- McMahan, B.; Moore, E.; Ramage, D.; Hampson, S.; Arcas, B.A. Communication-efficient learning of deep networks from decentralized data. In Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, Fort Lauderdale, FL, USA, 20–22 April 2017; pp. 1273–1282. [Google Scholar]
- Chen, Y.; Sun, X.; Jin, Y. Communication-efficient federated deep learning with layerwise asynchronous model update and temporally weighted aggregation. IEEE Trans. Neural Netw. Learn. Syst. 2019, 31, 4229–4238. [Google Scholar] [CrossRef] [PubMed]
- Ma, Z.; Zhao, M.; Cai, X.; Jia, Z. Fast-convergent federated learning with class-weighted aggregation. J. Syst. Archit. 2021, 117, 102125. [Google Scholar] [CrossRef]
- Wang, H.; Yurochkin, M.; Sun, Y.; Papailiopoulos, D.; Khazaeni, Y. Federated learning with matched averaging. arXiv 2020, arXiv:2002.06440. [Google Scholar]
- Sannara, E.K.; Portet, F.; Lalanda, P.; Vega, G. A Federated Learning Aggregation Algorithm for Pervasive Computing: Evaluation and Comparison. In Proceedings of the 2021 IEEE International Conference on Pervasive Computing and Communications (PerCom), Kassel, Germany, 22–26 March 2021; pp. 1–10. [Google Scholar]
- Ji, S.; Pan, S.; Long, G.; Li, X.; Jiang, J.; Huang, Z. Learning private neural language modeling with attentive aggregation. In Proceedings of the 2019 International Joint Conference on Neural Networks (IJCNN), Budapest, Hungary, 14–19 July 2019; pp. 1–8. [Google Scholar]
- Konečný, J.; McMahan, H.B.; Yu, F.X.; Richtarik, P.; Suresh, A.T.; Bacon, D. Federated learning: Strategies for improving communication efficiency. arXiv 2016, arXiv:1610.05492. [Google Scholar]
- Reisizadeh, A.; Mokhtari, A.; Hassani, H.; Jadbabaie, A.; Pedarsani, R. Fedpaq: A communication-efficient federated learning method with periodic averaging and quantization. In Proceedings of the 2020 International Conference on Artificial Intelligence and Statistics, Online, 26–28 August 2020; pp. 2021–2031. [Google Scholar]
- Mills, J.; Hu, J.; Min, G. Communication-efficient federated learning for wireless edge intelligence in IoT. IEEE Internet Things J. 2019, 7, 5986–5994. [Google Scholar] [CrossRef] [Green Version]
- Sun, J.; Chen, T.; Giannakis, G.B.; Yang, Q.; Yang, Z. Lazily aggregated quantized gradient innovation for communication-efficient federated learning. IEEE Trans. Pattern Anal. Mach. Intell. 2020. [Google Scholar] [CrossRef] [PubMed]
- Liu, Y.; Garg, S.; Nie, J.; Zhang, Y.; Xiong, Z.; Kang, K.; Hossain, M.S. Deep anomaly detection for time-series data in industrial iot: A communication-efficient on-device federated learning approach. IEEE Internet Things J. 2020, 8, 6348–6358. [Google Scholar] [CrossRef]
- Sattler, F.; Wiedemann, S.; Müller, K.R.; Samek, W. Robust and communication-efficient federated learning from non-iid data. IEEE Trans. Neural Netw. Learn. Syst. 2019, 31, 3400–3413. [Google Scholar] [CrossRef] [Green Version]
- Han, P.; Wang, S.; Leung, K.K. Adaptive gradient sparsification for efficient federated learning: An online learning approach. In Proceedings of the 2020 IEEE 40th International Conference on Distributed Computing Systems (ICDCS), Singapore, 29 November–1 December 2020; pp. 300–310. [Google Scholar]
- Li, S.; Qi, Q.; Wang, J.; Sun, H.; Li, Y.; Yu, F.R. Ggs: General gradient sparsification for federated learning in edge computing. In Proceedings of the 2020 IEEE International Conference on Communications(ICC), Dublin, Ireland, 7–11 June 2020; pp. 1–7. [Google Scholar]
- Caldas, S.; Konečny, J.; McMahan, H.B.; Talwalkar, A. Expanding the reach of federated learning by reducing client resource requirements. arXiv 2018, arXiv:1812.07210. [Google Scholar]
- Yang, K.; Jiang, T.; Shi, Y.; Ding, Z. Federated learning via over-the-air computation. IEEE Trans. Wirel. Commun. 2020, 19, 2022–2035. [Google Scholar] [CrossRef] [Green Version]
- Yao, X.; Huang, C.; Sun, L. Two-stream federated learning: Reduce the communication costs. In Proceedings of the 2018 IEEE Visual Communications and Image Processing (VCIP), Taichung, Taiwan, 9–12 December 2018; pp. 1–4. [Google Scholar]
- Chen, M.; Poor, H.V.; Saad, W.; Cui, S. Convergence time optimization for federated learning over wireless networks. IEEE Trans. Wirel. Commun. 2020, 20, 2457–2471. [Google Scholar] [CrossRef]
- AbdulRahman, S.; Tout, H.; Mourad, A.; Talhi, C. FedMCCS: Multicriteria client selection model for optimal IoT federated learning. IEEE Internet Things J. 2020, 8, 4723–4735. [Google Scholar] [CrossRef]
- Gupta, M.; Goyal, P.; Verma, R.; Shorey, R.; Saran, H. FedFm: Towards a Robust Federated Learning Approach For Fault Mitigation at the Edge Nodes. arXiv 2021, arXiv:2111.01074. [Google Scholar]
- Chen, Z.; Liao, W.; Hua, K.; Lu, C.; Yu, W. Towards asynchronous federated learning for heterogeneous edge-powered internet of things. Digit. Commun. Netw. 2021, 7, 317–326. [Google Scholar] [CrossRef]
- Zhao, Y.; Gong, X. Quality-aware distributed computation and user selection for cost-effective federated learning. In Proceedings of the 2021 IEEE Conference on Computer Communications Workshops(INFOCOM WKSHPS), Vancouver, BC, Canada, 10–13 May 2021; pp. 1–6. [Google Scholar]
- Nishio, T.; Yonetani, R. Client selection for federated learning with heterogeneous resources in mobile edge. In Proceedings of the 2019 IEEE International Conference on Communications(ICC), Shanghai, China, 20–24 May 2019; pp. 1–7. [Google Scholar]
- Lai, F.; Zhu, X.; Madhyastha, H.V.; Chowdhury, M. Oort: Informed participant selection for scalable federated learning. arXiv 2020, arXiv:2010.06081. [Google Scholar]
- He, W.; Guo, S.; Qiu, X.; Chen, L.; Zhang, S. Node selection method in federated learning based on deep reinforcement learning. J. Commun. 2021, 42, 62–71. [Google Scholar]
- Kang, J.; Xiong, Z.; Niyato, D.; Zou, Y.; Zhang, Y.; Guizani, M. Reliable federated learning for mobile networks. IEEE Wirel. Commun. 2020, 27, 72–80. [Google Scholar] [CrossRef] [Green Version]
- Wang, Y.; Kantarci, B. A Novel Reputation-Aware Client Selection Scheme for Federated Learning within Mobile Environments. In Proceedings of the 2020 IEEE 25th International Workshop on Computer Aided Modeling and Design of Communication Links and Networks (CAMAD), Online, 14–16 September 2020; pp. 1–6. [Google Scholar]
- Li, S.; Cheng, Y.; Wang, W.; Liu, Y.; Chen, T. Learning to detect malicious clients for robust federated learning. arXiv 2020, arXiv:2002.00211. [Google Scholar]
- Kochenderfer, M.J.; Wheeler, T.A. Algorithms for Optimization; MIT Press: Cambridge, MA, USA, 2019. [Google Scholar]
- Kingma, D.P.; Ba, J. Adam: A method for stochastic optimization. arXiv 2014, arXiv:1412.6980. [Google Scholar]
- Srivastava, N.; Hinton, G.; Krizhevsky, A.; Sutskever, I.; Salakhutdinov, R. Dropout: A simple way to prevent neural networks from overfitting. J. Mach. Learn. Res. 2014, 15, 1929–1958. [Google Scholar]
- Song, Z.; Sun, H.; Yang, H.H.; Wang, X.; Zhang, Y.; Quek, T.Q.S. Reputation-based Federated Learning for Secure Wireless Networks. IEEE Internet Things J. 2021, 9, 1212–1226. [Google Scholar] [CrossRef]
- Chandola, V.; Banerjee, A.; Kumar, V. Anomaly detection: A survey. ACM Comput. Surv. 2009, 41, 1–58. [Google Scholar] [CrossRef]
- Kimmell, J.C.; Abdelsalam, M.; Gupta, M. Analyzing Machine Learning Approaches for Online Malware Detection in Cloud. arXiv 2021, arXiv:2105.09268. [Google Scholar]
- Pang, G.; Shen, C.; Cao, L.; Hengel, A.V.D. Deep learning for anomaly detection: A review. ACM Comput. Surv. 2021, 54, 1–38. [Google Scholar] [CrossRef]
- Chica, J.C.C.; Imbachi, J.C.; Vega, J.F.B. Security in SDN: A comprehensive survey. J. Netw. Comput. Appl. 2020, 159, 102595. [Google Scholar] [CrossRef]
- Sharma, N.; Batra, U. Performance analysis of compression algorithms for information security: A Review. EAI Endorsed Trans. Scalable Inf. Syst. 2020, 7, 163503. [Google Scholar] [CrossRef]
- Xu, J.; Du, W.; Jin, Y.; He, W.; Cheng, R. Ternary compression for communication-efficient federated learning. IEEE Trans. Neural Netw. Learn. Syst. 2020. [Google Scholar] [CrossRef] [PubMed]
Scenario | Reference | Year | Objective
---|---|---|---
IoT | [23,25] | 2021 | security, resources, incentive
IIoT | [24] | 2021 | privacy, resource, data management
big data | [26] | 2021 | privacy and security, communication
smart city | [4] | 2020 |
healthcare | [2] | 2021 |
edge computing | [27,28] | 2019–2021 | communication, security, resources
6G | [29,30] | 2020–2021 | communication, security
wireless network | [31] | 2020 | computing, spectrum management
Others | [20,21,22] | 2020–2021 | threats and defense
Others | [32] | 2021 | communication
Others | [33] | 2021 | incentive mechanisms
Category | Approach | Ref. | Description
---|---|---|---
aggregation | reputation | [134,135] | calculate clients' reputation
aggregation | FedAvg | [136] | weight by local data amount
aggregation | timeliness | [137] | time-weighted aggregation
aggregation | FedCA-TDD | [138] | data class distribution on clients
aggregation | FedCA-VSA | [138] | accuracy on a validation set
aggregation | FedMA | [139] | model distance
aggregation | FedDist | [140] |
aggregation | FedAtt | [141] |
communication | quantization | [142,143,144,145] | models quantized before upload
communication | sparsification | [146,147,148] | top-k selection
communication | sparsification | [149] | correct errors in aggregation
communication | other | [150] | federated dropout
communication | other | [151] | signal superposition
communication | other | [152] | maximum mean discrepancy
node selection | probability | [143,153] | probabilistic selection
node selection | capacity | [154] | client CPU and memory
node selection | capacity | [155] | data volume
node selection | capacity | [156,157,158,159] | communication, computing
node selection | capacity | [160] | training delay
node selection | reputation | [161,162] | local model performance
node selection | malicious | [163] | detect malicious behavior
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Ma, X.; Liao, L.; Li, Z.; Lai, R.X.; Zhang, M. Applying Federated Learning in Software-Defined Networks: A Survey. Symmetry 2022, 14, 195. https://doi.org/10.3390/sym14020195