RingFFL: A Ring-Architecture-Based Fair Federated Learning Framework
Abstract
1. Introduction
- To ensure the fairness of FL training, we propose a penalty mechanism in which clients must pay deposits in a given order before participating in training. If all clients behave honestly, they all obtain the final global model; otherwise, the deposits of dishonest clients are forfeited and used to compensate the honest clients. In other words, every honest client either obtains the final global model or is compensated, which ensures fairness.
- To enhance the security of FL training, we propose a hash-comparison mechanism and introduce a blockchain. Hash comparison guarantees the correctness of the models transmitted by the clients during training, while the immutability of the blockchain secures the clients' deposit payment and compensation operations.
- To evaluate the performance of our mechanism, we perform a security analysis of the proposed mechanism and conduct experiments on the MNIST and CIFAR10 datasets. The security analysis and experimental results show that our mechanism ensures security and fairness while maintaining high accuracy.
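The deposit-and-compensation idea in the first contribution can be sketched as follows. This is a minimal illustration, not the paper's exact protocol: the uniform deposit amount, the client names, and the equal-split compensation rule are all assumptions.

```python
def settle_deposits(clients, deposit, honest):
    """Distribute deposits after training: honest clients are refunded,
    and the forfeited deposits of dishonest clients are split among them."""
    dishonest = [c for c in clients if c not in honest]
    payouts = {c: 0 for c in clients}
    for c in honest:
        payouts[c] += deposit  # refund the honest client's own deposit
    if dishonest and honest:
        # deposits of dishonest clients compensate the honest ones
        share = deposit * len(dishonest) / len(honest)
        for c in honest:
            payouts[c] += share
    return payouts

# All honest: everyone simply gets their deposit back.
print(settle_deposits(["A", "B", "C"], 10, honest=["A", "B", "C"]))
# Client C aborts: A and B split C's forfeited deposit on top of their refunds.
print(settle_deposits(["A", "B", "C"], 10, honest=["A", "B"]))
```

Either way, an honest client ends up no worse off than it started, which is the fairness property the contribution claims.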
2. Related Works
2.1. Challenges of FL
2.2. Distributed FL Framework
3. Framework Overview
3.1. The Scenario of RingFFL
3.2. Threat and Adversary Model
3.3. The Work Flow of RingFFL
- Step 1
- Initialization
- Step 2
- Local Training
- Step 3
- Transactions on the blockchain
- (3.1)
- Roof deposits
- (3.2)
- Ladder deposits
- (3.3)
- Acknowledgment
- Step 4
- Output
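The ring workflow above (initialization, local training in a fixed order, output) can be sketched as a loop in which the model travels around the ring and each client updates it in turn. This is a hedged sketch of the training flow only; the `Client` class, its toy scalar objective, and the learning rate are illustrative assumptions, and the deposit transactions of Step 3 are omitted.

```python
class Client:
    """Toy client whose 'local data' is summarized by a single mean value."""
    def __init__(self, local_mean, lr=0.5):
        self.local_mean = local_mean
        self.lr = lr

    def local_update(self, model):
        # one gradient-style step toward this client's local objective
        return model + self.lr * (self.local_mean - model)

def ring_training(clients, initial_model, rounds):
    """The model is passed client to client around the ring; only model
    parameters move, the raw data never leaves each client."""
    model = initial_model
    for _ in range(rounds):
        for client in clients:  # fixed ring order
            model = client.local_update(model)
    return model

clients = [Client(0.0), Client(1.0), Client(2.0)]
print(ring_training(clients, initial_model=0.0, rounds=1))
```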
4. Penalty Mechanism in RingFFL
- Step 1
- Initialization
- Step 2
- Local training
- Step 3
- Transactions on the blockchain
- (3.1)
- Roof deposits
- (a)
- Normal situation.
- (b)
- Abnormal situation.
- (3.2)
- Ladder deposits
- (a)
- Normal situation.
- (b)
- Abnormal situation.
- (3.3)
- Acknowledgment
- (a)
- Normal situation.
- (b)
- Abnormal situation.
- Step 4
- Output
- (a)
- Normal situation.
- (b)
- Abnormal situation.
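The deposit transactions of Step 3 can be pictured as conditional payments on a ledger: a deposit is locked against a condition (here, the hash of the expected model parameters) and released only when matching evidence is presented, which covers both the normal case (correct claim) and the abnormal case (claim rejected, deposit stays locked). This toy `Ledger` class is an assumption standing in for the blockchain; it does not model consensus or the exact roof/ladder deposit amounts.

```python
import hashlib

class Ledger:
    """Append-only toy ledger: deposits are locked against a hash condition
    and released to the payee only on presentation of matching evidence."""
    def __init__(self):
        self.locked = {}    # tx_id -> (payer, payee, amount, condition)
        self.balances = {}  # payee -> released funds

    def deposit(self, tx_id, payer, payee, amount, condition):
        self.locked[tx_id] = (payer, payee, amount, condition)

    def claim(self, tx_id, evidence: bytes) -> bool:
        payer, payee, amount, condition = self.locked[tx_id]
        if hashlib.sha256(evidence).hexdigest() == condition:
            # normal situation: evidence matches, deposit is released
            self.balances[payee] = self.balances.get(payee, 0) + amount
            del self.locked[tx_id]
            return True
        return False  # abnormal situation: wrong evidence, deposit stays locked

ledger = Ledger()
params = b"serialized model parameters"
cond = hashlib.sha256(params).hexdigest()
ledger.deposit("tx1", payer="client_1", payee="client_2", amount=10, condition=cond)
assert ledger.claim("tx1", params)         # correct model: deposit released
ledger.deposit("tx2", payer="client_2", payee="client_3", amount=10, condition=cond)
assert not ledger.claim("tx2", b"forged")  # wrong model: claim rejected
```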
Algorithm 1: The proposed RingFFL
5. Security Analysis and Numerical Results
5.1. Security Analysis
- (1)
- Guaranteeing the data security of the clients: We use FL with ring architecture to train the model, and the local model parameters instead of the raw data are transmitted during the training process. The raw data are stored locally from start to finish, thus ensuring the security of the clients’ data.
- (2)
- Guaranteeing the security of the clients’ deposits: During deposit payment and deposit refund, we use the blockchain to secure the transactions. The consensus-based security mechanism prevents attacks such as double-spending, thus ensuring the security of clients’ deposits throughout the training process.
- (3)
- Guaranteeing the correctness of the transmitted local model parameters: To prevent the dishonest clients from transmitting false local model parameters, the acknowledgment process needs to validate the hash values of the model parameters, and only the correct ones will pass the validation. The client will then receive the corresponding amount of digital currencies.
- (4)
- Guaranteeing the fairness of the FL: To deter dishonest clients from aborting during the FL process, the penalty mechanism deducts the digital currency deposits of aborting clients to compensate the clients who suffer additional information loss. In this way, the fairness of FL is guaranteed.
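Point (3) above, validating the hash of transmitted model parameters before acknowledgment, can be sketched concretely. The serialization format below (packing a flat list of float32 values) is an assumption for illustration; the paper only requires that sender and receiver hash the parameters the same way.

```python
import hashlib
import struct

def params_digest(params):
    """Deterministic SHA-256 digest of a flat list of float32 parameters.
    The receiving client recomputes this and compares it with the digest
    recorded on chain before acknowledging the transfer."""
    payload = struct.pack(f"{len(params)}f", *params)
    return hashlib.sha256(payload).hexdigest()

sent = [0.25, -1.5, 3.0]
received = [0.25, -1.5, 3.0]
tampered = [0.25, -1.5, 3.5]

assert params_digest(received) == params_digest(sent)   # passes validation
assert params_digest(tampered) != params_digest(sent)   # rejected as false
```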
5.2. Numerical Results
6. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
| Parameters | Description |
|---|---|
| | The i-th client |
| N | The number of all clients |
| | Client i’s private data set |
| | Client i’s local model parameters |
| C | Conditions for obtaining digital currencies |
| E | Evidence needed to obtain digital currencies |
| | Initial model of FL |
| | Maximum communication rounds of FL |
Share and Cite
Han, L.; Huang, X.; Li, D.; Zhang, Y. RingFFL: A Ring-Architecture-Based Fair Federated Learning Framework. Future Internet 2023, 15, 68. https://doi.org/10.3390/fi15020068