Personalized Fair Split Learning for Resource-Constrained Internet of Things
Abstract
1. Introduction
- (1) This paper proposes a personalized U-shaped split architecture, designed to meet the individualized requirements of different clients in complex IoT environments while effectively training sophisticated models on resource-constrained client devices;
- (2) This paper introduces a model-aggregation method based on contribution estimation. By estimating each client's contribution, the aggregation weights of the model are assigned in proportion, ensuring a fairer distribution of benefits among clients;
- (3) The experimental results demonstrate that the proposed framework not only achieves high accuracy with its personalized training approach under different data-heterogeneity scenarios but also ensures fairness in collaboration, promoting more balanced and sustainable cooperation among clients.
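The contribution-based aggregation in point (2) can be sketched as follows. The paper's exact contribution-estimation formula is not reproduced in this extract, so the contribution scores below are hypothetical inputs; the sketch only illustrates the core idea of weighting each client's parameters by its normalized contribution rather than by its data size (as plain FedAvg would).

```python
import numpy as np

def aggregate(client_params, contributions):
    """Contribution-weighted aggregation (sketch): average the clients'
    flattened parameter vectors with weights proportional to each
    client's estimated contribution score."""
    c = np.asarray(contributions, dtype=float)
    alpha = c / c.sum()                 # normalize scores to aggregation weights
    stacked = np.stack(client_params)   # shape: (num_clients, num_params)
    # Weighted average over the client axis.
    return np.tensordot(alpha, stacked, axes=1)

# Example: three clients with (toy) flattened parameter vectors and
# hypothetical contribution scores.
params = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
global_params = aggregate(params, contributions=[0.2, 0.3, 0.5])  # -> [3.6, 4.6]
```

With equal contribution scores this reduces to the plain FedAvg mean; skewed scores pull the global model toward the higher-contributing clients.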
2. Background and Related Work
2.1. Distributed Learning in the IoT
2.1.1. Federated Learning
2.1.2. Split Learning and Its Variants
2.2. Distributed Learning Challenges and Strategies in IoT
2.2.1. Challenges to Distributed Learning in the IoT
- Data Heterogeneity
- Uneven Benefit Distribution
2.2.2. Strategies for Distributed Learning in the IoT
- Personalized Processing
- Fairness Design
3. SplitLPF Framework
3.1. Model Split
3.2. Personalized Training
3.3. Fairness Mechanism
Algorithm 1: ClientUpdate
Algorithm 2: ServerUpdate
3.4. SplitLPF Algorithm Analysis
Algorithm 3: SplitLPF
3.4.1. Complexity Analysis
3.4.2. Total Cost Analysis
3.5. Fairness Theory Analysis
4. Experiment and Result Analysis
4.1. Datasets
4.2. Baseline and Experimental Setup
4.3. Evaluation Metrics
4.4. Analysis of Experimental Results
4.5. Extended Discussion
Privacy Analysis
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Abbreviations
Abbreviation | Definition
---|---
IoT | Internet of Things
IIoT | Industrial Internet of Things
FL | Federated learning
SL | Split learning
SplitLPF | Split Learning with Personalized Fairness
PSL | Parallel split learning
IID | Independent and identically distributed
Non-IID | Non-independent and identically distributed
SGD | Stochastic gradient descent
FMNIST | Fashion-MNIST
EMNIST | Extended MNIST
UDD | Unbalanced Data Distributed
UDS | Unbalanced Data Size
UDDS | Unbalanced Data Distributed and Size
PC | Pearson Correlation
JFI | Jain's Fairness Index
Std | Standard deviation
CNN | Convolutional neural network
Appendix A
Appendix B
Methods | Communication Cost Per Client | Total Communication Cost | Total Model Training Time
---|---|---|---
FL | | |
SL | | |
SplitFed | | |
SplitLPF | | |
Method | FMNIST UDD | FMNIST UDS | FMNIST UDDS | EMNIST UDD | EMNIST UDS | EMNIST UDDS | CIFAR-10 UDD | CIFAR-10 UDS | CIFAR-10 UDDS
---|---|---|---|---|---|---|---|---|---
FedAvg | 86.14 | 80.69 | 91.71 | 75.43 | 69.50 | 83.47 | 61.40 | 40.82 | 71.56
FedPer | 86.19 | 81.98 | 91.72 | 76.12 | 70.24 | 82.92 | 61.97 | 40.94 | 71.72
FedPAC | 86.78 | 82.37 | 92.14 | 76.38 | 70.58 | 85.64 | 62.73 | 43.77 | 72.16
FedCI | 86.96 | 82.22 | 92.09 | 76.39 | 70.79 | 85.68 | 62.05 | 43.11 | 72.12
Ditto | 86.70 | 81.28 | 91.60 | 75.82 | 70.23 | 85.48 | 61.76 | 42.92 | 72.11
SplitFed | 85.67 | 78.77 | 88.44 | 67.63 | 66.23 | 81.52 | 59.61 | 40.02 | 70.84
SplitGP | 86.30 | 82.13 | 91.93 | 76.23 | 70.72 | 85.09 | 61.66 | 42.59 | 71.67
PFSL | 87.40 | 82.43 | 92.32 | 76.43 | 70.98 | 86.14 | 62.22 | 46.14 | 72.48
SplitLPF | 87.28 | 82.95 | 92.85 | 76.88 | 71.86 | 87.15 | 62.96 | 46.97 | 72.88
Method | FMNIST PC | FMNIST JFI | FMNIST Std | EMNIST PC | EMNIST JFI | EMNIST Std | CIFAR-10 PC | CIFAR-10 JFI | CIFAR-10 Std
---|---|---|---|---|---|---|---|---|---
FedAvg | 85.15 | 93.92 | 17.47 | 87.69 | 97.54 | 7.91 | 73.87 | 93.89 | 6.47
FedPer | 85.94 | 94.47 | 17.02 | 90.53 | 97.73 | 7.88 | 73.57 | 94.93 | 6.30
FedPAC | 85.31 | 94.05 | 16.85 | 89.81 | 97.20 | 6.74 | 73.43 | 93.37 | 6.19
FedCI | 88.35 | 95.20 | 16.15 | 92.07 | 97.72 | 5.24 | 75.20 | 95.18 | 4.66
Ditto | 87.98 | 95.49 | 16.46 | 91.56 | 97.95 | 6.20 | 74.54 | 96.88 | 4.98
SplitFed | 72.72 | 90.96 | 20.36 | 55.44 | 95.00 | 33.19 | 69.64 | 89.79 | 8.37
SplitGP | 78.74 | 91.55 | 17.55 | 77.46 | 96.15 | 15.15 | 72.59 | 92.96 | 7.33
PFSL | 89.43 | 96.16 | 15.03 | 94.51 | 97.96 | 3.06 | 76.45 | 95.45 | 3.27
SplitLPF | 91.69 | 98.76 | 14.73 | 91.21 | 98.32 | 2.64 | 78.57 | 98.39 | 3.01
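The fairness metrics reported above (PC, JFI, Std) have standard definitions and can be computed from per-client scores as follows. The sample accuracies below are hypothetical; in the collaborative-fairness literature, PC is typically the Pearson correlation between each client's standalone score and its score (or reward) after collaboration.

```python
import numpy as np

def jains_fairness_index(x):
    """Jain's Fairness Index: (sum x)^2 / (n * sum x^2), in [1/n, 1];
    1 means perfectly equal scores across clients."""
    x = np.asarray(x, dtype=float)
    return x.sum() ** 2 / (len(x) * (x ** 2).sum())

def pearson_correlation(a, b):
    """Pearson correlation between two per-client score vectors."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return np.corrcoef(a, b)[0, 1]

# Hypothetical per-client accuracies before and after collaboration.
standalone = np.array([0.70, 0.60, 0.66, 0.55])
collab = np.array([0.90, 0.85, 0.88, 0.80])

jfi = jains_fairness_index(collab)          # close to 1: benefits are even
pc = pearson_correlation(standalone, collab)  # high: rewards track contribution
std = collab.std()                          # low: small accuracy dispersion
```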
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Chen, H.; Chen, X.; Peng, L.; Bai, Y. Personalized Fair Split Learning for Resource-Constrained Internet of Things. Sensors 2024, 24, 88. https://doi.org/10.3390/s24010088