Article

Convergence Analysis for Differentially Private Federated Averaging in Heterogeneous Settings †

1 Fujian Key Laboratory of Communication Network and Information Processing, Xiamen University of Technology, Xiamen 361024, China
2 National Key Laboratory of Wireless Communications, University of Electronic Science and Technology of China, Chengdu 611731, China
* Author to whom correspondence should be addressed.
This article is an expanded version of a paper entitled “Secure federated averaging algorithm with differential privacy”, which was presented at IEEE International Workshop on Machine Learning for Signal Processing (MLSP), Espoo, Finland, 17–20 September 2020.
Mathematics 2025, 13(3), 497; https://doi.org/10.3390/math13030497
Submission received: 15 January 2025 / Revised: 26 January 2025 / Accepted: 27 January 2025 / Published: 2 February 2025
(This article belongs to the Section E1: Mathematics and Computer Science)

Abstract

Federated learning (FL) has emerged as a prominent approach for distributed machine learning, enabling collaborative model training while preserving data privacy. However, the presence of non-i.i.d. data and the need for robust privacy protection introduce significant challenges in theoretically analyzing the performance of FL algorithms. In this paper, we present a novel theoretical analysis of the typical differentially private federated averaging (DP-FedAvg) algorithm that judiciously accounts for the impact of non-i.i.d. data on convergence and privacy guarantees. Our contributions are threefold: (i) we introduce a theoretical framework for analyzing the convergence of the DP-FedAvg algorithm under different client sampling and data sampling strategies, privacy amplification, and non-i.i.d. data; (ii) we explore the privacy–utility tradeoff and demonstrate how client strategies interact with differential privacy to affect learning performance; and (iii) we provide extensive experimental validation on real-world datasets to verify our theoretical findings.
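To make the algorithm under analysis concrete, the following is a minimal sketch of one round of DP-FedAvg as commonly described in the literature: each sampled client computes a local update, the server clips each update to a norm bound, averages the clipped updates, and adds Gaussian noise calibrated to the clipping bound. All names (`dp_fedavg_round`, `clip_norm`, `noise_mult`) are illustrative, not from the paper, and the sketch omits multi-step local training and client sampling for brevity.

```python
import numpy as np

def dp_fedavg_round(global_w, client_grads, clip_norm=1.0, noise_mult=1.0,
                    lr=0.1, rng=None):
    """One round of DP-FedAvg (illustrative sketch).

    Each client's update is clipped to `clip_norm`; the server averages
    the clipped updates and adds Gaussian noise whose scale is set by
    the sensitivity clip_norm / num_clients times `noise_mult`.
    """
    rng = np.random.default_rng(rng)
    clipped = []
    for g in client_grads:
        update = -lr * g  # one local SGD step per client (simplified)
        norm = np.linalg.norm(update)
        # Scale the update down only if its norm exceeds clip_norm.
        update = update / max(1.0, norm / clip_norm)
        clipped.append(update)
    avg = np.mean(clipped, axis=0)
    # Gaussian mechanism: noise proportional to the per-client sensitivity.
    noise = rng.normal(0.0, noise_mult * clip_norm / len(client_grads),
                       size=np.asarray(global_w).shape)
    return global_w + avg + noise
```

With `noise_mult=0.0` the round reduces to plain FedAvg with clipped updates, which makes the privacy–utility tradeoff in the abstract tangible: larger `noise_mult` strengthens the differential privacy guarantee but perturbs the averaged model more.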
Keywords: federated learning; convergence analysis; privacy analysis; data heterogeneity

Share and Cite

MDPI and ACS Style

Li, Y.; Wang, S.; Wu, Q. Convergence Analysis for Differentially Private Federated Averaging in Heterogeneous Settings. Mathematics 2025, 13, 497. https://doi.org/10.3390/math13030497

Note that from the first issue of 2016, this journal uses article numbers instead of page numbers.
