Article

Privacy-Enhanced Federated Learning for Non-IID Data

School of Science, Zhejiang University of Science and Technology, Hangzhou 310023, China
*
Author to whom correspondence should be addressed.
Mathematics 2023, 11(19), 4123; https://doi.org/10.3390/math11194123
Submission received: 16 August 2023 / Revised: 15 September 2023 / Accepted: 23 September 2023 / Published: 29 September 2023
(This article belongs to the Special Issue Advanced Research on Information System Security and Privacy)

Abstract
Federated learning (FL) allows a large number of decentralized clients to collaboratively train a shared model while keeping their data local and private. In practical situations, the training data used in FL often exhibit non-IID characteristics, which diminishes the efficacy of FL. Our study presents a novel privacy-preserving FL algorithm, HW-DPFL, whose design leverages the similarity of data label distributions. The proposed approach achieves this without incurring any additional communication overhead. In this study, we show both theoretically and empirically that our approach improves the privacy guarantee and the convergence of FL.

1. Introduction

Federated learning (FL) enables the collaborative training of a shared model by multiple decentralized clients, eliminating the need for direct data exchange between clients. This paradigm keeps the training data local and protects clients’ privacy [1]. Consequently, FL has gained significant traction in addressing numerous practical challenges across diverse fields, including the medical arena [2]. Nevertheless, the training data distributed among the many participating clients are typically non-IID [3,4,5], which is a vital issue in the field of FL, as highlighted in reference [1]. The label distributions of clients’ training data have been observed to affect the overall performance of classification tasks [6]. Non-IID data affect FL from two distinct perspectives, and two primary factors contribute to the divergence of local models: the data distributions vary significantly among clients, and the local data are imbalanced. Non-IID data can cause “weight divergence” during model training and thereby degrade the performance of the global model [7,8].
One approach to address this issue is to mitigate the impact of class imbalance on the FL model by employing data augmentation techniques when there is a substantial disparity in data categories across client datasets [9]. Nevertheless, the predominant obstacle in practice is the inefficient use of communication resources resulting from the disproportionate allocation and dissemination of client data. The FedAvg algorithm, introduced by McMahan et al. [10], is widely acknowledged in this context. It efficiently combines the model updates from several clients by applying a weighted averaging technique to the model parameters. This line of work examines the variation in client data while assuming that the global data satisfy the IID assumption, and limited progress has been made in further improving the algorithm’s performance. Based on this premise, researchers investigated enhanced FL methodologies, including FedProx [11], SCAFFOLD [12], and MOON [13], which aim to improve the performance of FedAvg on non-IID data by refining the local training procedure. Nevertheless, in this setting, the improvements achieved by FedProx were less than satisfactory, whereas SCAFFOLD and MOON imposed considerable additional communication overhead. Another line of enhanced FL algorithms for non-IID data optimizes the aggregation weights to improve the performance of FedAvg, placing greater emphasis on computing the similarity between the local and global models [14,15]. However, this approach incurs significant storage and time costs and does not effectively capture variations in client data distribution.
Hence, the primary focus of our research is the non-IID data scenario. Our objective is to assess the similarity of clients’ data distributions and modify the aggregation weights accordingly, thereby mitigating the communication bottleneck.
While FL provides a certain level of privacy protection by sharing only model parameters such as gradients, the non-IID scenario merits particular attention: attackers can infer model parameter information, posing a potential risk of privacy breach [16]. To augment the privacy protection of the model during transmission, current approaches integrate FL with additional privacy protection technologies, including Differential Privacy (DP) [17], Homomorphic Encryption [18], and Secure Multi-Party Computation [19]. DP, in particular, possesses both a rigorous mathematical foundation and the ability to quantify the level of data privacy protection through the concept of a privacy budget. DP has thus emerged as a highly effective method for safeguarding data privacy in the context of FL. Two application approaches of DP are commonly used: centralized DP and localized DP. The centralized approach concentrates data processing and storage on the central server but is susceptible to single-point failures and potential privacy breaches [20]. The localized approach distributes data processing and protection tasks over the local devices, hence enhancing privacy safeguards. In [21], the authors offer a client-level privacy protection strategy; however, they do not provide sufficient evidence that the scheme fully satisfies the notion of DP. The study in [22] offers a theoretical demonstration but fails to consider the trade-off between the privacy parameters and model utility. Hence, for the non-IID scenario, the pressing issue is to reduce communication costs while simultaneously guaranteeing the privacy of FL. To this end, this study presents a privacy-preserving FL technique that leverages the similarity of client data distributions.
The primary contributions of our study are as follows:
(1)
To address the issue of suboptimal FL algorithm models resulting from non-IID data, we have put out a proposed scheme, which involves utilizing the Hellinger distance to quantify the disparity between the local data distributions of clients and the ideal balanced distribution. By doing so, we aim to alleviate the divergence in the model;
(2)
To address the issue of excessive communication usage in FL while dealing with non-IID data, we propose an aggregation technique that incorporates similarity weighting. This method leverages the similarity results obtained from analyzing the data distribution of each client, allowing for fast transfer of local model information to the Parameter Server (PS);
(3)
To address the privacy disclosure issue in FL, we employ DP as a solution. During the training process, Gaussian noise is incorporated into the client’s output in order to enhance privacy and security measures.
The remainder of our paper is organized as follows. Following this introduction, the relevant preliminaries are presented in Section 2, and the proposed system model for the privacy-preserving FL algorithm in the context of non-IID data is presented in Section 3. The privacy-theoretic findings are presented in Section 4, whereas the convergence results can be found in Section 5. The experiments are discussed in Section 6. Finally, the concluding remarks are presented in Section 7.

2. Preliminary

This section primarily outlines the fundamental framework of FL, elucidates the notion of DP mechanism, and examines the influence of non-IID data on model optimization.

2.1. Federated Learning

FL refers to a collaborative training procedure that involves the interaction between local clients and the PS [19]. Supposing a standard FL system with N local clients and a PS, each client $k \in \{1, 2, \dots, N\}$ has its private training dataset $D_k$ of size $n_k$; here, $D_k = \{(u_i^{(k)}, v_i^{(k)})\}_{i=1}^{n_k}$, where $u_i^{(k)}$ represents data point $i$ of client $k$ and $v_i^{(k)}$ indicates its label. The client communicates with the PS to train the global model cooperatively without transmitting the original data. Therefore, the optimization problem of FL can be described as
$$\min_{w} F(w) \triangleq \sum_{k=1}^{N} p_k F_k(w),$$
where $F(w)$ denotes the global objective function, $w \in \mathbb{R}^d$ stands for the model parameter vector, $p_k = n_k / \sum_{j=1}^{N} n_j$ refers to the aggregation weights, and $F_k(w)$ denotes the local objective function of client $k$. Specifically, over the $n_k$ training data $D_k = \{(u_1^{(k)}, v_1^{(k)}), (u_2^{(k)}, v_2^{(k)}), \dots, (u_{n_k}^{(k)}, v_{n_k}^{(k)})\}$, the local objective function $F_k(w)$ can be defined as
$$F_k(w) \triangleq \frac{1}{n_k} \sum_{i=1}^{n_k} l(w; u_i^{(k)}, v_i^{(k)}),$$
where $l(\cdot)$ refers to the loss function specified by the client. Cross-entropy is often used as the loss function in image recognition tasks.
The FedAvg algorithm [21] is a commonly employed approach in federated optimization. The PS averages the local model parameters submitted by individual clients and then distributes the aggregated result to each client. In the conventional FedAvg algorithm, each client first retrieves the latest global model parameters from the PS and initializes its local model. Clients are chosen at random for training; each selected client updates the model on its local data by executing E epochs of stochastic gradient descent and then reports the result to the PS. Finally, the PS collects the locally updated models and aggregates them by averaging.
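The weighted-averaging step of FedAvg described above can be sketched in a few lines (a minimal NumPy illustration under toy assumptions, not the paper’s implementation; `fedavg_aggregate` is a hypothetical helper name and parameters are flattened into vectors for simplicity):

```python
import numpy as np

def fedavg_aggregate(local_params, dataset_sizes):
    """Weighted average of local parameter vectors with p_k = n_k / sum_j n_j."""
    weights = np.asarray(dataset_sizes, dtype=float)
    weights /= weights.sum()                  # aggregation weights p_k
    stacked = np.stack(local_params)          # shape (N, d)
    return (weights[:, None] * stacked).sum(axis=0)

# example: client 2 holds three times more data than client 1
w_global = fedavg_aggregate([np.array([1.0, 0.0]), np.array([0.0, 1.0])],
                            [100, 300])       # -> [0.25, 0.75]
```

The result is pulled towards the client with the larger dataset, which is precisely the behavior that becomes problematic when the larger dataset is not representative.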

2.2. Differential Privacy

Dwork et al. proposed DP [23] to solve the privacy protection problem in databases. As a proven privacy protection technology, DP ensures that the influence of any single sample on the output always remains below a certain threshold, making it impossible for attackers to infer anything about a single sample from changes in the output.
Definition 1. 
($(\varepsilon, \delta)$-DP [23]): Consider any two neighboring datasets $D$ and $D'$, which differ in only one data sample. A randomized mechanism $M : \mathcal{D} \to \mathcal{R}$ with domain $\mathcal{D}$ and range $\mathcal{R}$ guarantees $(\varepsilon, \delta)$-DP if, for any subset of outputs $S \subseteq \mathcal{R}$, it holds that
$$\Pr[M(D) \in S] \le e^{\varepsilon} \Pr[M(D') \in S] + \delta.$$
This ensures that the output of an $(\varepsilon, \delta)$-DP mechanism is nearly indistinguishable regardless of a difference in a single record. The parameter $\varepsilon > 0$ is known as the privacy budget; a smaller $\varepsilon$ indicates a stronger privacy protection level, and $\delta \in [0, 1]$ represents the probability of breaking $\varepsilon$-DP.
Typically, $\varepsilon$-DP and $(\varepsilon, \delta)$-DP can be achieved through the Laplace mechanism and the Gaussian mechanism, respectively; this paper guarantees $(\varepsilon, \delta)$-DP by adding random noise drawn from the Gaussian distribution $N(0, \sigma^2 I_d)$ to the output function. To meet the requirements of DP, this mechanism controls the noise variance so that it satisfies the following condition:
$$\sigma^2 \ge \frac{2 (\Delta f)^2 \log(1.25/\delta)}{\varepsilon^2},$$
where $\Delta f \triangleq \max_{D, D'} \| f(D) - f(D') \|_2$ stands for the $L_2$-norm sensitivity of the function $f$.
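The Gaussian-mechanism condition above translates directly into a noise calibration routine (a sketch; `gaussian_sigma` is an illustrative helper name, not a library API):

```python
import math

def gaussian_sigma(sensitivity, epsilon, delta):
    """Smallest noise std satisfying sigma^2 >= 2 (Δf)^2 log(1.25/δ) / ε^2."""
    return math.sqrt(2.0 * sensitivity ** 2 * math.log(1.25 / delta)) / epsilon

# Δf = 1, (ε, δ) = (1, 1e-5) gives sigma ≈ 4.84
sigma = gaussian_sigma(1.0, 1.0, 1e-5)
```

Note how the required noise scales linearly with the sensitivity and inversely with the privacy budget $\varepsilon$.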
Sampling processes are frequently employed in machine learning algorithms. The privacy amplification property of DP [24] shows that applying a DP mechanism to a randomly chosen subset of the dataset provides stronger privacy guarantees than applying it to the entire dataset.
Lemma 1. 
(Privacy amplification by subsampling [25]): If $M$ is $(\varepsilon, \delta)$-DP, then $M \circ \mathrm{Subsampling}$ with sampling rate $\gamma$ obeys $(\varepsilon', \delta')$-DP, with $\varepsilon' = \log(1 + \gamma(e^{\varepsilon} - 1))$ and $\delta' = \gamma \delta$.
The privacy amplification theorem demonstrates that, by sub-sampling the clients' data, the noise variance needed to attain the desired DP level can be effectively decreased. More broadly, the lemma suggests that it is worthwhile to exploit the randomness in sub-sampling: if $M$ is $(\varepsilon, \delta)$-DP, then the sub-sampled mechanism with probability $\gamma < 1$ obeys $(O(\gamma \varepsilon), \gamma \delta)$-DP for sufficiently small $\varepsilon$.
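Lemma 1 can be checked numerically (illustrative helper; the function name is ours):

```python
import math

def amplified_budget(epsilon, delta, gamma):
    """(ε', δ') of the subsampled mechanism per Lemma 1."""
    eps_prime = math.log(1.0 + gamma * (math.exp(epsilon) - 1.0))
    return eps_prime, gamma * delta

# ε = 1, δ = 1e-5, sampling rate γ = 0.01
eps_p, delta_p = amplified_budget(1.0, 1e-5, 0.01)
# eps_p stays below the linearized bound 2γε = 0.02 used later in Theorem 1
```

For small $\gamma$ the amplified budget is roughly $\gamma(e^{\varepsilon} - 1)$, which is why sub-sampling lets each client add less noise per round.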

2.3. Impact of Non-IID Data

In every global iteration of FL, each client aims to reduce its loss function based on its local data. Non-IID attributes of the local datasets can result in significant discrepancies between the local and global models. In certain instances, the gradients of local models may even point in a direction opposite to that of the global model, a phenomenon known as local model drift [12,26]. Put differently, the updated local model is biased towards the local optimum and deviates from the global optimum. When the parameters of such local models are uploaded to the PS for aggregation, the precision of the global model suffers; moreover, significant network capacity is consumed, decreasing communication efficiency.
Figure 1 illustrates the FedAvg problem in both IID and non-IID scenarios. In IID scenarios, it can be observed that the global optimal value exhibits a strong proximity to the local ideal value. In other words, the global average model converges towards the global optimum. In non-IID scenarios, the discrepancy between the global optimal value and the local ideal value results in a considerable distance between the averaged global model and the global optimal state. Hence, it is imperative to investigate the methodologies for developing a proficient FL in non-IID scenarios.

3. System Model

In this section, we introduce the privacy-preserving FL algorithm (HW-DPFL), which is designed on the basis of the concept of probability distribution similarity of data labels. Subsequently, the method’s specific process is described.
Firstly, note that in the FedAvg algorithm the PS aggregates and averages the local model parameters, so the effectiveness of FedAvg is greatly influenced by the weighting method employed. Typically, the weight assigned to each local dataset is the ratio of that dataset’s size to the total dataset size. In non-IID cases, however, this choice can slow convergence and potentially compromise privacy, so a more suitable weighting approach must be chosen. The flow of the HW-DPFL algorithm is depicted in Figure 2. During model aggregation, the algorithm computes the Hellinger distance between the label distribution of each client’s dataset and a balanced distribution, extracts the local model information accordingly, and aggregates it with the updated weights. The proposed approach mitigates the challenges of training on non-IID data and enhances the efficiency of model training.
In each iteration $t$, the label distribution of client $k$'s dataset can be represented by the label vector $G_k$:
$$G_k = [n_{k,1}, n_{k,2}, \dots, n_{k,C}],$$
where $C$ denotes the total number of label classes and $n_{k,c}$ indicates the number of class-$c$ labels possessed by client $k$.
The Hellinger distance is computed between the label distribution $G_k$ of client $k$'s local dataset and the standard balanced data label distribution $S$:
$$h_t = H(G, S) = \frac{1}{\sqrt{2}} \left\| \sqrt{G} - \sqrt{S} \right\|_2 = \frac{1}{\sqrt{2}} \sqrt{\sum_{i=1}^{C} \left( \sqrt{G_i} - \sqrt{S_i} \right)^2}.$$
Hellinger distance is a metric employed in the field of probability and statistics to quantify the degree of similarity between two probability distributions [27]. In the context of non-IID data, the Hellinger distance could be employed as a metric to assess the similarity between two classes, hence enabling algorithmic enhancements. Hence, the measure of similarity between each client’s local dataset and the designated standard balanced dataset can be determined by computing the Hellinger distance.
The aggregation weights of the model are then updated at each iteration:
$$w_t = \sum_{k=1}^{M} \tau_k w_t^k, \qquad \tau_k = \frac{h_t^k}{\sum_{j=1}^{M} h_t^j}.$$
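The Hellinger distance and the resulting aggregation weights can be computed as follows (a NumPy sketch; the weights follow the formula $\tau_k = h_t^k / \sum_j h_t^j$ verbatim, and the helper names are ours):

```python
import numpy as np

def hellinger(p, q):
    """H(p, q) = (1/sqrt(2)) ||sqrt(p) - sqrt(q)||_2 after normalizing counts."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    return np.linalg.norm(np.sqrt(p / p.sum()) - np.sqrt(q / q.sum())) / np.sqrt(2.0)

def aggregation_weights(label_counts, balanced):
    """tau_k = h_t^k / sum_j h_t^j from each client's label-count vector G_k."""
    h = np.array([hellinger(g, balanced) for g in label_counts])
    return h / h.sum()

# two clients over two classes: one balanced, one skewed towards class 0
tau = aggregation_weights([[5, 5], [9, 1]], balanced=[1, 1])
```

A perfectly balanced client has Hellinger distance 0 to the balanced reference, and the weights always sum to 1.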
Furthermore, the PS is assumed to be honest-but-curious: it adheres to the FL protocol but takes an interest in the clients’ data. Simultaneously, the system is susceptible to external attacks during the transmission of model parameters. To address this issue, we add noise drawn from a Gaussian distribution, thereby ensuring DP. Algorithm 1 summarizes the privacy-preserving FL method proposed in this research, which is founded on the concept of data label distribution similarity (HW-DPFL).
Algorithm 1: HW-DPFL
Input: K, the number of clients; B, the local batch size; E, the number of local training epochs; F, the fraction of clients participating in each round; η, the learning rate; S, the standard balanced data label distribution.
Output: model parameter
The PS initializes the global model parameters
for each round t = 1, 2, … do
   M ← max(⌊F · K⌋, 1)  // determine the number of clients for this round of communication
   S_t ← random set of M clients  // randomly select M clients to participate in training
   for each client k ∈ S_t in parallel do
      w̃_t^k ← ClientUpdate(k, w_{t−1})
      h_t^k ← GetWeight(k)
   w_t ← HW-DPFL({w̃_t^k}, {h_t^k})
def GetWeight(k):  // get the aggregation weight of client k
   G ← (local dataset label distribution)
   h_t ← H(G, S)  // compute the similarity of the client's label distribution
   return h_t
def HW-DPFL(w̃, h):  // weighted aggregation
   w_t ← Σ_{k=1}^{M} w̃_t^k h_t^k / Σ_{j=1}^{M} h_t^j
   return w_t
def ClientUpdate(k, w):  // local model update
   B ← (batched local dataset)
   for each local epoch from 1 to E do
      for each batch b ∈ B do  // train on each batch of data
         w̃_t^k ← (w_t^k − η ∇F_t^k(w; b)) + Z_t^k,  Z_t^k ∼ N(0, σ_{t,k}² I_d)
   return w̃_t^k to the server
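Under toy assumptions (linear models as NumPy vectors, squared-error gradients, and noise added once after the E local epochs, as in the per-round description below Algorithm 1), one HW-DPFL communication round might look like the following. The function names mirror Algorithm 1, but this is an illustrative sketch, not the authors’ implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def client_update(w, data, eta=0.1, epochs=5, sigma=0.1):
    """E epochs of gradient descent on squared error, then Gaussian noise."""
    X, y = data
    for _ in range(epochs):
        grad = 2.0 * X.T @ (X @ w - y) / len(y)   # squared-error gradient
        w = w - eta * grad
    return w + rng.normal(0.0, sigma, size=w.shape)  # Z ~ N(0, sigma^2 I_d)

def hw_dpfl_round(w_global, clients, h):
    """Aggregate noisy local models with Hellinger-based weights tau."""
    tau = np.asarray(h, dtype=float)
    tau = tau / tau.sum()                          # tau_k = h_k / sum_j h_j
    locals_ = [client_update(w_global.copy(), d) for d in clients]
    return sum(t * w for t, w in zip(tau, locals_))

# toy demo: two clients sharing a linear ground truth
X = rng.normal(size=(20, 3))
y = X @ np.array([1.0, -2.0, 0.5])
w_new = hw_dpfl_round(np.zeros(3), [(X, y), (X[:10], y[:10])], h=[0.3, 0.7])
```

The precomputed weights `h` stand in for the Hellinger distances of the clients’ label distributions; in image classification they would be computed from the label counts as in GetWeight.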
During model iteration, the method adds noise to the model parameters, perturbing them in a manner that significantly hinders the attacker’s ability to extract meaningful information. The noise parameters and the privacy budget of DP are determined by the specific privacy protection requirements. The composition theorem enables clients to compute the privacy loss incurred throughout each iteration of training. For clarity, the $t$-th round of the HW-DPFL algorithm can be written as follows:
Client $k$: $w_t^k = \mathrm{ClientUpdate}(k, w_{t-1})$, then $\tilde{w}_t^k = w_t^k + Z_t^k$ with $Z_t^k \sim N(0, \sigma_{t,k}^2 I_d)$;
Server: $w_t = \sum_{k=1}^{M} \tau_k \tilde{w}_t^k$,
where $w_t$ is the global model parameter of round $t$, $w_t^k$ denotes the local model parameter of client $k$ in round $t$, $\mathrm{ClientUpdate}(k, w_{t-1})$ denotes the local stochastic gradient descent process of client $k$, and $d$ is the dimension of the model parameters.

4. Privacy Analysis

In this section, we analyze the privacy guarantees offered by the HW-DPFL algorithm. We begin by analyzing the $L_2$-norm sensitivity of the local parameter update function. We then assess the privacy guarantee of each iteration. Finally, we calculate the total privacy budget after all $T$ iterations.

4.1. $L_2$-Norm Sensitivity

To achieve DP, we employ the Gaussian mechanism, whose noise scale depends on the $L_2$-norm sensitivity. Thus, we first derive the sensitivity of the local parameter updating function.
Assumption 1. 
Suppose $\zeta_t^k$ is a uniform random sample from the local data of client $k$ in iteration $t$. The squared norm of the gradients is uniformly bounded for all clients, i.e., $\|\nabla F_k(w_t^{k,e}; \zeta_t^{k,e})\|^2 \le G^2$ for $k = 1, \dots, N$, $e = 1, \dots, E$, and $t = 1, \dots, T$.
Assumption 1 has been successfully used in DP-based proofs [21] and can be enforced by a gradient clipping methodology [28].
Lemma 2. 
If Assumption 1 holds, then the $L_2$-norm sensitivity of the local update parameters of client $k$ in iteration $t$ is
$$\Delta f_t^k \triangleq \max_{D_k, D_k'} \| w_t^k(D_k) - w_t^k(D_k') \|_2 = 2 \eta E G.$$
The proof of Lemma 2 is shown in Appendix A.

4.2. Privacy Guarantee in Each Round

Subsequently, a sub-sampling privacy amplification lemma is employed to mitigate the noise variance, ensuring that each client adheres to the noise variance constraint in every iteration.
Theorem 1. 
With mini-batch sampling without replacement, given that the added noise $Z_t^k$ is drawn from a Gaussian distribution $N(0, \sigma_{t,k}^2 I_d)$, the required noise level satisfies
$$\sigma_{t,k}^2 \ge \frac{32 \gamma^2 \eta^2 E^2 G^2 \log(1.25\gamma/\delta)}{\varepsilon^2},$$
where the sampling probability is $\gamma = E b / n_k$.
Proof of Theorem 1. 
According to privacy amplification by sub-sampling, the Gaussian mechanism in fact achieves $(\log(1 + \gamma(e^{\varepsilon} - 1)), \gamma\delta)$-DP. Since
$$\log(1 + \gamma(e^{\varepsilon} - 1)) \le \gamma(e^{\varepsilon} - 1) \le 2\gamma\varepsilon,$$
the Gaussian mechanism achieves at least $(2\gamma\varepsilon, \gamma\delta)$-DP. Specifically, in iteration $t$, in order to satisfy the $(\varepsilon, \delta)$-DP guarantee of client $k$, the Gaussian noise level can be decreased to
$$\sigma_{t,k}^2 \ge \frac{8 (\Delta f_t^k)^2 \gamma^2 \log(1.25\gamma/\delta)}{\varepsilon^2} = \frac{32 \gamma^2 \eta^2 E^2 G^2 \log(1.25\gamma/\delta)}{\varepsilon^2}.$$
This completes the proof. □

4.3. The Total Privacy Loss

In this paper, we employ the moment accountant approach to quantify the cumulative privacy loss across T rounds. Our proposed methodology offers a more stringent constraint for quantifying the overall extent of privacy compromise compared to prior research efforts.
Theorem 2. 
Assume that the noise $Z_t^k$ obeys a Gaussian distribution $N(0, \sigma_{t,k}^2 I_d)$; then, the HW-DPFL algorithm guarantees $(\hat{\varepsilon}, \delta)$-DP with
$$\hat{\varepsilon} = \varepsilon \left( \frac{T \log(1/\delta)}{2 \log(1.25\gamma/\delta)} \right)^{1/2}.$$
Proof of Theorem 2. 
According to [28], we define the log of the moment-generating function evaluated at $e$ for client $k$ in iteration $t$ as
$$\alpha_t^k(e) = \log \mathbb{E}_{\tilde{w}_t^k} \left[ \left( \frac{\Pr[\tilde{w}_t^k \mid D_k]}{\Pr[\tilde{w}_t^k \mid D_k']} \right)^{e} \right].$$
Suppose that $u_0$ and $u_1$ stand for the probability density functions of $N(0, \sigma_{t,k}^2 I_d)$ and $N(\Delta f_t^k, \sigma_{t,k}^2 I_d)$, respectively. Let $u$ denote the mixture of the two Gaussian distributions, $u = (1 - \gamma) u_0 + \gamma u_1$. Therefore, we have
$$\alpha_t^k(e) = \log \max(E_1, E_2),$$
where
$$E_1 = \mathbb{E}_{z \sim u_0} \left[ \left( \frac{u_0(z)}{u(z)} \right)^{e} \right], \qquad E_2 = \mathbb{E}_{z \sim u} \left[ \left( \frac{u(z)}{u_0(z)} \right)^{e} \right].$$
According to the composability of the moments accountant method and Lemma 3 in [27], we have
$$\alpha_t^k(e) \le \frac{T \gamma^2 (\Delta f_t^k)^2 e^2}{\sigma_{t,k}^2} = \frac{T \varepsilon^2 e^2}{8 \log(1.25\gamma/\delta)}.$$
Next, following Theorem 2.2 in [28], the HW-DPFL algorithm satisfies $(\hat{\varepsilon}, \hat{\delta})$-DP. Here,
$$\hat{\delta} = \min_{e \in \mathbb{Z}^+} \exp(\alpha^k(e) - e\hat{\varepsilon}) = \min_{e \in \mathbb{Z}^+} \exp\left( \frac{T\varepsilon^2 e^2}{8\log(1.25\gamma/\delta)} - e\hat{\varepsilon} \right).$$
Since the above expression is a quadratic function of $e$, define $\theta(e) = \frac{T\varepsilon^2 e^2}{8\log(1.25\gamma/\delta)} - \hat{\varepsilon}\, e$, $e = 1, \dots, E$. Then,
$$\hat{\delta} < \exp(\theta(e^* + 1)),$$
where $e^*$ is the minimum point of the function $\theta(\cdot)$.
To make the HW-DPFL algorithm satisfy $(\hat{\varepsilon}, \hat{\delta})$-DP, let
$$\theta(e^* + 1) = \frac{T\varepsilon^2}{8\log(1.25\gamma/\delta)} - \frac{2\log(1.25\gamma/\delta)\,\hat{\varepsilon}^2}{T\varepsilon^2} \le \log(\delta).$$
Thus, we have
$$\log(1/\delta) \le -\frac{T\varepsilon^2}{8\log(1.25\gamma/\delta)} + \frac{2\log(1.25\gamma/\delta)\,\hat{\varepsilon}^2}{T\varepsilon^2} \le \frac{2\log(1.25\gamma/\delta)\,\hat{\varepsilon}^2}{T\varepsilon^2},$$
and hence
$$\hat{\varepsilon} \ge \varepsilon \left( \frac{T \log(1/\delta)}{2 \log(1.25\gamma/\delta)} \right)^{1/2}.$$
This completes the proof. □
Both $b$ and $E$ contribute to accelerating the convergence of Stochastic Gradient Descent (SGD) [29]. Moreover, as stated in Theorem 1, when both $b$ and $E$ are large, more noise must be added to guarantee differential privacy, and this increased noise may hinder the convergence of the algorithm. This suggests a trade-off between the speed at which the algorithm reaches convergence and the degree of privacy protection; this trade-off is analyzed further in the following section.
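As a concrete illustration of Theorem 2, the total budget $\hat{\varepsilon}$ can be computed from the per-round parameters (the numeric values below are hypothetical, and `total_epsilon` is our helper name):

```python
import math

def total_epsilon(eps_round, T, delta, gamma):
    """Total budget per Theorem 2: ε̂ = ε sqrt(T log(1/δ) / (2 log(1.25 γ / δ)))."""
    return eps_round * math.sqrt(T * math.log(1.0 / delta)
                                 / (2.0 * math.log(1.25 * gamma / delta)))

# e.g. per-round ε = 0.1, T = 100 rounds, δ = 1e-5, sampling rate γ = 0.01
eps_hat = total_epsilon(0.1, 100, 1e-5, 0.01)  # ≈ 0.90
```

The $\sqrt{T}$ growth (rather than the linear growth of basic composition) is the tighter accounting that the moments accountant provides.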

5. Convergence Analysis

This section primarily focuses on the analysis of the convergence of the HW-DPFL algorithm described herein. Let us commence by establishing certain assumptions.
Assumption 2. 
For all $k \in [N]$, each $F_k$ is $L$-smooth, i.e., for all $x$ and $y$, $F_k(x) \le F_k(y) + (x - y)^T \nabla F_k(y) + \frac{L}{2}\|x - y\|^2$.
Assumption 3. 
For all $k \in [N]$, each $F_k$ is $\mu$-strongly convex, i.e., for all $x$ and $y$, $F_k(x) \ge F_k(y) + (x - y)^T \nabla F_k(y) + \frac{\mu}{2}\|x - y\|^2$.
Assumption 4. 
For all $k \in [N]$, the stochastic gradients of each client satisfy $\mathbb{E}[\nabla F_k(w_t^k; \zeta_t^{k,b})] = \nabla F_k(w_t^k)$ and $\mathbb{E}[\|\nabla F_k(w_t^k; \zeta_t^{k,b}) - \nabla F_k(w_t^k)\|^2] \le \rho_k^2$.
Let $F^*$ and $F_k^*$ denote the optimal values of the global objective function and of the objective function of client $k$, respectively. According to [30], the degree of data heterogeneity can be expressed as $\Gamma = F^* - \sum_{k=1}^{N} p_k F_k^*$. It can be observed that, when the client data are IID, $\Gamma = 0$; the more heterogeneous the data, the greater the value of $|\Gamma|$.
Suppose $w_t^k$ is the model parameter of client $k$ in round $t$, and $E$ is the number of local epochs. The set $\Omega_E = \{ nE \mid n = 1, 2, \dots \}$ collects the rounds at which the clients communicate with the PS. A subset of clients is randomly selected to participate in training according to the sampling scheme. If $t + 1 \in \Omega_E$, the PS aggregates the local model parameters to obtain the global model and sends the latest model parameters to each client; if $t + 1 \notin \Omega_E$, each client updates its local model parameters with its local data. Because the participating clients perform multiple rounds of iterations locally, we use an intermediate variable $\upsilon_{t+1}^k$ to represent the result of one SGD step, and the update can be expressed as
$$\upsilon_{t+1}^k = w_t^k - \eta \nabla F_k(w_t^k, \zeta_t^k),$$
$$w_{t+1}^k = \begin{cases} \upsilon_{t+1}^k, & t + 1 \notin \Omega_E, \\ \sum_{k=1}^{M} \tau_k \upsilon_{t+1}^k, & t + 1 \in \Omega_E. \end{cases}$$
In order to enhance the comprehensibility of the proof, we shall introduce the subsequent lemma.
Lemma 3. 
(Result for each round $t$) In iteration $t$, suppose that Assumptions 1 to 4 hold. Then,
$$\mathbb{E}\|\hat{\upsilon}_{t+1} - w^*\|_2^2 \le (1 - \mu\eta)\, \mathbb{E}\|\hat{w}_t - w^*\|_2^2 + \Psi + T\Lambda,$$
where
$$\Psi = 2(E - 1)^2 \eta^2 G^2 + \left( \frac{2}{M} + 2 \right) \eta^2 L \Gamma + \eta^2 \sum_{k=1}^{M} \tau_k^2 \rho_k^2,$$
$$\Lambda = d \sum_{k=1}^{M} \tau_k^2 \sigma_{t,k}^2 = d \sum_{k=1}^{M} \tau_k^2 \cdot \frac{32 \gamma^2 \eta^2 E^2 G^2 \log(1.25\gamma/\delta)}{\varepsilon^2},$$
and $w^*$ stands for the global optimal solution.
The proof of Lemma 3 is shown in Appendix B.
Theorem 3. 
Suppose Assumptions 1 to 4 hold; then, the convergence of the HW-DPFL algorithm satisfies
$$\mathbb{E}[F(\hat{w}_T)] - F(w^*) \le \frac{L}{2}\,\mathbb{E}\|\hat{w}_T - w^*\|_2^2 \le \frac{L(1 - \mu\eta)^T}{2}\,\mathbb{E}\|\hat{w}_0 - w^*\|_2^2 + \frac{L}{\mu\eta}\left[ \frac{\Psi + T\Lambda}{2} + \frac{N - M}{N - 1} \cdot \frac{2}{M}\,\eta^2 E^2 G^2 \right].$$
Proof of Theorem 3. 
If $t + 1 \notin \Omega_E$, it can be observed that $\hat{w}_{t+1} = \hat{\upsilon}_{t+1}$; if $t + 1 \in \Omega_E$, the two are not equal. Assuming that there is no communication loss among the selected clients in each round, we want the model parameters obtained after sub-sampling and aggregation to be unbiased; thus, in the HW-DPFL algorithm, when $t + 1 \in \Omega_E$, we have
$$\mathbb{E}_{S_t}[\hat{w}_{t+1}] = \hat{\upsilon}_{t+1},$$
where $\mathbb{E}_{S_t}$ denotes the expectation over the randomly selected subset $S_t$ of clients.
Lemma 4. 
(Bounding the variance of $\hat{w}_t$ [29]). If the PS samples $S_t$ uniformly without replacement, then the variance of $\hat{w}_{t+1}$ is bounded by
$$\mathbb{E}_{S_t}\|\hat{w}_{t+1} - \hat{\upsilon}_{t+1}\|_2^2 \le \frac{N - M}{N - 1} \cdot \frac{4}{M}\,\eta^2 E^2 G^2.$$
Note that
$$\mathbb{E}\|\hat{w}_{t+1} - w^*\|_2^2 = \underbrace{\mathbb{E}\|\hat{w}_{t+1} - \hat{\upsilon}_{t+1}\|_2^2}_{R_1} + \underbrace{\mathbb{E}\|\hat{\upsilon}_{t+1} - w^*\|_2^2}_{R_2} + \underbrace{2\,\mathbb{E}\langle \hat{w}_{t+1} - \hat{\upsilon}_{t+1},\, \hat{\upsilon}_{t+1} - w^* \rangle}_{R_3}.$$
Using $\mathbb{E}_{S_t}[\hat{w}_{t+1}] = \hat{\upsilon}_{t+1}$, the term $R_3 = 0$.
Case 1. If $t + 1 \notin \Omega_E$, then $R_1 = 0$ because $\hat{w}_{t+1} = \hat{\upsilon}_{t+1}$. According to Lemma 3, we have
$$\mathbb{E}\|\hat{w}_{t+1} - w^*\|_2^2 = \mathbb{E}\|\hat{\upsilon}_{t+1} - w^*\|_2^2 \le (1 - \mu\eta)\,\mathbb{E}\|\hat{w}_t - w^*\|_2^2 + \Psi + T\Lambda.$$
Case 2. If $t + 1 \in \Omega_E$, according to Lemmas 3 and 4, it follows that
$$\mathbb{E}\|\hat{w}_{t+1} - w^*\|_2^2 = \mathbb{E}\|\hat{\upsilon}_{t+1} - w^*\|_2^2 + \mathbb{E}\|\hat{w}_{t+1} - \hat{\upsilon}_{t+1}\|_2^2 \le (1 - \mu\eta)\,\mathbb{E}\|\hat{w}_t - w^*\|_2^2 + \Psi + T\Lambda + \frac{N - M}{N - 1} \cdot \frac{4}{M}\,\eta^2 E^2 G^2.$$
Unrolling the recursion, we can obtain
$$\mathbb{E}\|\hat{w}_T - w^*\|_2^2 \le (1 - \mu\eta)^T\, \mathbb{E}\|\hat{w}_0 - w^*\|_2^2 + \sum_{t=1}^{T-1} (1 - \mu\eta)^t \left[ \Psi + T\Lambda + \frac{N - M}{N - 1} \cdot \frac{4}{M}\,\eta^2 E^2 G^2 \right] \le (1 - \mu\eta)^T\, \mathbb{E}\|\hat{w}_0 - w^*\|_2^2 + \frac{1}{\mu\eta} \left[ \Psi + T\Lambda + \frac{N - M}{N - 1} \cdot \frac{4}{M}\,\eta^2 E^2 G^2 \right].$$
Since each $F_k(\cdot)$ is $L$-smooth, we have
$$\mathbb{E}[F(\hat{w}_T)] - F(w^*) \le \frac{L}{2}\,\mathbb{E}\|\hat{w}_T - w^*\|_2^2 \le \frac{L(1 - \mu\eta)^T}{2}\,\mathbb{E}\|\hat{w}_0 - w^*\|_2^2 + \frac{L}{\mu\eta}\left[ \frac{\Psi + T\Lambda}{2} + \frac{N - M}{N - 1} \cdot \frac{2}{M}\,\eta^2 E^2 G^2 \right].$$
This completes the proof. □
By Theorem 3, the convergence upper bound of the HW-DPFL algorithm is affected by several factors: the number of communication rounds $T$, the mini-batch size $b$, the noise level $\sigma$, and the number of local update steps $E$. Increasing $E$ can accelerate convergence, and increasing the local mini-batch size $b$ can likewise improve the convergence rate. Nevertheless, excessively large values of $E$ and $b$ may impede convergence. Increasing the noise level $\sigma$ strengthens privacy protection but may decrease the rate of convergence.

6. Experiment

In this section, we assess the efficacy of the HW-DPFL. The experiments primarily employ Convolutional Neural Networks for the purpose of classifying the MNIST dataset.
MNIST dataset: The dataset was publicly released by the National Institute of Standards and Technology. It is a grayscale image dataset consisting of 70,000 handwritten digit images, each associated with a label from 0 to 9. The resolution of each image is fixed at 28 × 28 pixels, a relatively low resolution that has nonetheless been widely accepted and effectively applied in practice. Some image examples from the MNIST dataset are shown in Figure 3.
A total of 60,000 images were designated as the training dataset, while the remaining 10,000 images were reserved for testing the model. During training, the total number of clients is specified and the 60,000 images are distributed equally among them, so that each client receives 600 images. The proportion of clients whose data are independent and identically distributed is set to 0.8.
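A partitioning scheme consistent with this description (e.g., 100 clients, 600 images each, 80% of clients holding IID shards and the remainder label-skewed) could be sketched as follows. The two-classes-per-skewed-client choice is our assumption, since the exact non-IID scheme is not specified here:

```python
import numpy as np

def partition(labels, num_clients=100, per_client=600, iid_fraction=0.8, seed=0):
    """Assign sample indices to clients: iid_fraction of clients receive uniform
    random shards; each remaining client samples only from two label classes.
    (Skewed clients may overlap with IID shards in this simplified sketch.)"""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(labels))
    n_iid = int(iid_fraction * num_clients)
    shards = [idx[i * per_client:(i + 1) * per_client] for i in range(n_iid)]
    classes = np.unique(labels)
    by_label = {c: np.where(labels == c)[0] for c in classes}
    for _ in range(num_clients - n_iid):
        c1, c2 = rng.choice(classes, size=2, replace=False)
        pool = np.concatenate([by_label[c1], by_label[c2]])
        shards.append(rng.choice(pool, size=per_client, replace=False))
    return shards

labels = np.repeat(np.arange(10), 6000)   # stand-in for MNIST training labels
shards = partition(labels)
```

Each shard is a list of sample indices for one client; the skewed shards give exactly the imbalanced label vectors $G_k$ that the Hellinger-distance weighting targets.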
Parameter setting: We set δ = 10⁻⁵ and clip the local gradient norm to at most 1. The loss function is the cross-entropy, which yields a strongly convex optimization problem:
(1)
Impact of local mini-batch size b : Figure 4 depicts the impact of varying local mini-batch sizes on the training loss of the HW-DPFL algorithm. We set the local mini-batch size b = 10 , 15 , 20 , 50 . The experimental results show that an optimal value exists in both settings. In the IID case, increasing b accelerates convergence and further reduces the training loss; however, the effect becomes detrimental once b exceeds a certain threshold. With non-IID data, the reduction in training loss is more pronounced, and the convergence gap between different values of b is more noticeable;
(2)
Impact of the number of local update steps E : We also analyze the performance of the HW-DPFL algorithm with different numbers of local update steps, set as E = { 1 , 10 , 20 , 50 } . The outcomes are depicted in Figure 5. For a fixed ε = 1 , there is an optimal value of E at which HW-DPFL performs best in both scenarios. Increasing E can speed up convergence, but the convergence rate decelerates markedly when E is too large. In addition, for non-IID data, an excessively large E produces greater variability in the training loss: a larger E causes more significant weight divergence among clients, which impedes the convergence of HW-DPFL;
(3)
Impact of the noise level σ : The experimental results of HW-DPFL with different noise levels σ = { 0.2 , 0.5 , 1 } are presented in Figure 6. The results indicate that the training loss grows steadily as the noise level increases: high noise levels harm the model's convergence performance and lead to a substantial increase in training loss. In both the IID and non-IID cases, the training loss of HW-DPFL exhibits an initial steep decline, and in the non-IID case the reduction is greater. Furthermore, HW-DPFL can enhance the resilience of the training model against DP injection noise;
The above experiments examine the impact of various factors on the efficacy of the HW-DPFL algorithm, which demonstrates strong performance across several data characteristics. When the data satisfy the IID assumption, appropriately raising the local mini-batch size b and the number of local update steps E speeds up convergence and lowers the training loss, although surpassing a certain threshold produces the reverse effect. With non-IID data, increasing b effectively decreases the training loss, but a larger E increases its variability. Furthermore, the trade-off between utility and privacy must be considered in both the IID and non-IID scenarios: an excessive noise level significantly degrades the convergence performance of the model. When the data are non-IID distributed, the HW-DPFL algorithm attains lower training losses and better improves the robustness of the model.
Hence, the performance of the HW-DPFL algorithm can be enhanced by tuning the local mini-batch size b , the number of local update steps E , and the noise level σ . The value of b should be chosen to suit the IID characteristics of the data; E should be kept within a moderate range to prevent fluctuations and the slowdown caused by an excessively large value; and σ should balance utility against privacy, selecting a noise level that guarantees privacy while preserving the desired level of utility. These adjustments improve the training efficacy of the HW-DPFL algorithm and bolster the resilience of the model;
(4)
Algorithm performance comparison: In both the IID and non-IID scenarios, HW-DPFL attains higher accuracy than DP-FedAvg [8] and DP-FL [19]. Meanwhile, HW-DPFL remains comparable to DP-FL in the non-IID case (Table 1), confirming the practicality and efficacy of the HW-DPFL algorithm in non-IID data scenarios.
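The per-client update underlying these experiments clips the local gradient to norm 1 and adds Gaussian noise before the parameters are shared. A minimal sketch of one such step; the logistic-regression model and all variable names are our own illustration, not the paper's CNN:

```python
import numpy as np

def dp_local_step(w, X, y, lr=0.1, clip=1.0, sigma=0.5, rng=None):
    """One clipped, noised SGD step on a softmax cross-entropy objective."""
    rng = np.random.default_rng(0) if rng is None else rng
    logits = X @ w
    logits -= logits.max(axis=1, keepdims=True)        # numerical stability
    p = np.exp(logits)
    p /= p.sum(axis=1, keepdims=True)
    onehot = np.eye(w.shape[1])[y]
    grad = X.T @ (p - onehot) / len(X)                 # cross-entropy gradient
    norm = np.linalg.norm(grad)
    grad = grad / max(1.0, norm / clip)                # clip gradient norm to <= 1
    grad = grad + rng.normal(0.0, sigma * clip, grad.shape)  # Gaussian DP noise
    return w - lr * grad

rng = np.random.default_rng(1)
X = rng.normal(size=(32, 784))                         # one mini-batch, b = 32
y = rng.integers(0, 10, size=32)
w_new = dp_local_step(np.zeros((784, 10)), X, y)
```

In the actual algorithm, each client repeats this step E times per round before uploading its noised parameters to the server.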

7. Discussion

This section examines three key aspects: data distribution, privacy protection, and training time. We evaluate the efficacy of the three techniques on non-IID data, using both heterogeneous and homogeneous models.
The primary focus of HW-DPFL lies in training on non-IID data within both homogeneous and heterogeneous models while providing robust privacy protection. Fine-tuning takes place mainly during the training stage and relies on weight aggregation. The Hellinger distance metric is utilized to quantify the similarity between two probability distributions [26]. Performance is influenced by the model configuration and the data distribution, both locally over numerous iterations and globally during aggregations. Substantial variance in the updates drives the global model away from the true optimization results.
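The Hellinger distance between two discrete distributions p and q is H(p, q) = (1/√2) · ‖√p − √q‖₂. A sketch of how it can score a client's label histogram against the balanced (uniform) distribution; the weighting use shown here is our illustration of the idea, not the paper's exact aggregation rule:

```python
import numpy as np

def hellinger(p, q):
    """Hellinger distance between two discrete probability distributions."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return np.linalg.norm(np.sqrt(p) - np.sqrt(q)) / np.sqrt(2)

# Score each client's label distribution against the balanced one.
uniform = np.full(10, 0.1)
balanced_client = np.full(10, 0.1)
skewed_client = np.array([0.5, 0.5] + [0.0] * 8)

print(hellinger(uniform, balanced_client))  # 0.0: perfectly balanced labels
print(hellinger(uniform, skewed_client))    # larger value: label-skewed client
```

A client whose label distribution is closer to balanced (smaller distance) can then be given a larger aggregation weight on the server.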
Models are commonly viewed as repositories of knowledge derived from diverse datasets. A model's complexity is influenced by its architecture, its size, the data distribution, and the dataset size, and increasing the number of hidden units or parameters can increase the generalization error. When different techniques are used to train models under identical conditions, for instance when model complexity is measured with a CNN, the accuracy of the deeper model surpasses that of a shallow one, although the training time is longer.
During the preparation of this paper, it came to our attention that a study [4] published in the IEEE Internet of Things Journal in January 2023 explores an issue closely related to our research. It also investigates the application of FL to non-IID datasets using DP techniques, with promising outcomes, but it does not cover the contributions of our study. Four distinct points of divergence separate our paper from that work: (1) Reference [4] enhances gradient optimization in FL by exploiting historical gradient information, whereas our approach optimizes the gradient by adjusting the server-side parameter aggregation strategy; (2) Reference [4] employs the K-means algorithm to cluster the label distribution of user data to address the non-IID issue, whereas we use the Hellinger distance to quantify the gap between the IID and non-IID distributions; (3) Regarding the DP mechanism, [4] employs Laplace noise and the simple composition theorem to compute the privacy loss, whereas we introduce Gaussian noise and use the moments accountant method; (4) Reference [4] presents only empirical experiments, while we provide theoretical proofs of privacy and convergence.
HW-DPFL is configured with three distinct noise levels, each corresponding to a different DP privacy protection strength. These hyper-parameters indicate the level of privacy protection for both the data and the models.

8. Conclusions

This paper has studied an FL framework for non-IID data and proposed a novel approach, HW-DPFL, based on weighted aggregation driven by the data distribution, which aims to improve FL's efficiency and protect its privacy in non-IID data scenarios. Based on the Hellinger distance, the algorithm quantifies how balanced each client's local private data labels are and readjusts the aggregation weights on the parameter server (PS), so that the algorithm converges faster while ensuring that the clients' information is fully exploited. To counter information leakage, we add Gaussian noise to the shared parameters before they are uploaded to the PS; the algorithm thus achieves local differential privacy with adjustable noise in FL architectures. Theoretical guarantees on the privacy protection capabilities and convergence of HW-DPFL were derived, and the algorithm was then assessed on the MNIST dataset. The experimental findings demonstrate the improvements of HW-DPFL on non-IID data across several dimensions and suggest that it offers practical utility and robust convergence in the face of non-IID data. Moreover, DP is incorporated into the upgraded FL framework to ensure the scheme's privacy.
Further research can examine the theorems and the efficacy of HW-DPFL in more depth. It is also important to address other non-IID settings, such as feature-based non-IID scenarios. The potential of DP-shuffle could be explored by manipulating different noise levels. Furthermore, as a local-sampling federated scheme, HW-DPFL can be seamlessly integrated into many upcoming federated learning frameworks as a fundamental building block.

Author Contributions

Conceptualization, Q.T. and S.W.; methodology, Q.T. and S.W.; software, Q.T.; validation, Q.T., S.W. and Y.T.; formal analysis, Q.T.; investigation, Q.T.; data curation, Q.T.; writing—original draft preparation, Q.T.; writing—review and editing, Q.T., S.W. and Y.T. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Zhejiang University of Science and Technology Postgraduate Research and Innovation Fund, grant number 2022yjskc24.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Proof of Lemma 2. 
During each iteration $t$, each client initializes its local model with $w_{t-1}$ and performs $E$ steps of local SGD to obtain $w_t^k$:
$$w_t^k = w_{t-1} - \sum_{e=1}^{E} \eta\, g_t^{k,e},$$
where g t k , e is the local gradient vector on the basis of the local datasets. Thus,
$$\|w_t^k(D_k) - w_t^k(D_k')\|_2 = \Big\|\sum_{e=1}^{E} \eta\, g_t^{k,e}(D_k) - \sum_{e=1}^{E} \eta\, g_t^{k,e}(D_k')\Big\|_2 = \eta \Big\|\sum_{e=1}^{E} \big(g_t^{k,e}(D_k) - g_t^{k,e}(D_k')\big)\Big\|_2 \le \eta \sum_{e=1}^{E} \big\|g_t^{k,e}(D_k) - g_t^{k,e}(D_k')\big\|_2 \le 2\eta E G,$$
where the last inequality is obtained from Assumption 1.
The proof is finished. □
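Lemma 2 bounds the ℓ₂-sensitivity of the local model after E steps by 2ηEG. A quick numerical sanity check of that bound, with the gradient bound G enforced by clipping; the simulation itself is our illustration, not part of the proof:

```python
import numpy as np

eta, E, G, d = 0.05, 20, 1.0, 100
rng = np.random.default_rng(0)

def clip(g, G):
    """Scale a gradient down so its L2 norm is at most G."""
    return g * min(1.0, G / np.linalg.norm(g))

# Two runs of E local SGD steps whose gradients differ arbitrarily
# (standing in for adjacent datasets D_k and D_k'); each per-step
# gradient is clipped to norm G, as Assumption 1 requires.
w, w_prime = np.zeros(d), np.zeros(d)
for _ in range(E):
    w -= eta * clip(rng.normal(size=d), G)
    w_prime -= eta * clip(rng.normal(size=d), G)

# Lemma 2: ||w(D_k) - w(D_k')||_2 <= 2 * eta * E * G
assert np.linalg.norm(w - w_prime) <= 2 * eta * E * G
```

Each step contributes at most η(G + G) to the divergence, so E steps give the 2ηEG bound regardless of how the gradients differ.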

Appendix B

Proof of Lemma 3. 
First, let υ ^ t = k = 1 M τ k υ t k , w ^ t = k = 1 M τ k w t k , g ^ t = k = 1 M τ k F k ( w t k ) , and g t = k = 1 M τ k F k ( w t k ; ζ t k , b ) . Then, g ^ t = E g t . Notice that υ ^ t + 1 = w ^ t η g t + Z t ; then, we have
E υ ^ t + 1 w * 2 2 = E w ^ t η g t + Z t w * 2 2 = E w ^ t w * η g t + Z t 2 2 = E w ^ t w * η g t 2 2 A 1 + E Z t 2 2 A 2 2 η w ^ t w * η g ^ t , Z t A 3
Because the added noise obeys the Gaussian distribution $N(0, \sigma_{t,k}^2 I_d)$ and is independent of the other terms, we have $A_3 = 0$. Next, we consider bounding $A_1$. Note that
E w ^ t w * η g t 2 2 = E w ^ t w * η g t + η g ^ t η g ^ t 2 2 = E w ^ t w * η g ^ t 2 2 B 1 + η 2 E g ^ t g t 2 2 B 2 + 2 η E w ^ t w * η g ^ t , g ^ t g t B 3
Then, B 3 = 0 , according to g ^ t = E g t . Next, we prove that the term B 1 is bounded. We have
B 1 = E w ^ t w * 2 2 + η 2 E g ^ t 2 2 C 1 2 η E w ^ t w * , g ^ t C 2 .
Since $F_k(\cdot)$ is $L$-smooth, it follows that
$$\|\nabla F_k(w_t^k)\|_2^2 \le 2L\big(F_k(w_t^k) - F_k^*\big).$$
According to the convexity of $\|\cdot\|_2^2$, if any non-negative constants $\tau_k$ satisfy $\sum_{k=1}^M \tau_k = 1$, then
$$\Big\|\sum_{k=1}^M \tau_k \nabla F_k(w_t^k)\Big\|_2^2 \le \sum_{k=1}^M \tau_k \big\|\nabla F_k(w_t^k)\big\|_2^2, \qquad \Big\|\sum_{k=1}^M \nabla F_k(w_t^k)\Big\|_2^2 \le M \sum_{k=1}^M \big\|\nabla F_k(w_t^k)\big\|_2^2.$$
Applying the above formulas, we have
C 1 = η 2 E g ^ t 2 2 = η 2 E k = 1 M τ k F k ( w t k ) 2 2 η 2 M E k = 1 M τ k F k ( w t k ) 2 2 2 M L η 2 E k = 1 M τ k F k ( w t k ) F k * .
And we bound the term C 2 as follows:
C 2 = 2 η E w ^ t w * , g ^ t = 2 η E k = 1 M τ k w ^ t w * , F k ( w t k ) = 2 η E k = 1 M τ k w ^ t w t k , F k ( w t k ) + k = 1 M τ k w t k w * , F k ( w t k ) .
Since $F_k(\cdot)$ is $\mu$-strongly convex, it follows that
$$\big\langle w_t^k - w^*,\; \nabla F_k(w_t^k)\big\rangle \ge F_k(w_t^k) - F_k(w^*) + \frac{\mu}{2}\|w_t^k - w^*\|_2^2.$$
According to AM–GM inequality and Cauchy–Schwarz inequality, we have
$$2\big|\big\langle \hat{w}_t - w_t^k,\; \nabla F_k(w_t^k)\big\rangle\big| \le \frac{1}{\eta}\|\hat{w}_t - w_t^k\|_2^2 + \eta\,\|\nabla F_k(w_t^k)\|_2^2.$$
Then,
C 2 = 2 η E k = 1 M τ k w ^ t w t k , F k ( w t k ) 2 η E k = 1 M τ k w t k w * , F k ( w t k ) η k = 1 M τ k E 1 η w ^ t w t k 2 2 + η F k ( w t k ) 2 2 2 η k = 1 M τ k E μ 2 w t k w * 2 2 + F k ( w t k ) F k ( w * )
Combining (A5), (A9), and (A13), we have
B 1 E w ^ t w * 2 2 + 2 ML η 2 k = 1 M τ k E F k ( w t k ) F k * + η k = 1 M τ k E 1 η w ^ t w t k 2 2 + η F k ( w t k ) 2 2 2 η k = 1 M τ k E μ 2 w t k w * 2 2 + F k ( w t k ) F k ( w * ) E w ^ t w * 2 2 μ η k = 1 M τ k E w t k w * 2 2 + k = 1 M τ k E w ^ t w t k 2 2 + 2 L η 2 ( M + 1 ) k = 1 M τ k E F k ( w t k ) F k * 2 η k = 1 M τ k E F k ( w t k ) F k ( w * ) ( 1 μ η ) E w ^ t w * 2 2 + k = 1 M τ k E w ^ t w t k 2 2 + 2 L η 2 ( M + 1 ) k = 1 M τ k E F k ( w t k ) F k * 2 η k = 1 M τ k E F k ( w t k ) F k ( w * ) D
where the third inequality comes from $\hat{w}_t = \sum_{k=1}^M \tau_k w_t^k$, and from $\big\|\sum_{k=1}^M \tau_k x_k\big\|_2^2 \le \sum_{k=1}^M \tau_k \|x_k\|_2^2$ it follows that $\sum_{k=1}^M \tau_k \mathbb{E}\|w_t^k - w^*\|_2^2 \ge \mathbb{E}\|\hat{w}_t - w^*\|_2^2$.
We next aim to bound $D$. Let $\varphi = 2\eta\big(1 - \eta L(M+1)\big)$. Note that $\eta < 1/(L(M+1))$; hence $0 < \varphi < 2\eta$. Then,
D = 2 L η 2 ( M + 1 ) k = 1 M τ k E F k ( w t k ) F k * 2 η k = 1 M τ k E F k ( w t k ) F k ( w * ) = φ k = 1 M τ k E F k ( w * ) F k ( w t k ) + ( 2 η φ ) k = 1 M τ k E F k ( w * ) F k * = φ k = 1 M τ k E F k ( w t k ) F k ( w * ) J + 2 η 2 L ( M + 1 ) Γ ,
where Γ = F ( w * ) k = 1 M τ k F k * .
For the term J , according to the convexity of F k ( . ) and AM–GM inequality, we find
J = φ k = 1 M τ k E F k ( w t k ) F k ( w * ) = φ k = 1 M τ k E F k ( w t k ) F k ( w ^ t ) φ F ( w ^ t ) F ( w * ) φ k = 1 M τ k E F k ( w ^ t ) , w t k w t - 1 φ F ( w ^ t ) F ( w * ) φ k = 1 M τ k E η L F k ( w ^ t ) F k * + 1 2 η w t k w ^ t 2 2 φ F ( w ^ t ) F ( w * )
Thus, we can obtain
D φ k = 1 M τ k E η L F k ( w ^ t ) F k * + 1 2 η w t k w ^ t 2 2 φ F ( w ^ t ) F ( w * ) + 2 η 2 L ( M + 1 ) Γ = φ k = 1 M τ k E η L F k ( w ^ t ) F ( w * ) + η L F ( w * ) F k * + 1 2 η w t k w ^ t 2 2 φ F ( w ^ t ) F ( w * ) + 2 η 2 L ( M + 1 ) Γ = φ η L k = 1 M τ k E F k ( w ^ t ) F ( w * ) φ F ( w ^ t ) F ( w * ) + φ 2 η k = 1 M τ k E w t k w ^ t 2 2 + η L Γ 2 η ( M + 1 ) + φ = φ η L 1 F ( w ^ t ) F ( w * ) + φ 2 η k = 1 M τ k E w t k w ^ t 2 2 + η L Γ 2 η ( M + 1 ) + φ k = 1 M τ k E w t k w ^ t 2 2 + 2 M + 2 η 2 L Γ
where the last inequality results from η L 1 < 0 , 0 < φ < 2 η , and k = 1 M E F k ( w ^ t ) F ( w * ) = F ( w ^ t ) F ( w * ) 0 .
Recalling the expression of B 1 , we have
B 1 ( 1 μ η ) E w ^ t w * 2 2 + 2 k = 1 M τ k E w ^ t w t k 2 2 Q + 2 M + 2 η 2 L Γ
For the term Q , we analyze that HW-DPFL requires communication every E steps; then, we can bound the divergence of w t k :
Q = k = 1 M τ k E w ^ t w t k 2 2 = k = 1 M τ k E ( w t k w ^ t 0 ) ( w ^ t w ^ t 0 ) 2 2 k = 1 M τ k E ( w t k w ^ t 0 ) 2 2 k = 1 M τ k t = t 0 t 1 ( E 1 ) η 2 F k ( w t k ; ζ t k , b ) 2 2 ( E 1 ) 2 η 2 G 2
To sum up, we can have
B 1 ( 1 μ η ) E w ^ t w * 2 2 + 2 ( E 1 ) 2 η 2 G 2 + 2 M + 2 η 2 L Γ ,
where, in the first inequality, we use $\mathbb{E}\|X - \mathbb{E}X\|_2^2 \le \mathbb{E}\|X\|_2^2$ with $X = w_t^k - \hat{w}_{t_0}$, holding with probability $\tau_k$. For any $t \ge 0$, there exists $t_0 \le t < t_0 + E$ such that $t - t_0 \le E - 1$ and $w_{t_0}^k = \hat{w}_{t_0}$. Note that we use the Jensen inequality in the second inequality; it follows that
w t k w ^ t 0 2 2 = t = t 0 t 1 η F k ( w t k ; ζ t k , b ) 2 2 ( t t 0 ) t = t 0 t 1 η 2 F k ( w t k ; ζ t k , b ) 2 2 .
We next focus on bounding the term B 2 :
B 2 = η 2 E g ^ t g t 2 2 = η 2 E k = 1 M τ k F k ( w t k ; ζ t k , b ) F k ( w t k ) 2 2 η 2 k = 1 M τ k 2 E F k ( w t k ; ζ t k , b ) F k ( w t k ) 2 2 η 2 k = 1 M τ k 2 ρ k 2
Here, in the last inequality, we use the bounded variance of the stochastic gradients for each client: $\mathbb{E}\big\|\nabla F_k(w_t^k; \zeta_t^{k,b}) - \nabla F_k(w_t^k)\big\|^2 \le \rho_k^2$.
Combining (A20) and (A22), we have
A 1 ( 1 μ η ) E w ^ t w * 2 2 + 2 ( E 1 ) 2 η 2 G 2 + 2 M + 2 η 2 L Γ + η 2 k = 1 M τ k 2 ρ k 2
where Ψ = 2 ( E 1 ) 2 η 2 G 2 + 2 M + 2 η 2 L Γ + η 2 k = 1 M τ k 2 ρ k 2 .
Since the HW-DPFL algorithm guarantees ( ε ^ , δ ) D P , the Gaussian noise level can be represented as
σ t , k 2 = 32 γ 2 η 2 E 2 G 2 log ( 1.25 γ / δ ) ε 2
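The per-round noise level above can be computed directly from the formula; a small sketch, where the parameter names mirror the symbols in the equation and the sampling ratio γ chosen below is an assumption of this illustration:

```python
import math

def gaussian_noise_std(gamma, eta, E, G, delta, eps):
    """Per-coordinate noise std from
    sigma^2 = 32 * gamma^2 * eta^2 * E^2 * G^2 * log(1.25 * gamma / delta) / eps^2."""
    var = (32 * gamma**2 * eta**2 * E**2 * G**2
           * math.log(1.25 * gamma / delta) / eps**2)
    return math.sqrt(var)

# Example values: gamma is the sampling ratio; delta matches the paper's 1e-5.
sigma = gaussian_noise_std(gamma=0.01, eta=0.1, E=10, G=1.0, delta=1e-5, eps=1.0)
```

As the formula suggests, a smaller privacy budget ε or more local steps E requires a larger σ, matching the utility–privacy trade-off observed in the experiments.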
For the term A 2 , we have
A 2 = E Z t 2 2 = d k = 1 M τ k 2 σ t , k 2 < T Λ
where Λ = d k = 1 M τ k 2 σ t , k 2 = d k = 1 M τ k 2 32 γ 2 η 2 E 2 G 2 log ( 1.25 γ / δ ) ε 2 .
Combining (A23) and (A25), results for each iteration t can be expressed as
E υ ^ t + 1 w * 2 2 ( 1 μ η ) E w ^ t w * 2 2 + Ψ + T Λ .
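For intuition, if the aggregation step does not increase the distance to $w^*$ (an assumption we make only for this sketch, i.e., $\mathbb{E}\|\hat{w}_{t+1} - w^*\|_2^2 \le \mathbb{E}\|\hat{\upsilon}_{t+1} - w^*\|_2^2$), unrolling the recursion over $T$ rounds with a fixed step size $\eta < 1/\mu$ gives

```latex
\mathbb{E}\|\hat{w}_T - w^*\|_2^2
  \le (1 - \mu\eta)^T \,\|\hat{w}_0 - w^*\|_2^2
      + (\Psi + T\Lambda)\sum_{t=0}^{T-1}(1 - \mu\eta)^t
  \le (1 - \mu\eta)^T \,\|\hat{w}_0 - w^*\|_2^2 + \frac{\Psi + T\Lambda}{\mu\eta},
```

so the bound decays geometrically in $T$ up to a floor of order $(\Psi + T\Lambda)/(\mu\eta)$, which is consistent with the trade-offs among $T$, $b$, $E$, and $\sigma$ discussed after Theorem 3.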
The proof is finished. □

References

  1. Kairouz, P.; McMahan, H.B.; Avent, B.; Bellet, A.; Bennis, M.; Bhagoji, A.N.; Bonawitz, K.; Charles, Z.; Cormode, G.; Cummings, R.; et al. Advances and open problems in federated learning. Found. Trends Mach. Learn. 2021, 14, 1–210. [Google Scholar] [CrossRef]
  2. Cazzato, G.; Massaro, A.; Colagrande, A.; Lettini, T.; Cicco, S.; Parente, P.; Nacchiero, E.; Lospalluti, L.; Cascardi, E.; Giudice, G.; et al. Dermatopathology of Malignant Melanoma in the Era of Artificial Intelligence: A Single Institutional Experience. Diagnostics 2022, 12, 1972. [Google Scholar] [CrossRef] [PubMed]
  3. Li, Q.; Wen, Z.; Wu, Z.; Hu, S.; Wang, N.; Li, Y.; Liu, X.; He, B. A survey on federated learning systems: Vision, hype and reality for data privacy and protection. IEEE Trans. Knowl. Data Eng. 2023, 35, 3347–3366. [Google Scholar] [CrossRef]
  4. You, X.; Liu, X.; Jiang, N.; Cai, J.; Ying, Z. Reschedule Gradients: Temporal Non-IID Resilient Federated Learning. IEEE Internet Things J. 2023, 10, 747–762. [Google Scholar] [CrossRef]
  5. Ma, X.; Zhu, J.; Lin, Z.; Chen, S.; Qin, Y. A state-of-the-art survey on solving non-IID data in Federated Learning. Future Generation Comput Syst. 2022, 135, 244–258. [Google Scholar] [CrossRef]
  6. Bassily, R.; Smith, A.; Thakurta, A. Private empirical risk minimization: Efficient algorithms and tight error bounds. In Proceedings of the 2014 IEEE 55th Annual Symposium on Foundations of Computer Science, Philadelphia, PA, USA, 18–21 October 2014; IEEE: Piscataway, NJ, USA, 2014; pp. 464–473. [Google Scholar]
  7. Zhao, Y.; Li, M.; Lai, L.; Suda, N.; Civin, D.; Chandra, V. Federated learning with non-IID data. arXiv 2018, arXiv:1806.00582. [Google Scholar] [CrossRef]
  8. Wang, H.; Yurochkin, M.; Sun, Y.; Papailiopoulos, D.; Khazaeni, Y. Federated learning with matched averaging. arXiv 2020, arXiv:2002.06440. [Google Scholar]
  9. Duan, M.; Liu, D.; Chen, X.; Tan, Y.; Ren, J.; Qiao, L.; Liang, L. Astraea: Self-balancing federated learning for improving classification accuracy of mobile deep learning applications. In Proceedings of the 2019 IEEE 37th International Conference on Computer Design (ICCD), Abu Dhabi, United Arab Emirates, 17–20 November 2019; pp. 246–254. [Google Scholar]
  10. McMahan, B.; Moore, E.; Ramage, D.; Hampson, S.; Aguera, B. Communication-efficient learning of deep networks from decentralized data. In Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, Fort Lauderdale, FL, USA, 20–22 April 2017; pp. 1273–1282. [Google Scholar]
  11. Li, T.; Sahu, K.; Zaheer, M.; Sanjabi, M.; Talwalkar, A.; Smith, V. Federated optimization in heterogeneous networks. Proc. Mach. Learn. Syst. 2020, 2, 429–450. [Google Scholar]
  12. Karimireddy, S.P.; Kale, S.; Mohri, M.; Reddi, S.; Stich, S.; Suresh, A.T. Scaffold: Stochastic controlled averaging for federated learning. In Proceedings of the 37th International Conference on Machine Learning, Virtual, 13–18 July 2020; pp. 5132–5143. [Google Scholar]
  13. Li, Q.; He, B.; Song, D. Model-contrastive federated learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 10713–10722. [Google Scholar]
  14. Wu, H.; Wang, P. Fast-convergent federated learning with adaptive weighting. IEEE Trans. Cogn. Commun. Netw. 2021, 7, 1078–1088. [Google Scholar] [CrossRef]
  15. Sattler, F.; Müller, K.R.; Samek, W. Clustered federated learning: Model-agnostic distributed multitask optimization under privacy constraints. IEEE Trans. Neural Netw. Learn. Syst. 2020, 32, 3710–3722. [Google Scholar] [CrossRef]
  16. Geiping, J.; Bauermeister, H.; Dröge, H.; Moeller, M. Inverting gradients - how easy is it to break privacy in federated learning? In Proceedings of the 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Virtual, 6–12 December 2020; pp. 16937–16947. [Google Scholar]
  17. Liu, D.; Simeone, O. Privacy for free: Wireless federated learning via uncoded transmission with adaptive power control. IEEE J. Sel. Areas Commun. 2021, 39, 170–185. [Google Scholar] [CrossRef]
  18. Ma, J.; Naas, A.; Sigg, S.; Lyu, X. Privacy-preserving federated learning based on multi-key homomorphic encryption. Int. J. Intell. Syst. 2022, 37, 5880–5893. [Google Scholar] [CrossRef]
  19. Byrd, D.; Polychroniadou, A. Differentially private secure multi-party computation for federated learning in financial applications. In Proceedings of the First ACM International Conference on AI in Finance, New York, NY, USA, 15–16 October 2020; pp. 1–9. [Google Scholar]
  20. Zhang, Y.; Huang, K.; Yang, J. Federated learning with privacy protection: A survey. J. Syst. Eng. Electron. 2021, 32, 797–809. [Google Scholar]
  21. Geyer, C.; Klein, T.; Nabi, M. Differentially private federated learning: A client level perspective. arXiv 2017, arXiv:1712.07557. [Google Scholar]
  22. Shen, X.; Liu, Y.; Zhang, Z. Performance-enhanced federated learning with differential privacy for internet of things. IEEE Internet Things J. 2022, 9, 24079–24094. [Google Scholar] [CrossRef]
  23. Dwork, C.; Roth, A. The algorithmic foundations of differential privacy. Found. Trends Theor. Comput. Sci. 2014, 9, 211–407. [Google Scholar] [CrossRef]
  24. Huang, Z.; Hu, R.; Guo, Y.; Chan-Tin, E.; Gong, Y. DP-ADMM: ADMM-based distributed learning with differential privacy. IEEE Trans. Inf. Forensics Secur. 2019, 15, 1002–1012. [Google Scholar] [CrossRef]
  25. Balle, B.; Barthe, G.; Gaboardi, M. Privacy amplification by subsampling: Tight analyses via couplings and divergences. arXiv 2018. [Google Scholar] [CrossRef]
  26. Li, Q.; Diao, Y.; Chen, Q.; He, B. Federated learning on non-iid data silos: An experimental study. In Proceedings of the 2022 IEEE 38th International Conference on Data Engineering (ICDE), Kuala Lumpur, Malaysia, 9–12 May 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 965–978. [Google Scholar]
  27. Wu, J.; Karunamuni, J. Profile Hellinger distance estimation. Statistics 2015, 49, 711–740. [Google Scholar] [CrossRef]
  28. Abadi, M.; Chu, A.; Goodfellow, I.; McMahan, H.B.; Mironov, I.; Talwar, K.; Zhang, L. Deep learning with differential privacy. In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, Vienna, Austria, 24–28 October 2016; pp. 308–318. [Google Scholar]
  29. Stich, U.; Cordonnier, B.; Jaggi, M. Sparsified SGD with memory. arXiv 2018. [Google Scholar] [CrossRef]
  30. Li, X.; Huang, K.; Yang, W.; Wang, S.; Zhang, Z. On the convergence of fedavg on non-iid data. arXiv 2019, arXiv:1907.02189. [Google Scholar]
Figure 1. The FedAvg problem in IID and non-IID data.
Figure 2. Schematic diagram of the HW-DPFL algorithm process.
Figure 3. Size-normalized examples from the MNIST database.
Figure 4. Comparing the impact of local mini-batch size on training loss in different data scenarios: (a) IID data. (b) Non-IID data.
Figure 5. Comparing the impact of the number of local update steps on training loss in different data scenarios: (a) IID data. (b) Non-IID data.
Figure 6. Comparing the impact of the noise level on training loss in different data scenarios: (a) IID data. (b) Non-IID data.
Table 1. Comparison accuracy of DP-FedAvg, DP-FL, and HW-DPFL.
              Clients   Acc on DP-FedAvg   Acc on DP-FL   Acc on HW-DPFL
IID data        100          96.41%            94.20%          96.67%
Non-IID data    100          90.03%            93.90%          95.21%
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Tan, Q.; Wu, S.; Tao, Y. Privacy-Enhanced Federated Learning for Non-IID Data. Mathematics 2023, 11, 4123. https://doi.org/10.3390/math11194123

