Article

Personalized Federated Multi-Task Learning over Wireless Fading Channels

Department of Electrical and Computer Engineering, University of Maryland, College Park, MD 20742, USA
* Author to whom correspondence should be addressed.
Algorithms 2022, 15(11), 421; https://doi.org/10.3390/a15110421
Submission received: 11 October 2022 / Revised: 28 October 2022 / Accepted: 3 November 2022 / Published: 9 November 2022
(This article belongs to the Special Issue Gradient Methods for Optimization)

Abstract
Multi-task learning (MTL) is a paradigm for learning multiple tasks simultaneously through a shared network, in which a distinct header network is further tailored to each task for fine-tuning. Personalized federated learning (PFL) can be achieved through MTL in the context of federated learning (FL), where tasks are distributed across clients; this is referred to as personalized federated MTL (PF-MTL). Statistical heterogeneity, caused by differences in task complexities across clients and by the non-independent and identically distributed (non-i.i.d.) characteristics of local datasets, degrades system performance. To overcome this degradation, we propose FedGradNorm, a distributed dynamic weighting algorithm that balances learning speeds across tasks by normalizing the corresponding gradient norms in PF-MTL. We prove an exponential convergence rate for FedGradNorm. Further, we propose HOTA-FedGradNorm, which combines FedGradNorm with over-the-air (OTA) aggregation in a hierarchical FL (HFL) setting. HOTA-FedGradNorm is designed for efficient communication between the parameter server (PS) and the clients in the power- and bandwidth-limited regime. We conduct experiments with both FedGradNorm and HOTA-FedGradNorm using the Multi-Task Facial Landmark (MTFL) and wireless communication system (RadComDynamic) datasets. The results indicate that both frameworks achieve faster training than equal-weighting strategies. In addition, FedGradNorm and HOTA-FedGradNorm compensate for imbalanced datasets across clients and for adverse channel effects.

1. Introduction

Multi-task learning (MTL) is a powerful technique for learning several related tasks simultaneously [1,2]. MTL improves overall system performance, training speed, and data efficiency by leveraging the synergy among multiple related tasks. Each task in the MTL setting has a common encoder network that maps the raw data into a lower-dimensional shared representation space, in addition to a unique task-specific header network that infers task-related prediction values from the shared representation. MTL is particularly suitable for distributed learning settings where no single entity has all the data and labels for all of the different tasks.
Federated learning (FL) [3] is a distributed learning paradigm in which many clients train a shared model with the assistance of a central unit called the parameter server (PS), while keeping their data private and decentralized. Statistical heterogeneity becomes a major challenge for FL as the size of the distributed setting, namely the number of clients or the number of tasks, increases. It is caused by differing task complexities and by the non-i.i.d. data distribution among clients, and it degrades system performance [4,5]. Further, the synergy between tasks may not always be positive, a phenomenon referred to as negative transference, which again degrades performance [6].
Personalized federated learning (PFL) was introduced to deal with statistical heterogeneity: the centralized server and the clients learn a shared representation together, while each client further trains its own client-specific header network, referred to as personalization. PFL can learn user-specific models better while also capturing the distilled common knowledge from other clients [7,8,9]. As a result, PFL reduces the statistical heterogeneity among clients. Several PFL approaches have been proposed, in which different local models are used to fit user-specific data while also capturing the common knowledge distilled from the data of other devices [7,8,9,10,11,12,13,14,15]. Hanzely et al. [16] provide a unified framework covering multiple kinds of PFL methods. In this paper, we consider MTL in an FL setting enhanced with personalization, namely, personalized federated multi-task learning (PF-MTL).
The PFL works most closely related to ours are federated representation learning (FedRep) [7] and federated learning with personalization layers (FedPer) [8]. These works use a shared encoder with a unique task-specific header to enhance personalization in an FL setup. FedRep and FedPer aggregate the common encoder parameters with equal weighting. Our goal in this paper is to perform weighted gradient aggregation at the PS dynamically, according to the gradient updates coming from the clients, in order to overcome statistical heterogeneity.
Several approaches set the task weights manually at the beginning of training [17,18]. Task weights can also be set adaptively at each iteration. GradNorm is a dynamic weighting approach in MTL that normalizes gradient norms by scaling the task loss functions to regulate learning speeds and fairness across tasks [19]. GradNorm was proposed for a centralized learning setting. In our work, we introduce the FedGradNorm framework [20], in which dynamic weighting is incorporated into the PFL setting by combining aspects of [7,8,19]. We provide a theoretical convergence proof for FedGradNorm, whereas FedPer [8] and GradNorm [19] do not provide convergence proofs, and FedRep [7] provides a convergence proof only for the linear learning model setting.
We investigate how the FedGradNorm framework is affected by the characteristics of the wireless fading communication channel between the clients and the PS. Channel conditions differ across clients since they are geographically spread out. In addition to the wireless channel effects, the communication takes place in a bandwidth- and power-limited regime, which raises concerns about communication costs. We utilize over-the-air (OTA) aggregation to perform efficient aggregation over a shared wireless channel, supporting the clients on the same bandwidth by exploiting the additive nature of the wireless multiple access channel (MAC) [21,22]. In practice, when the OTA mechanism is utilized, the gradients are superposed, and it is not possible for the PS to receive the individual gradients of the clients. However, the PS needs individual gradients from the clients to perform dynamic weighting. To address this issue, we modify FedGradNorm with OTA in a hierarchical structure, called hierarchical over-the-air FedGradNorm, HOTA-FedGradNorm [23]. Hierarchical federated learning (HFL) establishes clusters of clients around intermediate servers (ISs), and the ISs communicate with the PS instead of the clients communicating with the PS directly. Several aspects of HFL have been studied in the literature, such as latency and power analysis [24,25] and resource allocation [26,27]. These works demonstrate the advantage of the proximity of the ISs to the clients in terms of the resource consumption of the clients. In addition to these advantages, we utilize the hierarchical structure since it provides an efficient way of combining FedGradNorm with OTA over the wireless fading channel.
The main contributions of our paper can be summarized as follows:
  • We propose the FedGradNorm algorithm. The proposed algorithm takes advantage of the GradNorm [19] dynamic weighting strategy in a PFL setup for achieving a more effective and fair learning performance when the clients have a diverse set of tasks to perform.
  • We propose HOTA-FedGradNorm. The proposed algorithm takes into account the characteristics of the communication channel by defining a hierarchical structure for the PFL setting.
  • We provide a convergence analysis for the adaptive weighting strategy for MTL in the PFL setting. Existing works either do not provide a convergence analysis or provide one only for special cases. We show that FedGradNorm has an exponential convergence rate.
  • We conduct several experiments using the Multi-Task Facial Landmark (MTFL) dataset [28] and the RadComDynamic dataset from the wireless communication domain [29]. We track the task losses during training to compare the learning speed and fairness of FedGradNorm with a similar PFL setting that uses an equal weighting technique, namely FedRep. The experimental results exhibit better and faster learning for FedGradNorm than for FedRep. In addition, we demonstrate that HOTA-FedGradNorm results in faster training over the wireless fading channel compared to algorithms with a naive static equal weighting strategy, since the dynamic weight selection process takes the channel conditions into account.

2. System Model and Problem Formulation

2.1. Federated Learning (FL)

FL [3] is a distributed machine learning approach that enables training on decentralized data residing on devices such as smartphones and IoT devices. FL can be described as an approach that brings the training model to the data, instead of bringing the data to the training model [30]. In FL, edge devices collaboratively learn a shared model under the orchestration of a PS without sharing their training data.
The generic form of FL with $N$ clients is

$$\min_{\omega} F(\omega) \triangleq \frac{1}{N} \sum_{i=1}^{N} p^{(i)} F^{(i)}(\omega) \qquad (1)$$

where $p^{(i)}$ is the loss weight for client $i$ such that $\sum_{i=1}^{N} p^{(i)} = N$, and $F^{(i)}$ is the local loss function for client $i$.
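As a minimal illustration of (1) (our own sketch, not the paper's code; the quadratic local losses and all constants are placeholder assumptions), the following evaluates the weighted global objective:

```python
import numpy as np

# Minimal sketch of the weighted FL objective in (1) with toy quadratic
# local losses: F(w) = (1/N) * sum_i p^(i) F^(i)(w), with sum_i p^(i) = N.

rng = np.random.default_rng(0)
N, d = 4, 3
targets = rng.normal(size=(N, d))          # each client's toy optimum

def local_loss(i, w):
    return 0.5 * np.sum((w - targets[i]) ** 2)   # F^(i)(w)

def global_loss(w, p):
    assert np.isclose(p.sum(), N)          # normalization: sum_i p^(i) = N
    return np.mean([p[i] * local_loss(i, w) for i in range(N)])

w, p = np.zeros(d), np.ones(N)             # equal weighting, p^(i) = 1
print(global_loss(w, p))
```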

2.2. Personalized Federated Multi-Task Learning (PF-MTL)

We consider a PFL setting with $N$ clients, in which client $i$ has its own local dataset $\mathcal{D}_i = \{(x_j^{(i)}, y_j^{(i)})\}_{j=1}^{n_i}$, where $n_i$ is the size of the local dataset, and $T_i$ is the task of client $i$, $i \in [N]$. The system model consists of a global representation network $q_\omega: \mathbb{R}^d \to \mathbb{R}^{d'}$, a function parameterized by $\omega \in \mathcal{W}$ that maps data points to a lower-dimensional space of size $d'$. All clients share the same global representation network, which is synchronized across clients through global aggregation. The client-specific heads $q_{h^{(i)}}: \mathbb{R}^{d'} \to \mathcal{Y}$ are functions parameterized by $h^{(i)} \in \mathcal{H}$ for all clients $i \in [N]$ and map from the low-dimensional representation space to the label space $\mathcal{Y}$. The system model is shown in Figure 1. The local model for the $i$th client is the composition of the global representation model $q_\omega$ and the personalized model $q_{h^{(i)}}$,

$$q_i(\cdot) = (q_{h^{(i)}} \circ q_\omega)(\cdot) \qquad (2)$$

The local loss for the $i$th client is represented as

$$F^{(i)}(h^{(i)}, \omega) = F^{(i)}(q_i(\cdot)) = F^{(i)}((q_{h^{(i)}} \circ q_\omega)(\cdot)) \qquad (3)$$

Through alternating minimization, the clients and the centralized server aim to learn a set of global parameters $\omega$ together, while each client $i$ learns its own set of client-specific parameters $h^{(i)}$ locally. Specifically, client $i$ performs $\tau_h$ local gradient-based updates to optimize $h^{(i)}$, $i \in [N]$, while the global network parameters at client $i$, i.e., $\omega^{(i)}$, are frozen. Thereafter, client $i$ performs $\tau_\omega$ local updates to optimize the global shared network parameters, while the parameters corresponding to the client-specific head are frozen. Then, the global shared network parameters $\{\omega^{(i)}\}_{i=1}^N$ are aggregated at the PS to obtain a common $\omega$. Thus, the problem is

$$\min_{\omega \in \mathcal{W}} \frac{1}{N} \sum_{i=1}^{N} p^{(i)} \min_{h^{(i)} \in \mathcal{H}} F^{(i)}(h^{(i)}, \omega) \qquad (4)$$

FedRep [7] investigates this framework with $p^{(i)} = 1$, $\forall i \in [N]$.
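As an illustration of this model split (our own sketch; the use of PyTorch, the layer sizes, and the names are assumptions, not the paper's architecture), the shared encoder $q_\omega$ and the client-specific heads $q_{h^{(i)}}$ can be composed as follows:

```python
import torch
import torch.nn as nn

# Sketch of the PF-MTL model split: a global representation network
# q_omega shared by all clients, and one client-specific head per client.

d, d_prime, num_clients = 32, 8, 4

encoder = nn.Sequential(               # q_omega: R^d -> R^{d'}
    nn.Linear(d, 16), nn.ReLU(),
    nn.Linear(16, d_prime),
)
heads = [nn.Linear(d_prime, 1) for _ in range(num_clients)]  # q_{h^(i)}

def client_model(i, x):
    """Local model of client i: q_i = q_{h^(i)} o q_omega."""
    return heads[i](encoder(x))

x = torch.randn(5, d)                  # a toy batch
y_hat = client_model(0, x)             # predictions for client 0's task
```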

2.3. PF-MTL as Bilevel Optimization Problem

The optimization problem in (4) can be rewritten as (5), because $F^{(i)}(h^{(i)}, \omega)$ depends only on $h^{(i)}$ and $\omega$ for all $i \in [N]$, and the $h^{(i)}$ are independent of each other,

$$\min_{\omega \in \mathcal{W}} \ \min_{\{h^{(i)} \in \mathcal{H}\}_{i=1}^N} \frac{1}{N} \sum_{i=1}^{N} p^{(i)} F^{(i)}(h^{(i)}, \omega) \qquad (5)$$

Equivalently, we have

$$\min_{\omega \in \mathcal{W}, \{h^{(i)} \in \mathcal{H}\}_{i=1}^N} \frac{1}{N} \sum_{i=1}^{N} p^{(i)} F^{(i)}(h^{(i)}, \omega) \qquad (6)$$

Note that the $p^{(i)}$ values in (6) are obtained from our proposed FedGradNorm algorithm, which will be derived later as the solution of another optimization problem. As a result, the problem can be expressed as a bilevel optimization problem, i.e., an optimization problem containing another optimization problem as a constraint,

$$\begin{aligned} \min_{x_u \in \mathcal{X}_u, x_l \in \mathcal{X}_l} \quad & F(x_u, x_l) \\ \text{s.t.} \quad & x_l \in \arg\min_{x_l' \in \mathcal{X}_l} \{ g(x_u, x_l') \ \text{s.t.} \ c_j(x_u, x_l') \le 0, \ j = 1, \dots, J \} \\ & C_m(x_u, x_l) \le 0, \quad m = 1, \dots, M \end{aligned} \qquad (7)$$

where $F(x_u, x_l)$ is the upper-level objective function and $g(x_u, x_l)$ is the lower-level objective function. $\{c_j(x_u, x_l) \le 0, \ j = 1, \dots, J\}$ are the constraints of the lower-level optimization problem, while $\{C_m(x_u, x_l) \le 0, \ m = 1, \dots, M\}$ and the lower-level optimization problem itself are the constraints of the upper-level optimization problem.
Multiple algorithms have been developed in the literature to solve bilevel optimization problems [31]. Several works reformulate the bilevel optimization problem as a single-level constrained problem by utilizing the optimality conditions of the lower-level problem [32,33,34]. In addition, there are recently developed gradient-based bilevel optimization algorithms [35,36,37,38,39,40]. Our algorithm is based on iterative differentiation (ITD), as explained in Algorithm 1.
Algorithm 1 Iterative differentiation (ITD) algorithm.
Input: $K$, $D$, step sizes $\alpha$, $\beta$, initializations $x_u(0)$, $x_l(0)$.
for $k = 0, 1, 2, \dots, K$ do
  Set $x_l^0(k) = x_l^D(k-1)$ if $k > 0$, otherwise $x_l(0)$.
  for $t = 1, \dots, D$ do
    Update $x_l^t(k) = x_l^{t-1}(k) - \alpha \nabla_{x_l} g(x_u(k), x_l^{t-1}(k))$
  Compute $\hat{\nabla}_{x_u} F(x_u(k), x_l^D(k)) = \frac{\partial F(x_u(k), x_l^D(k))}{\partial x_u(k)}$
  Update $x_u(k+1) = x_u(k) - \beta \hat{\nabla}_{x_u} F(x_u(k), x_l^D(k))$
Updates to the upper-level optimization take place in the outer loop, while updates to the lower-level optimization are performed in the inner loop.
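To make the ITD recursion concrete, the following toy sketch (our own illustration, not code from the paper; the quadratic objectives and all constants are placeholder assumptions) unrolls the inner gradient descent in PyTorch and backpropagates through it to obtain the approximate hypergradient used in the outer update:

```python
import torch

# Toy instance of Algorithm 1 (ITD). Lower level: g(x_u, x_l) =
# 0.5*||x_l - x_u||^2, so its minimizer is x_l*(x_u) = x_u. Upper level:
# F = 0.5*||x_l - target||^2. Backpropagating through the D unrolled
# inner steps yields the approximate hypergradient with respect to x_u.

alpha, beta, K, D = 0.5, 0.1, 50, 10
target = torch.tensor([1.0, -2.0])
x_u = torch.zeros(2, requires_grad=True)
x_l_init = torch.zeros(2)

for k in range(K):
    x_l = x_l_init                       # x_l^0(k)
    for t in range(D):                   # inner loop: GD on g
        grad_l = x_l - x_u               # = nabla_{x_l} g(x_u, x_l)
        x_l = x_l - alpha * grad_l       # keeps the dependence on x_u
    F = 0.5 * torch.sum((x_l - target) ** 2)
    hypergrad, = torch.autograd.grad(F, x_u)  # dF(x_u, x_l^D)/dx_u
    with torch.no_grad():
        x_u -= beta * hypergrad          # outer update
    x_l_init = x_l.detach()              # warm start for next outer step

print(x_u.detach())                       # approaches the target
```

Here the dependence of $x_l^D(k)$ on $x_u(k)$ is kept in the autograd graph, which is exactly what the derivative $\hat{\nabla}_{x_u} F(x_u(k), x_l^D(k))$ in Algorithm 1 requires.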
For our problem, $x_u$ and $x_l$ correspond to $(\{h^{(i)}\}_{i=1}^N, \omega)$ and $\{p^{(i)}\}_{i=1}^N$, respectively. In the rest of the paper, $x_u(k)$ and $x_l(k)$ are denoted by $(\{h^{(i)}\}_{i=1}^N, \omega_k)$ and $\{p_k^{(i)}\}_{i=1}^N$ to indicate the outer loop iteration index $k$ in Algorithm 1. Furthermore, a plain superscript as in $p_k^i$ denotes the inner loop iteration index, while a parenthesized superscript as in $p_k^{(i)}$ denotes the client index. The bilevel optimization problem in our case can then be written as

$$\min_{\omega, \{h^{(i)}\}_{i=1}^N, \{p^{(i)}\}_{i=1}^N} F(\{h^{(i)}\}_{i=1}^N, \omega, \{p^{(i)}\}_{i=1}^N) \quad \text{s.t.} \quad \{p^{(i)}\}_{i=1}^N \in \arg\min_{\{p^{(i)}\}_{i=1}^N \in \mathbb{R}^N} F_{grad} \qquad (8)$$

The objective function is the weighted sum of local loss functions, $F = \frac{1}{N} \sum_{i=1}^{N} p^{(i)} F^{(i)}(h^{(i)}, \omega)$, and $F_{grad}$ is the auxiliary loss function defined by the FedGradNorm algorithm in the next section.

2.4. Hierarchical Federated Learning (HFL) for Wireless Fading Channels

The characteristics of the communication channel should be considered in PF-MTL, since clients can be distributed over a large geographic area in an FL framework [41,42]. The PS can be far away from the clients, and the communication between the PS and the clients can be noisy and subject to channel effects. PF-MTL can then be constructed in a hierarchical setting by creating clusters of clients around ISs, which communicate with the PS instead of the clients communicating with the PS directly. Further, the communication is performed over a shared wireless channel, where the transmission power and the bandwidth are constrained. Thus, we employ over-the-air (OTA) aggregation to address these issues [21,22].
The generic HFL problem, shown in Figure 2, with $C$ clusters, each containing an IS and $N$ clients, can be formulated as

$$\min_{\omega} F(\omega) \triangleq \frac{1}{CN} \sum_{l=1}^{C} \sum_{i=1}^{N} p^{(l,i)} F^{(l,i)}(\omega) \qquad (9)$$

where $p^{(l,i)}$ is the loss weight for client $i$ in cluster $l$ such that $\sum_{i=1}^{N} p^{(l,i)} = N$, $\forall l \in [C]$, and $F^{(l,i)}(\cdot)$ is the local loss function for client $i$ in cluster $l$.
We consider a PFL setting with $N$ clients within each cluster, in which client $i$ of cluster $l$ has its own local dataset $\mathcal{D}_{l,i} = \{(x_j^{(l,i)}, y_j^{(l,i)})\}_{j=1}^{n_{l,i}}$, where $n_{l,i}$ is the size of the local dataset. Within cluster $l$, $T_{l,i}$ denotes the task of client $i$, $i \in [N]$, $l \in [C]$. $T_{l,i}$ is assigned from the task set $\mathcal{T} = \{T_1, T_2, \dots, T_N\}$ such that $T_{l,i} \neq T_{l,i'}$ for $i \neq i'$ and for any $l \in [C]$. Real-life scenarios might involve the same or very similar tasks for clients in a cluster; we assume that the tasks within a cluster are distinct since no prior information about task similarity is available.
We assume that the clients in a cluster have error-free, high-speed connections to the corresponding IS over local area networks (LANs), while the PS and the ISs share a bandwidth-limited wireless fading MAC. Over this wireless fading MAC, each IS sends the local gradient aggregation of its cluster to the PS. The broadcast from the PS to the ISs is assumed to be error-free.
As in the case of the simple PF-MTL setting illustrated in Figure 1, the system model in Figure 2 is composed of a global representation network $q_\omega: \mathbb{R}^d \to \mathbb{R}^{d'}$, a function parameterized by $\omega \in \mathcal{W}$ that maps data points into a lower-dimensional space of size $d'$. The same global representation network is shared by all clients in each cluster and is synchronized through global aggregation. A client-specific head $q_{h^{(l,i)}}: \mathbb{R}^{d'} \to \mathcal{Y}$ is a function parameterized by $h^{(l,i)} \in \mathcal{H}$ for all clients $i \in [N]$ of every cluster $l \in [C]$, mapping the low-dimensional representation space to the label space $\mathcal{Y}$. The local model for client $i$ of cluster $l$ is the composition of the global representation model $q_\omega$ and the personalized model $q_{h^{(l,i)}}$, written as $q_{l,i}(\cdot) = (q_{h^{(l,i)}} \circ q_\omega)(\cdot)$. In addition, the local loss for the $i$th client of cluster $l$ is $F^{(l,i)}(h^{(l,i)}, \omega) = F^{(l,i)}(q_{l,i}(\cdot)) = F^{(l,i)}((q_{h^{(l,i)}} \circ q_\omega)(\cdot))$.
Using alternating minimization, the PS and the clients learn the global representation $\omega$ together, while only client $i$ learns the client-specific head $h^{(l,i)}$ in cluster $l$, $i \in [N]$, $l \in [C]$. Specifically, $\tau_h$ local updates are performed by client $i$ in cluster $l$ to optimize $h^{(l,i)}$ while the global network parameters at client $i$ of cluster $l$, i.e., $\omega^{(l,i)}$, are frozen. Then, $\tau_\omega$ local updates are performed to optimize $\omega^{(l,i)}$ while the parameters corresponding to the client-specific head are frozen. Thereafter, the $l$th IS aggregates $\{\omega^{(l,i)}\}_{i=1}^N$, which are sent via the LAN, for every $l \in [C]$. The ISs send the cluster aggregations to the PS over the wireless fading MAC to perform the global aggregation. The additive nature of the wireless MAC enables the global aggregation to occur over the air. The optimization problem is

$$\min_{\omega \in \mathcal{W}} \frac{1}{CN} \sum_{l=1}^{C} \sum_{i=1}^{N} p^{(l,i)} \min_{h^{(l,i)} \in \mathcal{H}} F^{(l,i)}(h^{(l,i)}, \omega) \qquad (10)$$
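As a minimal illustration of the two-level aggregation implied by (10) (our own sketch with randomly generated placeholder gradients; error-free links are assumed here, and the wireless MAC is treated in Section 3.3), each IS forms the weighted sum of its clients' gradients, and the PS averages the cluster aggregates:

```python
import numpy as np

# Sketch of hierarchical (two-level) gradient aggregation: IS-level
# weighted sums of client gradients, followed by PS-level averaging.

rng = np.random.default_rng(1)
C, N, dim = 3, 4, 10                       # clusters, clients/cluster, |omega|
g = rng.normal(size=(C, N, dim))           # g_k^{(l,i)}: local gradients
p = np.ones((C, N))                        # loss weights, sum_i p^{(l,i)} = N

cluster_agg = np.einsum('ln,lnd->ld', p, g)      # weighted sum at each IS
global_grad = cluster_agg.sum(axis=0) / (C * N)  # PS aggregation
```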

3. Algorithm Description

In this section, we present the FedGradNorm algorithm after introducing the necessary definitions and preliminaries. Then, we present the extension of the FedGradNorm algorithm to the hierarchical structure with OTA.

3.1. Definitions and Preliminaries

In FedGradNorm, we aim to learn the dynamic loss weights $\{p^{(i)}\}_{i=1}^N$ given by the lower-level optimization problem in (8). The main objective of the algorithm is to dynamically adjust the gradient norms so that the different tasks across clients are trained at similar learning speeds. In the rest of the paper, clients and tasks are used interchangeably, as we assume that each client has its own distinct task. Before describing the algorithm in detail, we first introduce the notation:
  • $\tilde\omega$: a subset of the global shared network parameters, $\tilde\omega \subseteq \omega$. FedGradNorm is applied on $\tilde\omega_k^{(i)} \subseteq \omega_k^{(i)}$, a subset of the global shared network parameters at client $i$ at iteration $k$. $\tilde\omega_k^{(i)}$ is generally chosen as the last layer of the global shared network at client $i$ at iteration $k$.
  • $G_{\tilde\omega_k^{(i)}}^{(i)}(k) = \| \nabla_{\tilde\omega_k^{(i)}} p_k^{(i)} F_k^{(i)} \| = p_k^{(i)} \| \nabla_{\tilde\omega_k^{(i)}} F_k^{(i)} \|$: the $\ell_2$ norm of the gradient of the weighted task loss at client $i$ at iteration $k$ with respect to the chosen parameters $\tilde\omega_k^{(i)}$.
  • $\bar G_{\tilde\omega}(k) = E_{j \sim \text{task}} [ G_{\tilde\omega_k^{(j)}}^{(j)}(k) ]$: the average gradient norm across all clients (tasks) at iteration $k$.
  • $\tilde F_k^{(i)} = F_k^{(i)} / F_0^{(i)}$: the inverse training rate of task $i$ (at client $i$) at iteration $k$, where $F_k^{(i)}$ is the loss of client $i$ at iteration $k$, and $F_0^{(i)}$ is the initial loss of client $i$.
  • $r_k^{(i)} = \tilde F_k^{(i)} / E_{j \sim \text{task}} [ \tilde F_k^{(j)} ]$: the relative inverse training rate of task $i$ at iteration $k$.
Additional notation that is useful in the algorithm description is as follows:
  • $g_k^{(i)} = \frac{1}{\tau_\omega} \sum_{j=1}^{\tau_\omega} g_{k,j}^{(i)}$: the average of the gradient updates at client $i$ at iteration $k$, where $g_{k,j}^{(i)}$ is the $j$th local update of the global shared representation at client $i$ at iteration $k$. Note that $\nabla_{\tilde\omega_k^{(i)}} F_k^{(i)}$ is a subset of $g_k^{(i)}$ since $\tilde\omega \subseteq \omega$.
  • $h_{k,j}^{(i)}$: the client-specific head parameters $h^{(i)}$ after the $j$th local update on the client-specific network of client $i$ at iteration $k$, $j = 1, \dots, \tau_h$.
  • $\omega_{k,j}^{(i)}$: the global shared network parameters of client $i$ after the $j$th local update at iteration $k$, $j = 1, \dots, \tau_\omega$. For brevity, $\omega_k^{(i)}$ denotes $\omega_{k,\tau_\omega}^{(i)}$.

3.2. FedGradNorm Description

FedGradNorm adjusts gradient magnitudes to balance the training rates of the different tasks across clients. FedGradNorm is distributed across the clients and the parameter server. $\bar G_{\tilde\omega}$ provides a common scale for the gradient sizes, while the gradient norms are adjusted according to the relative inverse training rates $r_k^{(i)}$. A higher value of $r_k^{(i)}$ leads to a larger gradient magnitude for task $i$, which encourages the task to train faster. Each client $i$ sends its inverse training rate $\tilde F_k^{(i)}$ at iteration $k$ to the PS, so that the PS can construct $r_k^{(i)}$, $i \in [N]$. Therefore, given the common scale of gradient magnitudes and the relative inverse training rates, the desired gradient norm of task $i$ at iteration $k$ is $\bar G_{\tilde\omega}(k) \times [r_k^{(i)}]^\gamma$, where $\gamma$ represents the strength of the restoring force that pulls tasks back to a common training rate; it can also be considered a measure of task asymmetry across different tasks. When tasks have very different learning dynamics, a larger $\gamma$ is a better choice, enforcing stronger balancing.
In order to shift the gradient norms towards the desired norms, the loss weights $p_k^{(i)}$ are updated by minimizing an auxiliary loss function $F_{grad}(k; \{p_k^{(i)}\}_{i=1}^N)$, which is the sum of the $\ell_2$ distances between the actual and desired gradient norms across all tasks at iteration $k$, i.e.,

$$F_{grad}(k; \{p_k^{(i)}\}_{i=1}^N) = \sum_{i=1}^{N} F_{grad}^{(i)}(k; p_k^{(i)}) \qquad (11)$$

$$= \sum_{i=1}^{N} \left\| p_k^{(i)} \| \nabla_{\tilde\omega_k^{(i)}} F_k^{(i)} \| - \bar G_{\tilde\omega}(k) \times [r_k^{(i)}]^\gamma \right\| \qquad (12)$$
The auxiliary loss function $F_{grad}(k; \{p_k^{(i)}\}_{i=1}^N)$ is constructed by the parameter server at each global iteration $k$ using $\nabla_{\tilde\omega_k^{(i)}} F_k^{(i)}$, which is a subset of the full gradient of the global shared network sent by client $i$ at iteration $k$ for the global aggregation.
Next, the parameter server differentiates $F_{grad}(k; \{p_k^{(i)}\}_{i=1}^N)$ with respect to each element of $\{p^{(i)}\}_{i=1}^N$, and $\nabla_{p^{(i)}} F_{grad}$ is applied via gradient descent to update $p^{(i)}$. The desired gradient norm terms $\bar G_{\tilde\omega}(k) \times [r_k^{(i)}]^\gamma$ are treated as constants when differentiating $F_{grad}(k; \{p_k^{(i)}\}_{i=1}^N)$ with respect to each loss weight $p_k^{(i)}$, to prevent the loss weights $\{p^{(i)}\}_{i=1}^N$ from drifting towards zero. The weights are updated as
$$p^{(i)} \leftarrow p^{(i)} - \alpha \nabla_{p^{(i)}} F_{grad}, \quad i \in [N] \qquad (13)$$
The updated $\{p^{(i)}\}_{i=1}^N$ are normalized such that $\sum_{i=1}^{N} p^{(i)} = N$. Finally, the parameter server forms the globally aggregated gradient $g_k = \frac{1}{N} \sum_{i=1}^{N} p_k^{(i)} g_k^{(i)}$, updates the global shared network parameters via $\omega_{k+1} = \omega_k - \beta g_k$, and broadcasts the updated parameters to the clients for the next iteration. The overall FedGradNorm algorithm is summarized in Algorithm 2, where $\text{Update}(f, h)$ denotes the generic update of the variable $h$ using the gradient of the function $f$ with respect to $h$.
Algorithm 2 Training with FedGradNorm
Initialize $\omega_0$, $\{p_0^{(i)}\}_{i=1}^N$, $\{h_0^{(i)}\}_{i=1}^N$
for $k = 1$ to $K$ do
  The parameter server sends the current global shared network parameters $\omega_k$ to the clients.
  for each client $i \in [N]$ do
    Initialize the global shared network parameters for local updates by $\omega_{k,0}^{(i)} \leftarrow \omega_k$
    for $j = 1, \dots, \tau_h$ do
      $h_{k,j}^{(i)} = \text{Update}(F^{(i)}(h_{k,j-1}^{(i)}, \omega_{k,0}^{(i)}), h_{k,j-1}^{(i)})$
    $F_k^{(i)} = 0$
    for $j = 1, \dots, \tau_\omega$ do
      $\omega_{k,j}^{(i)} \leftarrow \omega_{k,j-1}^{(i)} - \beta g_{k,j}^{(i)}$
      $F_k^{(i)} \leftarrow F_k^{(i)} + F^{(i)}(h_{k,\tau_h}^{(i)}, \omega_{k,j}^{(i)})$
    $F_k^{(i)} \leftarrow \frac{1}{\tau_\omega} F_k^{(i)}$
    Client $i$ sends $g_k^{(i)} = \frac{1}{\tau_\omega} \sum_{j=1}^{\tau_\omega} g_{k,j}^{(i)}$ and $\tilde F_k^{(i)} = F_k^{(i)} / F_0^{(i)}$ to the parameter server
  After collecting $g_k^{(i)}$ and $\tilde F_k^{(i)}$ from the active clients $i \in [N]$, the parameter server performs the following operations in order:
  •  Constructs $F_{grad}(k; \{p_k^{(i)}\}_{i=1}^N)$ using $\{g_k^{(i)}\}_{i=1}^N$ and $\{\tilde F_k^{(i)}\}_{i=1}^N$, as given in Equation (12).
  •  Updates $p_k^{(i)} \leftarrow p_{k-1}^{(i)} - \alpha \nabla_{p^{(i)}} F_{grad}$, $i \in [N]$.
  •  Aggregates the gradient for the global shared network by $g_k = \frac{1}{N} \sum_{i=1}^{N} p_k^{(i)} g_k^{(i)}$.
  •  Updates the global shared network parameters with the aggregated gradient by $\omega_{k+1} = \omega_k - \beta g_k$.
  •  Broadcasts $\omega_{k+1}$ to the clients for the next global iteration.
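For concreteness, the following NumPy sketch (our own illustration; the function name, hyperparameter values, and toy inputs are assumptions, not the paper's code) mirrors the PS-side steps of Algorithm 2: it computes the weighted gradient norms, the relative inverse training rates, the gradient of $F_{grad}$ in (12) with the target norms held constant, and the renormalized weight update.

```python
import numpy as np

def fedgradnorm_step(grads_last_layer, loss_ratios, p, gamma=1.5, alpha=0.01):
    """One PS-side FedGradNorm step (sketch).

    grads_last_layer: list of arrays, the last-layer gradients per client;
    loss_ratios: array of inverse training rates F_k^(i) / F_0^(i);
    p: current loss weights.
    """
    N = len(p)
    norms = np.array([np.linalg.norm(g) for g in grads_last_layer])
    G = p * norms                          # weighted gradient norms
    r = loss_ratios / loss_ratios.mean()   # relative inverse training rates
    target = G.mean() * r ** gamma         # desired norms, held constant
    # F_grad = sum_i |G_i - target_i|; with the target fixed, its gradient
    # with respect to p_i is sign(G_i - target_i) * ||grad_i||.
    p = p - alpha * np.sign(G - target) * norms
    p = np.clip(p, 1e-6, None)             # keep weights positive (safeguard)
    return p * N / p.sum()                 # renormalize: sum_i p^(i) = N

# example usage: 3 clients with random last-layer gradients
rng = np.random.default_rng(0)
p = fedgradnorm_step([rng.normal(size=20) for _ in range(3)],
                     np.array([0.9, 0.5, 0.7]), np.ones(3))
```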

3.3. Hierarchical Over-the-Air (HOTA) FedGradNorm

The HOTA-FedGradNorm algorithm is a two-stage version of the FedGradNorm algorithm in the HFL setting shown in Figure 2. In the first stage of the algorithm, the learning speeds of the clients are balanced using the dynamic weighting approach within each cluster. FedGradNorm, as the dynamic weighting strategy, is used jointly with a power allocation scheme to satisfy the total average transmit power constraint and to make the wireless fading MAC between the ISs and the PS robust against adverse channel conditions.
Each client within a cluster sends its gradient for the global model $q_\omega$ to its corresponding IS via the LAN, where the channels between the clients and the IS within a cluster are assumed to be error-free. The IS performs a modified version of FedGradNorm on the clients' gradients, taking the power allocation scheme into account. Specifically, the IS of cluster $l$ computes the loss weight $p_k^{(l,i)}$ for each client $i \in [N]$ in cluster $l$ via the FedGradNorm algorithm, to eventually obtain the local weighted aggregation $\sum_{i=1}^{N} p_k^{(l,i)} g_k^{(l,i)}$ at iteration $k$, where $g_k^{(l,i)}$ is the local gradient update of client $i$ in cluster $l$ at iteration $k$.
The power allocation vector $\beta_k^{(l,i)}$ constructed by the IS of cluster $l$ for each client $i$ in the cluster is designed as

$$\beta_k^{(l,i)}(j) = \begin{cases} \dfrac{p_k^{(l,i)}}{H_k^{(l)}(j)}, & \text{if } |H_k^{(l)}(j)|^2 \ge H_k^{th} \\ 0, & \text{otherwise} \end{cases} \qquad (14)$$

where $\beta_k^{(l,i)}(j)$ is the $j$th entry of the power allocation vector $\beta_k^{(l,i)} \in \mathbb{R}^{|\omega|}$, and $H_k^{(l)}(j)$ is the $j$th entry of the channel gain vector $H_k^{(l)} \in \mathbb{R}^{|\omega|}$, which represents the fading effect of the wireless channel between the IS of cluster $l$ and the PS. $H_k^{(l)}(j)$ is assumed to be i.i.d. according to $\mathcal{N}(0, \sigma_l^2)$. The threshold $H_k^{th}$ is set to satisfy the total average transmit power constraint given as follows,

$$\frac{1}{K} \sum_{k=1}^{K} E\left[ \| x_k^{(l)} \|^2 \right] \le \bar{P} \qquad (15)$$

where $x_k^{(l)} = \sum_{i=1}^{N} x_k^{(l,i)}$ and $x_k^{(l,i)} = \beta_k^{(l,i)} \circ g_k^{(l,i)}$, $i \in [N]$, $l \in [C]$, with $\circ$ denoting element-wise multiplication. The expectation is taken over the randomness of the channel gains.
With the power allocation scheme in (14), each cluster transmits only the scaled entries of its weighted gradient for which the channel conditions are sufficiently good. Consequently, $F_{grad}$ is modified according to the power allocation scheme to obtain a power-efficient system design as follows,

$$F_{grad}^{(l)}(k; \{p_k^{(l,i)}\}_{i=1}^N) = \sum_{i=1}^{N} F_{grad}^{(l,i)}(k; p_k^{(l,i)}) \qquad (16)$$

$$= \sum_{i=1}^{N} \left\| p_k^{(l,i)} \| M_k^{(l)} \circ \nabla_{\tilde\omega_k^{(l,i)}} F_k^{(l,i)} \| - \bar G_{\tilde\omega}^{(l)}(k) \times [r_k^{(l,i)}]^\gamma \right\| \qquad (17)$$

where $M_k^{(l)} \in \{0,1\}^{|\omega|}$ is a mask vector designed for the sparsification of cluster $l$ as follows:

$$M_k^{(l)}(j) = \begin{cases} 1, & \text{if } |H_k^{(l)}(j)|^2 \ge H_k^{th} \\ 0, & \text{otherwise} \end{cases} \qquad (18)$$

Here, $\tilde\omega_k^{(l,i)}$ is the last layer of the shared network at client $i$ of cluster $l$ at iteration $k$, and $\bar G_{\tilde\omega}^{(l)}(k)$ is the average sparsified gradient norm across all clients (tasks) in cluster $l$ at iteration $k$. $r_k^{(l,i)} = \tilde F_k^{(l,i)} / E_{j \sim \text{task}} [\tilde F_k^{(l,j)}]$ is the relative inverse training rate of task $i$ in cluster $l$ at iteration $k$, and $\gamma$ represents the strength of the restoring force, as defined previously for FedGradNorm.
The gradient sparsification used during the calculation of $F_{grad}$ acts as an implicit constraint on the $F_{grad}$ minimization problem by taking the channel conditions into account. Consequently, it ensures that the learning speeds of the tasks are invariant to the dynamic channel conditions through an appropriate selection of the loss weights. In other words, the implicit constraint imposed by the channel conditions preserves the fairness of the learning speeds among the clients, as shown in the experimental results.
The second stage of the algorithm is the global aggregation over the wireless fading MAC. The PS obtains a noisy estimate of the aggregated gradient over the wireless fading channel and updates the model parameters with it. Due to the additive nature of the wireless MAC, the summation of the signals transmitted by the clusters arrives at the PS. The $j$th entry of the received signal $y_k \in \mathbb{R}^{|\omega|}$ at iteration $k$ is

$$y_k(j) = \sum_{l \in \mathcal{M}_k(j)} H_k^{(l)}(j) x_k^{(l)}(j) + z_k(j) \qquad (19)$$

where $z_k(j)$ is the $j$th entry of the Gaussian noise vector $z_k$ and is i.i.d. according to $\mathcal{N}(0, 1)$. $\mathcal{M}_k(j) = \{ l \in [C] : |H_k^{(l)}(j)|^2 \ge H_k^{th} \}$ is the set of clusters contributing to the $j$th entry of the received signal at the $k$th iteration. $\mathcal{M}_k(j)$ is known by the PS for $j \in [|\omega|]$, since the PS has perfect channel state information (CSI).
Considering (14) and the definition of $x_k^{(l)}$ in terms of the power allocation vector, we have

$$y_k(j) = \sum_{l \in \mathcal{M}_k(j)} \sum_{i=1}^{N} p_k^{(l,i)} g_k^{(l,i)}(j) + z_k(j) \qquad (20)$$

where $g_k^{(l,i)}(j)$ is the $j$th entry of $g_k^{(l,i)}$. The noisy aggregated gradient estimate is

$$\hat g_k(j) = \frac{y_k(j)}{|\mathcal{M}_k(j)| N}, \quad j \in [|\omega|] \qquad (21)$$

Then, the estimated gradient vector is used to update the model parameters as $\omega_{k+1} = \omega_k - \beta \hat g_k$. The overall algorithm is shown in Algorithm 3.
Algorithm 3 HOTA-FedGradNorm
1: Initialize $\omega_0$, $\{p_0^{(l,i)}\}_{l=1,i=1}^{C,N}$, $\{h_0^{(l,i)}\}_{l=1,i=1}^{C,N}$
2: for $k = 0$ to $K$ do
3:   The PS broadcasts the current global shared network parameters $\omega_k$ to the ISs.
4:   for each cluster $l \in [C]$ do
5:     $\omega_k^{(l)} \leftarrow \omega_k$
6:     The IS $l$ broadcasts $\omega_k^{(l)}$ to the clients within the cluster.
7:     for each client $i \in [N]$ do
8:       Initialize the global shared network parameters for local updates by $\omega_{k,0}^{(l,i)} \leftarrow \omega_k^{(l)}$
9:       Initialize $F_k^{(l,i)} = 0$ and $g_k^{(l,i)} = 0$
10:       for $j = 1, \dots, \tau_h$ do
11:         $h_{k,j}^{(l,i)} = \text{Update}(F^{(l,i)}(h_{k,j-1}^{(l,i)}, \omega_{k,0}^{(l,i)}), h_{k,j-1}^{(l,i)})$
12:       for $j = 1, \dots, \tau_\omega$ do
13:         $\omega_{k,j}^{(l,i)} \leftarrow \omega_{k,j-1}^{(l,i)} - \beta g_{k,j}^{(l,i)}$
14:         $F_k^{(l,i)} \leftarrow F_k^{(l,i)} + \frac{1}{\tau_\omega} F^{(l,i)}(h_{k,\tau_h}^{(l,i)}, \omega_{k,j}^{(l,i)})$
15:       Client $i$ sends $g_k^{(l,i)} = \frac{1}{\tau_\omega} \sum_{j=1}^{\tau_\omega} g_{k,j}^{(l,i)}$ and $\tilde F_k^{(l,i)} = F_k^{(l,i)} / F_0^{(l,i)}$ to IS $l$ for dynamic weighting.
16:     The IS $l$ performs the following:
  •  $\{p_k^{(l,i)}\}_{i=1}^N = \text{FGN\_Server}(\{g_k^{(l,i)}\}_{i=1}^N, \{\tilde F_k^{(l,i)}\}_{i=1}^N, \{p_{k-1}^{(l,i)}\}_{i=1}^N)$
  •  Constructs the power allocation vector $\beta_k^{(l,i)}$ for each client in cluster $l$, as given in Equation (14)
  •  Aggregates the gradients of the clients in cluster $l$ for the global shared network, combined with the power allocation scheme, as $x_k^{(l)} = \sum_{i=1}^{N} \beta_k^{(l,i)} \circ g_k^{(l,i)}$
17:   The gradients are aggregated over the wireless fading channel as given in Equation (19).
18:   The estimated gradient aggregation $\hat g_k$ is obtained by the PS as given in Equation (21).
19:   The PS updates the global shared network by $\omega_{k+1} \leftarrow \omega_k - \beta \hat g_k$.
$\text{Update}(f, h)$ in Algorithm 3 again denotes the generic update of the variable $h$ using the gradient of the function $f$ with respect to $h$. $\omega_{k,j}^{(l,i)}$, $h_{k,j}^{(l,i)}$, and $g_{k,j}^{(l,i)}$ denote the global shared network parameters, the client-specific network parameters, and the gradient, respectively, at the $j$th local iteration of global iteration $k$ at client $i$ of cluster $l$. Further, $F_k^{(l,i)}$ is the loss of client $i$ of cluster $l$ at global iteration $k$, $\omega_k^{(l)}$ is the global shared network parameters at IS $l$ at the beginning of global iteration $k$, and $\beta$ is the learning rate for both the client local updates and the PS global updates. $\text{FGN\_Server}(\cdot)$, given in Algorithm 4, performs the construction of the auxiliary loss $F_{grad}$ and its minimization via gradient descent.
Algorithm 4 FGN_Server$(\{\tilde F^{(l,i)}\}_{i=1}^N, \{g^{(l,i)}\}_{i=1}^N, \{p^{(l,i)}\}_{i=1}^N)$
1: Construct the sparsified version of the auxiliary loss function $F_{grad}^{(l)}(\{p^{(l,i)}\}_{i=1}^N)$, as given in Equation (16), using $\{g^{(l,i)}\}_{i=1}^N$ and the loss ratios $\{\tilde F^{(l,i)}\}_{i=1}^N$.
2: Update the loss weights by gradient descent: $p^{(l,i)} \leftarrow p^{(l,i)} - \alpha \nabla_{p^{(l,i)}} F_{grad}^{(l)}$, $i \in [N]$.
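To illustrate how the pieces of HOTA-FedGradNorm fit together, the following NumPy sketch (our own illustration with randomly generated placeholder gradients, weights, and channel gains; it is not the paper's implementation) applies the threshold-based power allocation in (14), forms the cluster transmissions, superposes them with noise as in (19), and recovers the PS-side estimate in (21):

```python
import numpy as np

# Sketch of the HOTA pieces: per-entry channel inversion above a
# threshold at each IS, superposition over the wireless MAC with noise,
# and the PS-side gradient estimate.

rng = np.random.default_rng(2)
C, N, dim = 3, 4, 10
H_th = 0.1                                   # power-control threshold
H = rng.normal(size=(C, dim))                # H_k^{(l)} ~ N(0, sigma_l^2)
p = np.ones((C, N))                          # weights (from FGN_Server)
g = rng.normal(size=(C, N, dim))             # client gradients g_k^{(l,i)}

mask = (H ** 2 >= H_th).astype(float)        # M_k^{(l)} as in (18)
beta_scale = np.divide(mask, H, out=np.zeros_like(H), where=mask > 0)
# x_k^{(l)} = sum_i beta^{(l,i)} o g^{(l,i)}, with beta^{(l,i)} = p/H
x = np.einsum('ln,lnd->ld', p, g) * beta_scale

z = rng.normal(size=dim)                     # channel noise z_k
y = (H * x).sum(axis=0) + z                  # received signal, cf. (19)

active = mask.sum(axis=0)                    # |M_k(j)| per entry j
g_hat = np.divide(y, active * N,             # estimate, cf. (21)
                  out=np.zeros_like(y), where=active > 0)
```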

4. Convergence Analysis

In this section, we provide the convergence analysis for FedGradNorm along with necessary assumptions and lemmas.
Assumption 1.
The following strong convexity assumptions hold for the upper-level objective function $F(\cdot)$ and the lower-level objective function $g(\cdot)$ given in (7):
  • $g(x, p)$ is $\mu$-strongly convex with respect to $p \in \mathbb{R}^N$.
  • $F^{(i)}(x, p^*(x))$ is $\mu$-strongly convex with respect to $x \in \mathcal{H}^N \times \mathcal{W}$, $i \in [N]$, where $x = (\{h^{(i)}\}_{i=1}^N, \omega)$ and $p^*(x) = \arg\min_{p \in \mathbb{R}^N} g(x, p)$.
Assumption 2.
$\nabla F(z)$ and $\nabla g(z)$ are $L$-Lipschitz, i.e., for any $z$, $z'$,

$$\| \nabla F(z) - \nabla F(z') \| \le L \| z - z' \| \qquad (22)$$

$$\| \nabla g(z) - \nabla g(z') \| \le L \| z - z' \| \qquad (23)$$

Assumption 3.
The derivatives $\nabla_x \nabla_p g(z)$ and $\nabla_p^2 g(z)$ are $\tau$- and $\rho$-Lipschitz, respectively, i.e., for any $z$, $z'$,

$$\| \nabla_x \nabla_p g(z) - \nabla_x \nabla_p g(z') \| \le \tau \| z - z' \| \qquad (24)$$

$$\| \nabla_p^2 g(z) - \nabla_p^2 g(z') \| \le \rho \| z - z' \| \qquad (25)$$

Assumption 4.
The expected squared $\ell_2$ norm of the stochastic gradient of $F(\cdot)$ with respect to $p$ is bounded, i.e.,

$$E_\xi \left[ \| \nabla_p F(x, p; \xi) \|^2 \right] \le M^2 \qquad (26)$$

The expectation is taken over the randomness of the stochastic gradients, where $\xi$ represents the stochastic data samples.
Assumption 5.
The stochastic gradient of $F(\cdot)$ with respect to $p$ is an unbiased estimator of the gradient, i.e.,

$$E_\xi \left[ \nabla_p F(x, p; \xi) \right] = \nabla_p F(x, p) \qquad (27)$$

where $\xi$ represents the stochastic data samples.
The following lemma characterizes the Lipschitz property of the upper-level objective function $F(\cdot)$. It is adapted from [36] (Lemma 2.2).
Lemma 1.
Suppose Assumptions 1 to 5 hold. For any $x$, $x' \in \mathcal{W} \times \mathcal{H}^N$, we have

$$\| \nabla F(x, p) - \nabla F(x', p) \| \le L_F \| x - x' \| \qquad (28)$$

where the constant $L_F$ is given by

$$L_F = L + \frac{2L^2 + \tau M^2}{\mu} + \frac{\rho L M + L^3 + \tau M L}{\mu^2} + \frac{\rho L^2 M}{\mu^3} \qquad (29)$$

Lemma 2.
Suppose Assumptions 1 to 5 hold and let $\alpha \le \frac{1}{L}$. Define $\hat\nabla_{x_k} F(x_k, p_k) = \frac{\partial F(x_k, p_k^D)}{\partial x_k}$ and $\nabla_{x_k} F(x_k, p_k) = \frac{\partial F(x_k, p^*(x_k))}{\partial x_k}$. Then,

$$\| \hat\nabla_{x_k} F(x_k, p_k) - \nabla_{x_k} F(x_k, p_k) \| \le \left( \frac{L(L+\mu)(1-\alpha\mu)^{\frac{D}{2}}}{\mu} + \frac{2M(\tau\mu + L\rho)}{\mu^2} (1-\alpha\mu)^{\frac{D-1}{2}} \right) \| p_k^0 - p^*(x_k) \| + \frac{LM(1-\alpha\mu)^D}{\mu} \qquad (30)$$
The proof of Lemma 2 is provided in Appendix A.
Lemma 3.
Suppose Assumptions 1 to 5 hold. Then, the following holds,

$$F(x_{k+1}, p_{k+1}) \le F(x_k, p_k) - \left( \frac{\beta}{2} - \beta^2 L_F \right) \| \nabla F(x_k, p_k) \|^2 + \left( \frac{\beta}{2} + \beta^2 L_F \right) \| \hat\nabla F(x_k, p_k) - \nabla F(x_k, p_k) \|^2 \qquad (31)$$
The proof of Lemma 3 is provided in Appendix A.
Theorem 1.
Let Assumptions 1 to 5 hold. Then, the algorithm satisfies

$$F(x_k, p_k) - F(x^*, p^*(x^*)) \le \left( \frac{L_F}{2} - \mu^2 \left( \frac{\beta}{2} - \beta^2 L_F \right) \right) (1 - \beta\mu)^{k-1} \| x_0 - x^* \|^2 + \left( \frac{\beta}{2} + \beta^2 L_F \right) B \qquad (32)$$

where

$$B = 3\Delta \left( \frac{L^2(L+\mu)^2 (1-\alpha\mu)^D}{\mu^2} + \frac{4M^2(\tau\mu + L\rho)^2}{\mu^4} (1-\alpha\mu)^{D-1} \right) + \frac{3L^2 M^2 (1-\alpha\mu)^{2D}}{\mu^2} \qquad (33)$$

and $\Delta$ is an upper bound on $\| p_k^0 - p^*(x_k) \|^2$ (see Appendix A).
Theorem 1 shows that the FedGradNorm algorithm converges exponentially over the iterations. The proof of Theorem 1 is provided in Appendix A.

5. Experiments

5.1. Dataset Specifications

The following two datasets are used for experiments:
  • Multi-Task Facial Landmark (MTFL) [28]: This dataset contains 10,000 training and 3000 test images of human faces, annotated with (1) five facial landmarks, (2) gender, (3) smiling, (4) wearing glasses, and (5) head pose. The first task (five facial landmarks) is a regression task, and the other tasks are classification tasks.
  • RadComDynamic [29]: This dataset is a multi-class wireless signal dataset containing 125,000 samples. The samples are radar and communication signals generated with GNU Radio Companion for different SNR values. The dataset contains six modulation types and eight signal types. The dynamic parameters of the samples are listed in Table 1. We perform three different tasks over the RadComDynamic dataset: (1) modulation classification, (2) signal type classification, and (3) anomaly detection.
    Task 1. Modulation classification: The modulation classes are amdsb, amssb, ask, bpsk, fmcw, and pulsed continuous wave (PCW).
    Task 2. Signal type classification: The signal classes are AM radio, short-range, Radar-Altimeter, Air-Ground-MTI, Airborne-detection, Airborne-range, and Ground-mapping.
    Task 3. Anomaly detection: The signal-to-noise ratio (SNR) can be considered a proxy for geo-location information. We define anomalous behavior as having an SNR lower than −4 dB.
Each data point in this dataset is a normalized signal vector of size 256, obtained by vectorizing the real and imaginary parts of the signal $x = x_I + j x_Q$, where $x_I, x_Q \in \mathbb{R}^{128}$, as follows,

$$\hat x = \begin{bmatrix} x_I \\ x_Q \end{bmatrix} \in \mathbb{R}^{256} \qquad (34)$$
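As a small illustration of this preprocessing step (our own sketch with a random placeholder signal, not the dataset's actual pipeline), the stacking and normalization can be carried out as follows:

```python
import numpy as np

# Stack the in-phase and quadrature parts of a length-128 complex signal
# into a real vector of size 256, then normalize it.

rng = np.random.default_rng(3)
x = rng.normal(size=128) + 1j * rng.normal(size=128)  # toy complex signal
x_hat = np.concatenate([x.real, x.imag])              # [x_I; x_Q] in R^256
x_hat /= np.linalg.norm(x_hat)                        # normalized vector
assert x_hat.shape == (256,)
```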

5.2. Hyperparameters and Model Specifications

A detailed description of the hyperparameters of the system model for both the FedGradNorm and HOTA-FedGradNorm algorithms is given in Table 2. Note that $\gamma$ is a hyperparameter that should be chosen according to the task asymmetry in the system. We use the Adam optimizer both for network training and for the $F_{grad}$ optimization. $\beta$ is the learning rate used by the optimizer to update the global shared network as well as the personalized network on the client side, and $\alpha$ is the learning rate used for the $F_{grad}$ optimization. The shared network model is described in Table 3. Each client also has a simple linear layer as its personalized network, mapping the shared network's output to the corresponding prediction value. Cross-entropy and mean squared error (MSE) are used as the loss functions for the classification and regression tasks, respectively.

5.3. Results and Analysis

In the experiments with the MTFL dataset, we observe that task 1 (the facial landmark regression task) has a higher gradient norm than all the other tasks, which are classification tasks. Figure 3 illustrates how FedGradNorm gradually decreases the loss weight of the first task to balance the learning speeds among tasks. At epoch 70, when tasks 2 and 3 finally start to reduce their losses at a higher rate, their loss weights are decreased to help improve the two remaining tasks. Tasks 2 and 3 could not be improved without dynamic weighting, since task 1 would mask the gradient updates of the remaining tasks. As a result of reducing the weights of tasks 2 and 3, the weights of tasks 4 and 5 are then increased with a similar slope (the weight changes of the two are the same, since their curves are stacked on top of each other in Figure 3f) in order to improve the training performance if possible. Unlike the other tasks, task 4 (detecting glasses on human faces) and task 5 (pose estimation) reach their minima very quickly in the first epochs, as they are easy tasks. Thus, as indicated by Figure 3, the performance does not improve much for tasks 4 and 5. Although the performance of tasks 1 and 5 is also quite similar in the long run, FedGradNorm helps them learn faster in the early stages. For Figure 3, the data allocation is balanced.
We also perform experiments with an imbalanced data distribution. Table 4 exhibits the loss comparison between FedGradNorm and FedRep when tasks 2 and 4 have access to only 500 data points, whereas the other tasks have 3000 data points. FedGradNorm performs better than FedRep.
Furthermore, we conduct experiments on the RadComDynamic dataset using Network 2 given in Table 3. FedGradNorm outperforms FedRep on the modulation classification and signal classification tasks, as illustrated in Figure 4. With respect to the change of the loss, the modulation classification and signal classification tasks train more slowly than the anomaly detection task. By employing FedGradNorm, we demonstrate that the learning speeds of the signal and modulation classification tasks are balanced against the anomaly detection task. Moreover, we observe that the loss weight of task 1 is increased to speed up its training at the beginning, since the losses of tasks 2 and 3 initially decrease faster than the loss of task 1. At epoch 55, the loss of task 1 decreases significantly; therefore, the loss weight of task 1 is decreased to prevent task 1 from dominating the training.
Next, we conduct experiments in the HOTA-FedGradNorm setting to investigate the effects of the wireless fading channel. Figure 5 depicts the task losses in the first cluster. We observe that, at the beginning of training, the change in the loss of the first task (modulation classification) is smaller than the changes in the losses of the second and third tasks; the loss weight of the first task is therefore increased. After epoch 65, it is decreased, since the loss decreases significantly. Comparing this result with Figure 4, we observe that accounting for the wireless MAC between the ISs and the PS leads to slower training. However, as shown in Figure 5, HOTA-FedGradNorm still yields a higher training speed than the naive equal weighting strategy.
To demonstrate the effectiveness of $F_{grad}$ in reducing adverse channel effects, the channel gain of the first cluster is changed from $\sigma_1^2 = 1$ to $\sigma_1^2 = 0.5$, while the channel gains of the remaining clusters are left unchanged. A decreased $\sigma_l^2$ value is equivalent to intensifying the sparsification of the corresponding gradient for the given threshold $H^{th}$. Figure 6 shows how even a single bad channel can negatively impact the entire learning process if FedGradNorm is not incorporated into the system model. With HOTA-FedGradNorm, the clients' weights can be adapted based on the channel conditions, thereby reducing the channel effects. Figure 6 illustrates that both the first and the second tasks improve after epoch 85. Additionally, we compare the channel effects for more diverse $\sigma$ values in Figure 7. From these results, we observe that HOTA-FedGradNorm is robust and trains faster even under more challenging channel conditions.

6. Conclusions and Discussion

We proposed FedGradNorm, a distributed version of the GradNorm dynamic weighting algorithm for the personalized FL setting. We provided a convergence analysis for FedGradNorm and showed that it has an exponential convergence rate. Moreover, we proposed HOTA-FedGradNorm, a modified version of FedGradNorm that utilizes over-the-air aggregation in a hierarchical FL setting; the characteristics of the wireless communication channel were taken into account in its design. In the experiments with FedGradNorm, the learning speed and task performance of FedGradNorm were compared with the naive equal weighting strategy. In contrast to naively assigning equal weights to all tasks, we observed that FedGradNorm ensures faster training and more consistent performance. Additionally, FedGradNorm can compensate for the effects of imbalanced data allocation among the clients: tasks with insufficient data still receive fair training, since the task loss weights are adjusted with respect to the training speeds to encourage the slowly learning tasks. Furthermore, the experimental results with HOTA-FedGradNorm indicated that it provides robustness under adverse channel effects while training faster than the naive equal weighting strategy.

Author Contributions

Conceptualization, M.M., C.V. and S.U.; methodology, M.M., C.V. and S.U.; software, M.M., C.V. and S.U.; validation, M.M., C.V. and S.U.; formal analysis, M.M., C.V. and S.U.; investigation, M.M., C.V. and S.U.; resources, M.M., C.V. and S.U.; data curation, M.M., C.V. and S.U.; writing—original draft preparation, M.M., C.V. and S.U.; writing—review and editing, M.M., C.V. and S.U.; visualization, M.M., C.V. and S.U.; supervision, M.M., C.V. and S.U.; project administration, M.M., C.V. and S.U.; funding acquisition, M.M., C.V. and S.U. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Proof of Lemma 2. 
Since $p_k$ depends on $x_k$, the following holds by the chain rule,

$$\frac{\partial F(x_k, p_k^i)}{\partial x_k} = \nabla_x F(x_k, p_k^i) + \frac{\partial p_k^i}{\partial x_k} \nabla_p F(x_k, p_k^i) \qquad \text{(A1)}$$

Then,

$$\begin{aligned} \left\| \frac{\partial F(x_k, p_k^D)}{\partial x_k} - \frac{\partial F(x_k, p^*(x_k))}{\partial x_k} \right\| = \Big\| & \nabla_x F(x_k, p_k^D) + \frac{\partial p_k^D}{\partial x_k} \nabla_p F(x_k, p_k^D) - \nabla_x F(x_k, p^*(x_k)) - \frac{\partial p^*(x_k)}{\partial x_k} \nabla_p F(x_k, p^*(x_k)) \\ & + \frac{\partial p^*(x_k)}{\partial x_k} \nabla_p F(x_k, p_k^D) - \frac{\partial p^*(x_k)}{\partial x_k} \nabla_p F(x_k, p_k^D) \Big\| \\ \le \ & \| \nabla_x F(x_k, p_k^D) - \nabla_x F(x_k, p^*(x_k)) \| + \left\| \frac{\partial p_k^D}{\partial x_k} - \frac{\partial p^*(x_k)}{\partial x_k} \right\| \| \nabla_p F(x_k, p_k^D) \| \\ & + \left\| \frac{\partial p^*(x_k)}{\partial x_k} \right\| \| \nabla_p F(x_k, p_k^D) - \nabla_p F(x_k, p^*(x_k)) \| \end{aligned} \qquad \text{(A2)}$$

where the last inequality follows from the triangle inequality. The first term of (A2) is bounded using Assumption 2 as follows,

$$\| \nabla_x F(x_k, p_k^D) - \nabla_x F(x_k, p^*(x_k)) \| \le L \| p_k^D - p^*(x_k) \| \qquad \text{(A3)}$$

Similarly, the third term of (A2) is bounded using Assumptions 1 and 2,

$$\left\| \frac{\partial p^*(x_k)}{\partial x_k} \right\| \| \nabla_p F(x_k, p_k^D) - \nabla_p F(x_k, p^*(x_k)) \| \le L \left\| \frac{\partial p^*(x_k)}{\partial x_k} \right\| \| p_k^D - p^*(x_k) \| \qquad \text{(A4)}$$

Furthermore, the second term of (A2) is bounded as

$$\left\| \frac{\partial p_k^D}{\partial x_k} - \frac{\partial p^*(x_k)}{\partial x_k} \right\| \| \nabla_p F(x_k, p_k^D) \| \le M \left\| \frac{\partial p_k^D}{\partial x_k} - \frac{\partial p^*(x_k)}{\partial x_k} \right\| \qquad \text{(A5)}$$

since we have $\| \nabla_p F(x, p) \| \le M$ by Assumptions 4 and 5. Then, (A2) is upper bounded as

$$\left\| \frac{\partial F(x_k, p_k^D)}{\partial x_k} - \frac{\partial F(x_k, p^*(x_k))}{\partial x_k} \right\| \le L \| p_k^D - p^*(x_k) \| + L \left\| \frac{\partial p^*(x_k)}{\partial x_k} \right\| \| p_k^D - p^*(x_k) \| + M \left\| \frac{\partial p_k^D}{\partial x_k} - \frac{\partial p^*(x_k)}{\partial x_k} \right\| \qquad \text{(A6)}$$
To upper bound (A6) further, we bound $\left\| \frac{\partial p_k^D}{\partial x_k} - \frac{\partial p^*(x_k)}{\partial x_k} \right\|$. By the gradient descent update of $p$, i.e., $p_k^t = p_k^{t-1} - \alpha \nabla_p g(x_k, p_k^{t-1})$ for $t = 1, \dots, D$, and by the chain rule,

$$\frac{\partial p_k^t}{\partial x_k} = \frac{\partial p_k^{t-1}}{\partial x_k} - \alpha \left( \nabla_x \nabla_p g(x_k, p_k^{t-1}) + \frac{\partial p_k^{t-1}}{\partial x_k} \nabla_p^2 g(x_k, p_k^{t-1}) \right) \qquad \text{(A7)}$$

Additionally, by the optimality of $p^*(x_k)$, we have $\nabla_p g(x_k, p^*(x_k)) = 0$. Taking the partial derivative with respect to $x_k$,

$$\nabla_x \nabla_p g(x_k, p^*(x_k)) + \frac{\partial p^*(x_k)}{\partial x_k} \nabla_p^2 g(x_k, p^*(x_k)) = 0 \qquad \text{(A8)}$$

Combining (A7) and (A8),

$$\begin{aligned} \frac{\partial p_k^t}{\partial x_k} - \frac{\partial p^*(x_k)}{\partial x_k} = \ & \frac{\partial p_k^{t-1}}{\partial x_k} - \frac{\partial p^*(x_k)}{\partial x_k} - \alpha \left( \nabla_x \nabla_p g(x_k, p_k^{t-1}) + \frac{\partial p_k^{t-1}}{\partial x_k} \nabla_p^2 g(x_k, p_k^{t-1}) \right) \\ & + \alpha \left( \nabla_x \nabla_p g(x_k, p^*(x_k)) + \frac{\partial p^*(x_k)}{\partial x_k} \nabla_p^2 g(x_k, p^*(x_k)) \right) \\ = \ & \frac{\partial p_k^{t-1}}{\partial x_k} - \frac{\partial p^*(x_k)}{\partial x_k} - \alpha \left( \nabla_x \nabla_p g(x_k, p_k^{t-1}) - \nabla_x \nabla_p g(x_k, p^*(x_k)) \right) \\ & - \alpha \left( \frac{\partial p_k^{t-1}}{\partial x_k} - \frac{\partial p^*(x_k)}{\partial x_k} \right) \nabla_p^2 g(x_k, p_k^{t-1}) + \alpha \frac{\partial p^*(x_k)}{\partial x_k} \left( \nabla_p^2 g(x_k, p^*(x_k)) - \nabla_p^2 g(x_k, p_k^{t-1}) \right) \end{aligned} \qquad \text{(A9)}$$

Moreover, by (A8) and Assumptions 1 and 2,

$$\left\| \frac{\partial p^*(x_k)}{\partial x_k} \right\| = \left\| \nabla_x \nabla_p g(x_k, p^*(x_k)) \left[ \nabla_p^2 g(x_k, p^*(x_k)) \right]^{-1} \right\| \le \frac{L}{\mu} \qquad \text{(A10)}$$

By (A9), (A10), and Assumption 3, we have

$$\left\| \frac{\partial p_k^t}{\partial x_k} - \frac{\partial p^*(x_k)}{\partial x_k} \right\| \le \left\| I - \alpha \nabla_p^2 g(x_k, p_k^{t-1}) \right\| \left\| \frac{\partial p_k^{t-1}}{\partial x_k} - \frac{\partial p^*(x_k)}{\partial x_k} \right\| + \alpha \left( \tau + \frac{L\rho}{\mu} \right) \| p_k^{t-1} - p^*(x_k) \| \qquad \text{(A11)}$$
Furthermore, based on the $\mu$-strong convexity of $g(x, \cdot)$ with respect to $p$, we have $\nabla_p^2 g(x, \cdot) \succeq \mu I$ for any $x \in \mathcal{H}^N \times \mathcal{W}$. Then, (A11) can be simplified as

$$\left\| \frac{\partial p_k^t}{\partial x_k} - \frac{\partial p^*(x_k)}{\partial x_k} \right\| \le (1 - \alpha\mu) \left\| \frac{\partial p_k^{t-1}}{\partial x_k} - \frac{\partial p^*(x_k)}{\partial x_k} \right\| + \alpha \left( \tau + \frac{L\rho}{\mu} \right) \| p_k^{t-1} - p^*(x_k) \| \qquad \text{(A12)}$$

In addition, the following is also obtained from the $\mu$-strong convexity of $g(x, \cdot)$ with respect to $p$,

$$\| p_k^{t-1} - p^*(x_k) \| \le (1 - \alpha\mu)^{\frac{t-1}{2}} \| p_k^0 - p^*(x_k) \| \qquad \text{(A13)}$$

Inserting (A13) into (A12) and telescoping results in

$$\begin{aligned} \left\| \frac{\partial p_k^D}{\partial x_k} - \frac{\partial p^*(x_k)}{\partial x_k} \right\| & \le (1-\alpha\mu)^D \left\| \frac{\partial p_k^0}{\partial x_k} - \frac{\partial p^*(x_k)}{\partial x_k} \right\| + \alpha \left( \tau + \frac{L\rho}{\mu} \right) \sum_{t=0}^{D-1} (1-\alpha\mu)^{D-1-t} (1-\alpha\mu)^{\frac{t}{2}} \| p_k^0 - p^*(x_k) \| && \text{(A14)} \\ & = (1-\alpha\mu)^D \left\| \frac{\partial p_k^0}{\partial x_k} - \frac{\partial p^*(x_k)}{\partial x_k} \right\| + \frac{2(\tau\mu + L\rho)}{\mu^2} (1-\alpha\mu)^{\frac{D-1}{2}} \| p_k^0 - p^*(x_k) \| && \text{(A15)} \\ & \le \frac{L(1-\alpha\mu)^D}{\mu} + \frac{2(\tau\mu + L\rho)}{\mu^2} (1-\alpha\mu)^{\frac{D-1}{2}} \| p_k^0 - p^*(x_k) \| && \text{(A16)} \end{aligned}$$

where the last inequality follows from $\frac{\partial p_k^0}{\partial x_k} = 0$ and (A10). The proof is then completed by using (A16) in (A6), and by upper bounding $\left\| \frac{\partial p^*(x_k)}{\partial x_k} \right\|$ and $\| p_k^D - p^*(x_k) \|$ in (A6) with (A10) and (A13), respectively. □
Proof of Lemma 3. 
By the $L_F$-smoothness of $F(\cdot)$,

$$F(x_{k+1}, p_{k+1}) \le F(x_k, p_k) + (x_{k+1} - x_k)^T \nabla F(x_k, p_k) + \frac{L_F}{2} \| x_{k+1} - x_k \|^2 \qquad \text{(A17)}$$

By the gradient descent update of $x$,

$$x_{k+1} - x_k = -\beta \hat\nabla F(x_k, p_k) \qquad \text{(A18)}$$

Inserting (A18) into (A17), the following holds,

$$\begin{aligned} F(x_{k+1}, p_{k+1}) & \le F(x_k, p_k) + \left( -\beta \hat\nabla F(x_k, p_k) + \beta \nabla F(x_k, p_k) - \beta \nabla F(x_k, p_k) \right)^T \nabla F(x_k, p_k) \\ & \quad + \frac{L_F}{2} \left\| -\beta \hat\nabla F(x_k, p_k) + \beta \nabla F(x_k, p_k) - \beta \nabla F(x_k, p_k) \right\|^2 && \text{(A19)} \\ & \le F(x_k, p_k) - \beta \left\langle \hat\nabla F(x_k, p_k) - \nabla F(x_k, p_k), \nabla F(x_k, p_k) \right\rangle - \beta \| \nabla F(x_k, p_k) \|^2 \\ & \quad + \beta^2 L_F \| \hat\nabla F(x_k, p_k) - \nabla F(x_k, p_k) \|^2 + \beta^2 L_F \| \nabla F(x_k, p_k) \|^2 && \text{(A20)} \\ & \le F(x_k, p_k) - \left( \frac{\beta}{2} - \beta^2 L_F \right) \| \nabla F(x_k, p_k) \|^2 + \left( \frac{\beta}{2} + \beta^2 L_F \right) \| \hat\nabla F(x_k, p_k) - \nabla F(x_k, p_k) \|^2 && \text{(A21)} \end{aligned}$$

where the last inequality follows from

$$\| x - y \|^2 = \| x \|^2 + \| y \|^2 - 2\langle x, y \rangle \qquad \text{(A22)}$$

by substituting $x = \nabla F(x_k, p_k)$ and $y = \hat\nabla F(x_k, p_k)$. □
Proof of Theorem 1. 
By Lemma 3, we have

$$F(x_{k+1}, p_{k+1}) \le F(x_k, p_k) \underbrace{- \left( \frac{\beta}{2} - \beta^2 L_F \right) \| \nabla F(x_k, p_k) \|^2}_{A_1} + \underbrace{\left( \frac{\beta}{2} + \beta^2 L_F \right) \| \hat\nabla F(x_k, p_k) - \nabla F(x_k, p_k) \|^2}_{A_2} \qquad \text{(A23)}$$

To upper bound $A_2$, we use Lemma 2 and the fact that $(a + b + c)^2 \le 3a^2 + 3b^2 + 3c^2$, $\forall a, b, c \in \mathbb{R}$, while also assuming $\| p_k^0 - p^*(x_k) \|^2 \le \Delta$. Choosing $a = \frac{L(L+\mu)(1-\alpha\mu)^{\frac{D}{2}}}{\mu} \Delta^{\frac{1}{2}}$, $b = \frac{2M(\tau\mu + L\rho)}{\mu^2} (1-\alpha\mu)^{\frac{D-1}{2}} \Delta^{\frac{1}{2}}$, and $c = \frac{LM(1-\alpha\mu)^D}{\mu}$, we have

$$\| \hat\nabla F(x_k, p_k) - \nabla F(x_k, p_k) \|^2 \le 3\Delta \left( \frac{L^2(L+\mu)^2 (1-\alpha\mu)^D}{\mu^2} + \frac{4M^2(\tau\mu + L\rho)^2}{\mu^4} (1-\alpha\mu)^{D-1} \right) + \frac{3L^2 M^2 (1-\alpha\mu)^{2D}}{\mu^2} \qquad \text{(A24)}$$

where the constant $B$ is defined as

$$B \triangleq 3\Delta \left( \frac{L^2(L+\mu)^2 (1-\alpha\mu)^D}{\mu^2} + \frac{4M^2(\tau\mu + L\rho)^2}{\mu^4} (1-\alpha\mu)^{D-1} \right) + \frac{3L^2 M^2 (1-\alpha\mu)^{2D}}{\mu^2} \qquad \text{(A25)}$$

To upper bound $A_1$, we use the $\mu$-strong convexity of $F(x, p)$ with respect to $x$ from Assumption 1. By the $\mu$-strong convexity of $F(x, p)$, for any fixed $p \in \mathbb{R}^N$, we have

$$\| \nabla F(x_k, p) \| \ge \mu \| x_k - x^* \| \qquad \text{(A26)}$$

Then,

$$\| \nabla F(x_k, p) \|^2 \ge \mu^2 \| x_k - x^* \|^2 \qquad \text{(A27)}$$

Substituting (A24) and (A27) into (A23), we have

$$F(x_{k+1}, p_{k+1}) \le F(x_k, p_k) - \mu^2 \left( \frac{\beta}{2} - \beta^2 L_F \right) \| x_k - x^* \|^2 + \left( \frac{\beta}{2} + \beta^2 L_F \right) B \qquad \text{(A28)}$$

Subtracting $F(x^*, p^*(x^*))$ from both sides, we have

$$F(x_{k+1}, p_{k+1}) - F(x^*, p^*(x^*)) \le F(x_k, p_k) - F(x^*, p^*(x^*)) - \mu^2 \left( \frac{\beta}{2} - \beta^2 L_F \right) \| x_k - x^* \|^2 + \left( \frac{\beta}{2} + \beta^2 L_F \right) B \qquad \text{(A29)}$$

By the $L_F$-smoothness of $F(x, p)$ from Lemma 1, and the fact that $\nabla_x F(x, p) \big|_{x = x^*, p = p^*} = 0$,

$$F(x_k, p_k) - F(x^*, p^*(x^*)) \le \frac{L_F}{2} \| x_k - x^* \|^2 \qquad \text{(A30)}$$

for any $x_k$. Substituting (A30) into (A29), we have

$$F(x_{k+1}, p_{k+1}) - F(x^*, p^*(x^*)) \le \left( \frac{L_F}{2} - \mu^2 \left( \frac{\beta}{2} - \beta^2 L_F \right) \right) \| x_k - x^* \|^2 + \left( \frac{\beta}{2} + \beta^2 L_F \right) B \qquad \text{(A31)}$$

Additionally, by the $\mu$-strong convexity of $F(x, p)$ with respect to $x$, we have

$$\| x_{k+1} - x^* \| \le (1 - \beta\mu)^{\frac{1}{2}} \| x_k - x^* \| \qquad \text{(A32)}$$

$$\| x_k - x^* \| \le (1 - \beta\mu)^{\frac{k}{2}} \| x_0 - x^* \| \qquad \text{(A33)}$$

Then,

$$F(x_k, p_k) - F(x^*, p^*(x^*)) \le \left( \frac{L_F}{2} - \mu^2 \left( \frac{\beta}{2} - \beta^2 L_F \right) \right) (1 - \beta\mu)^{k-1} \| x_0 - x^* \|^2 + \left( \frac{\beta}{2} + \beta^2 L_F \right) B \qquad \text{(A34)}$$

completing the proof. □

References

  1. Caruana, R. Multitask learning. Mach. Learn. 1997, 28, 41–75.
  2. Zhang, Y.; Yang, Q. A survey on multi-task learning. arXiv 2017, arXiv:1707.08114.
  3. McMahan, B.; Moore, E.; Ramage, D.; Hampson, S.; Aguera y Arcas, B. Communication-efficient learning of deep networks from decentralized data. In Proceedings of the AISTATS, Fort Lauderdale, FL, USA, 20–22 April 2017.
  4. Li, T.; Sahu, A.K.; Zaheer, M.; Sanjabi, M.; Talwalkar, A.; Smith, V. Federated optimization in heterogeneous networks. In Proceedings of the MLSys, Austin, TX, USA, 2–4 March 2020.
  5. Karimireddy, S.P.; Kale, S.; Mohri, M.; Reddi, S.; Stich, S.; Suresh, A.T. SCAFFOLD: Stochastic controlled averaging for federated learning. In Proceedings of the ICML, Virtual, 13–18 July 2020.
  6. Fifty, C.; Amid, E.; Zhao, Z.; Yu, T.; Anil, R.; Finn, C. Measuring and harnessing transference in multi-task learning. arXiv 2020, arXiv:2010.15413.
  7. Collins, L.; Hassani, H.; Mokhtari, A.; Shakkottai, S. Exploiting shared representations for personalized federated learning. In Proceedings of the ICML, Virtual, 18–24 July 2021.
  8. Arivazhagan, M.G.; Aggarwal, V.; Singh, A.K.; Choudhary, S. Federated learning with personalization layers. arXiv 2019, arXiv:1912.00818.
  9. Deng, Y.; Kamani, M.; Mahdavi, M. Adaptive personalized federated learning. arXiv 2020, arXiv:2003.13461.
  10. Fallah, A.; Mokhtari, A.; Ozdaglar, A. Personalized federated learning with theoretical guarantees: A model-agnostic meta-learning approach. In Proceedings of the NeurIPS, Virtual, 6–12 December 2020.
  11. Lan, G.; Zhou, Y. An optimal randomized incremental gradient method. Math. Program. 2018, 171, 167–215.
  12. Smith, V.; Chiang, C.K.; Sanjabi, M.; Talwalkar, A.S. Federated multi-task learning. In Proceedings of the NeurIPS, Long Beach, CA, USA, 4–9 December 2017.
  13. Hanzely, F.; Richtárik, P. Federated learning of a mixture of global and local models. arXiv 2020, arXiv:2002.05516.
  14. Liang, P.P.; Liu, T.; Ziyin, L.; Allen, N.B.; Auerbach, R.P.; Brent, D.; Salakhutdinov, R.; Morency, L.P. Think locally, act globally: Federated learning with local and global representations. arXiv 2020, arXiv:2001.01523.
  15. Agarwal, A.; Langford, J.; Wei, C.Y. Federated residual learning. arXiv 2020, arXiv:2003.12880.
  16. Hanzely, F.; Zhao, B.; Kolar, M. Personalized federated learning: A unified framework and universal optimization techniques. arXiv 2021, arXiv:2102.09743.
  17. Kendall, A.; Gal, Y.; Cipolla, R. Multi-task learning using uncertainty to weigh losses for scene geometry and semantics. In Proceedings of the CVPR, Salt Lake City, UT, USA, 18–22 June 2018.
  18. Qian, W.; Chen, B.; Zhang, Y.; Wen, G.; Gechter, F. Multi-task variational information bottleneck. arXiv 2020, arXiv:2007.00339.
  19. Chen, Z.; Badrinarayanan, V.; Lee, C.; Rabinovich, A. GradNorm: Gradient normalization for adaptive loss balancing in deep multitask networks. In Proceedings of the ICML, Stockholm, Sweden, 10–15 July 2018.
  20. Mortaheb, M.; Vahapoglu, C.; Ulukus, S. FedGradNorm: Personalized federated gradient-normalized multi-task learning. In Proceedings of the IEEE SPAWC, Oulu, Finland, 4–6 July 2022.
  21. Amiri, M.M.; Gündüz, D. Machine learning at the wireless edge: Distributed stochastic gradient descent over-the-air. In Proceedings of the IEEE ISIT, Paris, France, 7–12 July 2019.
  22. Amiri, M.M.; Gündüz, D. Over-the-air machine learning at the wireless edge. In Proceedings of the IEEE SPAWC, Cannes, France, 2–5 July 2019.
  23. Vahapoglu, C.; Mortaheb, M.; Ulukus, S. Hierarchical over-the-air FedGradNorm. In Proceedings of the IEEE Asilomar, Pacific Grove, CA, USA, 1–4 November 2022.
  24. Abad, M.S.H.; Ozfatura, E.; Gündüz, D.; Erçetin, Ö. Hierarchical federated learning across heterogeneous cellular networks. In Proceedings of the IEEE ICASSP, Virtual, 4–8 May 2020.
  25. Liu, L.; Zhang, J.; Song, S.H.; Letaief, K.B. Client-edge-cloud hierarchical federated learning. In Proceedings of the IEEE ICC, Virtual, 7–11 June 2020.
  26. Luo, S.; Chen, X.; Wu, Q.; Zhou, Z.; Yu, S. HFEL: Joint edge association and resource allocation for cost-efficient hierarchical federated edge learning. IEEE Trans. Wirel. Commun. 2020, 19, 6535–6548.
  27. Wang, J.; Wang, S.; Chen, R.R.; Ji, M. Demystifying why local aggregation helps: Convergence analysis of hierarchical SGD. arXiv 2020, arXiv:2010.12998.
  28. Zhang, Z.; Luo, P.; Loy, C.; Tang, X. Facial landmark detection by deep multi-task learning. In Proceedings of the ECCV, Zurich, Switzerland, 6–12 September 2014.
  29. Jagannath, A.; Jagannath, J. Multi-task learning approach for automatic modulation and wireless signal classification. In Proceedings of the IEEE ICC, Virtual, 7–11 December 2021.
  30. Bonawitz, K.; Eichner, H.; Grieskamp, W.; Huba, D.; Ingerman, A.; Ivanov, V.; Kiddon, C.; Konecný, J.; Mazzocchi, S.; McMahan, H.; et al. Towards federated learning at scale: System design. In Proceedings of the MLSys, Stanford, CA, USA, 31 March–2 April 2019.
  31. Sinha, A.; Malo, P.; Deb, K. A review on bilevel optimization: From classical to evolutionary approaches and applications. IEEE Trans. Evol. Comput. 2017, 22, 276–295.
  32. Hansen, P.; Jaumard, B.; Savard, G. New branch-and-bound rules for linear bilevel programming. SIAM J. Sci. Comput. 1992, 13, 1194–1217.
  33. Shi, C.; Lu, J.; Zhang, G. An extended kuhn-tucker approach for linear bilevel programming. Appl. Math. Comput. 2005, 162, 51–63.
  34. Bennett, K.P.; Moore, G.M. Bilevel programming algorithms for machine learning model selection. In Proceedings of the Rensselaer Polytechnic Institute, 9 March 2010.
  35. Domke, J. Generic methods for optimization-based modeling. In Proceedings of the AISTATS, La Palma, Canary Islands, 21–23 April 2012.
  36. Ghadimi, S.; Wang, M. Approximation methods for bilevel programming. arXiv 2018, arXiv:1802.02246.
  37. Grazzi, R.; Franceschi, L.; Pontil, M.; Salzo, S. On the iteration complexity of hypergradient computation. In Proceedings of the ICML, Virtual, 13–18 July 2020.
  38. Shaban, A.; Cheng, C.A.; Hatch, N.; Boots, B. Truncated back-propagation for bilevel optimization. In Proceedings of the AISTATS, Naha, Okinawa, Japan, 16–18 April 2019.
  39. Maclaurin, D.; Duvenaud, D.; Adams, R. Gradient-based hyperparameter optimization through reversible learning. In Proceedings of the ICML, Lille, France, 6–11 July 2015.
  40. Ji, K.; Yang, J.; Liang, Y. Bilevel optimization: Convergence analysis and enhanced design. In Proceedings of the ICML, Virtual, 18–24 July 2021.
  41. Hsieh, K.; Harlap, A.; Vijaykumar, N.; Konomis, D.; Ganger, G.R.; Gibbons, P.B.; Mutlu, O. Gaia: Geo-distributed machine learning approaching LAN speeds. In Proceedings of the NSDI, Boston, MA, USA, 27–29 March 2017; pp. 629–647.
  42. Yang, Z.; Chen, M.; Wong, K.; Poor, H.V.; Cui, S. Federated learning for 6G: Applications, challenges, and opportunities. Engineering 2022, 8, 33–41.
Figure 1. Personalized federated learning (PFL) framework with a common network (shown in blue) and small personalized headers (shown in red, green, black).
Figure 2. Hierarchical personalized federated learning (HPFL) framework with a common network (shown in blue) and small personalized headers (shown in red, green, black, orange).
Figure 3. Comparison of task losses in FedGradNorm and FedRep with balanced data allocation among tasks: (a) task 1 (face landmark); (b) task 2 (gender); (c) task 3 (smile); (d) task 4 (glasses); (e) task 5 (pose); (f) task weights.
Figure 4. Comparison of task accuracies achieved via FedGradNorm and FedRep on the RadComDynamic dataset: (a) task 1 (modulation classification); (b) task 2 (signal classification); (c) task 3 (anomaly behavior); (d) task weights.
Figure 5. Comparison of task losses achieved via HOTA-FedGradNorm and the naive equal-weighting scheme on the RadComDynamic dataset for the first cluster: (a) task 1 (modulation classification); (b) task 2 (signal classification); (c) task 3 (anomaly behavior); (d) task weights.
Figure 6. Comparison of task losses achieved via HOTA-FedGradNorm and the naive equal-weighting scheme on the RadComDynamic dataset for the second cluster, where σ₁² = 0.5 and σₗ² = 1 for l ≥ 2: (a) task 1 (modulation classification); (b) task 2 (signal classification); (c) task 3 (anomaly behavior); (d) task weights.
Figure 7. Comparison of task losses achieved via HOTA-FedGradNorm and the naive equal-weighting scheme on the RadComDynamic dataset, where σ₂² = 0.75 and σₗ² = 1 for l ≥ 3: (a) task 1 (modulation classification) when σ₁² = 2; (b) task 1 (modulation classification) when σ₁² = 0.25; (c) task 2 (signal classification) when σ₁² = 2; (d) task 2 (signal classification) when σ₁² = 0.25.
Table 1. RadComDynamic: Dynamic settings.

Dynamic Parameter | Value
Carrier frequency offset std. dev./sample | 0.05 Hz
Maximum carrier frequency offset | 250 Hz
Sample rate offset std. dev./sample | 0.05 Hz
Maximum sample rate offset | 60 Hz
Number of sinusoids in freq. selective fading | 5
Maximum Doppler frequency | 2 Hz
Rician K-factor | 3
Fractional sample delays comprising PDP | [0.2, 0.3, 0.1]
Number of multipath taps | 5
List of magnitudes corresponding to each delay in PDP | [1, 0.5, 0.5]
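For readers reproducing the channel simulation, the Table 1 settings map naturally onto a plain configuration object. The sketch below is illustrative only; the key names are ours and are not part of the RadComDynamic dataset API.

```python
# Illustrative only: key names are ours, not the RadComDynamic API.
RADCOM_DYNAMIC_CHANNEL = {
    "cfo_std_dev_per_sample_hz": 0.05,      # carrier frequency offset std. dev./sample
    "max_cfo_hz": 250,                      # maximum carrier frequency offset
    "sro_std_dev_per_sample_hz": 0.05,      # sample rate offset std. dev./sample
    "max_sro_hz": 60,                       # maximum sample rate offset
    "num_fading_sinusoids": 5,              # sinusoids in the freq.-selective fading model
    "max_doppler_hz": 2,                    # maximum Doppler frequency
    "rician_k_factor": 3,
    "pdp_fractional_delays": [0.2, 0.3, 0.1],
    "num_multipath_taps": 5,
    "pdp_magnitudes": [1, 0.5, 0.5],
}
```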
Table 2. System model hyperparameters.

Hyperparameter | Value
Optimizer | Adam
FedGradNorm:
γ | 0.9
Learning rate (β) | 0.0002
Learning rate (α) | 0.004
HOTA-FedGradNorm:
Number of clusters (C) | 10
Number of clients in each cluster (N) | 3
σₗ² for l ∈ [C] | 1
Hₜₕ | 3.2 × 10²
γ | 0.6
Learning rate (β) | 0.0003
Learning rate (α) | 0.008
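As a rough guide to how these hyperparameters interact, the hedged sketch below wires the Table 2 FedGradNorm values into two Adam optimizers: β drives the shared network parameters, while α drives the learnable task weights. The variable names (`shared_model`, `task_weights`) are hypothetical placeholders, not the authors' code.

```python
import torch

# FedGradNorm hyperparameters from Table 2 (constant names are ours).
GAMMA = 0.9    # asymmetry hyperparameter in the GradNorm objective
BETA = 2e-4    # learning rate for the network parameters
ALPHA = 4e-3   # learning rate for the task weights w_i

# Hypothetical stand-ins for the shared encoder and the per-task weights.
shared_model = torch.nn.Linear(256, 256)
num_tasks = 5
task_weights = torch.ones(num_tasks, requires_grad=True)

# Two Adam optimizers, one per parameter group, as implied by Table 2.
model_opt = torch.optim.Adam(shared_model.parameters(), lr=BETA)
weight_opt = torch.optim.Adam([task_weights], lr=ALPHA)
```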
Table 3. Shared network model.

Network 1 | Network 2
Conv2d(1, 16, 5) | FC(256, 512)
MaxPool2d(2, 2) | FC(512, 1024)
Conv2d(16, 48, 3) | FC(1024, 2048)
MaxPool2d(2, 2) | FC(2048, 512)
Conv2d(48, 64, 3) | FC(512, 256)
MaxPool2d(2, 2) | —
Conv2d(64, 64, 2) | —
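A minimal PyTorch sketch of the two shared encoders in Table 3 follows. Layer sizes are taken directly from the table; the ReLU activations, the flattening step, and the pairing of Network 1 with image inputs (e.g., MTFL) and Network 2 with feature-vector inputs (e.g., RadComDynamic) are our assumptions, since the table lists only the Conv2d/MaxPool2d/FC layers.

```python
import torch
import torch.nn as nn

class Network1(nn.Module):
    """Convolutional shared encoder (assumed here to serve image inputs)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 5), nn.ReLU(), nn.MaxPool2d(2, 2),
            nn.Conv2d(16, 48, 3), nn.ReLU(), nn.MaxPool2d(2, 2),
            nn.Conv2d(48, 64, 3), nn.ReLU(), nn.MaxPool2d(2, 2),
            nn.Conv2d(64, 64, 2), nn.ReLU(),
        )

    def forward(self, x):
        # Flatten the conv feature maps into the shared representation.
        return torch.flatten(self.features(x), start_dim=1)

class Network2(nn.Module):
    """Fully connected shared encoder (assumed here to serve vector inputs)."""
    def __init__(self):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Linear(256, 512), nn.ReLU(),
            nn.Linear(512, 1024), nn.ReLU(),
            nn.Linear(1024, 2048), nn.ReLU(),
            nn.Linear(2048, 512), nn.ReLU(),
            nn.Linear(512, 256),
        )

    def forward(self, x):
        return self.layers(x)
```

In the PFL framework of Figure 1, either encoder would be followed by the small client-specific header networks, which are trained locally and never aggregated.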
Table 4. Comparison of task losses after 100 epochs in FedGradNorm and FedRep; imbalanced data allocation among tasks.

Tasks | Face Landmark | Gender | Smile | Glasses | Pose
FedRep loss | 33.28 | 0.66 | 0.60 | 0.44 | 1.1
FedGradNorm loss | 33.25 | 0.56 | 0.57 | 0.43 | 1.1