Article

An Efficient Attribute-Based Participant Selecting Scheme with Blockchain for Federated Learning in Smart Cities

School of Computer Science and Technology, University of Science and Technology of China, Hefei 230027, China
* Author to whom correspondence should be addressed.
Computers 2024, 13(5), 118; https://doi.org/10.3390/computers13050118
Submission received: 8 April 2024 / Revised: 30 April 2024 / Accepted: 30 April 2024 / Published: 9 May 2024

Abstract

In smart cities, large amounts of multi-source data are generated all the time. Models established via machine learning can mine information from these data and enable many valuable applications. Owing to concerns about data privacy, it is becoming increasingly difficult for the publishers of these applications to obtain users’ data, which hinders the previous paradigm of centralized training based on large-scale data collection. Federated learning is expected to prevent the leakage of private data by allowing users to train models locally. Existing works, however, generally ignore the architectural requirements of real scenarios. Thus, there remain unexplored challenges in applying federated learning to smart cities, such as avoiding sharing models with improper parties under strict privacy requirements and designing satisfactory incentive mechanisms. Therefore, we propose an efficient attribute-based participant selection scheme to ensure that only clients who meet the requirements of the task publisher can participate in training under the premise of strong privacy requirements, so as to improve efficiency and avoid attacks. We further extend our scheme to encourage clients to take part in federated learning and provide an audit mechanism based on a consortium blockchain. Finally, we present an in-depth discussion of the proposed scheme by comparing it to different methods. The results show that our scheme can improve the efficiency of federated learning through reliable participant selection and promote the extensive use of federated learning in smart cities.

1. Introduction

The concept of smart cities has become central to contemporary discussions on urban development, where the integration of Information and Communication Technology (ICT) is pivotal in transforming the city’s infrastructure and services [1,2]. Smart cities utilize advanced data analytics and IoT technologies to optimize resources, improve service delivery, and enhance the quality of urban life. These urban areas are defined by their ability to efficiently manage vast amounts of data generated from a multitude of sources—ranging from traffic sensors to healthcare records—aiming to improve sectors such as energy, healthcare, and community governance, as Figure 1 shows. Despite the advantages, the challenge of data acquisition persists, exacerbated by strict data protection regulations and the growing demand for privacy, which contribute to the formation of fragmented data ecosystems or ‘data islands’ within urban settings. In response, federated learning emerges as an effective approach to navigate these challenges. This method allows for the decentralized training of models on local data held by various stakeholders, thereby adhering to privacy concerns without centralizing sensitive information. Since its initial introduction by Google [3], the application of federated learning has expanded, driven by ongoing research aimed at enhancing its efficiency and accuracy [4,5,6,7]. However, the implementation of federated learning within smart cities is fraught with obstacles, such as high communication costs; difficulties in achieving model convergence in diverse, non-IID data environments; and the critical need for robust security measures to safeguard against potential data breaches during the model training process [8,9,10,11].
In existing federated learning systems, the number of clients involved in each round of updates is usually fixed. In the context of a smart city, federated learning schemes normally select a small number of clients randomly to participate in each round, due to limitations in the participants’ status and network conditions. However, since real-world clients are massively heterogeneous, such random selection of clients amplifies the adverse impact of data heterogeneity [12]. Therefore, it is very important to select appropriate clients for training. Current schemes either select clients with higher statistical utility based on a measurement of their contributions to model updates [13] or select clients based on computing resources and communication constraints [14]. Although these schemes achieve certain effects, some challenges remain. For example, some schemes need to analyze the private gradients uploaded by participants or consume substantial resources for learning and testing, while others can only select participants at a coarse-grained level.
Federated learning prevents the direct upload of private data, but the issue of privacy leakage has not been completely resolved. Traditional client selection schemes in federated learning typically allow participants to train models with local datasets and upload gradients to update the global model, so that the central server can use this information to avoid model poisoning and select participants for the next round of training to favor model convergence [15,16]. However, some scholars have pointed out that this also causes serious privacy disclosure [8]. To solve this problem, some studies have used homomorphic encryption [17] and differential privacy [18] to mask the gradients, but this undoubtedly prevents the central server from selecting participants, because the server cannot obtain valid information from encrypted or obfuscated gradients. In addition, existing federated learning schemes usually assume that the participants unconditionally use local resources to train the models and upload gradients to the central server, which is not sustainable in reality [19]. Some scholars have looked at federated learning from the perspective of crowdsourcing [20]. Inspired by this, we believe that, in smart cities, the publisher of a federated learning task should have no control over the participants, and the clients should choose whether or not to use local data for training. Therefore, it is necessary to set up an incentive mechanism to attract participants to join the training [11].
In the context of smart cities, we have sufficient reason to design a federated learning framework from the perspective of crowdsourcing. Such a framework should select participants during training to improve the training efficiency, block malicious adversaries before training, and encourage more high-quality clients to participate in constructing the models. In recent years, attribute-based encryption has been widely studied as a promising direction of functional encryption [21]. Ciphertext-policy attribute-based encryption (CP-ABE) can conduct fine-grained access control for users conforming to specific policies without revealing any private data. This enables us to separate the participant selection module from the federated learning module, making it compatible with complete gradient protection techniques such as homomorphic encryption. To the best of our knowledge, there is no prior research on its application in federated learning. In addition, a consortium blockchain is a tamper-resistant and traceable distributed ledger that can be used to record the contributions of participants.
To better understand our scheme, let us consider a scenario in which a company needs to train a model of people’s desire to consume different goods. It is hoped that as many clients as possible in the region will participate, even if this is done at a cost. At the same time, the company wishes to eliminate malicious attacks from competitors and select participants with an appropriate data distribution in training to improve the learning efficiency. Although stringent data confidentiality regulations prevent it from deducing the appropriateness from gradients, it can still apply an attribute-based encryption scheme to select participants. Specifically, the task publisher develops a policy for each round of training so that only those who meet this policy can decrypt and participate in subsequent training. At the same time, participants can record decryption logs in a blockchain, which can provide both non-repudiation credentials to incentivize the participants and an auditing report to trace the transactions if a malicious adversary tries to disrupt the model.
The contributions of this article are as follows.
  • We propose a client selecting framework in federated learning based on ciphertext-policy attribute-based encryption, which extends traditional federated learning from the perspective of crowdsourcing. Our scheme can select appropriate participants on the premise of protecting gradient privacy.
  • An incentive mechanism based on blockchain is proposed, so that the profits from participating in training accrue to the clients. The use of immutable smart contracts can greatly improve the enthusiasm of clients for participating in federated learning.
  • The security of the proposed scheme is proven, and the performance of the proposed scheme is evaluated. The experiments show that the method proposed in this paper can perform better than the existing methods.
The rest of our article is organized as follows. Section 2 presents an analysis of related work. Section 3 briefly describes the preliminaries, including the security model of this scheme. Section 4 describes the workflow and the architecture of the proposed CP-ABE scheme. Section 5 characterizes the IND-CPA security model and describes other security proofs. Section 6 compares the performance of our proposed scheme with that of other recent schemes. Finally, Section 7 draws the conclusions.

2. Related Work

The concept of federated learning was proposed by researchers at Google [3], who devised an interesting virtual keyboard application. Federated learning, as defined by Kairouz et al. [9], is a machine learning setting where multiple entities (clients) collaborate in solving machine learning problems, under the coordination of a central server or service provider. Each client’s raw data are stored locally and not exchanged or transferred. A typical federated learning process consists of five steps: client selection, broadcast, client computation, aggregation, and model updates. Among them, it is a very challenging task to select appropriate clients during training, rather than performing random selection, and there are still some problems to be solved in the existing client selection schemes.
Zhang et al. [14] selected the clients according to the resource information sent by them, such as the computing ability and channel state. However, this may mean that clients with a large amount of data are unlikely to participate in training. Chai et al. [12] stratified the clients and adaptively selected those with similar training performance per round in order to mitigate heterogeneity without compromising the model accuracy, but this means that the central server has to control all participants to capture the training time on-the-fly. Fan et al. [22] used importance sampling to select clients, i.e., to select clients by utility. In addition, they developed an exploration–exploitation strategy to select participants. However, each of these clients was designed to upload complete model updates to the central server at each round, ignoring the fact that not all model updates contribute equally to the global model. As an improvement on this work, Li et al. [23] proposed PyramidFL, which calculated the importance ranking of each client based on feedback from past training rounds to determine a list of qualified clients for the next round of training, but the central server still obtains private information, such as the gradients and loss uploaded by clients. Wang et al. [24] put forward an experience-driven federated learning framework (Favor) based on reinforcement learning, which can intelligently select the clients participating in each round of federated learning to offset the deviation caused by non-IID. However, the disadvantage is that the efficiency of reinforcement learning restricts the performance of the system, and sometimes it is unclear why it is effective.
We can consider federated learning from the perspective of crowdsourcing, which may be an important direction for future federated learning because few companies have as many registered users as Google. Thus, we have a strong motivation to respect participants’ willingness to participate in training while fully protecting their data. The additional challenge that needs to be addressed to apply federated learning in smart city scenarios is participant motivation [11], and most existing federated learning schemes assume that the participants use local data for training and upload model updates unconditionally. This is not realistic, as participants have the right to claim remuneration for the resources that they consume to participate in training. In order to provide appropriate incentives, Sarikaya et al. [25] designed a Stackelberg game to motivate participants to allocate more computing resources. Richardson et al. [26] designed payment structures based on the impact characteristics of data points on the model loss function to motivate clients to provide high-quality data as soon as possible. In many applications, blockchain is considered to be the best solution to achieve an incentive mechanism, because it is immutable and auditable and has inherent consensus [27]. Almutairi et al. [28] proposed a solution integrating federated learning with a lightweight blockchain, enhancing the performance and reducing the gas consumption while maintaining security against data leaks. Weng et al. [29] proposed a value-driven incentive mechanism based on blockchain to force participants to behave correctly. Bao et al. [30] designed a blockchain platform that allows honest trainers to earn a fair share of profits from trained models based on their contributions, while malicious parties can be promptly detected and severely punished. Most of these blockchain platforms complete the verification and audit of gradient updates via the blockchain itself, while ignoring the costs. Moreover, these pure blockchains overemphasize transactions, without taking into account the difference in data value between different participants. We believe that, from the perspective of crowdsourcing, it is natural for the task publisher to pay high-value participants who meet his/her requirements.
In order to achieve a balance between privacy, performance, and incentives in federated learning, we introduce ciphertext-policy attribute-based encryption into participant selection. Sahai and Waters [31] proposed an attribute-based encryption scheme in 2005. Their scheme used a single threshold access structure: only when the number of attributes owned by a user is greater than or equal to a threshold value in the access policy can the ciphertext be decrypted successfully. Bethencourt et al. [32] proposed the first ciphertext-policy attribute-based encryption scheme in 2007. The keys were associated with an attribute set, and the access structure was embedded in the ciphertext; only when a user’s own attribute set meets the access structure set by the data owner can the user successfully decrypt the ciphertext, and an access tree structure is used in this scheme. In order to reduce the storage and transmission overhead of the CP-ABE scheme, Emura et al. [33] proposed the first scheme with a fixed ciphertext length, which improved the efficiency of encryption and decryption. However, all these schemes adopt a simple “AND” gate access structure. Waters [34] proposed a new linear secret sharing scheme (LSSS) to represent the access structure, which can realize any monotone access structure, such as “AND”, “OR”, and threshold operations over attributes. This scheme is more expressive, flexible, and efficient.
In smart city scenarios, there are many complex situations, such as the attributes of the participants being revoked. Updating participants’ attributes in a timely and effective manner guarantees system security. Pirretti et al. [35] proposed a CP-ABE scheme with indirect attribute revocation in order to solve the loose coupling problem in social networks. Zhang et al. [36] proposed a CP-ABE scheme based on an “AND” gate structure with attribute revocation, but this scheme has poor access structure expressiveness. Hur et al. [37] proposed an access control scheme with coercive revocation capabilities to solve a problem in the access permissions caused by changes in the users’ identity in the system. They introduced the concept of attribute groups. Users with the same attributes belong to the same attribute group and are assigned the same attribute group key. Once a member of the attribute group is revoked, a new group key is generated and sent to all group members except the revoked user. The ciphertext is updated in the cloud with the new group key, which makes it impossible for the revoked user to decrypt the data. However, their scheme does not prevent a collusion attack between current and revoked users. In order to prevent cooperative decryption between users who have revoked attributes and users who do not have the attributes, Li et al. [38] proposed a CP-ABE scheme that resists collusion attacks and supports attribute revocation. However, the computational complexity of their scheme is still too high.
To address the challenges identified in the related work, our study introduces a novel federated learning framework that utilizes ciphertext-policy attribute-based encryption (CP-ABE) and a consortium blockchain. This methodology combines the strengths of CP-ABE to provide fine-grained access control and ensure privacy with the transparency and traceability of blockchain to manage and audit participant contributions effectively. The selection of participants based on attribute encryption ensures that only those who meet pre-defined criteria can access and process the training data, thereby enhancing the privacy and security of the data used in our federated learning model. Additionally, the consortium blockchain serves as a decentralized ledger to record all participant activities, which supports non-repudiation and helps in maintaining a trustworthy environment for all parties involved.

3. Preliminary

3.1. Federated Learning

Federated learning is a promising research area for distributed machine learning that protects privacy. In the process of federated learning, the task publisher can train models with the help of other participants. Instead of uploading private data to the central server, participants obtain a shared global model from the server and train it on a local dataset. These participants then upload the gradients or weights of the local model to the task publisher to update the global model. In particular, taking FedAVG as an example, the objective function under federated learning is rewritten with the non-convex loss function of a typical neural network.
f(w) = \sum_{k=1}^{K} \frac{n_k}{n} F_k(w), \qquad \text{where} \quad F_k(w) = \frac{1}{n_k} \sum_{i \in P_k} f_i(w)
Here, K denotes the total number of participants, and n_k denotes the number of training samples held by the k-th participant. The algorithm itself is quite simple. First, a subset of clients is selected in each round; each selected client then performs local epochs of training and uploads its weight update to the server:
w \leftarrow w - \eta \nabla \ell(w; b)
Then, the server collects all the w_{t+1}^k and computes their weighted average to obtain the new global w_{t+1}, which is sent back to each participant.
w_{t+1} \leftarrow \sum_{k=1}^{K} \frac{n_k}{n} w_{t+1}^{k}
Finally, each participant replaces its local weights from the last epoch with the delivered update and trains a new epoch. The system repeats the above three steps until the server determines that w has converged.
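To make the aggregation rule above concrete, the following minimal sketch implements one FedAvg round in plain NumPy; the function names, the toy gradient, and the scalar example data are our own illustration and are not taken from the original FedAvg implementation.

```python
import numpy as np

def client_update(w, batches, grad_fn, lr, epochs):
    # Local training: w <- w - eta * grad(l(w; b)) for each mini-batch b.
    w = w.copy()
    for _ in range(epochs):
        for b in batches:
            w -= lr * grad_fn(w, b)
    return w

def fedavg_round(w_global, clients, grad_fn, lr=0.1, epochs=1):
    # One round: every client trains locally, then the server averages
    # the returned weights in proportion to local dataset size n_k / n.
    sizes = [sum(len(b) for b in batches) for batches in clients]
    n = sum(sizes)
    local_ws = [client_update(w_global, batches, grad_fn, lr, epochs)
                for batches in clients]
    return sum((nk / n) * wk for nk, wk in zip(sizes, local_ws))

# Toy example: minimize sum_i (w - x_i)^2 over two clients' scalar data.
grad = lambda w, b: np.mean(2 * (w - np.asarray(b)), keepdims=True)
clients = [[[1.0, 2.0]], [[5.0]]]   # client 1 holds {1, 2}; client 2 holds {5}
w = np.zeros(1)
for _ in range(50):
    w = fedavg_round(w, clients, grad)
print(w)  # converges to the sample-weighted mean 8/3
```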

3.2. Bilinear Pairing

Bilinear pairing, also known as bilinear mapping, was originally introduced to build functional encryption schemes. At present, most ABE schemes [39] are based on bilinear pairing cryptography, and its security has been recognized by many experts. The general definition of a bilinear pairing is given below.
Consider three cyclic groups G_1, G_2, and G_T, each of prime order p. Typically, G_1 and G_2 are groups of points on an elliptic curve over a finite field, and G_T is a multiplicative group of a finite field. A bilinear pairing is a map

e : G_1 \times G_2 \rightarrow G_T
that satisfies the following properties.
  • Bilinearity: For all elements u, v ∈ G_1 and w, z ∈ G_2, the pairing operation respects the distributive property over the group operation. That is,
    e(u \cdot v, w) = e(u, w) \cdot e(v, w)
    e(u, w \cdot z) = e(u, w) \cdot e(u, z)
    This property can be extended to the exponents in the groups:
    e(u^{a}, w^{b}) = e(u, w)^{ab}
    for all a, b ∈ Z. This property is fundamental in enabling many cryptographic protocols because it allows the pairing operation to “interact” with the group operations in a predictable way.
  • Non-degeneracy: The pairing is non-trivial in the sense that there exist some u ∈ G_1 and w ∈ G_2 such that e(u, w) ≠ 1 in G_T. This ensures that the pairing map is not constantly trivial and is thus useful for cryptographic applications. It is often required that e(u, w) ≠ 1 for all u ≠ 1 in G_1 and all w ≠ 1 in G_2.
  • Symmetry (in some cases): For some pairings, particularly symmetric pairings, G_1 = G_2 and the pairing satisfies e(u, w) = e(w, u). This symmetry is not always required or desired, depending on the cryptographic application.
  • Computability: There must be an efficient algorithm to compute e(u, w) for all u ∈ G_1 and w ∈ G_2. The efficiency of this computation is critical because the practicality of pairing-based cryptographic protocols depends heavily on the ability to compute these pairings quickly.
Bilinear pairings are not only theoretical constructs but are practically implemented using specific types of elliptic curves, such as supersingular curves or curves with a low embedding degree, which provide the necessary mathematical structure to support efficient and secure pairings. These properties make bilinear pairings powerful tools in modern cryptographic systems, providing functionalities that are not feasible with traditional cryptographic primitives.
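As a quick sanity check of the bilinearity and symmetry properties, the following sketch uses the charm-crypto library (an assumption on our part; the paper does not name an implementation) with a symmetric supersingular curve, so that G_1 = G_2 = G:

```python
# Requires the charm-crypto library (pip install charm-crypto).
from charm.toolbox.pairinggroup import PairingGroup, ZR, G1, pair

group = PairingGroup('SS512')            # symmetric pairing e: G x G -> GT
g = group.random(G1)                     # random group element
a, b = group.random(ZR), group.random(ZR)

# Bilinearity: e(g^a, g^b) = e(g, g)^(ab)
assert pair(g ** a, g ** b) == pair(g, g) ** (a * b)

# Symmetry (holds for this symmetric pairing): e(u, w) = e(w, u)
u, w = group.random(G1), group.random(G1)
assert pair(u, w) == pair(w, u)
```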

3.3. Consortium Blockchain

Blockchain is essentially a decentralized database. It adopts distributed accounting and relies on ingenious algorithms based on cryptography to achieve the characteristics of tamper-proofing and traceability. These features can establish a foundation of trust for a fair distribution of incentives in federated learning [10].
There are three main types of blockchain, namely the public chain, the private chain, and the consortium chain. The essential differences between them concern who has write permission and how distributed they are. The public chain is highly decentralized, so anyone can access it and view other nodes, but the cost is that the ledgers are very slow to update. At the other extreme is the private chain, where access and writing are entirely controlled by a single agency, but this also leads to an excessive concentration of power. The blockchain most appropriate for federated learning is the consortium chain, which is jointly maintained by its members and is highly suitable for transaction clearing within the consortium. It is more reliable than a purely private chain and has better performance than a public chain.
Regardless of the type of blockchain applied in a specific scenario, the data structure is a linked list of ledgers containing transaction records, as Figure 2 shows. Each block in the linked list contains hash values of the previous block, a new transaction record, and other information, such as timestamps. This structure ensures that each block is not tampered with and any nodes can easily trace back each transaction along the pointer.
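The hash-linked structure described above is easy to state in code. The following minimal sketch (our own illustration, using SHA-256; field names are illustrative) shows why tampering with any block invalidates every later hash pointer:

```python
import hashlib, json, time

def block_hash(block):
    # Deterministic hash of a block's contents.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def make_block(prev_block, transactions):
    # Each block stores the hash of its predecessor, a timestamp, and the
    # new transaction records; altering any earlier block changes its hash
    # and breaks every pointer after it.
    return {
        'prev_hash': block_hash(prev_block) if prev_block else '0' * 64,
        'timestamp': time.time(),
        'transactions': transactions,
    }

genesis = make_block(None, [])
b1 = make_block(genesis, [{'cid': 'client-07', 'flag_hash': 'ab12...'}])
b2 = make_block(b1, [{'cid': 'client-19', 'flag_hash': 'cd34...'}])
assert b2['prev_hash'] == block_hash(b1)   # traceability along the chain
```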

3.4. Security Model

Let Π = (Setup, KeyGen, Encrypt, Reencrypt, Decrypt) be our scheme. To define a selective IND-CPA security model for Π, the following game Game_{Π,A} is designed, involving a PPT adversary A and a PPT challenger C.
Init: The adversary A controls a set of attribute authorities AA_k ∈ AA (where at least two authorities in AA are not controlled by A), and the remaining AAs are controlled by the challenger C. The adversary A submits the access structure A* to be challenged and then sends it to the challenger C.
Setup: C runs the setup algorithm in order to obtain the master keys MSK and public parameters PP. Subsequently, the challenger C sends the public parameters PP to the adversary A. Meanwhile, the challenger C initializes the user list, which includes the authorization attributes and the challenged access structure A*.
Phase 1: A adaptively sends sets of attributes S. C generates the corresponding secret keys SK_1, …, SK_{q_1}, which are returned to A.
Challenge: A submits two messages M_0 and M_1 of equal length, together with the access structure A*, to C. It is required that, for every S queried by A, S cannot satisfy A*. C flips a coin b ∈ {0, 1} and encrypts M_b under the access structure A* to obtain CT*. Finally, C sends the ciphertext CT* to A.
Phase 2: Repeat Phase 1. For every S queried by A, S cannot satisfy the access structure A*.
Guess: A outputs a guess b′ ∈ {0, 1} for b and wins the game if b′ = b.
The advantage of A in this game is defined as follows:

Adv(A) = \left| \Pr[b' = b] - \frac{1}{2} \right|
We note that the model can easily be extended to handle chosen-ciphertext attacks by allowing for decryption queries in Phase 1 and Phase 2.
Definition 1.
The protocol Π is CPA-secure if no probabilistic polynomial-time (PPT) adversary has a non-negligible advantage in the above game.
Under our security model, the task publisher and its central servers are considered to be honest but curious. In other words, they do not counterfeit, attack, or try to decipher the data uploaded by the owners, and they faithfully execute the algorithms. However, they may have a certain degree of curiosity and may bypass some restrictions to access users’ data or the system parameters directly. Meanwhile, the participants may be malicious, and they may attempt to access data that exceed their permissions in collusion with others.

4. Proposed Scheme

In this section, we provide our proposed system framework and details of our scheme, and we then verify their appropriateness. Figure 3 shows the framework diagram of our scheme.

4.1. System Framework

1. The central authority (CA) receives a security parameter λ and generates the public parameters (PP) before publishing them in the system.
2. The task publisher builds a global model by selecting a set of attribute names and delegating the attribute authorities to generate different attribute value keys for potential participants.
3. The task publisher initializes the model weights and establishes an access policy before generating a linear secret sharing matrix (M, ρ). Then, he/she uses the public keys obtained from the AAs to encrypt a flag as a credential to participate in the current communication round.
4. If some participants’ attributes change, the task publisher obtains the latest version of the attribute public keys from the attribute authorities, re-encrypts the ciphertext, and attaches a digital signature.
5. Participants download the ciphertext from the central server (CS), verify the signature, and then perform decryption operations. If a participant meets the access policy set by the task publisher, such as requirements on data quantity, data quality, and computing ability, he can successfully decrypt the ciphertext and obtain the flag. After an interaction with the server, such as homomorphic encryption key negotiation, the participant can use local data to carry out the next round of updates and return the updated weights to the central server.
6. Participants upload the decrypted flag of the current round to the consortium chain as a credential for an incentive.
7. After verifying the flags sent by the selected participants, the publisher can use the weight updates to calculate the new global weights, repeating this process until the global model converges.

4.2. Algorithms

We describe the specific algorithm as follows.

4.2.1. Global Setup: Setup(λ) → PP

The central authority (CA) first selects a system security parameter λ, and then selects a large prime p as the order of the multiplicative groups G and G_T, so that e: G × G → G_T is the bilinear map. Let g be a generator of G. Finally, it chooses a hash function H: {0,1}* → G, used to map binary sequences such as identifiers or attribute values to group elements.
After these top-level parameters are set, the central authority runs the initialization algorithm. It generates the master keys MSK by choosing α, τ, a ∈ Z_p randomly:

MSK = (\alpha, \tau, a)

Then, the public parameters PP are as follows:

PP = \left( g,\; g^{a},\; e(g, g)^{\alpha} \right)

In addition, all AAs are required to register with the CA to obtain unique identifiers aid, which are used to prove their legal identities.
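A minimal sketch of this Setup step is given below, again assuming the charm-crypto library (not named by the paper) and a symmetric pairing group:

```python
# Sketch of Setup(lambda) -> PP, assuming charm-crypto.
from charm.toolbox.pairinggroup import PairingGroup, ZR, G1, pair

group = PairingGroup('SS512')                 # fixes G, GT and e: G x G -> GT
g = group.random(G1)                          # generator g of G
alpha, tau, a = (group.random(ZR) for _ in range(3))

MSK = (alpha, tau, a)                         # master secret keys, kept by CA
PP = {                                        # public parameters, published
    'g': g,
    'g^a': g ** a,
    'e(g,g)^alpha': pair(g, g) ** alpha,
}
H = lambda x: group.hash(x, G1)               # H: {0,1}* -> G for attributes
```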

4.2.2. Key Generation: KeyGen(GP) → PK

In our system, each AA manages different attributes. The attribute authority with identifier aid is denoted as AA_aid, and the attribute set managed by it is S_aid. Once an AA is initialized, it begins to execute a series of key generation programs.
When a task publisher needs to publish a federated learning task, he can pre-determine the attributes of the participants involved in the training, such as the computational performance, data set distribution, data quantity, and the willingness to participate, etc. He then instructs the attribute authority to generate the associated attribute keys on his behalf.
First, to distinguish between different versions of the attribute keys due to attribute revocation, the authority chooses a random number v_x ∈ Z_p as the initial version number of attribute x. The public attribute keys PK_x are generated as

PK_x = \left( PK_{1,x} = H(x)^{v_x},\; PK_{2,x} = H(x)^{v_x \tau} \right)
In particular, the attribute authority is also responsible for updating the keys. If the attribute x of a participant changes, the authority runs the algorithm NKeyGen(MSK, VK) to generate update keys. It chooses a new random version number v_x^n ∈ Z_p for attribute x and, taking the master key MSK as input, computes the update keys as

UK_x = \left( UK_{1,x} = \frac{v_x^{n}}{v_x},\; UK_{2,x} = \frac{v_x - v_x^{n}}{v_x \tau} \right)
The new version number of the attribute x, the update keys U K x that can be used to update the secret keys of unrevoked participants, and the ciphertexts that are associated with the revoked attribute x are the outputs of this algorithm. In addition to generating the update keys, the attribute authority also needs to update the public keys of the revoked attribute as
PK_x^{N} = \left( PK_{1,x}^{N} = H(x)^{v_x^{n}},\; PK_{2,x}^{N} = H(x)^{v_x^{n} \tau} \right)
The attribute authority then sends the update keys, over a secure channel, to the task publisher and to all parties holding the corresponding attribute that have not been revoked. The new public attribute keys for the revoked attribute are available to all owners from the institution’s public bulletin board. All generated secret keys are centrally managed by the AAs and isolated from the outside. Malicious adversaries cannot obtain any information about the private keys through the network, while all public keys are publicized.
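The update keys are simply ratios of version numbers in Z_p, so a version switch costs one exponentiation. The toy check below (our own illustration with a small Mersenne prime standing in for the pairing group order) verifies the exponent arithmetic behind K_x^{UK_{1,x}} = H(x)^{v_x^n t}:

```python
# Toy check of the version-update exponent arithmetic modulo a prime p.
p = 2**61 - 1                       # illustrative prime (stand-in for group order)
v_x, v_x_new, t = 12345, 67890, 424242

uk1 = (v_x_new * pow(v_x, -1, p)) % p        # UK_{1,x} = v_x^n / v_x in Z_p
# Raising K_x = H(x)^{v_x t} to UK_{1,x} multiplies the exponent by UK_{1,x}:
assert (v_x * t * uk1) % p == (v_x_new * t) % p
```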

4.2.3. Registration: UserReg(MSK, VK, S) → SK

Participants in federated learning can be community residents with valuable data. From the perspective of crowdsourcing, when an institution publishes a federated learning task, each participant has absolute control over their own data, so they can independently decide whether to join the training of this model and obtain certain benefits. On the other hand, each participant is highly specific, as their computing power, amount of data, and data distribution all differ. Therefore, participants need to register with the trusted attribute authority before joining the training, so as to obtain the attribute keys corresponding to their respective compute and data value.
When a participant attempts to join the federated learning system, he can declare his set of attributes to the attribute authority for verification. If the information provided by the client is sufficient to prove the set of attributes that he claims, the attribute authority runs the key generation algorithm UserReg to generate the unique secret keys SK for the participant. The algorithm takes the master keys MSK, a set of attributes S, and the version numbers {v_x}_{x∈S} corresponding to the attributes as inputs. It then computes the participant’s secret keys by choosing a random number t ∈ Z_p as

SK = \left\{ K = g^{\alpha} \cdot g^{a t},\; L = g^{t},\; \forall x \in S: K_x = H(x)^{v_x t} \right\}
When a certain attribute x of a user is revoked—for example, he leaves an organization—the attribute authority needs to update the decryption private keys for other members of the attribute group, as follows:
K_x^{N} = K_x^{UK_{1,x}} = H(x)^{v_x^{n} t}
The attribute authority returns the updated keys over a secure channel to the users who have not been revoked. If the attribute of a participant is revoked, the participant cannot use the previous attribute keys to decrypt the ciphertext. However, a participant whose attributes are not revoked only needs to update the keys corresponding to the revoked attribute as
SK^{N} = \left( K,\; L,\; K_x^{N},\; \forall x' \in S \setminus \{x\}: K_{x'} \right)

4.2.4. Server Encryption: Enc(GP, PK, f, A) → CT

When a task publisher needs to share the global gradient information updated in each training round with the participants, the task publisher first needs to formulate an appropriate access policy according to the model to be trained. The guiding principle is that no information, including gradients, can be obtained from a single participant, and all participants who meet the access policy can obtain positive benefits after model training. For example, the task publisher can specify quantitative indicators for computing ability, data quantity, degree of independence, and data distribution.
The algorithm takes in the access policy created by the task publisher and then outputs an n × l LSSS access matrix M, with ρ(·) mapping its rows to attributes. Now, A = (M, ρ), where ρ = (att_{ρ(1)}, att_{ρ(2)}, …, att_{ρ(n)}).
Typically, in order to fully ensure gradient privacy, the participant may use homomorphic encryption with the server to protect the updated gradient information, which requires the negotiation of the homomorphic encryption keys with the task publisher. This means that, before each round of training, the task publisher needs to identify who the participants are. To do this, the central server secretly selects a flag f as a credential to participate in the training round. Participants who can successfully decrypt and return the f can participate in the next training round. Therefore, the flag serves as the ciphertext that needs to be encrypted.
After this, the central server (CS) chooses a random vector ξ ∈ Z_p^l with the secret s as its first entry. Let λ_i denote M_i · ξ, where M_i is the i-th row of M. For each i ∈ [1, n], the central server randomly chooses r_i ∈ Z_p and computes the following ciphertext components:

C = f \cdot e(g, g)^{\alpha s},\quad C' = g^{s},\quad C_{0,i} = g^{a \lambda_i} H(\rho(i))^{-r_i v_{\rho(i)}},\quad C_{1,i} = H(\rho(i))^{v_{\rho(i)} r_i \tau},\quad C_{2,i} = g^{r_i} \quad (i = 1, \ldots, n)
Lastly, CS generates ciphertext C T .
CT = \left\{ (M, \rho),\; C,\; C',\; \{ C_{0,i}, C_{1,i}, C_{2,i} \}_{i \in [1, n]} \right\}
As is well known, the attributes owned by participants in federated learning may change dynamically over time. Thus, in order to support attribute revocation, the central server controlled by the task publisher needs to re-encrypt the ciphertext. In other words, when a participant’s attributes are revoked, the central server re-encrypts the ciphertext to prevent malicious or inappropriate participants from training the model. If some user’s attribute x is revoked, the central server receives an updated message sent by some of the attribute authorities. Assume that the updated key is U K x . After re-encryption, the new ciphertext is as follows:
CT^{N} = \left\{ C^{N} = C,\; C'^{N} = C',\; \forall i \in [1, n]: C_{2,i}^{N} = C_{2,i};\; \rho(i) \neq x: C_{0,i}^{N} = C_{0,i},\; C_{1,i}^{N} = C_{1,i};\; \rho(i) = x: C_{0,i}^{N} = C_{0,i} \cdot C_{1,i}^{UK_{2,x}},\; C_{1,i}^{N} = C_{1,i}^{UK_{1,x}} \right\}
Finally, to achieve IND-CCA security, the central server runs a signature algorithm to obtain a verification key vk and a signing key sk, after which it runs Sign_sk(CT) → σ. Note that an adversary cannot forge a new signature on a message that has been signed previously.

Final\ ciphertext = (vk, CT, \sigma)
It is worth mentioning that, because homomorphic encryption is used to completely protect the privacy of the participants’ uploaded gradients, the task publisher cannot access the participants’ data. Therefore, the central server selects suitable participants based on authentication of the flag. When a participant decrypts and obtains the flag successfully within the deadline, the task publisher can include it in the node pool for this round of training. The central server can then negotiate homomorphic encryption keys with these participants and execute federated learning algorithms such as FedAvg.

4.2.5. Participant Decryption: Dec(CT, SK) → flag

First, a potential participant obtains the ciphertext from the central server and checks whether Ver_vk(CT, σ) = 1. If this does not hold, the client outputs ⊥. Otherwise, it proceeds.
After successful verification, it seeks, in polynomial time, constants ω_i ∈ Z_p such that Σ_{ρ(i)∈S} ω_i M_i = (1, 0, …, 0), for i ∈ [1, n]. If it can find such a set of constants {ω_i}, the decryption algorithm continues to execute, since s = Σ_{i∈I} ω_i λ_i; otherwise, it terminates and outputs ⊥.
The decryption algorithm first computes as follows:
\frac{e(C', K)}{\prod_{i \in I} \left( e(C_{0,i}, L) \cdot e(K_{\rho(i)}, C_{2,i}) \right)^{\omega_i}} = \frac{e(g^{s}, g^{\alpha} \cdot g^{a t})}{\prod_{i \in I} \left( e(g^{a \lambda_i} H(\rho(i))^{-r_i v_{\rho(i)}}, g^{t}) \cdot e(H(\rho(i))^{v_{\rho(i)} t}, g^{r_i}) \right)^{\omega_i}} = \frac{e(g, g)^{\alpha s} \, e(g, g)^{a s t}}{\prod_{i \in I} e(g, g)^{a t \lambda_i \omega_i}} = \frac{e(g, g)^{\alpha s} \, e(g, g)^{a s t}}{e(g, g)^{a s t}} = e(g, g)^{\alpha s}
Then, it can decrypt the flag as
f = \frac{C}{e(g, g)^{\alpha s}}
Upon acquiring the flag, the participant can send it to the central server to indicate that it meets the policy set by the task publisher and can participate in this round of training, without compromising their privacy. The complete algorithm is shown in Algorithm 1. At the same time, the flag is uploaded to the blockchain to receive the revenue after the training is done.
Algorithm 1 FedAvg-ABE. The K clients are indexed by k; B is the local minibatch size; E is the number of local epochs; and η is the learning rate.
Server executes:
 1: initialize w_0
 2: for each round t = 1, 2, … do
 3:   m ← max(C · K, 1)
 4:   Enc(GP, PK, f, A) → CT
 5:   for each appropriate client k in parallel do
 6:     [[w_{t+1}^k]] ← ClientUpdate(k, CT)
 7:   end for
 8:   [[w_{t+1}]] ← Σ_{k=1}^{K} (n_k / n) [[w_{t+1}^k]]
 9:   w_{t+1} ← [[w_{t+1}]]   // homomorphic decryption
10: end for
ClientUpdate(k, CT):   // Run on client k
11: if not match policy then
12:   return
13: else
14:   Dec(CT, SK) → flag
15: end if
16: Negotiate the keys of homomorphic encryption with the server
17: B ← (split P_k into batches of size B)
18: for each local epoch i from 1 to E do
19:   for batch b ∈ B do
20:     w ← w − η∇ℓ(w; b)
21:     [[w]] ← w   // multi-key homomorphic encryption
22:   end for
23: end for
24: return [[w]]

4.2.6. Incentive: Inc(CT, CID, Time) → (Trans.)

Since data and computing resources are valuable, participants that use local data and local computing resources should be paid. In this work, if a participant runs the ABE decryption algorithm and obtains the updated global gradient information from the central server, which also means that the participant meets the series of policies formulated by the central server, his local data have been used reasonably. Therefore, the task publisher should pay participants after the training according to each participant’s contribution to the global model. Specifically, in each round of training, participants decrypt messages to obtain the flag that signifies successful decryption, then use their own digital signatures to sign the flag and upload it to the non-repudiable consortium chain, where smart contracts are executed. If the client’s cid is in the list provided by the central server, indicating that the participant is entitled to the benefits arising from the training round, these records are stored in the blockchain. After the training, the server can trace back the blockchain and distribute the actual profits to the various participants.
As shown in Algorithms 2 and 3, the incentive mechanism can be divided into an upload transaction and a confirm transaction. Before each round of training, the task publisher selects a flag as the voucher of the round for profit distribution, and each participant tries to decrypt and obtain the flag. The participant and the task publisher together generate the upload transaction TX_upload and send it to the blockchain data pool. The transaction is then broadcast to other nodes in the blockchain for verification. Once the transaction is validated, it is packaged into the consensus block via PBFT. At the end of the training, the consortium blockchain can be traced back and the revenue distributed to all clients who participated in the training.
Algorithm 2 Upload Transaction Generation.
Input: f, cid, time
Output: TX_upload
1: The task publisher releases the flag f of round t, computes ξ = H(f || t), and sends it to the blockchain secretly.
2: The participant sends the client id cid and time to the blockchain.
3: Let f_c be the flag decrypted from the ciphertext.
4: Compute
   ζ = H(H(f_c || t) || cid || time)
   sign = Sign_cid(ζ)
5: return TX_upload = {sign, cid, H(f_c || t), time}
Algorithm 3 Confirm Transaction Generation.
Input: TX_upload
Output: Succ or Fail
1: Blockchain nodes receive the transaction TX_upload.
2: The node calculates
   ζ′ = Verify_cid(sign)
   ζ = H(H(f_c || t) || cid || time)
3: if ζ′ = ζ then
4:   if ξ = H(f_c || t) then
5:     Execute the smart contract to allocate the revenues of round t to the participant corresponding to cid.
6:     return Succ
7:   end if
8: end if
9: return Fail
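A compact sketch of the two transactions is given below; the hashing layout follows Algorithms 2 and 3, while sign_fn and verify_fn are hypothetical placeholders for the client’s signature routine (e.g., ECDSA), which the paper does not specify:

```python
import hashlib, time

def H(*parts):
    # SHA-256 over '||'-concatenated string parts.
    return hashlib.sha256('||'.join(parts).encode()).hexdigest()

def upload_tx(flag, round_t, cid, sign_fn):
    # Client side of Algorithm 2: bind the decrypted flag to the client id
    # and a timestamp, then sign the digest zeta.
    ts = str(time.time())
    flag_digest = H(flag, str(round_t))            # H(f_c || t)
    zeta = H(flag_digest, cid, ts)                 # H(H(f_c||t) || cid || time)
    return {'sign': sign_fn(zeta), 'cid': cid,
            'flag_digest': flag_digest, 'time': ts}

def confirm_tx(tx, xi, verify_fn):
    # Node side of Algorithm 3: recompute zeta, verify the signature, then
    # compare the flag digest with the publisher's secret xi = H(f || t).
    zeta = H(tx['flag_digest'], tx['cid'], tx['time'])
    if not verify_fn(tx['sign'], zeta):
        return 'Fail'
    return 'Succ' if tx['flag_digest'] == xi else 'Fail'
```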

5. Security Analysis

Before we begin our security analysis, we need to clarify the security assumptions regarding the various entities in the system. First, the attribute authorities are considered to be fully trusted entities, similar to certificate authorities, and are generally initiated by city governments. The task publisher can be a commercial institution, which is modeled in the system as honest-but-curious; that is, they faithfully execute the algorithms that they are responsible for without maliciously destroying the ciphertext uploaded by the clients, but they may spy on or infer the clients’ private information from the access records. Finally, there may be malicious clients in the system, trying to collude with other clients to obtain data beyond their own permissions or to destabilize the system.

5.1. Selective CPA Security

Theorem 1.
There is no polynomial-time adversary that can selectively break our system with a challenge matrix of size l × n, where n ≤ q, when the decisional q-parallel BDHE assumption holds.
Proof. 
Inspired by Waters [34], we can build a simulator B that solves the decisional q-parallel BDHE problem with a non-negligible advantage, under the prerequisite that none of the updated secret keys SK^N generated from both the queried secret keys SK and the update keys UK can decrypt the challenge ciphertext. This is based on the assumption that we have an adversary A that chooses a challenge matrix M with at most q columns and has a non-negligible advantage ε = Adv_A in the selective security game against our construction. The proof proceeds through a series of interactions between the challenger and the attacker in the game. Because the mathematical discussion of the game details is beyond the scope of this article and resembles Waters’ work, it is omitted. □

5.2. Data Security

In our scheme, only users with specific attributes can obtain the corresponding keys through the attribute authorities. Since the underlying protocol is based on elliptic curves, and the elliptic curve discrete logarithm problem (ECDLP) is computationally intractable, clients without the correct attributes cannot obtain any information about the private keys from the corresponding public keys in polynomial time.
Based on the training progress and results, the task publisher selects the access policy and the flag f of the training round, which is hidden in the ciphertext component C. Since s is randomly chosen by the task publisher, it appears as a random number to an attacker. Thus, the attacker cannot obtain any valuable information about f. Under the linear secret sharing scheme, s is a secret divided into shares λ_i and can only be recovered from a sufficient set of shares; in other words, the ciphertext can only be decrypted if the participant has a set of attributes that matches the access policy. Any invalid user who lacks the attributes declared by the access policy does not hold the attributes corresponding to the rows of M, and hence cannot make Σ_{ρ(i)∈S} ω_i M_i = (1, 0, …, 0) hold for any ω_i ∈ Z_p. Such a user therefore cannot compute the first element of ξ, which is s. Therefore, this scheme ensures data security.
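The share-and-reconstruct step is easy to see on a toy access matrix. The sketch below (our own illustration, over the rationals rather than Z_p) shows how a party holding the rows for both attributes recovers s, while a single row reveals nothing:

```python
import numpy as np

# Toy LSSS: rows of M are mapped to attributes A and B; the secret s is
# the first entry of the vector xi, and the shares are lambda_i = M_i . xi.
M = np.array([[1.0, 1.0],     # row for attribute A
              [0.0, -1.0]])   # row for attribute B
xi = np.array([7.0, 3.0])     # xi = (s, r) with secret s = 7
lam = M @ xi                  # shares: lambda = (10, -3)

# A decryptor holding A and B solves sum_i w_i * M_i = (1, 0, ..., 0) ...
w = np.linalg.solve(M.T, np.array([1.0, 0.0]))
# ... and recovers s = sum_i w_i * lambda_i; one share alone is useless.
assert round(float(w @ lam)) == 7
```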

5.3. Forward and Backward Security

Forward security means that any clients that have been revoked cannot access subsequent data unless the remaining set of attributes of the client still satisfies the access structure. In the scheme proposed in this paper, if the attributes of a client are revoked, only some of the keys and the ciphertext are updated by the central server, which not only reduces the local computational overhead but also effectively prevents clients who have lost access permissions from posing threats to the updated ciphertext in the system, so as to ensure forward security. Considering that the revoked client already has permission to read the old ciphertext, the central server must restrict him from downloading the old ciphertext.
Backward security means that new clients cannot decrypt previously encrypted data. Note that we use the version number ver to control the ciphertext version; thus, new clients cannot decrypt old ciphertexts using the latest version of the attribute keys.

5.4. Collusion Attack

Theorem 2.
The scheme is secure under a multi-user collusive attack.
Proof. 
In the proposed scheme, the attribute authority assigns a random value t ∈ Z_p* to each participant. Even if multiple participants have exactly the same attributes, this value differs across the keys obtained by them. In the decryption algorithm, t must be consistent for a collusion attack to succeed. Therefore, no client can conspire with other users or groups of users to illegally decrypt the data. For example, suppose participant P_0 has attribute A and participant P_1 has attribute B; for an access policy of “A AND B”, the individual participants P_0 and P_1 cannot decrypt the data alone. Even if they combine their attribute keys for A and B in collusion, the calculation cannot eliminate t; thus, they are unable to perform decryption. □
Tseng et al. [40] found that some attribute-based encryption (ABE) schemes [41,42] based on elliptic curve scalar multiplication are vulnerable to collusion attacks, because users with the same attributes can obtain the attribute private key set in the system by solving linear equations. Our scheme does not have this problem because we use bilinear pairing instead of scalar multiplication, and no party can obtain the secret parameters of the system by solving the equations.

6. Performance Comparison and Evaluation

In this section, we use public datasets to evaluate the performance of our scheme and compare it with previous work. In particular, in addition to showing how the proposed scheme improves the model accuracy in federated learning, we analyze the impact of using attribute-based encryption on the computational efficiency.
First, in Table 1, we present the characteristics of the currently popular federated learning client selection schemes. It can be seen that our proposed scheme comprehensively considers the dimensions of the client data quantity, data distribution, and computing power, avoiding complex importance measurements and reinforcement learning. We then qualitatively evaluate our work against some of the known incentive mechanisms. As shown in Table 2, most of the existing schemes use either the quantity or quality of data to distribute revenues fairly. Fortunately, the task publisher in our scheme can consider two aspects comprehensively to formulate an access policy, which is more applicable to reality. With the help of the blockchain, we can easily implement the features of auditing and traceability. This is why we use post-training allocation rather than simultaneous allocation during training, to reduce the cost of evaluating the contributions of each participant.
Next, we describe some details of the experiments.

6.1. Setup

We trained popular convolutional neural network models on two benchmark datasets, FashionMNIST and CIFAR-10. The convergence speed and final model accuracy of the proposed ABEFedAvg algorithm are compared with those of three other federated learning aggregation algorithms with randomly selected clients: FedAvg [3], FedProx [50], and FedIR [51]. The specific experimental settings are as follows:
Hardware and Software setup: This paper conducts experiments on a set of Linux servers, each running one experimental task. After all resources have been allocated, the hardware and software setup of each server is shown in Table 3.
Dataset: We comprehensively evaluate the efficiency of ABEFedAvg in simulation experiments using two datasets, FashionMNIST and CIFAR-10, which contain numerous fixed-size images and have been used in a large number of studies. The training, validation, and test sets are allocated to different parties with different data distribution patterns according to a Dirichlet distribution, to evaluate the performance of ABEFedAvg under non-independent and identically distributed (non-IID) data. The FashionMNIST dataset is a classic dataset in the field of machine learning. It consists of 60,000 training samples and 10,000 test samples, each of which is a 28 × 28 pixel image representing an item from one of 10 classes (labeled 0 to 9). The CIFAR-10 dataset has a total of 60,000 color images, each with a scale of 32 × 32 pixels, divided into 10 categories with 6000 images each. Of these, 50,000 images are used for training, forming five training batches of 10,000 images each, and the remaining 10,000 images are used for testing, forming a separate test batch.
Parties: This paper uses the method in [52] to generate the non-IID partition. Specifically, the parameters of the Dirichlet distribution are set to partition the dataset across the parties in an unbalanced manner. When the parameter α is larger, the data of each party tend toward being independently and identically distributed; conversely, a smaller α makes the data distribution more uneven. In this paper, three distribution cases are set: α = ∞ simulates the ideal situation where the data are completely independent and identically distributed, as Figure 4 shows; α = 0.5 simulates a mildly non-IID scenario, which is common in real-world settings, as Figure 5 shows; and α = 0.1 simulates a worst-case data distribution where almost every party has only 3–4 classes, as Figure 6 shows. A minimal sketch of this partitioning procedure is given below.
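The following sketch (our own illustration; the function name and seed handling are not taken from [52]) partitions sample indices across parties so that each class is split according to Dirichlet(α):

```python
import numpy as np

def dirichlet_partition(labels, n_parties, alpha, seed=0):
    # For each class, draw party proportions from Dirichlet(alpha) and cut
    # the shuffled class indices accordingly; small alpha -> skewed parties.
    rng = np.random.default_rng(seed)
    parties = [[] for _ in range(n_parties)]
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        rng.shuffle(idx)
        props = rng.dirichlet([alpha] * n_parties)
        cuts = (np.cumsum(props)[:-1] * len(idx)).astype(int)
        for p, part in enumerate(np.split(idx, cuts)):
            parties[p].extend(part.tolist())
    return parties

# Example: 60,000 samples over 10 classes split across 100 parties.
labels = np.random.default_rng(1).integers(0, 10, 60000)
parts = dirichlet_partition(labels, n_parties=100, alpha=0.5)
```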
Model: The model used in this article is the LeNet-5 convolutional neural network (CNN), which is commonly used for image classification. The LeNet-5 structure comprises convolutional, pooling, and fully connected layers: the convolutional and pooling layers extract local image features, and the fully connected layers map the features to class probabilities. The first and third layers are convolutional layers with 6 and 16 kernels, respectively, each of size 5 × 5 with stride 1. Each convolutional layer is followed by an average pooling layer with a 2 × 2 pooling kernel and no padding, which downsamples the input feature map and reduces its size. The last three layers are fully connected layers with 120, 84, and 10 neurons, respectively. ReLU is used as the activation function after the convolutional and fully connected layers to avoid the vanishing gradient problem. For the FashionMNIST dataset, the input image is 28 × 28 × 1, while the CIFAR-10 dataset has an input image specification of 32 × 32 × 3. A sketch of the model appears below.
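A PyTorch sketch of this architecture is shown here (our own rendering; nn.LazyLinear is used to absorb the differing flattened sizes of the 28 × 28 and 32 × 32 inputs, a detail the paper does not specify):

```python
import torch.nn as nn

class LeNet5(nn.Module):
    # Two conv + average-pool stages, then three fully connected layers.
    # in_ch is 1 for FashionMNIST and 3 for CIFAR-10.
    def __init__(self, in_ch=1, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_ch, 6, kernel_size=5, stride=1), nn.ReLU(),
            nn.AvgPool2d(kernel_size=2),
            nn.Conv2d(6, 16, kernel_size=5, stride=1), nn.ReLU(),
            nn.AvgPool2d(kernel_size=2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.LazyLinear(120), nn.ReLU(),   # infers in-features at first call
            nn.Linear(120, 84), nn.ReLU(),
            nn.Linear(84, n_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))
```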
Performance metrics: To evaluate the degree to which the proposed attribute-encryption-based party selection mechanism optimizes various synchronous federated learning algorithms, this paper uses the test set accuracy as the main indicator of model performance, training on the FashionMNIST and CIFAR-10 datasets for 500 and 1000 rounds, respectively, and plotting the test set accuracy curves. Finally, the average accuracy and the highest accuracy are calculated, where the accuracy is defined as the ratio of correctly classified images to the total number of test images, ranging between 0 and 1. To evaluate the convergence speed of the proposed ABEFedAvg algorithm, the number of communication rounds required for the model to converge to a target accuracy, ToA@x, is used as the main metric of training efficiency, where x denotes the target accuracy. A sketch of this metric appears below.
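The ToA@x metric reduces to a first-crossing search over the accuracy curve; the following helper (our own illustration) returns the round count, or None when the target is never reached within the given rounds, matching the ‘-’ entries in Table 5:

```python
def toa(acc_curve, target):
    # ToA@x: first communication round whose test accuracy reaches target x.
    for rnd, acc in enumerate(acc_curve, start=1):
        if acc >= target:
            return rnd
    return None  # target not reached within the given number of rounds

# Example: toa([0.62, 0.68, 0.71, 0.70], 0.70) -> 3
```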

6.2. Experimental Results

6.2.1. Effect of the Number of Selected Participants on Performance

First, we study the impact of the stringency of the access policy in the proposed attribute-based encryption participant selection scheme, and of the participant selection score C in the baseline algorithm FedAvg, on the performance of federated learning. We assume that there are K = 100 parties in a region and select three different access policies, with the stringency set to “strict”, “moderate”, and “loose”, respectively. The corresponding participant selection scores for comparison are C = 0.1, C = 0.2, and C = 0.3. The performance evaluation of the different access policies and selection scores on the FashionMNIST and CIFAR-10 image datasets is shown in Table 4.
For the FedAvg algorithm, when C = 0.3, the accuracy of the model on the test set is the highest, and the accuracy decreases as the participant selection score decreases. When C = 0.1, the accuracy is only 0.8318, and the training curve fluctuates the most. This is because the small number of parties selected in each round of training reduces the number of samples available for model learning. When the selection score C is 0.3, the model training time is the shortest, requiring only 127 rounds. When the selection score C is reduced to 0.2, the number of communication rounds increases by 48. When C is further reduced to 0.1, the training time grows substantially, requiring 393 rounds of communication to reach the target accuracy, an increase of 266 rounds compared to the setting with C = 0.3. Therefore, in order to balance the model accuracy and training time overhead, the party selection score is set to C = 0.2 in the following experiments.
For the ABEFedAvg algorithm, the number of selected parties is determined mainly by the stringency of the access policy. Under the "strict" policy, the central server only admits the parties with the largest sample counts and the most uniform sample distributions, while the "loose" policy also accepts participants whose data deviate more from being independent and identically distributed. The experimental results show that the model achieves its highest accuracy under the "moderate" access policy, reaching average accuracies of 0.8943 and 0.7508 on the FashionMNIST and CIFAR-10 datasets, as shown in Figure 7 and Figure 8, respectively. A stricter access policy improves the quality of the selected parties but also rejects data samples that are still valuable for training; conversely, a looser policy weakens the effect of access control and admits more parties with unevenly distributed local data. Based on this, the "moderate" access policy is used in the following experiments.
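To make the three stringency levels concrete, the sketch below caricatures them as thresholds on two client attributes (sample count and label skew). In the actual scheme, such requirements are enforced cryptographically through CP-ABE access structures rather than by plaintext checks, and the attribute names and thresholds here are purely illustrative.

```python
# Hypothetical attribute thresholds; the real policies are CP-ABE access
# structures, and these numbers are illustrative only.
POLICIES = {
    "strict":   {"min_samples": 800, "max_label_skew": 0.2},
    "moderate": {"min_samples": 400, "max_label_skew": 0.5},
    "loose":    {"min_samples": 100, "max_label_skew": 0.8},
}

def satisfies(policy_name, client_attrs):
    """True if a client's declared attributes meet the chosen policy."""
    p = POLICIES[policy_name]
    return (client_attrs["n_samples"] >= p["min_samples"]
            and client_attrs["label_skew"] <= p["max_label_skew"])
```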

6.2.2. Influence of Independent and Identically Distributed Data on Performance

In real scenarios, the degree to which each participant's data in federated learning are independent and identically distributed (IID) is often unpredictable. In general, the more IID the participants' data, the better the accuracy and generalization of the trained model. Therefore, this section verifies the effectiveness and robustness of the proposed scheme in three different data distribution scenarios, following the experimental setup described in Section 6.1.
As the experimental results in Table 5 show, when each party's local data are IID, the proposed scheme brings only a limited improvement in training accuracy: compared with the original algorithm, it gains only 1.05 and 1.17 percentage points on the FashionMNIST and CIFAR-10 datasets, respectively. The reason is that in such an ideal federated learning environment, randomly selected clients have almost the same data distribution as the clients that satisfy the access policy.
However, under the Non-IID setting α = 0.5, the accuracy of the CNN model trained with ABEFedAvg is significantly higher than that of FedAvg with randomly selected clients: 3.12 and 4.62 percentage points higher on the FashionMNIST and CIFAR-10 datasets, as shown in Figure 9 and Figure 10, respectively. In the more extreme case of α = 0.1, the advantage of our scheme is even more prominent, exceeding the random selection strategy of traditional federated learning by 7.81 and 6.06 percentage points on the two datasets, respectively. The reason is clear: the proposed scheme adaptively selects participants that match the access policy in each round of training, keeping the participants' data distributions within a better-controlled range and thus achieving higher training accuracy. It is worth mentioning that under α = 0.1, because the moderate access policy is used, the number of selected parties may in some rounds fall below the default, but the experimental results show that this has a very limited effect on training accuracy. In addition, '-' in Table 5 indicates that the algorithm cannot reach the target accuracy within the given number of rounds. For example, with α = 0.1 on FashionMNIST, neither algorithm reaches a test accuracy of 0.85 within 500 communication rounds; with α = 0.1 on CIFAR-10, the traditional FedAvg algorithm cannot reach an accuracy of 0.70 within 1000 communication rounds, while ABEFedAvg reaches it in 237 rounds.
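The Non-IID settings above follow the usual Dirichlet label-partitioning recipe, in which a smaller α yields more skewed local datasets. A common implementation is sketched below; the seeding and index handling are our assumptions, not the authors' code.

```python
import numpy as np

def dirichlet_partition(labels, n_clients, alpha, seed=0):
    """Assign sample indices to clients so each class is spread across
    clients with Dirichlet(alpha) proportions; smaller alpha means more
    skewed (Non-IID) local label distributions."""
    rng = np.random.default_rng(seed)
    n_classes = int(labels.max()) + 1
    client_indices = [[] for _ in range(n_clients)]
    for c in range(n_classes):
        idx = np.flatnonzero(labels == c)
        rng.shuffle(idx)
        proportions = rng.dirichlet(alpha * np.ones(n_clients))
        cut_points = (np.cumsum(proportions)[:-1] * len(idx)).astype(int)
        for client_id, shard in enumerate(np.split(idx, cut_points)):
            client_indices[client_id].extend(shard.tolist())
    return client_indices

# e.g., with labels as a NumPy array of class ids:
# parts = dirichlet_partition(labels, n_clients=100, alpha=0.5)
```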

6.2.3. Impact of Federated Learning Algorithms on Performance

This section investigates the applicability of the proposed attribute-based encryption participant selection algorithm, used as an embeddable module, to two other synchronous federated learning aggregation algorithms, FedProx and FedIR. Although these recent schemes introduce various improvements to parameter aggregation that raise model performance to some extent, most of them still select participants at random, which strongly affects model accuracy. Therefore, we apply the client selection scheme as a pluggable module to each mainstream algorithm to show its optimization effect on each aggregation strategy, as the sketch below illustrates. Table 6 details the accuracy and training time metrics on the two datasets.
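The following sketch shows the intended decoupling: the aggregation rule (FedAvg, FedProx, or FedIR) is independent of how the participant set is chosen, so an ABE-based filter can replace random sampling without touching the aggregator. Function and method names are illustrative, not taken from the authors' code.

```python
def train_round(global_model, clients, select_fn, aggregate_fn):
    """One synchronous round with a pluggable selection module.

    select_fn: e.g., random sampling (FedAvg baseline) or an ABE policy
    filter that only returns clients able to decrypt the task ciphertext.
    aggregate_fn: FedAvg/FedProx/FedIR-style aggregation of local updates.
    """
    selected = select_fn(clients)
    updates = [client.local_train(global_model) for client in selected]
    return aggregate_fn(global_model, updates)
```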
Figure 11 and Figure 12 show the training curves of each algorithm on the FashionMNIST and CIFAR-10 datasets, respectively. The relative performance of the algorithms is essentially the same on both datasets. In general, all three algorithms reach the target accuracy within the given number of communication rounds; FedAvg performs worst, followed by FedProx and then FedIR. Although FedIR achieves higher accuracy, its training curve fluctuates considerably because of the additional weight information it introduces. For example, on FashionMNIST, FedAvg reaches an accuracy of 0.8631, while FedProx and FedIR reach 0.8747 and 0.8786, respectively. After adding the attribute-based encryption selection module, the performance of every algorithm clearly improves: accuracy increases by 3.12, 2.23, and 2.39 percentage points over the three benchmark algorithms, respectively, with the most pronounced optimization effect on FedAvg.
On the CIFAR-10 dataset, the proposed scheme yields even clearer advantages. The original FedAvg algorithm achieves an average accuracy of 0.7046, FedProx reaches 0.7100, and FedIR is the most accurate at 0.7202. Adding the proposed ABE module also improves the overall test set performance of these algorithms: the accuracies of FedProx and FedIR with the ABE filtering module are 0.7597 and 0.7725, respectively, which is 4.97 and 5.23 percentage points higher than with random selection. In addition, although introducing the encryption and decryption mechanism in the participant selection phase increases the time overhead, the number of communication rounds drops greatly once appropriate participants are selected: by 502, 255, and 168 rounds for the three schemes, respectively (Table 6). We conclude that the proposed scheme has a strong optimization effect on various synchronous federated learning aggregation algorithms.

6.2.4. Comparison with Other Participant Selection Schemes

The related work section compares ABEFedAvg with other party selection schemes; the most relevant recent works are Newt, proposed by Zhao et al. [53], and FedFNS, proposed by Wu et al. [45]. Newt seeks a balance between accuracy and per-round execution time based on weight differences: the weight change between two adjacent rounds is treated as a utility that rewards fast convergence, and, since clients with large data volumes may lengthen training time, the ratio of the local dataset size to the total data size is added as a coefficient of the client utility. Because participants do not need to be re-selected in every round, the authors also designed a feedback control component that dynamically adjusts the selection frequency. FedFNS, in contrast, selects by probability assignment: it designs an aggregation algorithm that determines the optimal subset of local model updates by excluding unfavorable ones, and proposes a probabilistic node selection framework (FedPNS) that dynamically adjusts each device's selection probability according to its contribution to the data distribution of the model. A rough sketch of the Newt-style utility appears below.
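The sketch is our simplified reading of the Newt utility described above; the exact norm and scaling in [53] may differ.

```python
import numpy as np

def newt_utility(weights_new, weights_old, n_local, n_total):
    """Weight change between adjacent rounds, scaled by the client's
    share of the total data volume (a simplified reading of Newt [53])."""
    delta = sum(float(np.linalg.norm(w1 - w0))
                for w1, w0 in zip(weights_new, weights_old))
    return delta * (n_local / n_total)
```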
Next, we compare the proposed scheme with these two recent federated learning participant selection schemes. As before, we use the classical FedAvg aggregation algorithm and evaluate the test accuracy and stability on the two datasets under the settings C = 0.2 and α = 0.5. The experimental results are shown in Table 7. On the FashionMNIST dataset, the proposed attribute-based encryption access control scheme achieves an average accuracy of 0.8943, versus 0.8782 for Zhao et al.'s scheme and 0.8715 for Wu et al.'s scheme, improvements of 1.83% and 2.62%, respectively. On the CIFAR-10 dataset, the average accuracy of the proposed scheme reaches 0.7508, compared with 0.7294 and 0.7148 for the other two schemes, improvements of 2.93% and 5.04%, respectively. We further evaluate the number of communication rounds each scheme needs to reach the target accuracy when applied to federated learning training. As Figure 13 and Figure 14 show, the proposed scheme needs only 29 and 167 rounds to reach accuracies of 0.85 and 0.70 on FashionMNIST and CIFAR-10, respectively. Although Newt and FedFNS improve considerably over the original FedAvg random selection strategy, they are still weaker than the proposed ABEFedAvg scheme on this metric. In summary, the attribute-based encryption party selection strategy proposed in this paper has clear advantages even over the latest work, and has considerable application and promotion value.

7. Conclusions

In conclusion, our study introduces an innovative attribute-based participant selecting scheme for federated learning within smart city frameworks that leverages the integration of ciphertext-policy attribute-based encryption (CP-ABE) and consortium blockchain. This approach enhances both the security and efficiency of participant selection, mitigating common risks associated with privacy breaches and malicious attacks.
Our findings demonstrate that the proposed scheme significantly improves the efficiency of federated learning processes by enabling precise participant selection based on detailed attribute criteria, rather than relying on the traditional methods of random or resource-based selection. The attribute-based method ensures that only participants meeting specific pre-defined criteria contribute to the model training, thus optimizing the quality and relevance of the aggregated data.
Moreover, the incorporation of consortium blockchain technology provides a robust incentive mechanism and audit trail that ensures participant accountability and motivates continued engagement. This novel integration not only supports the scalability and sustainability of federated learning projects but also enhances their transparency and trustworthiness.

7.1. Theoretical and Practical Implications

Our research introduces a novel attribute-based participant selecting scheme enhanced with blockchain technology for federated learning in smart cities. This approach theoretically expands the understanding of federated learning by integrating privacy-preserving techniques (CP-ABE) and blockchain to safeguard against unauthorized access and ensure data integrity. Practically, the scheme provides a reliable and scalable solution for smart city administrators to deploy machine learning models that comply with stringent privacy regulations while maintaining high efficiency and participant motivation.
The implementation of our scheme in smart cities could significantly enhance the operational efficiency of various urban systems, such as public transportation networks, healthcare services, and emergency response systems. By ensuring that only qualified and authorized participants contribute to federated learning tasks, our model promotes the creation of more accurate and reliable predictive models, driving smarter decision-making in urban management.

7.2. Limitations

While our approach offers substantial improvements in privacy and efficiency, there are several limitations to consider. The complexity of CP-ABE may lead to an increased computational overhead, particularly as the number of attributes grows. This could potentially slow down the process in scenarios where real-time data processing is crucial. Additionally, our study’s focus on theoretical design and simulated environments may not fully capture the practical challenges encountered in real-world implementations. The effectiveness and efficiency of the encryption might vary significantly under different operational conditions and with different data volumes.

7.3. Future Research Directions

Considering the identified limitations, future research should focus on optimizing the efficiency of attribute-based encryption techniques to reduce the computational demands, particularly in environments with extensive attributes. Further empirical research is also necessary to test the scheme across various real-world settings in smart cities, to evaluate its practicality and performance under diverse conditions. Such studies could help to refine the model, making it more robust and adaptable to different types of data and applications.
Exploring the application of our federated learning scheme in other domains, such as healthcare and public safety, could provide insights into its adaptability and effectiveness in other critical areas of smart city development. Moreover, integrating advanced machine learning techniques, such as deep learning, might enhance the predictive capabilities of the models trained using our scheme, thus broadening its applicability and impact.

Author Contributions

Conceptualization, X.Y. and H.Q.; methodology, X.Y. and H.Q.; software, X.Y. and H.Q.; validation, X.Y. and H.Q.; formal analysis, X.Y. and H.Q.; investigation, X.Y. and H.Q.; resources, X.Y. and H.Q.; data curation, X.Y. and H.Q.; writing—original draft preparation, X.Y. and H.Q.; writing—review and editing, X.W.; visualization, H.Q.; supervision, X.Z.; project administration, X.Z.; funding acquisition, X.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the National Key Research and Development Program of China under Grant 2020YFB2103803.

Data Availability Statement

The original contributions presented in the study are included in the article. Further inquiries can be directed to the corresponding authors.

Acknowledgments

We wish to acknowledge the anonymous referees who gave valuable suggestions to improve the work.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Hashem, I.A.T.; Usmani, R.S.A.; Almutairi, M.S.; Ibrahim, A.O.; Zakari, A.; Alotaibi, F.; Alhashmi, S.M.; Chiroma, H. Urban Computing for Sustainable Smart Cities: Recent Advances, Taxonomy, and Open Research Challenges. Sustainability 2023, 15, 3916. [Google Scholar] [CrossRef]
  2. Band, S.S.; Ardabili, S.; Sookhak, M.; Theodore, A.; Elnaffar, S.; Moslehpour, M.; Csaba, M.; Torok, B.; Pai, H.T.; Mosavi, A. When Smart Cities Get Smarter via Machine Learning: An In-depth Literature Review. IEEE Access 2022, 10, 60985–61015. [Google Scholar] [CrossRef]
  3. McMahan, B.; Moore, E.; Ramage, D.; Hampson, S.; Arcas, B.A.y. Communication-Efficient Learning of Deep Networks from Decentralized Data. In Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, Fort Lauderdale, FL, USA, 20–22 April 2017; pp. 1273–1282. [Google Scholar]
  4. Liu, J.; Jia, J.; Che, T.; Huo, C.; Ren, J.; Zhou, Y.; Dai, H.; Dou, D. Fedasmu: Efficient asynchronous federated learning with dynamic staleness-aware model update. In Proceedings of the AAAI Conference on Artificial Intelligence, Vancouver, BC, Canada, 20–27 February 2024; pp. 13900–13908. [Google Scholar]
  5. Abdelmoniem, A.M.; Sahu, A.N.; Canini, M.; Fahmy, S.A. Refl: Resource-efficient federated learning. In Proceedings of the Eighteenth European Conference on Computer Systems, Rome, Italy, 8–12 May 2023; pp. 215–232. [Google Scholar]
  6. Xiong, Y.; Wang, R.; Cheng, M.; Yu, F.; Hsieh, C.J. Feddm: Iterative distribution matching for communication-efficient federated learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 17–24 June 2023; pp. 16323–16332. [Google Scholar]
  7. Chetoui, M.; Akhloufi, M.A. Peer-to-Peer Federated Learning for COVID-19 Detection Using Transformers. Computers 2023, 12, 106. [Google Scholar] [CrossRef]
  8. Yang, H.; Ge, M.; Xue, D.; Xiang, K.; Li, H.; Lu, R. Gradient Leakage Attacks in Federated Learning: Research Frontiers, Taxonomy and Future Directions. IEEE Netw. 2023, 1–8. [Google Scholar] [CrossRef]
  9. Kairouz, P.; McMahan, H.B.; Avent, B.; Bellet, A.; Bennis, M.; Bhagoji, A.N.; Bonawitz, K.; Charles, Z.; Cormode, G.; Cummings, R.; et al. Advances and Open Problems in Federated Learning. Found. Trends Mach. Learn. 2021, 14, 1–210. [Google Scholar] [CrossRef]
  10. Zhu, J.; Cao, J.; Saxena, D.; Jiang, S.; Ferradi, H. Blockchain-empowered federated learning: Challenges, solutions, and future directions. ACM Comput. Surv. 2023, 55, 1–31. [Google Scholar] [CrossRef]
  11. Ali, A.; Ilahi, I.; Qayyum, A.; Mohammed, I.; Al-Fuqaha, A.; Qadir, J. A systematic review of federated learning incentive mechanisms and associated security challenges. Comput. Sci. Rev. 2023, 50, 100593. [Google Scholar] [CrossRef]
  12. Chai, Z.; Ali, A.; Zawad, S.; Truex, S.; Anwar, A.; Baracaldo, N.; Zhou, Y.; Ludwig, H.; Yan, F.; Cheng, Y. TiFL: A Tier-based Federated Learning System. In Proceedings of the HPDC ’20: The 29th International Symposium on High-Performance Parallel and Distributed Computing, Stockholm, Sweden, 23–26 June 2020; pp. 125–136. [Google Scholar] [CrossRef]
  13. Marnissi, O.; Hammouti, H.E.; Bergou, E.H. Client selection in federated learning based on gradients importance. In Proceedings of the Ninth International Conference on Modeling, Simulation and Applied Optimization, Marrakesh, Morocco, 26–28 April 2023. [Google Scholar]
  14. Zhang, W.; Wang, X.; Zhou, P.; Wu, W.; Zhang, X. Client Selection for Federated Learning with Non-IID Data in Mobile Edge Computing. IEEE Access 2021, 9, 24462–24474. [Google Scholar] [CrossRef]
  15. Ozdayi, M.S.; Kantarcioglu, M.; Gel, Y.R. Defending against backdoors in federated learning with robust learning rate. In Proceedings of the AAAI Conference on Artificial Intelligence, Virtually, 2–9 February 2021; Volume 35, pp. 9268–9276. [Google Scholar]
  16. Nagalapatti, L.; Narayanam, R. Game of gradients: Mitigating irrelevant clients in federated learning. In Proceedings of the AAAI Conference on Artificial Intelligence, Virtually, 2–9 February 2021; Volume 35, pp. 9046–9054. [Google Scholar]
  17. Zhang, B.; Lu, G.; Qiu, P.; Gui, X.; Shi, Y. Advancing Federated Learning through Verifiable Computations and Homomorphic Encryption. Entropy 2023, 25, 1550. [Google Scholar] [CrossRef]
  18. Shen, X.; Jiang, H.; Chen, Y.; Wang, B.; Gao, L. Pldp-fl: Federated learning with personalized local differential privacy. Entropy 2023, 25, 485. [Google Scholar] [CrossRef]
  19. Wu, X.; Huang, F.; Hu, Z.; Huang, H. Faster adaptive federated learning. In Proceedings of the AAAI Conference on Artificial Intelligence, Washington, DC, USA, 7–14 February 2023; pp. 10379–10387. [Google Scholar]
  20. Feng, D.; Helena, C.; Lim, W.Y.B.; Ng, J.S.; Jiang, H.; Xiong, Z.; Kang, J.; Yu, H.; Niyato, D.; Miao, C. CrowdFL: A Marketplace for Crowdsourced Federated Learning. In Proceedings of the Thirty-Sixth AAAI Conference on Artificial Intelligence, AAAI 2022, Thirty-Fourth Conference on Innovative Applications of Artificial Intelligence, IAAI 2022, The Twelfth Symposium on Educational Advances in Artificial Intelligence, Virtual Event, 22 February–1 March 2022; pp. 13164–13166. [Google Scholar]
  21. Zhang, Y.; Deng, R.H.; Xu, S.; Sun, J.; Li, Q.; Zheng, D. Attribute-based encryption for cloud computing access control: A survey. ACM Comput. Surv. (CSUR) 2020, 53, 1–41. [Google Scholar] [CrossRef]
  22. Lai, F.; Zhu, X.; Madhyastha, H.V.; Chowdhury, M. Oort: Informed Participant Selection for Scalable Federated Learning. arXiv 2020, arXiv:2010.06081. [Google Scholar]
  23. Li, C.; Zeng, X.; Zhang, M.; Cao, Z. PyramidFL: A fine-grained client selection framework for efficient federated learning. In Proceedings of the 28th Annual International Conference on Mobile Computing and Networking, Sydney, Australia, 17–21 October 2022; pp. 158–171. [Google Scholar]
  24. Wang, H.; Kaplan, Z.; Niu, D.; Li, B. Optimizing federated learning on non-iid data with reinforcement learning. In Proceedings of the IEEE INFOCOM 2020, Toronto, ON, Canada, 6–9 July 2020; pp. 1698–1707. [Google Scholar]
  25. Sarikaya, Y.; Ercetin, O. Motivating workers in federated learning: A stackelberg game perspective. IEEE Netw. Lett. 2019, 2, 23–27. [Google Scholar] [CrossRef]
  26. Richardson, A.; Filos-Ratsikas, A.; Faltings, B. Rewarding high-quality data via influence functions. arXiv 2019, arXiv:1908.11598. [Google Scholar]
  27. Xu, J.; Wang, C.; Jia, X. A survey of blockchain consensus protocols. ACM Comput. Surv. 2023, 55, 1–35. [Google Scholar] [CrossRef]
  28. Almutairi, W.; Moulahi, T. Joining Federated Learning to Blockchain for Digital Forensics in IoT. Computers 2023, 12, 157. [Google Scholar] [CrossRef]
  29. Weng, J.; Weng, J.; Zhang, J.; Li, M.; Zhang, Y.; Luo, W. Deepchain: Auditable and privacy-preserving deep learning with blockchain-based incentive. IEEE Trans. Dependable Secur. Comput. 2019, 18, 2438–2455. [Google Scholar] [CrossRef]
  30. Bao, X.; Su, C.; Xiong, Y.; Huang, W.; Hu, Y. Flchain: A blockchain for auditable federated learning with trust and incentive. In Proceedings of the 2019 5th International Conference on Big Data Computing and Communications (BIGCOM), Qingdao, China, 9–11 August 2019; pp. 151–159. [Google Scholar]
  31. Sahai, A.; Waters, B.R. Fuzzy Identity-Based Encryption. In Proceedings of the 24th Annual International Conference on the Theory and Applications of Cryptographic Techniques, Aarhus, Denmark, 22–26 May 2005. [Google Scholar]
  32. Bethencourt, J.; Sahai, A.; Waters, B. Ciphertext-Policy Attribute-Based Encryption. In Proceedings of the IEEE Symposium on Security & Privacy, Berkeley, CA, USA, 20–23 May 2007. [Google Scholar]
  33. Emura, K.; Miyaji, A.; Nomura, A.; Omote, K.; Soshi, M. A ciphertext-policy attribute-based encryption scheme with constant ciphertext length. In Proceedings of the International Conference on Information Security Practice and Experience, Xi’an, China, 13–15 April 2009; pp. 13–23. [Google Scholar]
  34. Waters, B. Ciphertext-policy attribute-based encryption: An expressive, efficient, and provably secure realization. In Proceedings of the International Workshop on Public Key Cryptography, Taormina, Italy, 6–9 March 2011; pp. 53–70. [Google Scholar]
  35. Pirretti, M.; Traynor, P.; McDaniel, P.; Waters, B. Secure attribute-based systems. J. Comput. Secur. 2010, 18, 799–837. [Google Scholar] [CrossRef]
  36. Zhang, Y.; Chen, X.; Li, J.; Li, H.; Li, F. FDR-ABE: Attribute-based encryption with flexible and direct revocation. In Proceedings of the 2013 5th International Conference on Intelligent Networking and Collaborative Systems, Xi’an, China, 9–11 September 2013; pp. 38–45. [Google Scholar]
  37. Hur, J.; Noh, D.K. Attribute-based access control with efficient revocation in data outsourcing systems. IEEE Trans. Parallel Distrib. Syst. 2010, 22, 1214–1221. [Google Scholar] [CrossRef]
  38. Li, J.; Yao, W.; Han, J.; Zhang, Y.; Shen, J. User collusion avoidance CP-ABE with efficient attribute revocation for cloud storage. IEEE Syst. J. 2017, 12, 1767–1777. [Google Scholar] [CrossRef]
  39. Prantl, T.; Zeck, T.; Horn, L.; Iffländer, L.; Bauer, A.; Dmitrienko, A.; Krupitzer, C.; Kounev, S. Towards a Cryptography Encyclopedia: A Survey on Attribute-Based Encryption. J. Surveill. Secur. Saf. 2023, 4, 129–154. [Google Scholar] [CrossRef]
  40. Tseng, Y.F.; Huang, J.J. Cryptanalysis on Two Pairing-Free Ciphertext-Policy Attribute-Based Encryption Schemes. In Proceedings of the 2020 International Computer Symposium (ICS), Tainan, Taiwan, 17–19 December 2020; pp. 403–407. [Google Scholar]
  41. Ding, S.; Li, C.; Li, H. A novel efficient pairing-free CP-ABE based on elliptic curve cryptography for IoT. IEEE Access 2018, 6, 27336–27345. [Google Scholar] [CrossRef]
  42. Wang, Y.; Chen, B.; Li, L.; Ma, Q.; Li, H.; He, D. Efficient and secure ciphertext-policy attribute-based encryption without pairing for cloud-assisted smart grid. IEEE Access 2020, 8, 40704–40713. [Google Scholar] [CrossRef]
  43. Nishio, T.; Yonetani, R. Client selection for federated learning with heterogeneous resources in mobile edge. In Proceedings of the ICC 2019-2019 IEEE international conference on communications (ICC), Shanghai, China, 20–24 May 2019; pp. 1–7. [Google Scholar]
  44. Cho, Y.J.; Wang, J.; Joshi, G. Client selection in federated learning: Convergence analysis and power-of-choice selection strategies. arXiv 2020, arXiv:2010.01243. [Google Scholar]
  45. Wu, H.; Wang, P. Node selection toward faster convergence for federated learning on non-iid data. IEEE Trans. Netw. Sci. Eng. 2022, 9, 3099–3111. [Google Scholar] [CrossRef]
  46. Song, T.; Tong, Y.; Wei, S. Profit allocation for federated learning. In Proceedings of the 2019 IEEE International Conference on Big Data (Big Data), Los Angeles, CA, USA, 9–12 December 2019; pp. 2577–2586. [Google Scholar]
  47. Yu, H.; Liu, Z.; Liu, Y.; Chen, T.; Cong, M.; Weng, X.; Niyato, D.; Yang, Q. A sustainable incentive scheme for federated learning. IEEE Intell. Syst. 2020, 35, 58–69. [Google Scholar] [CrossRef]
  48. Zeng, R.; Zhang, S.; Wang, J.; Chu, X. Fmore: An incentive scheme of multi-dimensional auction for federated learning in mec. In Proceedings of the 2020 IEEE 40th International Conference on Distributed Computing Systems (ICDCS), Singapore, 29 November–1 December 2020; pp. 278–288. [Google Scholar]
  49. Zhan, Y.; Li, P.; Qu, Z.; Zeng, D.; Guo, S. A learning-based incentive mechanism for federated learning. IEEE Internet Things J. 2020, 7, 6360–6368. [Google Scholar] [CrossRef]
  50. Li, T.; Sahu, A.K.; Zaheer, M.; Sanjabi, M.; Talwalkar, A.; Smith, V. Federated optimization in heterogeneous networks. Proc. Mach. Learn. Syst. 2020, 2, 429–450. [Google Scholar]
  51. Hsu, T.M.H.; Qi, H.; Brown, M. Federated visual classification with real-world data distribution. In Proceedings of the Computer Vision—ECCV 2020: 16th European Conference, Glasgow, UK, 23–28 August 2020; pp. 76–92. [Google Scholar]
  52. Hsu, T.M.H.; Qi, H.; Brown, M. Measuring the effects of non-identical data distribution for federated visual classification. arXiv 2019, arXiv:1909.06335. [Google Scholar]
  53. Zhao, J.; Chang, X.; Feng, Y.; Liu, C.H.; Liu, N. Participant selection for federated learning with heterogeneous data in intelligent transport system. IEEE Trans. Intell. Transp. Syst. 2022, 24, 1106–1115. [Google Scholar] [CrossRef]
Figure 1. Our application of access control in a smart city.
Figure 2. Typical blockchain data structure with hash pointers.
Figure 3. Framework of the proposed scheme.
Figure 4. Completely IID.
Figure 5. Slightly IID.
Figure 6. Worst-case.
Figure 7. FashionMNIST test accuracy for different numbers of selected participants.
Figure 8. CIFAR-10 test accuracy for different numbers of selected participants.
Figure 9. FashionMNIST for different IID degrees.
Figure 10. CIFAR-10 for different IID degrees.
Figure 11. FashionMNIST test accuracy for different federated learning algorithms.
Figure 12. CIFAR-10 test accuracy for different federated learning algorithms.
Figure 13. FashionMNIST test accuracy for different participant selection schemes [45,53].
Figure 14. CIFAR-10 test accuracy for different participant selection schemes [45,53].
Table 1. Comparison of client selection schemes (compared on system heterogeneity, statistical heterogeneity, privacy, expansibility, and fine-grained selection).
Nishio 2019 [43]: select as many clients as possible within a specified deadline.
Cho 2020 [44]: select clients with higher local losses.
Chai 2020 [12]: select clients with similar response latencies.
Lai 2021 [22]: select clients through importance sampling.
Zhang 2021 [14]: select clients with lower non-IID degrees of data.
Wu 2022 [45]: select clients by comparing the local and global gradients.
Li 2022 [23]: select clients with higher importance ranking.
Our scheme: select clients using attribute-based encryption.
Table 2. Comparison of incentive mechanisms (compared on data quality, data quantity, privacy, efficiency, auditability, and universality).
Song 2019 [46] (efficiency: low): measure the contribution with a Contribution Index (CI).
Yu 2020 [47] (efficiency: mid): participants dynamically receive payoff according to contributions.
Zeng 2020 [48] (efficiency: high): auction theory.
Zhan 2020 [49] (efficiency: low): DRL-based reward allocation.
Weng 2019 [29] (efficiency: mid): use blockchain to record the process of federated learning.
Bao 2020 [30] (efficiency: mid): provide a healthy marketplace for collaborative training models.
Our scheme (efficiency: high): select clients using attribute-based encryption.
Table 3. Hardware and software setup.
CPU: Intel Core i9-9900X @ 3.50 GHz
Memory: 128 GB
GPU: NVIDIA GeForce RTX 2080 Ti × 8
CUDA version: 12.0
Programming language: Python 3.9
Operating system: Ubuntu 18.04.6 LTS
Federated learning framework: PyTorch 1.10.2
Table 4. Training results for different numbers of selected participants.
Algorithm | C | Average Accuracy (F-MNIST / CIFAR-10) | Highest Accuracy (F-MNIST / CIFAR-10) | ToA@0.85 (F-MNIST) / ToA@0.70 (CIFAR-10)
FedAvg | 0.1 | 0.8318 / 0.6498 | 0.8567 / 0.6809 | 393 / -
ABEFedAvg | 0.1 | 0.8816 / 0.7346 | 0.8863 / 0.7433 | 70 / 241
FedAvg | 0.2 | 0.8631 / 0.7046 | 0.8713 / 0.7121 | 175 / 669
ABEFedAvg | 0.2 | 0.8943 / 0.7508 | 0.8974 / 0.7583 | 62 / 167
FedAvg | 0.3 | 0.8778 / 0.7115 | 0.8803 / 0.7153 | 127 / 462
ABEFedAvg | 0.3 | 0.8893 / 0.7378 | 0.8912 / 0.7412 | 73 / 195
Table 5. Training results under different independent and identically distributed settings.
Algorithm | Metric | F-MNIST IID | F-MNIST α = 0.5 | F-MNIST α = 0.1 | CIFAR-10 IID | CIFAR-10 α = 0.5 | CIFAR-10 α = 0.1
FedAvg | Average accuracy | 0.9118 | 0.8631 | 0.7522 | 0.8120 | 0.7046 | 0.6680
FedAvg | Highest accuracy | 0.9127 | 0.8713 | 0.7811 | 0.8134 | 0.7121 | 0.6772
FedAvg | ToA@0.85 / ToA@0.70 | 24 | 175 | - | 66 | 669 | -
ABEFedAvg | Average accuracy | 0.9223 | 0.8943 | 0.8303 | 0.8237 | 0.7508 | 0.7286
ABEFedAvg | Highest accuracy | 0.9228 | 0.8974 | 0.8389 | 0.8253 | 0.7583 | 0.7433
ABEFedAvg | ToA@0.85 / ToA@0.70 | 22 | 62 | - | 52 | 167 | 237
Table 6. Training results for different federated learning algorithms.
Algorithm | Average Accuracy (F-MNIST / CIFAR-10) | Highest Accuracy (F-MNIST / CIFAR-10) | ToA@0.85 (F-MNIST) / ToA@0.70 (CIFAR-10)
FedAvg | 0.8631 / 0.7046 | 0.8713 / 0.7121 | 175 / 669
FedProx | 0.8747 / 0.7100 | 0.8802 / 0.7192 | 143 / 401
FedIR | 0.8786 / 0.7202 | 0.8827 / 0.7266 | 106 / 293
ABEFedAvg | 0.8943 / 0.7508 | 0.8974 / 0.7583 | 70 / 167
ABEFedProx | 0.8970 / 0.7597 | 0.9011 / 0.7666 | 61 / 146
ABEFedIR | 0.9025 / 0.7725 | 0.9058 / 0.7803 | 51 / 125
Table 7. Training results for different participant selection schemes.
Algorithm | Average Accuracy (FashionMNIST / CIFAR-10) | Highest Accuracy (FashionMNIST / CIFAR-10) | ToA@0.85 (FashionMNIST) / ToA@0.70 (CIFAR-10)
FedAvg | 0.8631 / 0.7046 | 0.8713 / 0.7121 | 65 / 669
Newt [53] | 0.8782 / 0.7294 | 0.8814 / 0.7353 | 39 / 213
FedFNS [45] | 0.8715 / 0.7148 | 0.8766 / 0.7207 | 42 / 341
ABEFedAvg | 0.8943 / 0.7508 | 0.8974 / 0.7583 | 29 / 167