A Fair Contribution Measurement Method for Federated Learning
Abstract
1. Introduction
- In Shapley-based contribution measurement, gradients are reused: we reconstruct sub-models from gradient approximations instead of retraining them, which saves substantial time, and the reconstructed models can then be evaluated directly (a sketch follows this list).
- Considering the heterogeneity of data distributions, a new aggregation weight is introduced to mitigate the impact of data heterogeneity on contribution measurement and to improve its accuracy.
- We propose a novel metric for measuring participant contributions in federated learning. By using the new aggregation weights, the method mitigates the effect of data heterogeneity and yields a fairer assessment of each participant's contribution to the federated learning process.
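To make the gradient-reuse idea in the first bullet concrete, the following is a minimal sketch under our own assumptions (the names, flat parameter vectors, FedAvg-style size weighting, and the single SGD step are ours, not the paper's notation):

```python
import numpy as np

def reconstruct_submodel(w_prev, grads, sizes, coalition, lr=0.1):
    """Approximate the model a coalition S would produce in one round by
    replaying only its members' cached gradients, instead of retraining.

    w_prev    -- flat parameter vector at the start of the round
    grads     -- dict: participant id -> flat gradient vector
    sizes     -- dict: participant id -> local dataset size
    coalition -- iterable of participant ids (the subset S)
    """
    members = list(coalition)
    if not members:                      # empty coalition: model unchanged
        return w_prev
    total = sum(sizes[i] for i in members)
    agg = sum((sizes[i] / total) * grads[i] for i in members)
    return w_prev - lr * agg
```

Because every coalition's sub-model comes from the same cached gradients, evaluating all coalitions in a round costs cheap reconstructions rather than full trainings.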
2. Background
2.1. IID and Non-IID DATA
2.2. Shapley Value
2.3. Federated Learning
3. Related Work
4. Materials and Methods
4.1. Contribution Measurement Method Based on the Shapley Value
- Group Rationality: The value of the entire dataset is completely distributed among all users, i.e., $U(N) = \sum_{i \in N} \phi_i$, where $U$ is the utility function, $N$ is the set of all users, and $\phi_i$ is user $i$'s value.
- Fairness: (1) Two users who are identical with respect to what they contribute to a dataset's utility should have the same value: if users $i$ and $j$ are equivalent in the sense that $U(S \cup \{i\}) = U(S \cup \{j\})$ for every subset $S \subseteq N \setminus \{i, j\}$, then $\phi_i = \phi_j$; (2) users with zero marginal contribution to all subsets of the dataset receive zero payoff: if $U(S \cup \{i\}) = U(S)$ for all $S \subseteq N \setminus \{i\}$, then $\phi_i = 0$.
- Additivity: The values under multiple utilities sum up to the value under a utility that is the sum of these utilities: $\phi_i(U_1 + U_2) = \phi_i(U_1) + \phi_i(U_2)$, where $U_1$ and $U_2$ are two utility functions.
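For concreteness, here is a minimal sketch of the exact Shapley computation these three properties axiomatize, assuming `utility` maps a frozenset of participants to sub-model performance (the weighted sum below is the standard form that Formula (3) takes); the exponential loop over coalitions is precisely what the approximation methods in Section 5.2 try to avoid:

```python
from itertools import combinations
from math import factorial

def exact_shapley(players, utility):
    """Exact Shapley values:
    phi_i = sum over coalitions S not containing i of
            |S|! (n - |S| - 1)! / n! * (U(S + {i}) - U(S)).
    Exponential in len(players), hence only feasible for small n."""
    n = len(players)
    phi = {}
    for i in players:
        others = [p for p in players if p != i]
        total = 0.0
        for k in range(n):                      # coalition sizes 0 .. n-1
            for S in combinations(others, k):
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += w * (utility(frozenset(S) | {i}) - utility(frozenset(S)))
        phi[i] = total
    return phi
```

With five participants this touches only $2^5$ coalitions, but the cost doubles with every additional client, motivating the sampling-based estimators compared in Section 5.2.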
4.2. Gradient-Based Model Reconstruction
4.3. Build a New Aggregate Weight
4.4. New Aggregate Function
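The authors' exact weight function is defined in this section of the paper. As a purely illustrative stand-in (our own formula, in the general spirit of discrepancy-aware aggregation such as [16], not the paper's function), a client's weight can combine its data-size share with a penalty for how far its label distribution strays from the global one, governed by two trade-off hyper-parameters `a` and `b` (the paper likewise tunes a pair of hyper-parameters named a and b; see Section 5.4):

```python
import numpy as np

def heterogeneity_aware_weights(sizes, label_dists, a=0.5, b=0.1):
    """Illustrative aggregation weights (our assumption, not the paper's
    formula): start from each client's data-size share and down-weight
    clients whose label histogram diverges from the global one."""
    sizes = np.asarray(sizes, dtype=float)
    share = sizes / sizes.sum()
    dists = np.asarray(label_dists, dtype=float)          # one row per client
    global_dist = (sizes[:, None] * dists).sum(0) / sizes.sum()
    disc = np.linalg.norm(dists - global_dist, axis=1)    # L2 discrepancy
    raw = np.maximum(a * share - b * disc, 0.0) + 1e-12   # keep weights >= 0
    return raw / raw.sum()
```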
4.5. Contribution Measurement Algorithm Based on the Shapley Value
Algorithm 1: FL participant contribution evaluation.
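The body of the listing did not survive extraction. The following is a hedged reconstruction of its flow as described in Sections 4.1–4.5 (round loop, gradient caching, per-round Shapley over gradient-reconstructed sub-models, accumulation across rounds), reusing `reconstruct_submodel` and `exact_shapley` from the sketches above; all remaining names are ours:

```python
def evaluate_contributions(w0, grads_per_round, sizes, utility, lr=0.1):
    """Hedged outline of Algorithm 1 (all names ours): per-round Shapley
    over gradient-reconstructed sub-models, accumulated across rounds.

    grads_per_round -- list over rounds of {participant id: gradient}
    utility         -- maps a parameter vector to a test-set score
    """
    ids = list(sizes)
    contrib = {i: 0.0 for i in ids}
    w = w0
    for grads in grads_per_round:
        # utility of a coalition = score of its reconstructed sub-model
        u = lambda S, w=w, grads=grads: utility(
            reconstruct_submodel(w, grads, sizes, S, lr))
        phi_t = exact_shapley(ids, u)   # or any estimator from Section 5.2
        for i in ids:
            contrib[i] += phi_t[i]      # round-level contributions add up
        # advance the global model; the paper replaces the plain size-based
        # weights here with its heterogeneity-aware ones (Sections 4.3/4.4)
        w = reconstruct_submodel(w, grads, sizes, ids, lr)
    return contrib
```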
5. Experiment and Results
5.1. Dataset
- Same Distribution and Same Size: In this setup, we randomly partitioned the dataset into five equally sized subsets, each containing the same number of images and maintaining the same label distribution.
- Same Distribution and Different Size: We extracted the data we intended to use from the training set and divided them into 20 portions to create a local dataset for each participant. The proportion for participant 1 is 2/20; for participant 2, it is 3/20; for participant 3, it is 4/20; for participant 4, it is 5/20; and for participant 5, it is 6/20. We ensured that the amount of data varied between participants, and within each participant’s dataset, each numerical category had the same quantity.
- Different Distributions and Same Size: In this setup, we divided the dataset into five equal parts, each with a distinct feature distribution. Specifically, the dataset for participant 1 comprises 80% of the samples labeled "1" and "2", while the remaining 20% are equally distributed among the other digits. The same strategy, with different majority labels, applies to participants 2, 3, and 4. The final client exclusively contains handwritten numeric samples labeled "8" and "9". (A code sketch of this split appears after this list.)
- Biased and Unbiased: This method builds upon the "Different Distributions and Same Size" method to further increase the heterogeneity of data distributions among clients. With each participant holding an equal number of samples, it uses four biased clients, each containing two categories of non-overlapping data, and one unbiased client with an equal number of samples from all ten categories.
- Noisy Labels and Same Size: The data partitioning in this setting is identical to that in the “Same Distribution and Same Size” method. Subsequently, varying proportions of Gaussian noise are introduced to the input images. The specific configuration is as follows: participant 1 has 0% Gaussian noise; participant 2 has 5%; participant 3 has 10%; participant 4 has 15%; and participant 5 has 20%.
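As one concrete illustration of these partition schemes, a helper of our own design (the function name and the torchvision-style label array are assumptions) can produce the "Different Distributions and Same Size" split:

```python
import numpy as np

def biased_partition(labels, majors, major_frac=0.8, size=None, seed=0):
    """Sample one client's indices: `major_frac` of them from the two
    `majors` classes, the rest drawn uniformly (hence roughly evenly)
    from the remaining classes."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    size = size or len(labels) // 5
    n_major = int(major_frac * size)
    major_pool = np.flatnonzero(np.isin(labels, majors))
    minor_pool = np.flatnonzero(~np.isin(labels, majors))
    idx = np.concatenate([
        rng.choice(major_pool, n_major, replace=False),
        rng.choice(minor_pool, size - n_major, replace=False),
    ])
    rng.shuffle(idx)
    return idx

# e.g., participant 1 holds mostly "1"s and "2"s:
# part1 = biased_partition(train_labels, majors=[1, 2])
```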
5.2. Baseline Algorithm
- Exact Shapley: This is the exact computation method proposed in [32]. It calculates the original Shapley values according to Formula (3), which involves a large number of sub-model trainings: every possible coalition of participants is evaluated, and each sub-model is trained on the coalition members' respective datasets.
- TMC Shapley: This is the truncated Monte Carlo (TMC) Shapley algorithm from [32], which uses local datasets and the initial FL model to train models for subsets of FL participants. To avoid wasted computation, it estimates Shapley values by randomly sampling permutations and truncating unnecessary sub-model training and evaluation. Specifically, in each iteration the algorithm draws a random permutation of the training datasets and compares the performance of the model trained on the first j datasets of that permutation with the performance of the model trained on all datasets. If the difference is below a predefined performance tolerance, subsequent datasets are assumed to add no new marginal contribution and no further sub-models are trained; otherwise, the model is retrained on the first j datasets to obtain its performance. (A minimal sketch of this sampling loop appears after this list.)
- K-subset Shapley: This method [34] samples subsets stratified by size: for each participant it randomly draws subsets of every possible size K, records how often each size occurs, and estimates the participant's Shapley value as the expected marginal contribution to a random K-size subset. This stratification preserves the hierarchical structure of the Shapley value and achieves high approximation precision.
- SOR Shapley: Similar to the OR method in [7], this method uses gradients to reconstruct sub-models, so local users need not retrain and computational resources are saved. The Shapley value of each participant is calculated and recorded at every training epoch, and these per-round results are aggregated to reflect each client's overall contribution in federated learning.
- TMR Shapley: This is the truncated multi-round (TMR) construction introduced in [35], an improvement of the MR algorithm. It uses a decay factor together with each round's accuracy to control the weight of the round-level contributions in the final result; once a round's weight becomes negligible, models for that round are no longer constructed or evaluated. Specifically, during the iterative process of federated learning, when calculating each participant's round-level contribution, we check whether the round's weight falls below a preset threshold (at which point the round-level contribution can be considered negligible for the final result). If it does, the contribution assessment is truncated and its result is not included in the final calculation of the participant's contribution, which saves computation time and improves efficiency.
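Below is a minimal sketch of the TMC sampling loop described above, a simplification of [32] under our assumptions (`players` is a list of participants, `utility` maps a frozenset of participants to sub-model performance, and the tolerance handling is ours):

```python
import random

def tmc_shapley(players, utility, iters=200, tol=1e-3):
    """Truncated Monte Carlo Shapley: average marginal gains over random
    permutations, truncating a permutation once a prefix already performs
    within `tol` of the grand coalition (later additions add ~no value)."""
    players = list(players)
    v_full = utility(frozenset(players))
    phi = {p: 0.0 for p in players}
    for _ in range(iters):
        perm = random.sample(players, len(players))   # random permutation
        prefix = frozenset()
        v_prev = utility(prefix)
        for p in perm:
            if abs(v_full - v_prev) < tol:
                v_new = v_prev               # truncated: skip training/eval
            else:
                prefix = prefix | {p}
                v_new = utility(prefix)
            phi[p] += (v_new - v_prev) / iters
            v_prev = v_new
    return phi
```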
5.3. Performance Evaluation Metrics
- Time: We compare the time each algorithm requires for model training and for computing the contribution index.
- SV: We compare the participants' Shapley values obtained by the different algorithms in each scenario.
- Accuracy: We compare the model accuracy achieved by the different algorithms in each scenario.
5.4. Hyper-Parameters Setting
5.5. Experimental Results
5.5.1. Experimental Results on MNIST
5.5.2. Experimental Results on Fashion-MNIST
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Simeone, O. A very brief introduction to machine learning with applications to communication systems. IEEE Trans. Cogn. Commun. Netw. 2018, 4, 648–664. [Google Scholar] [CrossRef]
- Qu, Z.; Zhang, Z.; Liu, B.; Tiwari, P.; Ning, X.; Muhammad, K. Quantum detectable Byzantine agreement for distributed data trust management in blockchain. Inf. Sci. 2023, 637, 118909. [Google Scholar] [CrossRef]
- Yang, C.; Liu, J.; Sun, H.; Li, T.; Li, Z. WTDP-Shapley: Efficient and effective incentive mechanism in federated learning for intelligent safety inspection. IEEE Trans. Big Data 2022, 13, 2096–2108. [Google Scholar] [CrossRef]
- Liu, H.; Zhang, C.; Chen, X.; Tai, W. Optimizing Collaborative Crowdsensing: A Graph Theoretical Approach to Team Recruitment and Fair Incentive Distribution. Sensors 2024, 24, 2983. [Google Scholar] [CrossRef]
- Yang, Q.; Liu, Y.; Chen, T.; Tong, Y. Federated machine learning: Concept and applications. ACM Trans. Intell. Syst. Technol. (TIST) 2019, 10, 1–19. [Google Scholar] [CrossRef]
- Lu, J.; Liu, H.; Jia, R.; Zhang, Z.; Wang, X.; Wang, J. Incentivizing proportional fairness for multi-task allocation in crowdsensing. IEEE Trans. Serv. Comput. 2023, 17, 990–1000. [Google Scholar] [CrossRef]
- Song, T.; Tong, Y.; Wei, S. Profit allocation for federated learning. In Proceedings of the 2019 IEEE International Conference on Big Data (Big Data), Los Angeles, CA, USA, 9–12 December 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 2577–2586. [Google Scholar]
- Hussain, G.J.; Manoj, G. Federated learning: A survey of a new approach to machine learning. In Proceedings of the 2022 First International Conference on Electrical, Electronics, Information and Communication Technologies (ICEEICT), Trichy, India, 16–18 February 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 1–8. [Google Scholar]
- FedAI. Available online: https://www.fedai.org (accessed on 10 May 2024).
- Xiao, Z.; Fang, H.; Jiang, H.; Bai, J.; Havyarimana, V.; Chen, H.; Jiao, L. Understanding private car aggregation effect via spatio-temporal analysis of trajectory data. IEEE Trans. Cybern. 2021, 53, 2346–2357. [Google Scholar] [CrossRef]
- Hsu, H.Y.; Keoy, K.H.; Chen, J.R.; Chao, H.C.; Lai, C.F. Personalized Federated Learning Algorithm with Adaptive Clustering for Non-IID IoT Data Incorporating Multi-Task Learning and Neural Network Model Characteristics. Sensors 2023, 23, 9016. [Google Scholar] [CrossRef] [PubMed]
- Hellström, H.; da Silva, J.M.B., Jr.; Amiri, M.M.; Chen, M.; Fodor, V.; Poor, H.V.; Fischione, C. Wireless for machine learning: A survey. Found. Trends Signal Process. 2022, 15, 290–399. [Google Scholar] [CrossRef]
- Che, L.; Wang, J.; Zhou, Y.; Ma, F. Multimodal federated learning: A survey. Sensors 2023, 23, 6986. [Google Scholar] [CrossRef]
- Wang, S.; Zhao, H.; Wen, W.; Xia, W.; Wang, B.; Zhu, H. Contract Theory Based Incentive Mechanism for Clustered Vehicular Federated Learning. IEEE Trans. Intell. Transp. Syst. 2024, 25, 8134–8147. [Google Scholar] [CrossRef]
- Ye, R.; Xu, M.; Wang, J.; Xu, C.; Chen, S.; Wang, Y. Feddisco: Federated learning with discrepancy-aware collaboration. In Proceedings of the International Conference on Machine Learning, Honolulu, HI, USA, 23–29 July 2023; PMLR: London, UK, 2023; pp. 39879–39902. [Google Scholar]
- Seol, M.; Kim, T. Performance enhancement in federated learning by reducing class imbalance of non-iid data. Sensors 2023, 23, 1152. [Google Scholar] [CrossRef] [PubMed]
- Yong, W.; Guoliang, L.; Kaiyu, L. Survey on contribution evaluation for federated learning. J. Softw. 2022, 34, 1168–1192. [Google Scholar]
- Clauset, A. Inference, models and simulation for complex systems. Tech. Rep. 2011. Available online: https://aaronclauset.github.io/courses/7000/csci7000-001_2011_L0.pdf (accessed on 10 May 2024).
- Sattler, F.; Wiedemann, S.; Müller, K.R.; Samek, W. Robust and communication-efficient federated learning from non-iid data. IEEE Trans. Neural Netw. Learn. Syst. 2019, 31, 3400–3413. [Google Scholar] [CrossRef] [PubMed]
- Meng, X.; Li, Y.; Lu, J.; Ren, X. An Optimization Method for Non-IID Federated Learning Based on Deep Reinforcement Learning. Sensors 2023, 23, 9226. [Google Scholar] [CrossRef] [PubMed]
- Shapley, L. A value for n-person games. In Classics in Game Theory; Princeton University Press: Princeton, NJ, USA, 2020; pp. 69–79. [Google Scholar]
- Liu, Z.; Chen, Y.; Yu, H.; Liu, Y.; Cui, L. Gtg-shapley: Efficient and accurate participant contribution evaluation in federated learning. ACM Trans. Intell. Syst. Technol. (TIST) 2022, 13, 1–21. [Google Scholar] [CrossRef]
- Kawamura, N.; Sato, W.; Shimokawa, K.; Fujita, T.; Kawanishi, Y. Machine Learning-Based Interpretable Modeling for Subjective Emotional Dynamics Sensing Using Facial EMG. Sensors 2024, 24, 1536. [Google Scholar] [CrossRef] [PubMed]
- Liu, X.; Dong, X.; Jia, N.; Zhao, W. Federated Learning-Oriented Edge Computing Framework for the IIoT. Sensors 2024, 24, 4182. [Google Scholar] [CrossRef] [PubMed]
- Zhu, H.; Li, Z.; Zhong, D.; Li, C.; Yuan, Y. Shapley-value-based Contribution Evaluation in Federated Learning: A Survey. In Proceedings of the 2023 IEEE 3rd International Conference on Digital Twins and Parallel Intelligence (DTPI), Orlando, FL, USA, 7–9 November 2023; IEEE: Piscataway, NJ, USA, 2023; pp. 1–5. [Google Scholar]
- Verbraeken, J.; Wolting, M.; Katzy, J.; Kloppenburg, J.; Verbelen, T.; Rellermeyer, J.S. A survey on distributed machine learning. ACM Comput. Surv. (CSUR) 2020, 53, 1–33. [Google Scholar] [CrossRef]
- Li, T.; Sahu, A.K.; Talwalkar, A.; Smith, V. Federated learning: Challenges, methods, and future directions. IEEE Signal Process. Mag. 2020, 37, 50–60. [Google Scholar] [CrossRef]
- Abhishek, V.; Binny, S.; Johan, T.R.; Raj, N.; Thomas, V. Federated Learning: Collaborative Machine Learning without Centralized Training Data. Int. J. Eng. Technol. Manag. Sci. 2022, 6, 355–359. [Google Scholar] [CrossRef]
- Uprety, A.; Rawat, D.B.; Li, J. Privacy preserving misbehavior detection in IoV using federated machine learning. In Proceedings of the 2021 IEEE 18th Annual Consumer Communications & Networking Conference (CCNC), Virtual Event, 9–12 January 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 1–6. [Google Scholar]
- Lyu, L.; Xu, X.; Wang, Q.; Yu, H. Collaborative fairness in federated learning. In Federated Learning: Privacy and Incentive; Springer: Berlin/Heidelberg, Germany, 2020; pp. 189–204. [Google Scholar]
- Kearns, M.; Ron, D. Algorithmic stability and sanity-check bounds for leave-one-out cross-validation. In Proceedings of the Tenth Annual Conference on Computational Learning Theory, Nashville, TN, USA, 6–9 July 1997; pp. 152–162. [Google Scholar]
- Ghorbani, A.; Zou, J. Data shapley: Equitable valuation of data for machine learning. In Proceedings of the International Conference on Machine Learning, Long Beach, CA, USA, 10–15 June 2019; PMLR: London, UK, 2019; pp. 2242–2251. [Google Scholar]
- Wang, G.; Dang, C.X.; Zhou, Z. Measure contribution of participants in federated learning. In Proceedings of the 2019 IEEE International Conference on Big Data (Big Data), Los Angeles, CA, USA, 9–12 December 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 2597–2604. [Google Scholar]
- Jia, R.; Dao, D.; Wang, B.; Hubis, F.A.; Hynes, N.; Gürel, N.M.; Li, B.; Zhang, C.; Song, D.; Spanos, C.J. Towards efficient data valuation based on the shapley value. In Proceedings of the 22nd International Conference on Artificial Intelligence and Statistics, Okinawa, Japan, 16–18 April 2019; PMLR: London, UK, 2019; pp. 1167–1176. [Google Scholar]
- Wei, S.; Tong, Y.; Zhou, Z.; Song, T. Efficient and Fair Data Valuation for Horizontal Federated Learning; Springer International Publishing: Cham, Switzerland, 2020; Volume 10, pp. 139–152. [Google Scholar]
- Kang, J.; Xiong, Z.; Niyato, D.; Xie, S.; Zhang, J. Incentive mechanism for reliable federated learning: A joint optimization approach to combining reputation and contract theory. IEEE Internet Things J. 2019, 6, 10700–10714. [Google Scholar] [CrossRef]
- Kang, J.; Xiong, Z.; Niyato, D.; Zou, Y.; Zhang, Y.; Guizani, M. Reliable federated learning for mobile networks. IEEE Wirel. Commun. 2020, 27, 72–80. [Google Scholar] [CrossRef]
- Zhu, H.; Xu, J.; Liu, S.; Jin, Y. Federated learning on non-IID data: A survey. Neurocomputing 2021, 465, 371–390. [Google Scholar] [CrossRef]
- McMahan, B.; Moore, E.; Ramage, D.; Hampson, S.; y Arcas, B.A. Communication-efficient learning of deep networks from decentralized data. In Proceedings of the Artificial Intelligence and Statistics, Ft. Lauderdale, FL, USA, 20–22 April 2017; PMLR: London, UK, 2017; pp. 1273–1282. [Google Scholar]
- Wang, J.; Liu, Q.; Liang, H.; Joshi, G.; Poor, H.V. Tackling the objective inconsistency problem in heterogeneous federated optimization. Adv. Neural Inf. Process. Syst. 2020, 33, 7611–7623. [Google Scholar]
- Hsu, T.M.H.; Qi, H.; Brown, M. Measuring the effects of non-identical data distribution for federated visual classification. arXiv 2019, arXiv:1909.06335. [Google Scholar]
- Zhang, L.; Shen, L.; Ding, L.; Tao, D.; Duan, L.Y. Fine-tuning global model via data-free knowledge distillation for non-iid federated learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–22 June 2022; pp. 10174–10183. [Google Scholar]
- Zhu, Z.; Hong, J.; Zhou, J. Data-free knowledge distillation for heterogeneous federated learning. In Proceedings of the International Conference on Machine Learning, Virtual Event, 18–24 July 2021; PMLR: London, UK, 2021; pp. 12878–12889. [Google Scholar]
- Jung, J.P.; Ko, Y.B.; Lim, S.H. Federated Learning with Pareto Optimality for Resource Efficiency and Fast Model Convergence in Mobile Environments. Sensors 2024, 24, 2476. [Google Scholar] [CrossRef] [PubMed]
- Castro, J.; Gómez, D.; Tejada, J. Polynomial calculation of the Shapley value based on sampling. Comput. Oper. Res. 2009, 36, 1726–1730. [Google Scholar] [CrossRef]
- Yang, C.; Hou, Z.; Guo, S.; Chen, H.; Li, Z. SWATM: Contribution-Aware Adaptive Federated Learning Framework Based on Augmented Shapley Values. In Proceedings of the 2023 IEEE International Conference on Multimedia and Expo (ICME), Brisbane, Australia, 10–14 July 2023; IEEE: Piscataway, NJ, USA, 2023; pp. 672–677. [Google Scholar]
- Dong, L.; Liu, Z.; Zhang, K.; Yassine, A.; Hossain, M.S. Affordable federated edge learning framework via efficient Shapley value estimation. Future Gener. Comput. Syst. 2023, 147, 339–349. [Google Scholar] [CrossRef]
- LeCun, Y.; Cortes, C.; Burges, C. MNIST Handwritten Digit Database. 2020. ATT Labs. Available online: http://yann.lecun.com/exdb/mnist (accessed on 10 May 2024).
- Xiao, H.; Rasul, K.; Vollgraf, R. Fashion-mnist: A novel image dataset for benchmarking machine learning algorithms. arXiv 2017, arXiv:1708.07747. [Google Scholar]
- Zhang, J.; Guo, S.; Qu, Z.; Zeng, D.; Zhan, Y.; Liu, Q.; Akerkar, R. Adaptive federated learning on non-iid data with resource constraint. IEEE Trans. Comput. 2021, 71, 1655–1667. [Google Scholar] [CrossRef]
Name | Time Complexity | Approach | Characteristics |
---|---|---|---|
Exact Shapley [32] | | Submodel reconstruction | High computation complexity, time-consuming |
TMC Shapley [32] | | Truncation, model approximation | Reduced computation, high error risk |
K-sub Shapley [34] | | Stratified sampling | Reduced computation, loss of some precision |
SOR Shapley [7] | | Model approximation | Reduced computation, unnecessary estimation |
TMR Shapley [35] | | Truncation, model approximation | Truncation reduces computation, high error risk |
Ours | | Gradient-based model reconstruction | Reduced computation, mitigates the effects of non-IID data, limited sensitivity to noisy data |
a\b | 0.05 | 0.1 | 0.2 | 0.3 | 0.4 |
---|---|---|---|---|---|
0.2 | 0.56 | 0.03 | 0.20 | 0.32 | 0.63 |
0.3 | 0.43 | 0.74 | 0.39 | 0.31 | 0.12 |
0.4 | 0.21 | 1.09 | 0.14 | 0.22 | 0.16 |
0.5 | 0.72 | 1.29 | 0.57 | 0.20 | 0.89 |
0.6 | 0.17 | 2.54 | 0.77 | 0.66 | 1.39 |
0.7 | 0.61 | 0.76 | 0.71 | 1.19 | 0.06 |
a\b | 0.05 | 0.1 | 0.2 | 0.3 | 0.4 |
---|---|---|---|---|---|
0.2 | 0.74 | 0.27 | 0.06 | 0.79 | 0.19 |
0.3 | 0.39 | 0.89 | 0.24 | 0.89 | 0.27 |
0.4 | 0.53 | 0.49 | 0.52 | 0.35 | 0.59 |
0.5 | 0.49 | 0.19 | 0.69 | 0.45 | 0.69 |
0.6 | 0.77 | 1.97 | 0.71 | 0.58 | 0.56 |
0.7 | 0.89 | 0.54 | 0.83 | 0.83 | 0.19 |
Name | Time | Accuracy |
---|---|---|
Exact Shapley [32] | 10,840.24 s | 84.93% |
TMC Shapley [32] | 720.56 s | 86.95% |
K-subset Shapley [34] | 601.76 s | 86.87% |
SOR Shapley [7] | 636.94 s | 86.70% |
TMR Shapley [35] | 574.05 s | 86.69% |
Ours | 646.35 s | 90.64% |
Name | Time | Accuracy |
---|---|---|
Exact Shapley [32] | 11,570.86 s | 85.43% |
TMC Shapley [32] | 737.58 s | 86.71% |
K-subset Shapley [34] | 630.42 s | 86.32% |
SOR Shapley [7] | 600.94 s | 86.83% |
TMR Shapley [35] | 587.43 s | 86.72% |
Ours | 729.98 s | 88.38% |
Name | Time | Accuracy |
---|---|---|
Exact Shapley [32] | 10,865.56 s | 84.42% |
TMC Shapley [32] | 659.58 s | 85.53% |
K-subset Shapley [34] | 679.24 s | 85.72% |
SOR Shapley [7] | 695.14 s | 85.83% |
TMR Shapley [35] | 564.35 s | 85.68% |
Ours | 655.78 s | 86.96% |
Name | Time | Accuracy |
---|---|---|
Exact Shapley [32] | 10,961.56 s | 83.25% |
TMC Shapley [32] | 650.53 s | 82.98% |
K-subset Shapley [34] | 662.89 s | 83.32% |
SOR Shapley [7] | 637.49 s | 82.83% |
TMR Shapley [35] | 587.93 s | 82.51% |
Ours | 655.78 s | 84.39% |
Name | Time | Accuracy |
---|---|---|
Exact Shapley [32] | 143,196.81 s | 78.85% |
TMC Shapley [32] | 784.17 s | 79.12% |
K-subset Shapley [34] | 630.42 s | 79.26% |
SOR Shapley [7] | 625.95 s | 78.90% |
TMR Shapley [35] | 565.53 s | 78.94% |
Ours | 716.84 s | 80.86% |
Name | Time | Accuracy |
---|---|---|
Exact Shapley [32] | 32,517.0 s | 78.29% |
TMC Shapley [32] | 1546.66 s | 78.69% |
K-subset Shapley [34] | 1286.26 s | 79.37% |
SOR Shapley [7] | 1185.95 s | 79.51% |
TMR Shapley [35] | 1424.55 s | 79.78% |
Ours | 2118.53 s | 81.45% |