Decentralized Federated Learning-Enabled Relation Aggregation for Anomaly Detection
Abstract
1. Introduction
- Reduce over-reliance on high-quality expert experience. To save labor and time in the loss determination, pricing, and compensation process, these companies combine expert knowledge-based empirical information bases with image recognition technology to help identify damaged parts, classify damage types, and assess the extent of damage.
- Improve detection efficiency using machine learning. Insurance companies have extremely high requirements for the timeliness of auto insurance claims. To ensure high fraud detection accuracy, insurers mine features from historical cases and apply machine learning algorithms such as decision trees [13], random forests [14], logistic regression [15], and XGBoost [16,17] to train AI models that can be applied during reporting and surveying.
- Utilize graph structure to mine team fraud. Because insurance fraud in the current auto insurance industry is increasingly team-based and industrialized, inspectors may struggle to quickly unravel the implicit connections between cases in team fraud. Companies therefore leverage the network structure of graphs to mine additional feature factors from perspectives such as incident location and reporting time. By identifying abnormal nodes, edges, and neighborhood combinations, they aim to improve the efficiency and success rate of team fraud detection [18].
- We propose Decentralized Federated Learning-enabled Relation aggregation (DFLR), a framework based on relation embedding aggregation for auto insurance anomaly detection. Each participating insurance company acts in either a client role or a server role during training: a client executes KGE training locally, while a server performs average aggregation on relation embeddings to enhance their expressive capability.
- We design a dynamic server selection mechanism that continuously reorganizes federation groups based on data dissimilarity among clients. This approach improves training efficiency and quality while preserving privacy.
- We conduct experiments on a real auto insurance dataset. The effectiveness of DFLR is demonstrated through comparative experiments. In particular, the ComplEx+SVM model with DFLR improves the average prediction precision to 0.6575, which is 0.0898 higher than the result of training only on private data.
2. Related Work
2.1. Decentralized Federated Learning
2.2. Knowledge Graph Embedding
3. Proposed Approach
3.1. DFLR Framework
Algorithm 1: The DFLR framework.
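The three-stage federation round that the DFLR framework iterates (server distributes parameters, clients train locally, server aggregates relation embeddings) can be sketched in a few lines. This is a hedged illustration, not the paper's implementation: `local_train`, the relation names, and the embedding values are all made-up assumptions.

```python
# Minimal sketch of one DFLR federation round (illustrative only).

def local_train(relations, step=0.1):
    # Stand-in for local KGE training: nudge every relation embedding.
    return {r: [v + step for v in emb] for r, emb in relations.items()}

def aggregate(embedding_list):
    # Server role: element-wise average of the clients' relation embeddings.
    n = len(embedding_list)
    return {
        r: [sum(emb[r][d] for emb in embedding_list) / n
            for d in range(len(embedding_list[0][r]))]
        for r in embedding_list[0]
    }

# Toy federation group: three clients sharing one relation vocabulary.
clients = [
    {"collision": [0.2, 0.4], "report": [0.1, 0.3]},
    {"collision": [0.6, 0.0], "report": [0.5, 0.1]},
    {"collision": [0.1, 0.2], "report": [0.0, 0.2]},
]

# (i) distribute, (ii) local training, (iii) upload and aggregate.
updated = [local_train(c, step=0.0) for c in clients]  # step=0.0: no-op training
global_relations = aggregate(updated)
print({r: [round(v, 6) for v in emb] for r, emb in global_relations.items()})
# {'collision': [0.3, 0.2], 'report': [0.2, 0.2]}
```

Only relation embeddings are exchanged here; entity embeddings stay local, which is the privacy-relevant design choice of the framework.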
3.1.1. Server Aggregation
3.1.2. Client Training
Model | Score Function
---|---
TransE [36]: TransE assumes that relations can be represented as translations between entities. It learns embeddings by minimizing the translation distance between the head and tail entities. | −‖h + r − t‖
TransH [47]: TransH builds on TransE by introducing relation-specific hyperplanes. Entity embeddings are projected onto these hyperplanes for more flexible modeling. | −‖h⊥ + d_r − t⊥‖
TransF [48]: TransF relaxes the strict translation of TransE and TransH, requiring only that h + r point in roughly the same direction as t, and scores triples via dot products. | (h + r)ᵀt + (t − r)ᵀh
RotatE [38]: RotatE represents relations as rotations in complex space and models the interaction between entities by rotating the head embedding in the complex plane. | −‖h ∘ r − t‖
DISTMULT [49]: DISTMULT measures the match between entities and a relation with a trilinear (three-way) dot product of their embeddings. | ⟨h, r, t⟩
HolE [50]: HolE composes head and tail entity embeddings via circular correlation and scores the result against the relation embedding. | rᵀ(h ⋆ t)
ComplEx [51]: ComplEx represents entities and relations as complex-valued vectors and scores triples through complex multiplication, which allows asymmetric relations to be modeled. | Re(⟨h, r, t̄⟩)
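A few of these score functions are simple enough to compute directly; the snippet below follows the models' published definitions, with toy embedding values chosen for illustration (they are not from the paper):

```python
import math

def transe_score(h, r, t):
    # TransE: negative L2 distance of the translation h + r - t.
    return -math.sqrt(sum((hi + ri - ti) ** 2 for hi, ri, ti in zip(h, r, t)))

def distmult_score(h, r, t):
    # DISTMULT: trilinear dot product <h, r, t>.
    return sum(hi * ri * ti for hi, ri, ti in zip(h, r, t))

def complex_score(h, r, t):
    # ComplEx: Re(<h, r, conj(t)>) over complex-valued embeddings.
    return sum((hi * ri * ti.conjugate()).real for hi, ri, ti in zip(h, r, t))

h, r, t = [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]
print(transe_score(h, r, t))    # prints -0.0: h + r = t exactly, the best score
print(distmult_score(h, r, t))  # 0.0 for these toy vectors

hc, rc, tc = [1 + 1j], [1j], [-1 + 1j]
print(complex_score(hc, rc, tc))  # 2.0
```

Higher scores indicate more plausible triples under each model; in DFLR these scores drive the local KGE training whose relation embeddings are then aggregated.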
3.1.3. Server Selection Mechanism
- (1) Each client records the triple embeddings of the previous Z federation rounds for subsequent traceability and comparison.
- (2) If a client's highest F1-score for fraud prediction on the validation set over the latest Z rounds is lower than its highest F1-score over the previous Z rounds, the client did not benefit from its current federation group. When more than P percent of the clients within one federation group have not benefited from training, all clients within that group are added to the Group Re-selected Client Sequences.
- (3.a) For all clients in the Group Re-selected Client Sequences, the client with the highest computing power computes the Euclidean distances between the clients' highest-quality relation embeddings from the previous Z rounds and uses them for K-means grouping. After the federation groups are constructed, the clients within each group are selected again, based on their numbers of triples, for the next round of federated training.
- (3.b) If training stagnation occurs only within a single federation group, each of its clients is reassigned to another group. Euclidean distance is again the grouping basis, computed here between the relation embeddings of the clients in the Group Re-selected Client Sequences and the relation embeddings of the clients assuming server roles within the other groups.
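Step (3.a), grouping stalled clients by K-means over Euclidean distances between their relation embeddings, can be sketched as below. The flattened client vectors, the choice of K, and the deterministic seeding are illustrative assumptions:

```python
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def kmeans(points, k, iters=10):
    centroids = points[:k]  # deterministic seeding, for the sketch only
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            # Assign each client to the nearest centroid.
            j = min(range(k), key=lambda c: euclidean(p, centroids[c]))
            groups[j].append(p)
        # Recompute centroids as the mean of each group.
        centroids = [
            [sum(dim) / len(g) for dim in zip(*g)] if g else centroids[j]
            for j, g in enumerate(groups)
        ]
    return groups

# Each stalled client's relation embeddings, flattened into one vector.
clients = [[0.0, 0.1], [0.9, 1.0], [0.1, 0.0], [1.0, 0.9]]
groups = kmeans(clients, k=2)
print([len(g) for g in groups])  # [2, 2]: two new federation groups
```

Clients with similar relation embeddings (i.e., similar data distributions) land in the same new federation group, which is the regrouping criterion the mechanism describes.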
4. Experimental Analysis
4.1. Evaluation Metrics and Data
4.2. Implementation
- Single-Client: In this mode, there is no cooperation between clients and they are trained using only their own private data.
- All-Clients: In this mode, competition between clients is ignored and no privacy protection is adopted. All clients pool their private data together, train a single detection model uniformly, and finally test the model effect with a test dataset on each client.
- DFLR-Clients: In this mode, each client can switch roles between client and server. In the local training phase, all clients rely only on local data for training. In the federation phase, the server selection mechanism runs first, and some clients switch to the server role. After collecting the relation embeddings uploaded by the clients in its group, each server performs average aggregation based on FedAVG and then distributes the result. Finally, each client evaluates the model on its own test dataset.
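The federation-phase aggregation in DFLR-Clients mode can be sketched as a FedAVG-style weighted average of the uploaded relation embeddings. The triple-count weighting shown here follows the usual FedAVG convention and is an assumption (the paper states only that average aggregation is based on FedAVG); the counts and vectors are toy values:

```python
# FedAVG-style aggregation of relation embeddings (illustrative sketch).

def fedavg_relations(uploads):
    """uploads: list of (num_triples, relation_embedding) pairs."""
    total = sum(n for n, _ in uploads)
    dim = len(uploads[0][1])
    # Weight each client's embedding by its share of the group's triples.
    return [sum(n * emb[d] for n, emb in uploads) / total for d in range(dim)]

uploads = [
    (100, [1.0, 0.0]),  # client holding 100 triples
    (300, [0.0, 1.0]),  # client holding 300 triples dominates the average
]
print(fedavg_relations(uploads))  # [0.25, 0.75]
```

With equal weights this reduces to the plain average; the weighted form simply lets data-rich clients pull the shared relation embeddings further toward their local optimum.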
4.3. Experiments
4.3.1. Verify the Effectiveness of DFLR Training Mode
4.3.2. The Impact of the Number of Participants N on One Specific Client
4.3.3. The Impact of the Number of Federation Groups on One Specific Client
5. Conclusions and Discussion
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Wang, H.; Fu, T.; Du, Y.; Gao, W.; Huang, K.; Liu, Z.; Chandak, P.; Liu, S.; Van Katwyk, P.; Deac, A.; et al. Scientific discovery in the age of artificial intelligence. Nature 2023, 620, 47–60. [Google Scholar] [CrossRef]
- Zhang, L.; Wu, T.; Chen, X.; Lu, B.; Na, C.; Qi, G. Auto Insurance Knowledge Graph Construction and Its Application to Fraud Detection. In Proceedings of the 10th International Joint Conference on Knowledge Graphs, Virtual, 6–8 December 2021; Association for Computing Machinery: New York, NY, USA, 2022; pp. 64–70. [Google Scholar]
- Dhieb, N.; Ghazzai, H.; Besbes, H.; Massoud, Y. Extreme Gradient Boosting Machine Learning Algorithm for Safe Auto Insurance Operations. In Proceedings of the 2019 IEEE International Conference on vehicular Electronics and Safety (ICVES), Cairo, Egypt, 4–6 September 2019; pp. 1–5. [Google Scholar]
- Zhang, R.; Che, T.; Ghahramani, Z.; Bengio, Y.; Song, Y. MetaGAN: An Adversarial Approach to Few-Shot Learning. Adv. Neural Inf. Process. Syst. 2018, 31, 2365–2374. [Google Scholar]
- Vincent, V.; Wannes, M.; Jesse, D. Transfer learning for anomaly detection through localized and unsupervised instance selection. In Proceedings of the AAAI Conference on Artificial Intelligence, New York, NY, USA, 7–12 February 2020; Volume 34, pp. 6054–6061. [Google Scholar]
- Chen, Z.; Duan, J.; Kang, L.; Qiu, G. Supervised Anomaly Detection via Conditional Generative Adversarial Network and Ensemble Active Learning. IEEE Trans. Pattern Anal. Mach. Intell. 2023, 45, 7781–7798. [Google Scholar] [CrossRef]
- Li, W.; Du, Q. Collaborative Representation for Hyperspectral Anomaly Detection. IEEE Trans. Geosci. Remote Sens. 2014, 53, 1463–1474. [Google Scholar] [CrossRef]
- Peng, H.; Li, H.; Song, Y.; Zheng, V.; Li, J. Differentially Private Federated Knowledge Graphs Embedding. In Proceedings of the 30th ACM International Conference on Information & Knowledge Management, Virtual, 1–5 November 2021; pp. 1416–1425. [Google Scholar]
- Benedek, B.; Ciumas, C.; Nagy, B.Z. Automobile insurance fraud detection in the age of big data–A systematic and comprehensive literature review. J. Financ. Regul. Compliance 2022, 30, 503–523. [Google Scholar] [CrossRef]
- Bhowmik, R. Detecting Auto Insurance Fraud by Data Mining Techniques. J. Emerg. Trends Comput. Inf. Sci. 2011, 2, 156–162. [Google Scholar]
- China Life 2022 Annual Property and Casualty Claims Settlement Service Report; Technical Report; China Life Insurance Company: Beijing, China, 2022.
- Liang, C.; Liu, Z.; Liu, B.; Zhou, J.; Li, X.; Yang, S.; Qi, Y. Uncovering insurance fraud conspiracy with network learning. In Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval, Paris, France, 21–25 July 2019; pp. 1181–1184. [Google Scholar]
- Askari, S.M.S.; Hussain, M.A. IFDTC4.5: Intuitionistic fuzzy logic based decision tree for E-transactional fraud detection. J. Inf. Secur. Appl. 2020, 52, 102469. [Google Scholar] [CrossRef]
- Lin, T.H.; Jiang, J.R. Credit Card Fraud Detection with Autoencoder and Probabilistic Random Forest. Mathematics 2021, 9, 2683. [Google Scholar] [CrossRef]
- Sanober, S.; Alam, I.; Pande, S.; Arslan, F.; Rane, K.P.; Singh, B.K.; Khamparia, A.; Shabaz, M. An Enhanced Secure Deep Learning Algorithm for Fraud Detection in Wireless Communication. Wirel. Commun. Mob. Comput. 2021, 2021, 6079582. [Google Scholar] [CrossRef]
- Kim, M.; Choi, J.; Kim, J.; Kim, W.; Baek, Y.; Bang, G.; Son, K.; Ryou, Y.; Kim, K.E. Trustworthy Residual Vehicle Value Prediction for Auto Finance. In Proceedings of the AAAI Conference on Artificial Intelligence, Washington, DC, USA, 7–14 February 2023; Volume 37, pp. 15537–15544. [Google Scholar]
- Jin, C.; Wang, J.; Teo, S.G.; Zhang, L.; Chan, C.; Hou, Q.; Aung, K.M.M. Towards end-to-end secure and efficient federated learning for xgboost. In Proceedings of the AAAI International Workshop on Trustable, Verifiable and Auditable Federated Learning, Vancouver, BC, Canada, 2 March 2022. [Google Scholar]
- Hilal, W.; Gadsden, S.A.; Yawney, J. Financial Fraud: A Review of Anomaly Detection Techniques and Recent Advances. Expert Syst. Appl. 2022, 193. [Google Scholar] [CrossRef]
- Chen, X.; Jia, S.; Xiang, Y. A Review: Knowledge Reasoning over Knowledge Graph. Expert Syst. Appl. 2020, 141, 112948. [Google Scholar] [CrossRef]
- Qu, Y.; Gao, L.; Luan, T.H.; Xiang, Y.; Yu, S.; Li, B.; Zheng, G. Decentralized Privacy Using Blockchain-enabled Federated Learning in Fog Computing. IEEE Internet Things J. 2020, 7, 5171–5183. [Google Scholar] [CrossRef]
- Kong, X.; Gao, H.; Shen, G.; Duan, G.; Das, S.K. FedVCP: A Federated-Learning-Based Cooperative Positioning Scheme for Social Internet of Vehicles. IEEE Trans. Comput. Soc. Syst. 2022, 9, 197–206. [Google Scholar] [CrossRef]
- Chen, M.; Zhang, W.; Yuan, Z.; Jia, Y.; Chen, H. Fede: Embedding knowledge graphs in federated setting. In Proceedings of the 10th International Joint Conference on Knowledge Graphs, Virtual, 6–8 December 2021; pp. 80–88. [Google Scholar]
- Federated AI Technology Enabler. Available online: https://github.com/FederatedAI/FATE (accessed on 21 August 2023).
- Li, L.; Fan, Y.; Tse, M.; Lin, K.Y. A review of applications in federated learning. Comput. Ind. Eng. 2020, 149, 106854. [Google Scholar] [CrossRef]
- Niu, Y.; Deng, W. Federated learning for face recognition with gradient correction. In Proceedings of the AAAI Conference on Artificial Intelligence, Palo Alto, CA, USA, 22 February–1 March 2022; Volume 36, pp. 1999–2007. [Google Scholar]
- Kong, X.; Wang, K.; Hou, M.; Hao, X.; Shen, G.; Chen, X.; Xia, F. A Federated Learning-Based License Plate Recognition Scheme for 5G-Enabled Internet of Vehicles. IEEE Trans. Ind. Inform. 2021, 17, 8523–8530. [Google Scholar] [CrossRef]
- Kong, X.; Zhang, W.; Qu, Y.; Yao, X.; Shen, G. FedAWR: An Interactive Federated Active Learning Framework for Air Writing Recognition. IEEE Trans. Mob. Comput. 2023, in press. [Google Scholar] [CrossRef]
- Hegedus, I.; Danner, G.; Jelasity, M. Decentralized learning works: An empirical comparison of gossip learning and federated learning. J. Parallel Distrib. Comput. 2021, 148, 109–124. [Google Scholar] [CrossRef]
- Lu, S.; Zhang, Y.; Wang, Y.; Mack, C. Learn electronic health records by fully decentralized federated learning. arXiv 2019, arXiv:1912.01792. [Google Scholar]
- Kalapaaking, A.P.; Khalil, I.; Rahman, M.S.; Atiquzzaman, M.; Yi, X.; Almashor, M. Blockchain-based federated learning with secure aggregation in trusted execution environment for internet-of-things. IEEE Trans. Ind. Inform. 2022, 19, 1703–1714. [Google Scholar] [CrossRef]
- Liu, W.; Chen, L.; Chen, Y.; Wang, W. Communication-Efficient Design for Quantized Decentralized Federated Learning. arXiv 2023, arXiv:2303.08423. [Google Scholar]
- Sun, T.; Li, D.; Wang, B. Decentralized federated averaging. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 45, 4289–4301. [Google Scholar] [CrossRef]
- Ji, S.; Pan, S.; Cambria, E.; Marttinen, P.; Philip, S.Y. A survey on knowledge graphs: Representation, acquisition, and applications. IEEE Trans. Neural Netw. Learn. Syst. 2021, 33, 494–514. [Google Scholar] [CrossRef] [PubMed]
- Huang, X.; Zhang, J.; Li, D.; Li, P. Knowledge graph embedding based question answering. In Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining, Melbourne, Australia, 11–15 February 2019; pp. 105–113. [Google Scholar]
- Wang, Q.; Mao, Z.; Wang, B.; Guo, L. Knowledge graph embedding: A survey of approaches and applications. IEEE Trans. Knowl. Data Eng. 2017, 29, 2724–2743. [Google Scholar] [CrossRef]
- Bordes, A.; Usunier, N.; Garcia-Duran, A.; Weston, J.; Yakhnenko, O. Translating embeddings for modeling multi-relational data. Adv. Neural Inf. Process. Syst. 2013, 26, 1–9. [Google Scholar]
- Lin, Y.; Liu, Z.; Sun, M.; Liu, Y.; Zhu, X. Learning entity and relation embeddings for knowledge graph completion. In Proceedings of the AAAI Conference on Artificial Intelligence, Austin, TX, USA, 25–30 January 2015; Volume 29. [Google Scholar]
- Sun, Z.; Deng, Z.H.; Nie, J.Y.; Tang, J. RotatE: Knowledge Graph Embedding by Relational Rotation in Complex Space. In Proceedings of the International Conference on Learning Representations, New Orleans, LA, USA, 6–9 May 2019. [Google Scholar]
- Zhang, Z.; Cai, J.; Zhang, Y.; Wang, J. Learning hierarchy-aware knowledge graph embeddings for link prediction. In Proceedings of the AAAI Conference on Artificial Intelligence, New York, NY, USA, 7–12 February 2020; Volume 34, pp. 3065–3072. [Google Scholar]
- Nickel, M.; Tresp, V.; Kriegel, H.P. A three-way model for collective learning on multi-relational data. In Proceedings of the ICML, Bellevue, WA, USA, 28 June–2 July 2011; Volume 11, pp. 809–816. [Google Scholar]
- Balazevic, I.; Allen, C.; Hospedales, T. Multi-relational poincaré graph embeddings. Adv. Neural Inf. Process. Syst. 2019, 32, 4463–4473. [Google Scholar]
- Zhang, J.; Huang, J.; Gao, J.; Han, R.; Zhou, C. Knowledge graph embedding by logical-default attention graph convolution neural network for link prediction. Inf. Sci. 2022, 593, 201–215. [Google Scholar] [CrossRef]
- Wang, Y.; Xu, W. Leveraging deep learning with LDA-based text analytics to detect automobile insurance fraud. Decis. Support Syst. 2018, 105, 87–95. [Google Scholar] [CrossRef]
- Akoglu, L.; Tong, H.; Koutra, D. Graph Based Anomaly Detection and Description: A Survey. Data Min. Knowl. Discov. 2015, 29, 626–688. [Google Scholar] [CrossRef]
- McMahan, B.; Moore, E.; Ramage, D.; Hampson, S.; Arcas, B.A.Y. Communication-Efficient Learning of Deep Networks from Decentralized Data. In Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, PMLR, Fort Lauderdale, FL, USA, 20–22 April 2017; Volume 54, pp. 1273–1282. [Google Scholar]
- Zhang, Y.; Yao, Q.; Shao, Y.; Chen, L. NSCaching: Simple and efficient negative sampling for knowledge graph embedding. In Proceedings of the 2019 IEEE 35th International Conference on Data Engineering (ICDE), Macao, China, 8–11 April 2019; pp. 614–625. [Google Scholar]
- Wang, Z.; Zhang, J.; Feng, J.; Chen, Z. Knowledge graph embedding by translating on hyperplanes. In Proceedings of the AAAI Conference on Artificial Intelligence, Quebec City, QC, Canada, 27–31 July 2014; Volume 28. [Google Scholar]
- Feng, J.; Huang, M.; Wang, M.; Zhou, M.; Hao, Y.; Zhu, X. Knowledge graph embedding by flexible translation. In Proceedings of the Fifteenth International Conference on the Principles of Knowledge Representation and Reasoning, Cape Town, South Africa, 25–29 April 2016. [Google Scholar]
- Yang, B.; Yih, W.; He, X.; Gao, J.; Deng, L. Embedding Entities and Relations for Learning and Inference in Knowledge Bases. In Proceedings of the 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, 7–9 May 2015. [Google Scholar]
- Nickel, M.; Rosasco, L.; Poggio, T. Holographic embeddings of knowledge graphs. In Proceedings of the AAAI Conference on Artificial Intelligence, Phoenix, AZ, USA, 12–17 February 2016; Volume 30. [Google Scholar]
- Trouillon, T.; Welbl, J.; Riedel, S.; Gaussier, É.; Bouchard, G. Complex embeddings for simple link prediction. Proc. Mach. Learn. Res. 2016, 48, 2071–2080. [Google Scholar]
- Hu, W.; Liao, Y.; Vemuri, V.R. Robust Anomaly Detection Using Support Vector Machines. In Proceedings of the International Conference on Machine Learning, Los Angeles, CA, USA, 23–24 June 2003; pp. 282–289. [Google Scholar]
Symbol | Definition
---|---
Round t | One complete federation round, consisting of three stages: (i) the server sends the parameters down to the clients; (ii) each client receives them for local training; (iii) each client uploads its parameters to the server for aggregation.
N, K | The number of clients; the number of federation groups.
E, B | The number of local training epochs of the client; the batch size of the client.
 | Knowledge graph owned by an auto insurance company; its entity embeddings; its relation embeddings.
 | Set of servers at round t; the server of the j-th federation group; the client set of the j-th federation group.
Z | The fixed number of rounds a client participates in federated training before the server selection mechanism is applied.
P | The proportion of clients within the same federation group that choose to regroup.
 | Predicted Positive | Predicted Negative
---|---|---
Real Positive | TP | FN
Real Negative | FP | TN
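From the confusion-matrix entries above, the evaluation metrics follow directly; the counts below are made up for illustration:

```python
# Precision, recall, and F1-score from confusion-matrix counts.

def prf(tp, fp, fn):
    precision = tp / (tp + fp)          # of predicted positives, how many are real
    recall = tp / (tp + fn)             # of real positives, how many were found
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return precision, recall, f1

p, r, f1 = prf(tp=30, fp=10, fn=30)
print(p, r, f1)  # 0.75 0.5 0.6
```

Because fraud cases are rare, precision/recall/F1 are more informative here than plain accuracy, which a model could inflate by predicting every claim as legitimate.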
Number of Clients N | Triples | Relations | Entities |
---|---|---|---|
6 | 120,898 | 9 | 13,563.5 |
12 | 60,449 | 10.45 | 671.75 |
The average results of the 6 clients:

Model | Training Mode | Precision | Recall | F1-Score
---|---|---|---|---
TransE + SVM | Single-Client | 0.2675 | 0.2016 | 0.2299
 | All-Clients | 0.4283 | 0.2763 | 0.3359
 | DFLR-Clients | 0.4029 | 0.2372 | 0.2987
TransH + SVM | Single-Client | 0.2862 | 0.2348 | 0.2580
 | All-Clients | 0.4079 | 0.2783 | 0.3309
 | DFLR-Clients | 0.4183 | 0.2902 | 0.3427
TransF + SVM | Single-Client | 0.2822 | 0.1928 | 0.2291
 | All-Clients | 0.4324 | 0.2863 | 0.3445
 | DFLR-Clients | 0.3739 | 0.2769 | 0.3182
RotatE + SVM | Single-Client | 0.3001 | 0.2531 | 0.2746
 | All-Clients | 0.4421 | 0.3142 | 0.3673
 | DFLR-Clients | 0.4267 | 0.2742 | 0.3339
DISTMULT + SVM | Single-Client | 0.3093 | 0.2361 | 0.2678
 | All-Clients | 0.4812 | 0.4046 | 0.4396
 | DFLR-Clients | 0.4728 | 0.3836 | 0.4236
HolE + SVM | Single-Client | 0.3533 | 0.2363 | 0.2832
 | All-Clients | 0.4529 | 0.3740 | 0.4097
 | DFLR-Clients | 0.4822 | 0.3823 | 0.4265
ComplEx + SVM | Single-Client | 0.2683 | 0.1578 | 0.1987
 | All-Clients | 0.5026 | 0.3822 | 0.4342
 | DFLR-Clients | 0.4508 | 0.3878 | 0.4169
Model | Training Mode | Precision | Recall | F1-Score
---|---|---|---|---
TransE + SVM | Single-Client | 0.5283 | 0.1827 | 0.2715
 | All-Clients | 0.6382 | 0.3740 | 0.4716
 | DFLR-Clients | 0.5392 | 0.2216 | 0.3141
TransH + SVM | Single-Client | 0.5503 | 0.2012 | 0.2947
 | All-Clients | 0.6821 | 0.3872 | 0.4940
 | DFLR-Clients | 0.5927 | 0.2672 | 0.3683
TransF + SVM | Single-Client | 0.4928 | 0.1822 | 0.2660
 | All-Clients | 0.6519 | 0.2426 | 0.3536
 | DFLR-Clients | 0.6222 | 0.2229 | 0.3282
RotatE + SVM | Single-Client | 0.5113 | 0.1729 | 0.2584
 | All-Clients | 0.6018 | 0.2537 | 0.3569
 | DFLR-Clients | 0.5762 | 0.2322 | 0.3310
DISTMULT + SVM | Single-Client | 0.4636 | 0.2273 | 0.3050
 | All-Clients | 0.6162 | 0.3093 | 0.4119
 | DFLR-Clients | 0.5127 | 0.2292 | 0.3168
HolE + SVM | Single-Client | 0.5412 | 0.2282 | 0.3210
 | All-Clients | 0.6296 | 0.3527 | 0.4521
 | DFLR-Clients | 0.6188 | 0.2928 | 0.3975
ComplEx + SVM | Single-Client | 0.5677 | 0.3015 | 0.3938
 | All-Clients | 0.6321 | 0.3746 | 0.4704
 | DFLR-Clients | 0.6575 | 0.3772 | 0.4794
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Shuai, S.; Hu, Z.; Zhang, B.; Liaqat, H.B.; Kong, X. Decentralized Federated Learning-Enabled Relation Aggregation for Anomaly Detection. Information 2023, 14, 647. https://doi.org/10.3390/info14120647