Federated Auto-Meta-Ensemble Learning Framework for AI-Enabled Military Operations
Abstract
1. Introduction
- Missions that unfold too fast for human reaction, with response times of seconds or less, executed under high complexity (data, context, type of mission).
- Missions whose duration exceeds human endurance or that imply high operational (personnel) costs over a long period.
- Missions of overwhelming complexity that require agility and adaptation to evolving contexts and objectives.
- Missions in challenging operational contexts that imply severe risks to warfighters.
2. Proposed Framework
- Step 1—Fine-tune the best local model. Fine-tuning improves the accuracy of each machine learning model by using an existing dataset as the initialization point, making the training process time- and resource-efficient.
- Step 2—Upload the local model to the federated server.
- Step 3—Ensemble the models on the federated server. The ensemble method combines multiple learning algorithms to obtain better predictive performance than any of the constituent learning algorithms could achieve alone.
- Step 4—Dispatch the ensemble model to the local devices.
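The four steps above can be sketched as a minimal, framework-agnostic round. All names here (`fine_tune`, `FederatedServer`, the toy threshold models) are illustrative assumptions, not the paper's implementation, which selects scikit-learn/LightGBM models via AutoML:

```python
from collections import Counter

def fine_tune(model, local_data):
    """Step 1: fine-tune the best local model on local data (stub)."""
    # A real client would warm-start training from `model` on `local_data`.
    return model

class FederatedServer:
    def __init__(self):
        self.models = []

    def upload(self, model):
        """Step 2: receive a local model from a client."""
        self.models.append(model)

    def ensemble(self):
        """Step 3: combine the local models by majority (hard) voting."""
        models = list(self.models)
        def vote(x):
            preds = [m(x) for m in models]
            return Counter(preds).most_common(1)[0][0]
        return vote

# Three toy "local models": per-domain threshold classifiers.
clients = [lambda x: int(x > 0.4), lambda x: int(x > 0.5), lambda x: int(x > 0.6)]

server = FederatedServer()
for model in clients:
    server.upload(fine_tune(model, local_data=None))  # Steps 1-2

global_model = server.ensemble()            # Step 3
dispatched = [global_model] * len(clients)  # Step 4: push back to devices

print(global_model(0.55))  # two of three local models vote 1
```

Only model parameters (here, whole callables for brevity) travel to the server; raw local data never leaves the device, which is the privacy property federated learning is chosen for.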
2.1. Federated Learning
2.2. Auto-Machine Learning
2.3. Meta-Ensemble Learning
3. Experiments and Results
4. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
**Domain_Alpha**

Model | Accuracy | AUC | Recall | Precision | F1-Score
---|---|---|---|---|---
Light Gradient Boosting Machine | 0.879 | 0.926 | 0.876 | 0.879 | 0.879 |
Gradient Boosting Classifier | 0.878 | 0.926 | 0.875 | 0.878 | 0.878 |
k-Neighbors Classifier | 0.876 | 0.927 | 0.873 | 0.876 | 0.876
Logistic Regression | 0.873 | 0.924 | 0.869 | 0.873 | 0.873 |
SVM—Linear Kernel | 0.870 | 0.925 | 0.867 | 0.870 | 0.870 |
Ada Boost Classifier | 0.868 | 0.000 | 0.865 | 0.868 | 0.868 |
Random Forest Classifier | 0.865 | 0.926 | 0.862 | 0.865 | 0.865 |
Linear Discriminant Analysis | 0.864 | 0.924 | 0.861 | 0.864 | 0.864 |
Ridge Classifier | 0.860 | 0.000 | 0.857 | 0.860 | 0.860 |
Extra Trees Classifier | 0.853 | 0.920 | 0.852 | 0.853 | 0.853 |
Decision Tree Classifier | 0.824 | 0.883 | 0.824 | 0.824 | 0.824 |
Naive Bayes | 0.747 | 0.904 | 0.733 | 0.770 | 0.734 |
Quadratic Discriminant Analysis | 0.367 | 0.900 | 0.405 | 0.575 | 0.321 |
**Domain_Alpha**

Best Model | Best Parameters of the Winner Model
---|---
LGBMClassifier | boosting_type = ‘gbdt’, class_weight = None, colsample_bytree = 1.0, importance_type = ‘split’, learning_rate = 0.1, max_depth = −1, min_child_samples = 20, min_child_weight = 0.001, min_split_gain = 0.0, n_estimators = 100, n_jobs = −1, num_leaves = 31, objective = None, random_state = 1599, reg_alpha = 0.0, reg_lambda = 0.0, silent = ‘warn’, subsample = 1.0, subsample_for_bin = 200000, subsample_freq = 0 |
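For reference, the winner configuration above can be transcribed into a parameter dictionary; assuming the `lightgbm` package, `lightgbm.LGBMClassifier(**params)` would instantiate it. The dict is shown standalone so it can be inspected without the library installed; apart from `random_state`, these are largely the library defaults, and the deprecated `silent` argument is omitted:

```python
# Winner configuration for Domain_Alpha, transcribed from the table above.
params = {
    "boosting_type": "gbdt",
    "class_weight": None,
    "colsample_bytree": 1.0,
    "importance_type": "split",
    "learning_rate": 0.1,
    "max_depth": -1,          # -1 means no depth limit
    "min_child_samples": 20,
    "min_child_weight": 0.001,
    "min_split_gain": 0.0,
    "n_estimators": 100,
    "n_jobs": -1,
    "num_leaves": 31,
    "objective": None,        # inferred from the target at fit time
    "random_state": 1599,
    "reg_alpha": 0.0,
    "reg_lambda": 0.0,
    "subsample": 1.0,
    "subsample_for_bin": 200000,
    "subsample_freq": 0,
}
```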
**Domain_Bravo**

Model | Accuracy | AUC | Recall | Precision | F1-Score
---|---|---|---|---|---
Gradient Boosting Classifier | 0.877 | 0.926 | 0.875 | 0.877 | 0.877 |
Light Gradient Boosting Machine | 0.876 | 0.926 | 0.874 | 0.876 | 0.876 |
k-Neighbors Classifier | 0.876 | 0.926 | 0.873 | 0.874 | 0.875
Ada Boost Classifier | 0.870 | 0.925 | 0.868 | 0.870 | 0.870 |
Random Forest Classifier | 0.870 | 0.923 | 0.868 | 0.870 | 0.870 |
Linear Discriminant Analysis | 0.865 | 0.923 | 0.863 | 0.865 | 0.865 |
SVM—Linear Kernel | 0.865 | 0.000 | 0.863 | 0.865 | 0.865 |
Logistic Regression | 0.863 | 0.925 | 0.861 | 0.863 | 0.862 |
Ridge Classifier | 0.861 | 0.000 | 0.859 | 0.862 | 0.861 |
Extra Trees Classifier | 0.849 | 0.920 | 0.849 | 0.849 | 0.849 |
Decision Tree Classifier | 0.816 | 0.878 | 0.816 | 0.816 | 0.815 |
Naive Bayes | 0.739 | 0.905 | 0.727 | 0.765 | 0.724 |
Quadratic Discriminant Analysis | 0.594 | 0.917 | 0.570 | 0.572 | 0.545 |
**Domain_Bravo**

Best Model | Best Parameters of the Winner Model
---|---
GradientBoostingClassifier | ccp_alpha = 0.0, criterion = ‘friedman_mse’, init = None, learning_rate = 0.1, loss = ‘deviance’, max_depth = 3, max_features = None, max_leaf_nodes = None, min_impurity_decrease = 0.0, min_impurity_split = None, min_samples_leaf = 1, min_samples_split = 2, min_weight_fraction_leaf = 0.0, n_estimators = 100, n_iter_no_change = None, presort = ‘deprecated’, random_state = 8515, subsample = 1.0, tol = 0.0001, validation_fraction = 0.1, verbose = 0, warm_start = False |
**Domain_Charlie**

Model | Accuracy | AUC | Recall | Precision | F1-Score
---|---|---|---|---|---
k-Neighbors Classifier | 0.866 | 0.927 | 0.864 | 0.867 | 0.866
Light Gradient Boosting Machine | 0.865 | 0.926 | 0.864 | 0.866 | 0.866 |
Gradient Boosting Classifier | 0.865 | 0.926 | 0.865 | 0.865 | 0.866 |
Ada Boost Classifier | 0.861 | 0.921 | 0.861 | 0.861 | 0.861 |
Logistic Regression | 0.860 | 0.922 | 0.860 | 0.861 | 0.860 |
SVM—Linear Kernel | 0.855 | 0.923 | 0.852 | 0.855 | 0.855 |
Random Forest Classifier | 0.853 | 0.925 | 0.851 | 0.853 | 0.853 |
Linear Discriminant Analysis | 0.851 | 0.923 | 0.849 | 0.852 | 0.851 |
Extra Trees Classifier | 0.847 | 0.921 | 0.847 | 0.848 | 0.849 |
Ridge Classifier | 0.847 | 0.920 | 0.848 | 0.849 | 0.848 |
Decision Tree Classifier | 0.819 | 0.880 | 0.821 | 0.820 | 0.819 |
Naive Bayes | 0.687 | 0.900 | 0.668 | 0.680 | 0.644 |
Quadratic Discriminant Analysis | 0.542 | 0.914 | 0.536 | 0.662 | 0.528 |
**Domain_Charlie**

Best Model | Best Parameters of the Winner Model
---|---
KNeighborsClassifier | algorithm = ‘auto’, leaf_size = 30, metric = ‘minkowski’, metric_params = None, n_jobs = −1, n_neighbors = 5, p = 2, weights = ‘uniform’ |
**Domain_Alpha**

Model | Accuracy | AUC | Recall | Precision | F1-Score
---|---|---|---|---|---
Ensemble model | 0.898 | 0.933 | 0.899 | 0.897 | 0.898 |
Light Gradient Boosting Machine | 0.879 | 0.926 | 0.876 | 0.879 | 0.879 |
Gradient Boosting Classifier | 0.878 | 0.926 | 0.875 | 0.878 | 0.878 |
k-Neighbors Classifier | 0.876 | 0.927 | 0.873 | 0.876 | 0.876 |
Logistic Regression | 0.873 | 0.924 | 0.869 | 0.873 | 0.873 |
SVM—Linear Kernel | 0.870 | 0.925 | 0.867 | 0.870 | 0.870 |
Ada Boost Classifier | 0.868 | 0.000 | 0.865 | 0.868 | 0.868 |
Random Forest Classifier | 0.865 | 0.926 | 0.862 | 0.865 | 0.865 |
Linear Discriminant Analysis | 0.864 | 0.924 | 0.861 | 0.864 | 0.864 |
Ridge Classifier | 0.860 | 0.000 | 0.857 | 0.860 | 0.860 |
Extra Trees Classifier | 0.853 | 0.920 | 0.852 | 0.853 | 0.853 |
Decision Tree Classifier | 0.824 | 0.883 | 0.824 | 0.824 | 0.824 |
Naive Bayes | 0.747 | 0.904 | 0.733 | 0.770 | 0.734 |
Quadratic Discriminant Analysis | 0.367 | 0.900 | 0.405 | 0.575 | 0.321 |
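In the table above, the ensemble model scores higher than every constituent classifier. One common way such a combiner works is soft voting: average the per-class probabilities of the base models and predict the arg-max class. A minimal sketch with made-up probabilities (not the paper's data or its exact combination rule):

```python
# Soft-voting sketch: average per-class probability vectors from several
# base models, then predict the class with the highest average.

def soft_vote(prob_lists):
    """prob_lists: one [p_class0, p_class1, ...] vector per base model."""
    n = len(prob_lists)
    n_classes = len(prob_lists[0])
    avg = [sum(p[i] for p in prob_lists) / n for i in range(n_classes)]
    return max(range(n_classes), key=avg.__getitem__), avg

# Three toy base models disagree on the hard label, but the averaged
# probability mass favours class 1.
pred, avg = soft_vote([[0.45, 0.55], [0.60, 0.40], [0.30, 0.70]])
print(pred)
```

Averaging probabilities lets a confident minority outweigh an uncertain majority, which is one reason a combiner can beat each base model individually.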
**Domain_Bravo**

Model | Accuracy | AUC | Recall | Precision | F1-Score
---|---|---|---|---|---
Ensemble model | 0.891 | 0.930 | 0.890 | 0.890 | 0.892 |
Gradient Boosting Classifier | 0.877 | 0.926 | 0.875 | 0.877 | 0.877 |
Light Gradient Boosting Machine | 0.876 | 0.926 | 0.874 | 0.876 | 0.876 |
k-Neighbors Classifier | 0.876 | 0.926 | 0.873 | 0.874 | 0.875 |
Ada Boost Classifier | 0.870 | 0.925 | 0.868 | 0.870 | 0.870 |
Random Forest Classifier | 0.870 | 0.923 | 0.868 | 0.870 | 0.870 |
Linear Discriminant Analysis | 0.865 | 0.923 | 0.863 | 0.865 | 0.865 |
SVM—Linear Kernel | 0.865 | 0.000 | 0.863 | 0.865 | 0.865 |
Logistic Regression | 0.863 | 0.925 | 0.861 | 0.863 | 0.862 |
Ridge Classifier | 0.861 | 0.000 | 0.859 | 0.862 | 0.861 |
Extra Trees Classifier | 0.849 | 0.920 | 0.849 | 0.849 | 0.849 |
Decision Tree Classifier | 0.816 | 0.878 | 0.816 | 0.816 | 0.815 |
Naive Bayes | 0.739 | 0.905 | 0.727 | 0.765 | 0.724 |
Quadratic Discriminant Analysis | 0.594 | 0.917 | 0.570 | 0.572 | 0.545 |
**Domain_Charlie**

Model | Accuracy | AUC | Recall | Precision | F1-Score
---|---|---|---|---|---
Ensemble model | 0.871 | 0.929 | 0.871 | 0.871 | 0.872 |
k-Neighbors Classifier | 0.866 | 0.927 | 0.864 | 0.867 | 0.866 |
Light Gradient Boosting Machine | 0.865 | 0.926 | 0.864 | 0.866 | 0.866 |
Gradient Boosting Classifier | 0.865 | 0.926 | 0.865 | 0.865 | 0.866 |
Ada Boost Classifier | 0.861 | 0.921 | 0.861 | 0.861 | 0.861 |
Logistic Regression | 0.860 | 0.922 | 0.860 | 0.861 | 0.860 |
SVM—Linear Kernel | 0.855 | 0.923 | 0.852 | 0.855 | 0.855 |
Random Forest Classifier | 0.853 | 0.925 | 0.851 | 0.853 | 0.853 |
Linear Discriminant Analysis | 0.851 | 0.923 | 0.849 | 0.852 | 0.851 |
Extra Trees Classifier | 0.847 | 0.921 | 0.847 | 0.848 | 0.849 |
Ridge Classifier | 0.847 | 0.920 | 0.848 | 0.849 | 0.848 |
Decision Tree Classifier | 0.819 | 0.880 | 0.821 | 0.820 | 0.819 |
Naive Bayes | 0.687 | 0.900 | 0.668 | 0.680 | 0.644 |
Quadratic Discriminant Analysis | 0.542 | 0.914 | 0.536 | 0.662 | 0.528 |
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Demertzis, K.; Kikiras, P.; Skianis, C.; Rantos, K.; Iliadis, L.; Stamoulis, G. Federated Auto-Meta-Ensemble Learning Framework for AI-Enabled Military Operations. Electronics 2023, 12, 430. https://doi.org/10.3390/electronics12020430