Automated Sensor Node Malicious Activity Detection with Explainability Analysis
Abstract
1. Introduction
- We have proposed a systematic hybrid data balancing technique using cluster undersampling and the SMOTE oversampling method.
- We have proposed an ensemble learning method that outperforms other state-of-the-art ML models in detecting malicious nodes.
- We have also conducted a detailed explainability analysis of our model to determine which features are contributing to specific decisions and why.
2. Related Work
3. Dataset Overview and Data Balancing
3.1. Dataset Overview
- Features and Example of the Dataset
3.2. Data Visualization
3.2.1. Data Distribution
3.2.2. Data Distribution of the Features
3.3. Feature Selection
3.4. Data Balancing
Algorithm 1: Data Balancing Algorithm
3.4.1. Cluster Undersampling
- Number of Optimum Clusters: We need to find the optimum number of clusters for the majority class. The elbow method is used to identify this number [27]. For the malicious sensor node dataset, the majority class is the non-malicious class, so we have dropped the class label and applied the elbow method to the remaining features. Figure 5 shows the implementation of the elbow method. The X-axis represents the number of clusters K, and the Y-axis represents the distortion score (the average squared distance from the cluster centroids) for each K. According to the elbow method, the optimum cluster number for the non-malicious class is 4 (K = 4). We create four clusters using the K-means clustering algorithm in the next step.
- Apply K-means for Cluster Creation: K-means is simple and one of the most widely used unsupervised machine learning algorithms [28]. We have utilized it to create clusters from the majority (non-malicious) class data, which has helped us uncover the underlying categories inside the class. Figure 6 demonstrates the distribution of the four majority-class clusters with a 2D t-SNE scatter plot.
- Systematic Data Extraction: We extracted data from each cluster in a stratified manner to reduce information loss. This approach ensures that we have data from each category.
- Size of Undersampled Data: Ours is a binary classification dataset. A binary dataset is considered balanced when the proportion of datapoints in each class (positive and negative) is approximately equal, usually around 50%. As most of the data in our dataset are non-malicious, we have kept the undersampled majority class at 50–60%.
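The three undersampling steps above (elbow method, K-means clustering, stratified extraction) can be sketched as follows. The toy data, the 0.55 sampling fraction, and all variable names are illustrative stand-ins, not the paper's code:

```python
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification

# Toy stand-in for the sensor dataset: ~95% majority (non-malicious) class.
X, y = make_classification(n_samples=2000, n_features=6, weights=[0.95],
                           random_state=42)
X_major = pd.DataFrame(X[y == 0])  # majority class only, label dropped

# 1) Elbow method: distortion (inertia) for K = 1..9; the "elbow" in this
#    curve picks the optimum K (the paper found K = 4).
inertias = [KMeans(n_clusters=k, n_init=10, random_state=42)
            .fit(X_major).inertia_ for k in range(1, 10)]

# 2) K-means with the elbow-selected K to expose the underlying categories.
labels = KMeans(n_clusters=4, n_init=10, random_state=42).fit_predict(X_major)

# 3) Stratified extraction: the same fraction from every cluster, so each
#    underlying category survives in the undersampled set.
frac = 0.55  # mid-point of the (50-60)% range described in the text
sampled = pd.concat([X_major[labels == c].sample(frac=frac, random_state=42)
                     for c in range(4)])
```

Sampling per cluster rather than from the whole majority class is what reduces the information loss the text mentions: rare sub-categories cannot be accidentally discarded.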
3.4.2. SMOTE Oversampling
- Apply SMOTE Oversampling: In our dataset, the malicious class is the minority class consisting of only 5% of the entire dataset. To resolve the overfitting and underfitting problems, we have applied SMOTE oversampling to generate synthetic data of the minority class.
- Size of Oversampled Data: Generating synthetic data always carries a risk of information loss and noise, so we have kept the oversampled minority class at 40–50%.
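SMOTE's core idea [29] is interpolating synthetic points between a minority sample and one of its k nearest minority neighbours. A compact from-scratch sketch of that idea follows; in practice a library such as imbalanced-learn would be used, and the function name, toy data, and parameters here are illustrative only:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def smote_sample(X_min, n_new, k=5, seed=0):
    """Generate n_new synthetic points from minority-class matrix X_min."""
    rng = np.random.default_rng(seed)
    # k+1 neighbours because each point's nearest neighbour is itself.
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X_min)
    _, idx = nn.kneighbors(X_min)
    # Pick a random base point and one of its k true neighbours (cols 1..k).
    base = rng.integers(0, len(X_min), n_new)
    neigh = idx[base, rng.integers(1, k + 1, n_new)]
    # Interpolate a random fraction of the way toward the neighbour.
    gap = rng.random((n_new, 1))
    return X_min[base] + gap * (X_min[neigh] - X_min[base])

rng = np.random.default_rng(42)
X_min = rng.normal(size=(50, 4))              # toy minority class
synthetic = smote_sample(X_min, n_new=100)    # 100 synthetic minority points
```

Because every synthetic point lies on a segment between two real minority points, the new data stays inside the minority class's region of feature space, which is why SMOTE adds less noise than naive random duplication but can still blur class boundaries, motivating the capped 40–50% size above.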
3.4.3. Merging the Oversampled and Undersampled Data
4. Methodology
4.1. Dataset Collection and Preprocessing
4.2. Training and Testing Set Generation
4.3. Proposed Ensemble Classification Model
- Logistic Regression (LR): Logistic Regression is a popular machine learning classification model for solving binary classification problems. It finds a relationship between the independent variables and the dependent (target) variable. With the help of the sigmoid function, it represents the probability of the occurrence of an event. In order to calculate the probability of observed data, it optimizes coefficients for independent variables. The impact of each feature is determined by the optimized coefficients of the independent variables [30].
- Gaussian Naive Bayes (GNB): Gaussian Naive Bayes is a simple but effective machine learning classification algorithm. It considers that each independent feature is represented with Gaussian distribution for individual classes. The probability of a class is calculated using the Gaussian probability distribution for each feature based on the Naive Bayes theorem. It performs well for datasets where the features are continuous [31].
- K-Nearest Neighbours (KNN): KNN is the simplest classifier algorithm but is effective for solving simple problems. It calculates the distance of all the training datapoints from the new datapoint that we want to predict. Finally, it considers the K number of nearest datapoints around the new datapoint and assigns the majority class to the new datapoint [32].
- Linear Support Vector Machine (SVM): SVM is a binary classification algorithm. SVM finds the optimal decision boundary (hyperplane) between two different classes. In the training phase, the algorithm tries to maximize the margin between the classes. Thus, it finds the optimal hyperplane. Any new instance is classified based on the decision boundary [33].
- Decision Tree (DT): DT is a tree-based predictive model. The data are partitioned recursively according to the most informative features to build the tree. The decision tree consists of nodes, branches, and leaves where nodes represent decision points, branches represent possible choices, and leaves represent outcomes [34].
Algorithm 2: Ensemble Malicious Node Classifier
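One plausible way to combine the five base learners described above is scikit-learn's `VotingClassifier` with hard (majority) voting; the paper's exact aggregation rule in Algorithm 2 may differ, so this is a sketch on toy data rather than the authors' implementation:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import LinearSVC
from sklearn.tree import DecisionTreeClassifier

# Toy binary classification data standing in for the sensor node features.
X, y = make_classification(n_samples=1000, n_features=8, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)

ensemble = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("gnb", GaussianNB()),
        ("knn", KNeighborsClassifier(n_neighbors=5)),
        ("svm", LinearSVC(max_iter=5000)),   # linear SVM: hard labels only
        ("dt", DecisionTreeClassifier(random_state=42)),
    ],
    voting="hard",  # majority vote over the five predicted class labels
)
ensemble.fit(X_tr, y_tr)
acc = ensemble.score(X_te, y_te)
```

Hard voting is used here because `LinearSVC` does not expose class probabilities; soft (probability-averaged) voting would require a probability-capable SVM variant.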
4.4. Technique for Explainability Analysis
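The explainability analysis (Section 6) builds on SHAP-style feature attributions [37]. As a dependency-light stand-in that yields a comparable global feature ranking, scikit-learn's permutation importance measures how much shuffling each feature degrades the model; the model and data below are illustrative toys, not the paper's:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Toy data: 6 features, only 3 of which are actually informative.
X, y = make_classification(n_samples=1000, n_features=6, n_informative=3,
                           random_state=42)
model = RandomForestClassifier(random_state=42).fit(X, y)

# Shuffle each feature 10 times and record the mean accuracy drop.
result = permutation_importance(model, X, y, n_repeats=10, random_state=42)
ranking = np.argsort(result.importances_mean)[::-1]  # most important first
```

SHAP goes further than this global ranking by also attributing each individual prediction to its features (the local analysis of Section 6.1), which permutation importance cannot do.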
5. Evaluation and Experimental Analysis
5.1. Evaluation Metrics
- True Positive (TP): TP is the outcome of the model’s correct prediction of the positive (e.g., malicious) class.
- False Positive (FP): FP occurs when the model predicts the negative (e.g., non-malicious) class as a positive class.
- True Negative (TN): TN is the outcome of the model’s correct prediction of the negative class.
- False Negative (FN): FN occurs when the model predicts the positive class as a negative class.
- Precision: Precision is the proportion of true positive (TP) predictions over the total number of positive predictions (TP + FP). Precision is important where the cost of false positives is high.
- Recall: Recall is the proportion of true positive (TP) predictions over the actual positive class (TP + FN). It is crucial for applications where the cost of false negatives is very high.
- F1-Score: The F1-score is the harmonic mean of precision and recall, representing the trade-off between them. It is especially important for imbalanced datasets, where a model may perform well on one class and poorly on another; the F1-score accounts for both precision and recall in that case.
- Accuracy: Accuracy measures the overall performance of a classification algorithm. It is the ratio of the total number of correct predictions to the total number of testing samples.
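A minimal sketch, with illustrative counts, showing how the four metrics follow directly from the confusion-matrix entries (scikit-learn's `metrics` module reports the same values):

```python
# Metric definitions written out from the TP/FP/TN/FN counts above.
def precision(tp, fp):
    return tp / (tp + fp)          # fraction of positive calls that are right

def recall(tp, fn):
    return tp / (tp + fn)          # fraction of actual positives found

def f1(p, r):
    return 2 * p * r / (p + r)     # harmonic mean of precision and recall

def accuracy(tp, tn, total):
    return (tp + tn) / total       # fraction of all predictions that are right

# Illustrative confusion-matrix counts (not from the paper's experiments).
tp, fp, tn, fn = 80, 10, 100, 20
p, r = precision(tp, fp), recall(tp, fn)
print(round(p, 3), round(r, 3), round(f1(p, r), 3),
      round(accuracy(tp, tn, tp + fp + tn + fn), 3))
# → 0.889 0.8 0.842 0.857
```

With these counts the model finds 80% of the actual positives (recall) while 88.9% of its positive calls are correct (precision), and the F1-score of 0.842 sits between the two, as the harmonic mean always does.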
5.2. Model Evaluation without Data Balancing
5.3. Model Evaluation with Random Undersampled and SMOTE Oversampled Data
5.4. Model Evaluation with MSMOTE and ADASYN Data Balancing Techniques
5.5. Model Evaluation with Proposed Data Balancing Technique
5.6. ROC Curve and AUC Score Analysis of the Models
5.7. Precision vs. Recall Graph Analysis of ML Models
5.8. Overall Comparison
6. Explainability Analysis
6.1. Local Explainability Analysis
6.2. Global Explainability Analysis
7. Discussion
8. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Colombo, A.W.; Karnouskos, S.; Kaynak, O.; Shi, Y.; Yin, S. Industrial cyberphysical systems: A backbone of the fourth industrial revolution. IEEE Ind. Electron. Mag. 2017, 11, 6–16. [Google Scholar] [CrossRef]
- Kayan, H.; Nunes, M.; Rana, O.; Burnap, P.; Perera, C. Cybersecurity of industrial cyber-physical systems: A review. ACM Comput. Surv. 2022, 54, 1–35. [Google Scholar] [CrossRef]
- Javaid, M.; Haleem, A.; Rab, S.; Singh, R.P.; Suman, R. Sensors for daily life: A review. Sens. Int. 2021, 2, 100121. [Google Scholar] [CrossRef]
- Boubiche, D.E.; Athmani, S.; Boubiche, S.; Toral-Cruz, H. Cybersecurity issues in wireless sensor networks: Current challenges and solutions. Wirel. Pers. Commun. 2021, 117, 177–213. [Google Scholar] [CrossRef]
- Duobiene, S.; Ratautas, K.; Trusovas, R.; Ragulis, P.; Šlekas, G.; Simniškis, R.; Račiukaitis, G. Development of wireless sensor network for environment monitoring and its implementation using SSAIL technology. Sensors 2022, 22, 5343. [Google Scholar] [CrossRef] [PubMed]
- Apruzzese, G.; Laskov, P.; Montes de Oca, E.; Mallouli, W.; Brdalo Rapa, L.; Grammatopoulos, A.V.; Di Franco, F. The role of machine learning in cybersecurity. Digit. Threat. Res. Pract. 2023, 4, 1–38. [Google Scholar] [CrossRef]
- Raghunath, K.M.K.; Arvind, K.S. SensorNetGuard: A Dataset for Identifying Malicious Sensor Nodes. IEEEDataPort 2023. [Google Scholar] [CrossRef]
- Sarker, I.H. AI-Driven Cybersecurity and Threat Intelligence: Cyber Automation, Intelligent Decision-Making and Explainability; Springer Nature: Cham, Switzerland, 2024. [Google Scholar]
- Mokhtar, R.; Rohaizat, A. Cybercrimes and cyber security trends in the new normal. In The New Normal and Its Impact on Society: Perspectives from ASEAN and the European Union; Springer: Singapore, 2024; pp. 41–60. [Google Scholar]
- Sarker, I.H. Multi-aspects AI-based modeling and adversarial learning for cybersecurity intelligence and robustness: A comprehensive overview. Secur. Priv. 2023, 6, e295. [Google Scholar] [CrossRef]
- Makanju, A.; LaRoche, P.; Zincir-Heywood, A.N. A Comparison between Signature and Machine Learning Based Detectors; Dalhousie University: Halifax, NS, Canada, 2024. [Google Scholar]
- Tan, X.; Su, S.; Huang, Z.; Guo, X.; Zuo, Z.; Sun, X.; Li, L. Wireless sensor networks intrusion detection based on SMOTE and the Random Forest algorithm. Sensors 2019, 19, 203. [Google Scholar] [CrossRef] [PubMed]
- Wang, W.; Huang, H.; Li, Q.; He, F.; Sha, C. Generalized intrusion detection mechanism for empowered intruders in wireless sensor networks. IEEE Access 2020, 8, 25170–25183. [Google Scholar] [CrossRef]
- Whelan, J.; Sangarapillai, T.; Minawi, O.; Almehmadi, A.; El-Khatib, K. Novelty-based intrusion detection of sensor attacks on unmanned aerial vehicles. In Proceedings of the 16th ACM Symposium on QoS and Security for Wireless and Mobile Networks, Alicante, Spain, 16–20 November 2020; pp. 23–28. [Google Scholar]
- Ding, H.; Chen, L.; Dong, L.; Fu, Z.; Cui, X. Imbalanced data classification: A KNN and generative adversarial networks-based hybrid approach for intrusion detection. Future Gener. Comput. Syst. 2022, 131, 240–254. [Google Scholar] [CrossRef]
- Fu, Y.; Du, Y.; Cao, Z.; Li, Q.; Xiang, W. A deep learning model for network intrusion detection with imbalanced data. Electronics 2022, 11, 898. [Google Scholar] [CrossRef]
- Moundounga, A.R.A.; Satori, H.; Boutazart, Y.; Abderrahim, E. Malicious attack detection based on continuous Hidden Markov Models in Wireless sensor networks. Microprocess. Microsyst. 2023, 101, 104888. [Google Scholar] [CrossRef]
- Saleh, H.M.; Marouane, H.; Fakhfakh, A. Stochastic Gradient Descent Intrusions Detection for Wireless Sensor Network Attack Detection System Using Machine Learning. IEEE Access 2024, 12, 3825–3836. [Google Scholar] [CrossRef]
- Salmi, S.; Oughdir, L. Performance evaluation of deep learning techniques for DoS attacks detection in wireless sensor network. J. Big Data 2023, 10, 17. [Google Scholar] [CrossRef]
- Almomani, I.; Al-Kasasbeh, B.; Al-Akhras, M. WSN-DS: A dataset for intrusion detection systems in wireless sensor networks. J. Sens. 2016, 2016, 4731953. [Google Scholar] [CrossRef]
- Taher, M.A.; Iqbal, H.; Tariq, M.; Sarwat, A.I. Recurrent neural network—Based sensor data attacks identification in distributed renewable energy—Based DC microgrid. In Proceedings of the 2024 IEEE Texas Power and Energy Conference (TPEC), College Station, TX, USA, 12–13 February 2024; pp. 1–6. [Google Scholar]
- Nouman, M.; Qasim, U.; Nasir, H.; Almasoud, A.; Imran, M.; Javaid, N. Malicious node detection using machine learning and distributed data storage using blockchain in WSNs. IEEE Access 2023, 11, 6106–6121. [Google Scholar] [CrossRef]
- Hasan, M.; Rahman, M.S.; Janicke, H.; Sarker, I.H. Detecting Anomalies in Blockchain Transactions using Machine Learning Classifiers and Explainability Analysis. arXiv 2024, arXiv:2401.03530. [Google Scholar] [CrossRef]
- Kilkenny, M.F.; Robinson, K.M. Data quality: Garbage in–garbage out. Health Inf. Manag. J. 2018, 47, 103–105. [Google Scholar] [CrossRef] [PubMed]
- Van der Maaten, L.; Hinton, G. Visualizing data using t-SNE. J. Mach. Learn. Res. 2008, 9, 2579–2605. [Google Scholar]
- Elssied, N.O.F.; Ibrahim, O.; Osman, A.H. A novel feature selection based on one-way anova f-test for e-mail spam classification. Res. J. Appl. Sci. Eng. Technol. 2014, 7, 625–638. [Google Scholar] [CrossRef]
- Humaira, H.; Rasyidah, R. Determining the appropiate cluster number using elbow method for k-means algorithm. In Proceedings of the 2nd Workshop on Multidisciplinary and Applications (WMA), Padang, Indonesia, 24–25 January 2018. [Google Scholar]
- Zubair, M.; Iqbal, M.A.; Shil, A.; Chowdhury, M.; Moni, M.A.; Sarker, I.H. An improved K-means clustering algorithm towards an efficient data-driven modeling. Ann. Data Sci. 2022. [Google Scholar] [CrossRef]
- Chawla, N.V.; Bowyer, K.W.; Hall, L.O.; Kegelmeyer, W.P. SMOTE: Synthetic minority over-sampling technique. J. Artif. Intell. Res. 2002, 16, 321–357. [Google Scholar] [CrossRef]
- Hosmer, D.W., Jr.; Lemeshow, S.; Sturdivant, R.X. Applied Logistic Regression; John Wiley & Sons: Hoboken, NJ, USA, 2013. [Google Scholar]
- Reddy, E.M.K.; Gurrala, A.; Hasitha, V.B.; Kumar, K.V.R. Introduction to Naive Bayes and a review on its subtypes with applications. In Bayesian Reasoning and Gaussian Processes for Machine Learning Applications; Chapman and Hall/CRC: Boca Raton, FL, USA, 2022. [Google Scholar]
- Géron, A. Hands-on Machine Learning with Scikit-Learn, Keras, and TensorFlow; O’Reilly Media, Inc.: Newton, MA, USA, 2022. [Google Scholar]
- Hearst, M.A.; Dumais, S.T.; Osuna, E.; Platt, J.; Scholkopf, B. Support vector machines. IEEE Intell. Syst. Their Appl. 1998, 13, 18–28. [Google Scholar] [CrossRef]
- Song, Y.Y.; Ying, L. Decision tree methods: Applications for classification and prediction. Shanghai Arch. Psychiatry 2015, 27, 130. [Google Scholar] [PubMed]
- Sarker, I.H.; Janicke, H.; Mohsin, A.; Gill, A.; Maglaras, L. Explainable AI for cybersecurity automation, intelligence and trustworthiness in digital twin: Methods, taxonomy, challenges and prospects. ICT Express 2024. [Google Scholar] [CrossRef]
- Linardatos, P.; Papastefanopoulos, V.; Kotsiantis, S. Explainable ai: A review of machine learning interpretability methods. Entropy 2020, 23, 18. [Google Scholar] [CrossRef] [PubMed]
- Lundberg, S.M.; Lee, S.I. A unified approach to interpreting model predictions. In Proceedings of the Advances in Neural Information Processing Systems 30 (NIPS 2017): 31st Annual Conference on Neural Information Processing Systems, Long Beach, CA, USA, 4–9 December 2017. [Google Scholar]
- Hu, S.; Liang, Y.; Ma, L.; He, Y. MSMOTE: Improving classification performance when training data is imbalanced. In Proceedings of the 2009 Second International Workshop on Computer Science and Engineering, Qingdao, China, 28–30 October 2009; Volume 2, pp. 13–17. [Google Scholar]
- He, H.; Bai, Y.; Garcia, E.A.; Li, S. ADASYN: Adaptive synthetic sampling approach for imbalanced learning. In Proceedings of the 2008 IEEE International Joint Conference on Neural Networks (IEEE World Congress on Computational Intelligence), Hong Kong, China, 1–8 June 2008; pp. 1322–1328. [Google Scholar]
- Bradley, A.P. The use of the area under the ROC curve in the evaluation of machine learning algorithms. Pattern Recognit. 1997, 30, 1145–1159. [Google Scholar] [CrossRef]
| Class Label | Number of Instances |
|---|---|
| Non-Malicious | 9513 |
| Malicious | 487 |
| Category | Features |
|---|---|
| General Metrics | Node ID, Timestamp, IP Address |
| Network Traffic Metrics | Packet Rate, Packet Drop Rate, Packet Duplication Rate, Data Throughput |
| Signal Metrics | Signal Strength, Signal to Noise Ratio |
| Power Usage Metrics | Battery Level, Energy Consumption Rate |
| Routing Metrics | Number of Neighbors, Routing Request Frequency, Routing Reply Frequency |
| Behavioral Metrics | Data Transmission Frequency, Data Reception Frequency, Error Rate |
| Miscellaneous Metrics | CPU Usage, Memory Usage, Bandwidth |
| Metrics Specific to Attacks | Is Malicious |
| Node ID | Timestamp | IP Address | Packet Rate | Packet Drop Rate | … | Bandwidth | Is Malicious |
|---|---|---|---|---|---|---|---|
| 1 | 01-02-23 0:00 | 192.168.119.138 | 52.018229 | 2.727317 | … | 76.811986 | 0 |
| 2 | 01-02-23 0:01 | 192.168.225.56 | 59.504648 | 1.435058 | … | 112.495912 | 0 |
| 15 | 01-02-23 0:14 | 192.168.133.9 | 72.790914 | 3.803897 | … | 102.082282 | 1 |
| 78 | 01-02-23 1:17 | 192.168.148.225 | 85.585024 | 4.038405 | … | 105.623986 | 1 |
| Category | Algorithm | Precision | Recall | F1 Score | Accuracy |
|---|---|---|---|---|---|
| ML Models | LR | 0.751 | 0.537 | 0.376 | 0.537 |
| | GNB | 0.894 | 0.872 | 0.869 | 0.872 |
| | DT | 0.894 | 0.872 | 0.869 | 0.872 |
| | RF | 0.751 | 0.537 | 0.375 | 0.537 |
| | XGB | 0.901 | 0.882 | 0.880 | 0.882 |
| | SVM | 0.888 | 0.863 | 0.859 | 0.863 |
| | KNN | 0.893 | 0.871 | 0.869 | 0.873 |
| DL Models | ANN | 0.872 | 0.834 | 0.826 | 0.834 |
| | 1D CNN | 0.820 | 0.841 | 0.831 | 0.840 |
| | RNN | 0.762 | 0.641 | 0.696 | 0.782 |
| | LSTM | 0.752 | 0.541 | 0.380 | 0.541 |
| Proposed | Proposed | 0.854 | 0.687 | 0.761 | 0.978 |
| Category | Algorithm | Precision | Recall | F1 Score | Accuracy |
|---|---|---|---|---|---|
| ML Models | LR | 0.664 | 0.652 | 0.652 | 0.652 |
| | GNB | 0.859 | 0.859 | 0.858 | 0.859 |
| | DT | 0.966 | 0.968 | 0.966 | 0.966 |
| | RF | 0.859 | 0.858 | 0.857 | 0.858 |
| | XGB | 0.942 | 0.945 | 0.942 | 0.942 |
| | SVM | 0.859 | 0.857 | 0.856 | 0.857 |
| | KNN | 0.928 | 0.935 | 0.928 | 0.928 |
| DL Models | ANN | 0.698 | 0.673 | 0.671 | 0.673 |
| | 1D CNN | 0.862 | 0.891 | 0.876 | 0.901 |
| | RNN | 0.810 | 0.831 | 0.820 | 0.862 |
| | LSTM | 0.824 | 0.825 | 0.824 | 0.825 |
| Proposed | Ensemble | 0.952 | 0.963 | 0.957 | 0.962 |
| Category | Algorithm | MSMOTE Pre. | MSMOTE Rec. | MSMOTE F1 | MSMOTE Acc. | ADASYN Pre. | ADASYN Rec. | ADASYN F1 | ADASYN Acc. |
|---|---|---|---|---|---|---|---|---|---|
| ML Models | LR | 0.898 | 0.884 | 0.830 | 0.884 | 0.898 | 0.884 | 0.830 | 0.884 |
| | GNB | 0.965 | 0.966 | 0.965 | 0.966 | 0.972 | 0.973 | 0.972 | 0.973 |
| | DT | 0.948 | 0.946 | 0.947 | 0.946 | 0.953 | 0.951 | 0.952 | 0.951 |
| | RF | 0.965 | 0.966 | 0.965 | 0.966 | 0.898 | 0.884 | 0.830 | 0.884 |
| | XGB | 0.951 | 0.951 | 0.951 | 0.951 | 0.961 | 0.961 | 0.961 | 0.961 |
| | SVM | 0.963 | 0.964 | 0.963 | 0.964 | 0.971 | 0.972 | 0.971 | 0.972 |
| | KNN | 0.959 | 0.960 | 0.959 | 0.960 | 0.971 | 0.971 | 0.971 | 0.971 |
| DL Models | ANN | 0.898 | 0.850 | 0.873 | 0.883 | 0.888 | 0.923 | 0.902 | 0.912 |
| | 1D CNN | 0.931 | 0.943 | 0.937 | 0.949 | 0.960 | 0.931 | 0.945 | 0.955 |
| | RNN | 0.902 | 0.898 | 0.900 | 0.910 | 0.919 | 0.901 | 0.910 | 0.922 |
| | LSTM | 0.930 | 0.945 | 0.937 | 0.969 | 0.931 | 0.922 | 0.926 | 0.938 |
| Proposed | Ensemble | 0.940 | 0.943 | 0.941 | 0.947 | 0.975 | 0.964 | 0.970 | 0.973 |
| Category | Algorithm | Precision | Recall | F1 Score | Accuracy |
|---|---|---|---|---|---|
| ML Models | LR | 0.666 | 0.643 | 0.639 | 0.643 |
| | GNB | 0.992 | 0.992 | 0.991 | 0.990 |
| | DT | 0.995 | 0.994 | 0.994 | 0.994 |
| | RF | 0.985 | 0.984 | 0.984 | 0.984 |
| | XGB | 0.997 | 0.996 | 0.996 | 0.996 |
| | SVM | 0.984 | 0.993 | 0.988 | 0.989 |
| | KNN | 0.995 | 0.994 | 0.994 | 0.994 |
| DL Models | ANN | 0.852 | 0.846 | 0.845 | 0.846 |
| | 1D CNN | 0.952 | 0.895 | 0.923 | 0.961 |
| | RNN | 0.962 | 0.948 | 0.955 | 0.959 |
| | LSTM | 0.973 | 0.945 | 0.959 | 0.968 |
| Proposed | Ensemble | 0.994 | 1.0 | 0.997 | 0.997 |
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Zubair, M.; Janicke, H.; Mohsin, A.; Maglaras, L.; Sarker, I.H. Automated Sensor Node Malicious Activity Detection with Explainability Analysis. Sensors 2024, 24, 3712. https://doi.org/10.3390/s24123712