On the Robustness of ML-Based Network Intrusion Detection Systems: An Adversarial and Distribution Shift Perspective
Abstract
1. Introduction
- We not only highlight the unique characteristics of ML-based NIDSs and their relevance to robustness (Section 2.2) but also analyze existing survey papers covering ML robustness and ML-based NIDSs (Section 2.3).
- We systematically summarize a taxonomy of existing studies on the robustness of ML-based NIDSs (Section 4.1). In our taxonomy, we arrange the robustness studies across six stages of the ML workflow. For each stage, we introduce research topics related to robustness challenges or robustness improvement methods, covering both the adversarial attack and distribution shift aspects. In addition to ML-based NIDS works, we also introduce advanced ML studies and techniques from other fields.
- Based on our analysis, we summarize the main takeaways and give future research directions for the robustness of ML-based NIDSs.
2. Background of ML Robustness, ML-Based NIDSs, and Existing Surveys
2.1. The Concepts Related to ML Robustness
2.2. The Uniqueness of ML-Based NIDSs
2.3. Existing Surveys of the Robustness of ML
3. Research Methodology
3.1. Keywords for Collecting Literature
3.2. Expanding the Scope for a Comprehensive Coverage
3.3. Categorization and Workflow Mapping
4. Taxonomy, Models, and Uniqueness of NIDS Robustness
4.1. Taxonomy of NIDS Robustness Study
4.2. Adversarial Attacks
- Poisoning attacks: In the training stage of the ML workflow, poisoning attacks perturb the training dataset, by changing inputs or flipping labels, in order to degrade the trained model's future capability. If the attacker embeds a trigger in the training data that forces the ML model to execute particular behaviors in the inference stage, the attack is known as a backdoor attack.
- Evasion attacks: In the inference stage, evasion attacks manipulate or exploit a machine learning model by perturbing input data in such a way that it confuses or misleads the model's predictions.
- White-box attacks: The attackers know everything about the target ML model, such as its decision boundary. In this case, attackers can modify inputs with minimal perturbation yet achieve a very high success rate [27].
- Gray-box attacks: The attackers have only partial knowledge of the target ML model but are able to query it and observe its behavior [28].
- Black-box attacks: The attackers have no information about the target ML model and cannot access the target model's responses.
- Feature-based attacks: This type of adversarial attack against ML-based NIDSs perturbs the extracted features that represent a network traffic flow (a minimal feature-space sketch follows this list).
- Traffic-based attacks: Given that NIDSs include a feature extraction component, directly modifying the extracted features is impractical in real-world scenarios. Traffic-based attacks instead modify the original network traffic [29].
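To make the feature-based category concrete, below is a minimal sketch of a one-step FGSM-style evasion attempt against a hypothetical differentiable NIDS classifier over tabular flow features. The architecture, feature scaling, and perturbation budget are illustrative assumptions rather than a method from the surveyed works; a realizable traffic-based attack would additionally have to respect protocol and feature-dependency constraints.

```python
import torch
import torch.nn as nn

# Hypothetical NIDS classifier over 40 min-max-scaled flow features
# (architecture and sizes are illustrative only).
model = nn.Sequential(nn.Linear(40, 64), nn.ReLU(), nn.Linear(64, 2))
model.eval()

def fgsm_evasion(x, y, epsilon=0.05):
    """One-step FGSM in feature space: move each feature in the direction
    that increases the loss, nudging malicious flows (y=1) toward 'benign'."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()
    # Crude validity constraint; real perturbations must also respect
    # feature dependencies and map back to feasible traffic.
    return x_adv.clamp(0.0, 1.0).detach()

x = torch.rand(8, 40)                # a batch of scaled flow-feature vectors
y = torch.ones(8, dtype=torch.long)  # all labeled malicious
x_adv = fgsm_evasion(x, y)
```

The final `clamp` hints at the core difficulty: even this box constraint is only a rough proxy for what makes a perturbed feature vector correspond to real traffic.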
4.3. Distribution Shifts
- A label shift arises when $P(y)$ changes while $P(x \mid y)$ remains constant.
- A covariate shift occurs when $P(x)$ changes while $P(y \mid x)$ remains constant.
- A concept drift manifests when $P(y \mid x)$ changes while $P(x)$ remains constant (a toy simulation contrasting covariate shift and concept drift follows this list).
- Spurious correlations refer to statistical associations between features and labels that are predictive within the training distribution yet lose that predictive power in the test distribution [33].
- Temporal (concept) drift and knowledge extrapolation refer to language change and world-knowledge change, respectively, which produce unseen data far beyond the training distribution.
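The following toy simulation (synthetic 1-D data; all numbers are illustrative) contrasts two of these shifts: under covariate shift the learned decision rule keeps working because $P(y \mid x)$ is unchanged, while under concept drift the same model degrades because the labeling rule itself has moved.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def sample(n, x_mean=0.0, threshold=0.0):
    """Toy 1-D generator; labels follow the concept P(y=1 | x) = 1[x > threshold]."""
    x = rng.normal(loc=x_mean, size=(n, 1))
    y = (x[:, 0] > threshold).astype(int)
    return x, y

# Train on the reference distribution (x ~ N(0, 1), concept threshold 0).
x_tr, y_tr = sample(5000)
clf = LogisticRegression().fit(x_tr, y_tr)

# Covariate shift: P(x) moves, but the concept P(y|x) is unchanged,
# so a well-placed decision boundary keeps working.
print("covariate shift acc:", clf.score(*sample(5000, x_mean=2.0)))

# Concept drift: P(x) is unchanged, but the labeling rule P(y|x) moves,
# so the old boundary now misclassifies the region between 0 and 1.
print("concept drift acc :", clf.score(*sample(5000, threshold=1.0)))
```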
4.4. ML Robustness Model
5. Building in Robustness for Natural and Malicious Exploitation of Data Distribution Shift
5.1. Data Collection and Processing
5.1.1. Adversarial Challenges and Response
5.1.2. Distribution Shift Challenges and Response
- Mainstream data format—tabular data: Most ML-based NIDSs detect intrusions using statistical features in tabular format. However, modifying feature values is risky, and it is hard to verify whether the augmented samples are realistic (see the cautious augmentation sketch after this list).
- Structured raw packets: Network packets are designed for varying types of protocols and services, but within each individual type of traffic, the packet structure is clearly defined. This means that, unlike for images, network data augmentation may only touch the parts of raw packets that do not break these construction rules.
- Flexible raw packets: For network packets, not only the raw byte values but also the packet lengths can be modified, and the flexibility of each packet compounds across the traffic flow it belongs to. This flexibility makes it very hard to preserve the label-dependent features of the original data during augmentation.
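As an illustration of how cautious tabular augmentation has to be, the sketch below jitters only a user-specified subset of (assumed) min-max-scaled features and leaves everything else untouched. The feature mask, noise scale, and the idea that timing statistics are "safe" to perturb are all hypothetical assumptions; whether such augmented flows are realistic still has to be verified against the points raised above.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment_flows(X, perturbable_mask, noise_std=0.01, copies=2):
    """Jitter only the features marked safe to perturb (e.g., timing statistics),
    leaving label-dependent features (e.g., flag counts) untouched.
    Assumes X is min-max scaled to [0, 1]; the mask encodes domain knowledge."""
    out = [X]
    for _ in range(copies):
        noise = rng.normal(0.0, noise_std, size=X.shape) * perturbable_mask
        out.append(np.clip(X + noise, 0.0, 1.0))
    return np.concatenate(out, axis=0)

X = rng.random((100, 40))        # 100 flows, 40 scaled features
mask = np.zeros(40)
mask[:10] = 1.0                  # hypothetical: first 10 are timing features
X_aug = augment_flows(X, mask)   # shape (300, 40)
```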
5.2. Optimization
5.2.1. Adversarial Challenges and Response
5.2.2. Distribution Shift Challenges and Response
6. Patching Up Robustness for Natural and Malicious Exploitation of Data Distribution Shift
6.1. Fine-Tuning
6.1.1. Adversarial Challenges and Response
6.1.2. Distribution Shift Challenges and Response
6.2. Evaluation
6.2.1. Adversarial Challenges and Response
6.2.2. Distribution Shift Challenges and Response
6.3. Application Inferences
6.3.1. Adversarial Challenges and Response
6.3.2. Distribution Shift Challenges and Response
7. Research Summary and Future Directions
7.1. Main Takeaways
- Poisoning attacks are not easy to launch against ML-based NIDSs. However, online learning and distributed learning systems (such as federated learning and IoT scenarios) are more vulnerable (Section 5.1.1).
- Evasion attacks against ML-based NIDSs, both feature-based and traffic-based, have already received a lot of attention. However, how to use those attack methods to practically improve robustness against adversaries remains unclear (Section 6.3.1).
- Concept drift caused by temporal change has been comprehensively studied for ML-based NIDSs. The main solution is life-cycle adaptation, specifically retraining the ML model after drift occurs (Section 6.3.2); a schematic sketch of such a retraining loop follows this list.
- Distribution shifts caused by network environment changes have received less attention than concept drift for ML-based NIDSs. However, a pretrained NIDS model that generalizes across different network environments would greatly ease deployment in any particular environment (Section 6.3.2).
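As a schematic of the retrain-after-drift loop referenced above, the sketch below monitors a sliding window of prediction errors against (assumed available, possibly delayed) ground-truth feedback and retrains once the error rate crosses a threshold. The window size, threshold, and base model are arbitrary illustrative choices, not recommendations from the surveyed works.

```python
from collections import deque

import numpy as np
from sklearn.ensemble import RandomForestClassifier

class DriftAwareNIDS:
    """Minimal life-cycle adaptation loop: score flows, track the recent
    error rate, and retrain on buffered data once drift is suspected."""

    def __init__(self, window=500, threshold=0.15):
        self.model = RandomForestClassifier(n_estimators=50)
        self.errors = deque(maxlen=window)     # sliding window of 0/1 errors
        self.threshold = threshold             # error rate that signals drift
        self.buffer_X, self.buffer_y = [], []  # recent labeled flows

    def fit(self, X, y):
        self.model.fit(X, y)

    def observe(self, x, y_true):
        """Score one flow; retrain on the buffer if the windowed error rate
        exceeds the drift threshold (needs both classes in the buffer)."""
        y_pred = self.model.predict(x.reshape(1, -1))[0]
        self.errors.append(int(y_pred != y_true))
        self.buffer_X.append(x)
        self.buffer_y.append(y_true)
        drifted = (len(self.errors) == self.errors.maxlen
                   and np.mean(self.errors) > self.threshold)
        if drifted and len(set(self.buffer_y)) > 1:
            self.model.fit(np.array(self.buffer_X), np.array(self.buffer_y))
            self.errors.clear()
        return y_pred
```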
7.2. Discussion on Future Directions
7.2.1. Contrastive Learning for NIDSs
7.2.2. Robustness Certification for NIDSs
7.2.3. Adversarial Example Detection for NIDSs
7.2.4. Data Augmentation for NIDSs
8. Summary and Conclusions
Author Contributions
Funding
Conflicts of Interest
Nomenclature
Acronyms | Meanings |
---|---|
AE | Autoencoder |
ANT | Adversarial network traffic |
CL | Contrastive learning |
CNN | Convolutional neural network |
CV | Computer vision |
DANN | Domain-adversarial neural network |
DDoS | Distributed denial of service |
DL | Deep learning |
IF | Isolation forest |
LSTM | Long short-term memory |
MAC | Media access control |
ML | Machine learning |
NLP | Natural language processing |
NIDSs | Network intrusion detection systems |
OOD | Out-of-distribution |
PCA | Principal component analysis |
RNN | Recurrent neural network |
References
- Krizhevsky, A.; Sutskever, I.; Hinton, G.E. Imagenet Classification with Deep Convolutional Neural Networks. In Proceedings of the 26th Annual Conference on Neural Information Processing Systems 2012, Lake Tahoe, NV, USA, 3–6 December 2012; Volume 25.
- Hannun, A.; Case, C.; Casper, J.; Catanzaro, B.; Diamos, G.; Elsen, E.; Prenger, R.; Satheesh, S.; Sengupta, S.; Coates, A.; et al. Deep speech: Scaling up end-to-end speech recognition. arXiv 2014, arXiv:1412.5567.
- Li, X.; Chen, W.; Zhang, Q.; Wu, L. Building auto-encoder intrusion detection system based on random forest feature selection. Comput. Secur. 2020, 95, 101851.
- Mirsky, Y.; Doitshman, T.; Elovici, Y.; Shabtai, A. Kitsune: An Ensemble of Autoencoders for Online Network Intrusion Detection. In Proceedings of the 25th Annual Network and Distributed System Security Symposium, NDSS 2018, San Diego, CA, USA, 18–21 February 2018.
- Tocchetti, A.; Corti, L.; Balayn, A.; Yurrita, M.; Lippmann, P.; Brambilla, M.; Yang, J. AI Robustness: A Human-Centered Perspective on Technological Challenges and Opportunities. 2022. Available online: http://xxx.lanl.gov/abs/2210.08906 (accessed on 18 August 2023).
- Floridi, L. Establishing the rules for building trustworthy AI. Nat. Mach. Intell. 2019, 1, 261–262.
- Hoffman, W. Making AI Work for Cyber Defense; Center for Security and Emerging Technology: Georgetown, DC, USA, 2021.
- Viegas, E.K.; Santin, A.O.; Tedeschi, P. Toward a Reliable Evaluation of Machine Learning Schemes for Network-Based Intrusion Detection. IEEE Internet Things Mag. 2023, 6, 70–75.
- Wei, F.; Li, H.; Zhao, Z.; Hu, H. XNIDS: Explaining Deep Learning-Based Network Intrusion Detection Systems for Active Intrusion Responses. In Proceedings of the 32nd USENIX Security Symposium (USENIX Security 23), Anaheim, CA, USA, 9–11 August 2023.
- Benzaïd, C.; Taleb, T. AI for beyond 5G networks: A cyber-security defense or offense enabler? IEEE Netw. 2020, 34, 140–147.
- Sarker, I.H. Machine learning: Algorithms, real-world applications and research directions. SN Comput. Sci. 2021, 2, 160.
- Xiong, P.; Buffett, S.; Iqbal, S.; Lamontagne, P.; Mamun, M.; Molyneaux, H. Towards a robust and trustworthy machine learning system development: An engineering perspective. J. Inf. Secur. Appl. 2022, 65, 103121.
- Chen, P.Y.; Das, P. AI Maintenance: A Robustness Perspective. Computer 2023, 56, 48–56.
- Drenkow, N.; Sani, N.; Shpitser, I.; Unberath, M. A systematic review of robustness in deep learning for computer vision: Mind the gap? arXiv 2021, arXiv:2112.00639.
- Teney, D.; Lin, Y.; Oh, S.J.; Abbasnejad, E. Id and ood performance are sometimes inversely correlated on real-world datasets. arXiv 2022, arXiv:2209.00613.
- Apruzzese, G.; Andreolini, M.; Ferretti, L.; Marchetti, M.; Colajanni, M. Modeling realistic adversarial attacks against network intrusion detection systems. Digit. Threat. Res. Pract. DTRAP 2022, 3, 1–19.
- Mbow, M.; Sakurai, K.; Koide, H. Advances in Adversarial Attacks and Defenses in Intrusion Detection System: A Survey. In Proceedings of the International Conference on Science of Cyber Security, Matsue, Japan, 10–12 August 2022; Springer: Berlin/Heidelberg, Germany, 2022; pp. 196–212.
- He, K.; Kim, D.D.; Asghar, M.R. Adversarial machine learning for network intrusion detection systems: A comprehensive survey. IEEE Commun. Surv. Tutor. 2023, 25, 538–566.
- Jmila, H.; Khedher, M.I. Adversarial machine learning for network intrusion detection: A comparative study. Comput. Netw. 2022, 214, 109073.
- Sarker, I.H. Multi-aspects AI-based modeling and adversarial learning for cybersecurity intelligence and robustness: A comprehensive overview. Secur. Priv. 2023, 6, e295.
- Gama, J.; Žliobaitė, I.; Bifet, A.; Pechenizkiy, M.; Bouchachia, A. A survey on concept drift adaptation. ACM Comput. Surv. CSUR 2014, 46, 1–37.
- Lu, J.; Liu, A.; Dong, F.; Gu, F.; Gama, J.; Zhang, G. Learning under concept drift: A review. IEEE Trans. Knowl. Data Eng. 2018, 31, 2346–2363.
- Adnan, A.; Muhammed, A.; Abd Ghani, A.A.; Abdullah, A.; Hakim, F. An intrusion detection system for the internet of things based on machine learning: Review and challenges. Symmetry 2021, 13, 1011.
- Nixon, C.; Sedky, M.; Hassan, M. Reviews in Online Data Stream and Active Learning for Cyber Intrusion Detection—A Systematic Literature Review. In Proceedings of the 2021 Sixth International Conference on Fog and Mobile Edge Computing (FMEC), Gandia, Spain, 6–9 December 2021; pp. 1–6.
- Li, B.; Qi, P.; Liu, B.; Di, S.; Liu, J.; Pei, J.; Yi, J.; Zhou, B. Trustworthy AI: From Principles to Practices. arXiv 2022, arXiv:2110.01167.
- Kloft, M.; Laskov, P. Online Anomaly Detection under Adversarial Impact. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, JMLR Workshop and Conference Proceedings, Sardinia, Italy, 13–15 May 2010; pp. 405–412.
- Clements, J.; Yang, Y.; Sharma, A.A.; Hu, H.; Lao, Y. Rallying Adversarial Techniques against Deep Learning for Network Security. In Proceedings of the 2021 IEEE Symposium Series on Computational Intelligence (SSCI), Orlando, FL, USA, 4–7 December 2021; IEEE: New York, NY, USA, 2021; pp. 1–8.
- Wu, D.; Fang, B.; Wang, J.; Liu, Q.; Cui, X. Evading Machine Learning Botnet Detection Models via Deep Reinforcement Learning. In Proceedings of the ICC 2019—2019 IEEE International Conference on Communications (ICC), Shanghai, China, 20–24 May 2019; IEEE: New York, NY, USA, 2019; pp. 1–6.
- Sharon, Y.; Berend, D.; Liu, Y.; Shabtai, A.; Elovici, Y. Tantra: Timing-based adversarial network traffic reshaping attack. IEEE Trans. Inf. Forensics Secur. 2022, 17, 3225–3237.
- Storkey, A. When training and test sets are different: Characterizing learning transfer. Dataset Shift Mach. Learn. 2009, 30, 6.
- Huyen, C. Designing Machine Learning Systems; O’Reilly Media, Inc.: Sebastopol, CA, USA, 2022.
- Bommasani, R.; Hudson, D.A.; Adeli, E.; Altman, R.; Arora, S.; von Arx, S.; Bernstein, M.S.; Bohg, J.; Bosselut, A.; Brunskill, E.; et al. On the opportunities and risks of foundation models. arXiv 2021, arXiv:2108.07258.
- Sagawa, S.; Koh, P.W.; Hashimoto, T.B.; Liang, P. Distributionally robust neural networks for group shifts: On the importance of regularization for worst-case generalization. arXiv 2019, arXiv:1911.08731.
- Wang, W.; Sheng, Y.; Wang, J.; Zeng, X.; Ye, X.; Huang, Y.; Zhu, M. HAST-IDS: Learning hierarchical spatial-temporal features using deep neural networks to improve intrusion detection. IEEE Access 2017, 6, 1792–1806.
- Doriguzzi-Corin, R.; Millar, S.; Scott-Hayward, S.; Martinez-del Rincon, J.; Siracusa, D. LUCID: A practical, lightweight deep learning solution for DDoS attack detection. IEEE Trans. Netw. Serv. Manag. 2020, 17, 876–889.
- Dragoi, M.; Burceanu, E.; Haller, E.; Manolache, A.; Brad, F. AnoShift: A distribution shift benchmark for unsupervised anomaly detection. Adv. Neural Inf. Process. Syst. 2022, 35, 32854–32867.
- Jagielski, M.; Oprea, A.; Biggio, B.; Liu, C.; Nita-Rotaru, C.; Li, B. Manipulating Machine Learning: Poisoning Attacks and Countermeasures for Regression Learning. In Proceedings of the 2018 IEEE Symposium on Security and Privacy (SP), San Francisco, CA, USA, 20–24 May 2018; pp. 19–35.
- Goldblum, M.; Tsipras, D.; Xie, C.; Chen, X.; Schwarzschild, A.; Song, D.; Mądry, A.; Li, B.; Goldstein, T. Dataset Security for Machine Learning: Data Poisoning, Backdoor Attacks, and Defenses. IEEE Trans. Pattern Anal. Mach. Intell. 2023, 45, 1563–1580.
- Nguyen, T.D.; Rieger, P.; Miettinen, M.; Sadeghi, A.R. Poisoning Attacks on Federated Learning-Based IoT Intrusion Detection System. In Proceedings of the Decentralized IoT Systems and Security (DISS), San Diego, CA, USA, 23–26 February 2020; pp. 1–7.
- Zhang, Z.; Zhang, Y.; Guo, D.; Yao, L.; Li, Z. SecFedNIDS: Robust defense for poisoning attack against federated learning-based network intrusion detection system. Future Gener. Comput. Syst. 2022, 134, 154–169.
- Lai, Y.C.; Lin, J.Y.; Lin, Y.D.; Hwang, R.H.; Lin, P.C.; Wu, H.K.; Chen, C.K. Two-phase Defense Against Poisoning Attacks on Federated Learning-based Intrusion Detection. Comput. Secur. 2023, 129, 103205.
- Zhang, H.; Yu, X.; Ren, P.; Luo, C.; Min, G. Deep adversarial learning in intrusion detection: A data augmentation enhanced framework. arXiv 2019, arXiv:1901.07949.
- Yuan, D.; Ota, K.; Dong, M.; Zhu, X.; Wu, T.; Zhang, L.; Ma, J. Intrusion Detection for Smart Home Security Based on Data Augmentation with Edge Computing. In Proceedings of the ICC 2020—2020 IEEE International Conference on Communications (ICC), Dublin, Ireland, 7–11 June 2020; IEEE: New York, NY, USA, 2020; pp. 1–6.
- Geirhos, R.; Rubisch, P.; Michaelis, C.; Bethge, M.; Wichmann, F.A.; Brendel, W. ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness. arXiv 2018, arXiv:1811.12231.
- Hendrycks, D.; Mu, N.; Cubuk, E.D.; Zoph, B.; Gilmer, J.; Lakshminarayanan, B. Augmix: A simple data processing method to improve robustness and uncertainty. arXiv 2019, arXiv:1912.02781.
- Hendrycks, D.; Basart, S.; Mu, N.; Kadavath, S.; Wang, F.; Dorundo, E.; Desai, R.; Zhu, T.; Parajuli, S.; Guo, M.; et al. The Many Faces of Robustness: A Critical Analysis of Out-of-Distribution Generalization. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Nashville, TN, USA, 20–25 June 2021; pp. 8340–8349.
- Wei, J.; Zou, K. Eda: Easy data augmentation techniques for boosting performance on text classification tasks. arXiv 2019, arXiv:1901.11196.
- Chen, J.; Yang, Z.; Yang, D. Mixtext: Linguistically-informed interpolation of hidden space for semi-supervised text classification. arXiv 2020, arXiv:2004.12239.
- Xie, R.; Cao, J.; Dong, E.; Xu, M.; Sun, K.; Li, Q.; Shen, L.; Zhang, M. Rosetta: Enabling Robust TLS Encrypted Traffic Classification in Diverse Network Environments with TCP-Aware Traffic Augmentation. In Proceedings of the ACM Turing Award Celebration Conference, Wuhan, China, 28–30 July 2023.
- Gao, I.; Sagawa, S.; Koh, P.W.; Hashimoto, T.; Liang, P. Out-of-Domain Robustness via Targeted Augmentations. arXiv 2023, arXiv:2302.11861.
- Divekar, A.; Parekh, M.; Savla, V.; Mishra, R.; Shirole, M. Benchmarking Datasets for Anomaly-Based Network Intrusion Detection: KDD CUP 99 Alternatives. In Proceedings of the 2018 IEEE 3rd International Conference on Computing, Communication and Security (ICCCS), Kathmandu, Nepal, 25–27 October 2018; IEEE: New York, NY, USA, 2018; pp. 1–8.
- Deng, L.; Zhao, Y.; Bao, H. A Self-supervised Adversarial Learning Approach for Network Intrusion Detection System. In Proceedings of the Cyber Security, Beijing, China, 16–17 August 2022; Springer Nature: Singapore, 2022; pp. 73–85.
- Bostani, H.; Zhao, Z.; Liu, Z.; Moonsamy, V. Level Up with RealAEs: Leveraging Domain Constraints in Feature Space to Strengthen Robustness of Android Malware Detection. 2023. Available online: http://xxx.lanl.gov/abs/2205.15128 (accessed on 12 August 2023).
- Liu, X.; Zhang, F.; Hou, Z.; Mian, L.; Wang, Z.; Zhang, J.; Tang, J. Self-Supervised Learning: Generative or Contrastive. IEEE Trans. Knowl. Data Eng. 2023, 35, 857–876.
- Khosla, P.; Teterwak, P.; Wang, C.; Sarna, A.; Tian, Y.; Isola, P.; Maschinot, A.; Liu, C.; Krishnan, D. Supervised Contrastive Learning. In Advances in Neural Information Processing Systems; Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M., Lin, H., Eds.; Curran Associates, Inc.: Red Hook, NY, USA, 2020; Volume 33, pp. 18661–18673.
- Liu, L.; Wang, P.; Ruan, J.; Lin, J. ConFlow: Contrast Network Flow Improving Class-Imbalanced Learning in Network Intrusion Detection. Res. Sq. 2022; preprint.
- Tong, L.; Li, B.; Hajaj, C.; Xiao, C.; Zhang, N.; Vorobeychik, Y. Improving Robustness of ML Classifiers against Realizable Evasion Attacks Using Conserved Features. In Proceedings of the 28th USENIX Security Symposium (USENIX Security 19), Santa Clara, CA, USA, 14–16 August 2019; pp. 285–302.
- Dodge, J.; Ilharco, G.; Schwartz, R.; Farhadi, A.; Hajishirzi, H.; Smith, N. Fine-tuning pretrained language models: Weight initializations, data orders, and early stopping. arXiv 2020, arXiv:2002.06305.
- Wang, J.; Pan, J.; AlQerm, I.; Liu, Y. Def-IDS: An Ensemble Defense Mechanism Against Adversarial Attacks for Deep Learning-Based Network Intrusion Detection. In Proceedings of the 2021 International Conference on Computer Communications and Networks (ICCCN), Athens, Greece, 19–22 July 2021; pp. 1–9.
- Du, T.; Ji, S.; Shen, L.; Zhang, Y.; Li, J.; Shi, J.; Fang, C.; Yin, J.; Beyah, R.; Wang, T. Cert-RNN: Towards Certifying the Robustness of Recurrent Neural Networks. CCS 2021, 21, 15–19.
- Shi, Z.; Zhang, H.; Chang, K.W.; Huang, M.; Hsieh, C.J. Robustness verification for transformers. arXiv 2020, arXiv:2002.06622.
- Cohen, J.; Rosenfeld, E.; Kolter, Z. Certified Adversarial Robustness via Randomized Smoothing. In Proceedings of the International Conference on Machine Learning (PMLR), Long Beach, CA, USA, 9–15 June 2019; pp. 1310–1320.
- Yang, G.; Duan, T.; Hu, J.E.; Salman, H.; Razenshteyn, I.; Li, J. Randomized Smoothing of All Shapes and Sizes. In Proceedings of the International Conference on Machine Learning (PMLR), Virtual Event, 13–18 July 2020; pp. 10693–10705.
- Layeghy, S.; Baktashmotlagh, M.; Portmann, M. DI-NIDS: Domain invariant network intrusion detection system. Knowl.-Based Syst. 2023, 273, 110626.
- Qu, Y.; Ma, H.; Jiang, Y.; Bu, Y. A Network Intrusion Detection Method Based on Domain Confusion. Electronics 2023, 12, 1255.
- Radford, A.; Kim, J.W.; Hallacy, C.; Ramesh, A.; Goh, G.; Agarwal, S.; Sastry, G.; Askell, A.; Mishkin, P.; Clark, J.; et al. Learning Transferable Visual Models from Natural Language Supervision. In Proceedings of the International Conference on Machine Learning (PMLR), Virtual Event, 18–24 July 2021; pp. 8748–8763.
- Kumar, A.; Raghunathan, A.; Jones, R.; Ma, T.; Liang, P. Fine-Tuning can Distort Pretrained Features and Underperform Out-of-Distribution. arXiv 2022, arXiv:2202.10054.
- Gunel, B.; Du, J.; Conneau, A.; Stoyanov, V. Supervised Contrastive Learning for Pre-Trained Language Model Fine-Tuning. 2021. Available online: http://xxx.lanl.gov/abs/2011.01403 (accessed on 7 July 2023).
- Yan, Y.; Li, R.; Wang, S.; Zhang, F.; Wu, W.; Xu, W. ConSERT: A Contrastive Framework for Self-Supervised Sentence Representation Transfer. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), Online, 1–6 August 2021; pp. 5065–5075.
- Li, L.; Weber, M.; Xu, X.; Rimanic, L.; Kailkhura, B.; Xie, T.; Zhang, C.; Li, B. Tss: Transformation-Specific Smoothing for Robustness Certification. In Proceedings of the 2021 ACM SIGSAC Conference on Computer and Communications Security, Online, 15–19 November 2021; pp. 535–557.
- Wang, K.; Wang, Z.; Han, D.; Chen, W.; Yang, J.; Shi, X.; Yin, X. BARS: Local Robustness Certification for Deep Learning based Traffic Analysis Systems. In Proceedings of the NDSS, San Diego, CA, USA, 27 February–3 March 2023.
- Pal, A.; Sulam, J. Understanding Noise-Augmented Training for Randomized Smoothing. arXiv 2023, arXiv:2305.04746.
- Verkerken, M.; D’hooge, L.; Wauters, T.; Volckaert, B.; De Turck, F. Towards model generalization for intrusion detection: Unsupervised machine learning techniques. J. Netw. Syst. Manag. 2022, 30, 12.
- Al-Riyami, S.; Coenen, F.; Lisitsa, A. A Re-Evaluation of Intrusion Detection Accuracy: Alternative Evaluation Strategy. In Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security, Toronto, ON, Canada, 15–19 October 2018; pp. 2195–2197.
- Al-Riyami, S.; Lisitsa, A.; Coenen, F. Cross-Datasets Evaluation of Machine Learning Models for Intrusion Detection Systems. In Proceedings of the Sixth International Congress on Information and Communication Technology: ICICT 2021, London, UK, 25–26 February 2021; Springer: Berlin/Heidelberg, Germany, 2022; Volume 4, pp. 815–828.
- Apruzzese, G.; Pajola, L.; Conti, M. The cross-evaluation of machine learning-based network intrusion detection systems. IEEE Trans. Netw. Serv. Manag. 2022, 19, 5152–5169.
- Layeghy, S.; Portmann, M. Explainable Cross-domain Evaluation of ML-based Network Intrusion Detection Systems. Comput. Electr. Eng. 2023, 108, 108692.
- Peng, X.; Huang, W.; Shi, Z. Adversarial Attack against DoS Intrusion Detection: An Improved Boundary-Based Method. In Proceedings of the 2019 IEEE 31st International Conference on Tools with Artificial Intelligence (ICTAI), Portland, OR, USA, 4–6 November 2019; IEEE: New York, NY, USA, 2019; pp. 1288–1295.
- Sadeghzadeh, A.M.; Shiravi, S.; Jalili, R. Adversarial network traffic: Towards evaluating the robustness of deep-learning-based network traffic classification. IEEE Trans. Netw. Serv. Manag. 2021, 18, 1962–1976.
- Han, D.; Wang, Z.; Zhong, Y.; Chen, W.; Yang, J.; Lu, S.; Shi, X.; Yin, X. Evaluating and improving adversarial robustness of machine learning-based network intrusion detectors. IEEE J. Sel. Areas Commun. 2021, 39, 2632–2647.
- Tan, S.; Zhong, X.; Tian, Z.; Dong, Q. Sneaking Through Security: Mutating Live Network Traffic to Evade Learning-Based NIDS. IEEE Trans. Netw. Serv. Manag. 2022, 19, 2295–2308.
- Peng, Y.; Fu, G.; Luo, Y.; Hu, J.; Li, B.; Yan, Q. Detecting Adversarial Examples for Network Intrusion Detection System with GAN. In Proceedings of the 2020 IEEE 11th International Conference on Software Engineering and Service Science (ICSESS), Beijing, China, 16–18 October 2020; pp. 6–10.
- Donahue, J.; Krähenbühl, P.; Darrell, T. Adversarial feature learning. arXiv 2016, arXiv:1605.09782.
- Wang, N.; Chen, Y.; Hu, Y.; Lou, W.; Hou, Y.T. MANDA: On Adversarial Example Detection for Network Intrusion Detection System. In Proceedings of the IEEE INFOCOM 2021—IEEE Conference on Computer Communications, Vancouver, BC, Canada, 10–13 May 2021; pp. 1–10.
- Zhang, C.; Costa-Perez, X.; Patras, P. Adversarial attacks against deep learning-based network intrusion detection systems and defense mechanisms. IEEE/ACM Trans. Netw. 2022, 30, 1294–1311.
- Bell, S.; Bala, K. Learning visual similarity for product design with convolutional neural networks. ACM Trans. Graph. TOG 2015, 34, 1–10.
- Widmer, G.; Kubat, M. Learning in the presence of concept drift and hidden contexts. Mach. Learn. 1996, 23, 69–101.
- Andresini, G.; Appice, A.; Loglisci, C.; Belvedere, V.; Redavid, D.; Malerba, D. A Network Intrusion Detection System for Concept Drifting Network Traffic Data. In Proceedings of the Discovery Science: 24th International Conference, DS 2021, Halifax, NS, Canada, 11–13 October 2021; Proceedings 24. Springer: Berlin/Heidelberg, Germany, 2021; pp. 111–121.
- Kadwe, Y.; Suryawanshi, V. A review on concept drift. IOSR J. Comput. Eng. 2015, 17, 20–26.
- Andresini, G.; Pendlebury, F.; Pierazzi, F.; Loglisci, C.; Appice, A.; Cavallaro, L. Insomnia: Towards Concept-Drift Robustness in Network Intrusion Detection. In Proceedings of the 14th ACM Workshop on Artificial Intelligence and Security, Virtual Event, 15 November 2021; pp. 111–122.
- Yang, L.; Guo, W.; Hao, Q.; Ciptadi, A.; Ahmadzadeh, A.; Xing, X.; Wang, G. CADE: Detecting and Explaining Concept Drift Samples for Security Applications. In Proceedings of the 30th USENIX Security Symposium (USENIX Security 21), Online, 11–13 August 2021; pp. 2327–2344.
- Tavallaee, M.; Bagheri, E.; Lu, W.; Ghorbani, A.A. A Detailed Analysis of the KDD CUP 99 Data Set. In Proceedings of the 2009 IEEE Symposium on Computational Intelligence for Security and Defense Applications, Ottawa, ON, Canada, 8–10 July 2009; IEEE: New York, NY, USA, 2009; pp. 1–6.
- Perona, I.; Gurrutxaga, I.; Arbelaitz, O.; Martín, J.I.; Muguerza, J.; Pérez, J.M. Service-Independent Payload Analysis to Improve Intrusion Detection in Network Traffic. In Proceedings of the 7th Australasian Data Mining Conference, Glenelg/Adelaide, SA, Australia, 27–28 November 2008; Citeseer: State College, PA, USA, 2008; Volume 87, pp. 171–178.
Levels | Keywords |
---|---|
Core Topic | Robustness, adversarial, distribution shifts |
Scope and Scenario | Machine learning, deep learning, neural networks, NIDSs |
Technique | Poisoning attacks, evasion attacks, data augmentation, contrastive learning, adversarial training, fine-tuning, domain adaptation, robustness certification, cross-dataset evaluation, adversarial example |
Techniques | Impacts on ML Model/System’s Robustness | Stages in the Life Cycle | Degree of Study in NIDSs | Degree of Study in Other Fields |
---|---|---|---|---|
Poisoning attacks | Reduces model robustness | Data preparation | Moderate | Moderate |
Evasion attacks | Unclear | Inference | Comprehensive | Comprehensive |
Data augmentation | Improves model robustness | Data preparation | Limited | Comprehensive |
Contrastive learning | Improves model robustness | Pretraining | Limited | Comprehensive |
Adversarial training | Improves model robustness | Training/retraining | Moderate | Comprehensive
Fine-tuning | Depending on the data used, can be beneficial or harmful | Retraining | Moderate | Comprehensive
Domain adaptation | Improves system robustness (against concept drifts) | Retraining | Moderate | Comprehensive |
Robustness certification | Evaluates robustness (against adversarial attacks) | Evaluation | Limited | Moderate |
Cross-dataset evaluation | Evaluates robustness (against distribution shifts) | Evaluation | Moderate | Moderate |
Adversarial example detection | Improves system robustness (against adversarial attacks) | Inference | Limited | Comprehensive |