Machine Learning Integration with Cyber Security

A special issue of Future Internet (ISSN 1999-5903). This special issue belongs to the section "Cybersecurity".

Deadline for manuscript submissions: closed (30 April 2022) | Viewed by 24116

Special Issue Editor


Guest Editor
Department of Computing and Cyber Security, Texas A&M University-San Antonio, San Antonio, TX 78224, USA
Interests: software engineering; software-defined networking; software testing; cyber security

Special Issue Information

Dear Colleagues,

With the continuous expansion of the role of machine learning algorithms in making actionable decisions in our security and decision-making systems, attempts to attack those algorithms continue to expand as well. Adversarial attacks have recently been observed in online social network platforms, network traffic, email spam detection, financial services, and many other areas. In this context, our call for papers welcomes contributions related to adversarial attacks and adversarial machine learning in all fields and applications, such as, but not limited to:

  1. Adversarial machine learning in text and natural language processing;
  2. Adversarial machine learning in images and image processing;
  3. Adversarial attacks;
  4. Adversarial defense mechanisms;
  5. Misinformation in social networks and adversarial machine learning;
  6. Adversarial machine learning models;
  7. Social bots and trolls.

Dr. Izzat Alsmadi
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Future Internet is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • adversarial machine learning
  • adversarial attacks
  • social bots
  • social trolls

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (6 papers)


Research

19 pages, 1202 KiB  
Article
Securing the Smart City Airspace: Drone Cyber Attack Detection through Machine Learning
by Zubair Baig, Naeem Syed and Nazeeruddin Mohammad
Future Internet 2022, 14(7), 205; https://doi.org/10.3390/fi14070205 - 30 Jun 2022
Cited by 17 | Viewed by 4224
Abstract
Drones are increasingly adopted to serve a smart city through their ability to render quick and adaptive services. Also known as unmanned aerial vehicles (UAVs), they are deployed to conduct area surveillance, monitor road networks for traffic, deliver goods and observe environmental phenomena. Cyber threats posed through compromised drones contribute to sabotage in a smart city’s airspace, can prove catastrophic to its operations, and can also cause fatalities. In this contribution, we propose a machine learning-based approach for detecting hijacking, GPS signal jamming and denial of service (DoS) attacks that can be carried out against a drone. A detailed machine learning-based classification of drone datasets for the DJI Phantom 4 model, comprising both normal and malicious signatures, is conducted, and the results obtained provide guidance for future efforts to safeguard drone systems against such cyber threats.
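The detection idea described in the abstract can be sketched as a supervised classifier over drone telemetry features. Everything below is an illustrative stand-in, assuming a simple feature set and a nearest-centroid classifier; the paper itself evaluates full ML models on DJI Phantom 4 data:

```python
# A minimal sketch: classify telemetry windows into normal / hijack / GPS-jam /
# DoS classes. Feature names, class centers, and the nearest-centroid rule are
# hypothetical stand-ins for the paper's dataset and models.
import numpy as np

rng = np.random.default_rng(0)
LABELS = ["normal", "hijack", "gps_jam", "dos"]

# Illustrative class centers: [rssi, packet_rate, gps_variance, latency_ms]
CENTERS = np.array([
    [0.9, 100.0, 0.1, 20.0],   # normal operation
    [0.7, 160.0, 0.3, 35.0],   # hijack: abnormal command traffic
    [0.4, 100.0, 2.5, 25.0],   # gps_jam: high GPS position variance
    [0.8, 400.0, 0.1, 90.0],   # dos: flooded control link
])
SCALE = CENTERS.std(axis=0) + 1e-9  # per-feature scaling

def classify(sample):
    """Return the label whose (scaled) centroid is nearest to the sample."""
    d = np.linalg.norm((CENTERS - sample) / SCALE, axis=1)
    return LABELS[int(np.argmin(d))]

# Noisy samples drawn around each class center, then scored
X = np.vstack([c + rng.normal(0, 0.02, size=(50, 4)) * c for c in CENTERS])
y = np.repeat(LABELS, 50)
accuracy = np.mean([classify(x) == t for x, t in zip(X, y)])
```

With well-separated synthetic classes the toy classifier is near-perfect; the paper's contribution lies in doing this on real, noisy drone signatures.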
(This article belongs to the Special Issue Machine Learning Integration with Cyber Security)

17 pages, 662 KiB  
Article
A Vote-Based Architecture to Generate Classified Datasets and Improve Performance of Intrusion Detection Systems Based on Supervised Learning
by Diogo Teixeira, Silvestre Malta and Pedro Pinto
Future Internet 2022, 14(3), 72; https://doi.org/10.3390/fi14030072 - 25 Feb 2022
Cited by 3 | Viewed by 3808
Abstract
An intrusion detection system (IDS) is an important tool to prevent potential threats to systems and data. Anomaly-based IDSs may deploy machine learning algorithms to classify events as either normal or anomalous and trigger the adequate response. When using supervised learning, these algorithms require classified, rich, and recent datasets. Thus, to foster the performance of these machine learning models, datasets can be generated from different sources in a collaborative approach and trained with multiple algorithms. This paper proposes a vote-based architecture to generate classified datasets and improve the performance of supervised learning-based IDSs. On a regular basis, multiple IDSs in different locations send their logs to a central system that combines and classifies them using different machine learning models and a majority vote system. It then generates a new, classified dataset, which is used to train the best updated model to be integrated into the IDS of the companies involved. The proposed architecture performs multiple training runs with several algorithms; to shorten the overall runtimes, it was deployed in Fed4FIRE+ with Ray to distribute the tasks across the available resources. A set of machine learning algorithms and the proposed architecture were assessed. Compared with a baseline scenario, the proposed architecture increased accuracy by 11.5% and precision by 11.2%.
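The majority-vote labelling step described above can be sketched in a few lines: each event is labelled by several models, and the most common label wins. The model outputs below are illustrative placeholders, not from the paper:

```python
# A minimal sketch of majority-vote labelling across multiple model outputs.
from collections import Counter

def majority_vote(predictions):
    """predictions: per-model labels for one event, e.g. ['normal', 'anomalous', ...]."""
    winner, _ = Counter(predictions).most_common(1)[0]
    return winner

# Three hypothetical models labelling four events
model_outputs = [
    ["normal",    "normal",    "anomalous"],
    ["anomalous", "anomalous", "anomalous"],
    ["anomalous", "normal",    "anomalous"],
    ["normal",    "normal",    "normal"],
]
labels = [majority_vote(event) for event in model_outputs]
# labels == ["normal", "anomalous", "anomalous", "normal"]
```

An odd number of voting models avoids ties; the paper's architecture additionally retrains on the voted dataset to update the deployed model.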
(This article belongs to the Special Issue Machine Learning Integration with Cyber Security)

16 pages, 3951 KiB  
Article
The Framework of Cross-Domain and Model Adversarial Attack against Deepfake
by Haoxuan Qiu, Yanhui Du and Tianliang Lu
Future Internet 2022, 14(2), 46; https://doi.org/10.3390/fi14020046 - 29 Jan 2022
Cited by 2 | Viewed by 3202
Abstract
To protect images from deepfake tampering, adversarial examples can be made to replace the original images by distorting the output of the deepfake model and disrupting its work. Current studies lack generalizability in that they focus only on adversarial examples generated by a single model in a single domain. To improve the generalization of adversarial examples and produce better attack effects on each domain of multiple deepfake models, this paper proposes a framework of Cross-Domain and Model Adversarial Attack (CDMAA). First, CDMAA uniformly weights the loss function of each domain and calculates the cross-domain gradient. Then, inspired by the multiple gradient descent algorithm (MGDA), CDMAA integrates the cross-domain gradients of each model to obtain the cross-domain perturbation vector, which is used to optimize the adversarial example. Finally, we propose a penalty-based gradient regularization method to pre-process the cross-domain gradients and improve the success rate of attacks. Experiments with CDMAA on four mainstream deepfake models showed that the adversarial examples it generates can attack multiple models and multiple domains simultaneously. Ablation experiments compared the CDMAA components with the methods used in existing studies and verified the superiority of CDMAA.
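The gradient-combination step can be roughly sketched as follows. This is an FGSM-style stand-in under simplifying assumptions (uniform weighting, plain normalization, sign step); CDMAA itself uses an MGDA-inspired combination and penalty-based gradient regularization, which are not reproduced here:

```python
# A rough sketch: combine per-domain gradients into one update direction for
# the adversarial perturbation, under an L-infinity budget.
import numpy as np

def cross_domain_step(per_domain_grads, perturbation, eps=0.01, budget=0.1):
    """per_domain_grads: list of gradient arrays, one per (model, domain) pair."""
    g = np.zeros_like(perturbation)
    for grad in per_domain_grads:
        norm = np.linalg.norm(grad)
        if norm > 0:
            g += grad / norm          # uniform weighting across domains
    step = eps * np.sign(g)           # sign step in the combined direction
    return np.clip(perturbation + step, -budget, budget)  # L-infinity budget

# Two hypothetical per-domain gradients for a 3-component perturbation
grads = [np.array([0.5, -1.0, 0.2]), np.array([1.0, -0.5, 0.0])]
delta = cross_domain_step(grads, np.zeros(3))
```

Normalizing each gradient before summing keeps one domain from dominating the update, which is the intuition behind the uniform weighting in the abstract.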
(This article belongs to the Special Issue Machine Learning Integration with Cyber Security)

22 pages, 774 KiB  
Article
Models versus Datasets: Reducing Bias through Building a Comprehensive IDS Benchmark
by Rasheed Ahmad, Izzat Alsmadi, Wasim Alhamdani and Lo’ai Tawalbeh
Future Internet 2021, 13(12), 318; https://doi.org/10.3390/fi13120318 - 17 Dec 2021
Cited by 3 | Viewed by 2930
Abstract
Today, deep learning approaches are widely used to build intrusion detection systems for securing IoT environments. However, the hidden and complex nature of these models raises various concerns, such as trusting the model output and understanding why the model made certain decisions. Researchers generally publish their proposed model’s settings and performance results based on a specific dataset and a classification model but do not report the proposed model’s output and findings. Similarly, many researchers suggest an IDS solution by focusing only on a single benchmark dataset and classifier. Such solutions are prone to generating inaccurate and biased results. This paper overcomes these limitations of previous work by analyzing various benchmark datasets and various individual and hybrid deep learning classifiers to find the best IDS solution for IoT that is efficient, lightweight, and comprehensive in detecting network anomalies. We also show the models’ localized predictions and analyze the top contributing features impacting the global performance of deep learning models. This paper aims to extract the aggregate knowledge from various datasets and classifiers and analyze their commonalities to avoid any possible bias in results and increase the trust and transparency of deep learning models. We believe this paper’s findings will help future researchers build a comprehensive IDS based on well-performing classifiers and utilize the aggregated knowledge and the minimum set of significantly contributing features.
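The cross-dataset evaluation idea reduces to a benchmarking loop: score every classifier on every dataset and aggregate, so conclusions are not tied to one dataset/model pair. The model names, dataset names, and scores below are hypothetical placeholders:

```python
# A minimal sketch of the benchmarking loop behind a bias-reducing comparison.
from statistics import mean

MODELS = ["cnn", "lstm", "cnn_lstm"]
DATASETS = ["ds_a", "ds_b"]
SCORES = {  # illustrative accuracies per (model, dataset) pair
    ("cnn", "ds_a"): 0.91, ("cnn", "ds_b"): 0.88,
    ("lstm", "ds_a"): 0.89, ("lstm", "ds_b"): 0.90,
    ("cnn_lstm", "ds_a"): 0.94, ("cnn_lstm", "ds_b"): 0.93,
}

def evaluate(model, dataset):
    """Placeholder scorer; a real benchmark would train and test the model here."""
    return SCORES[(model, dataset)]

# Aggregate each model's accuracy across all datasets, then rank
avg_by_model = {m: mean(evaluate(m, d) for d in DATASETS) for m in MODELS}
best_model = max(avg_by_model, key=avg_by_model.get)
```

Ranking by the cross-dataset average, rather than by a single dataset's score, is what guards against the single-benchmark bias the abstract criticizes.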
(This article belongs to the Special Issue Machine Learning Integration with Cyber Security)

18 pages, 361 KiB  
Article
DNS Firewall Based on Machine Learning
by Claudio Marques, Silvestre Malta and João Magalhães
Future Internet 2021, 13(12), 309; https://doi.org/10.3390/fi13120309 - 30 Nov 2021
Cited by 8 | Viewed by 5085
Abstract
Nowadays, there are many DNS firewall solutions to prevent users from accessing malicious domains. These can provide real-time protection and block illegitimate communications, contributing to the cybersecurity posture of organizations. Most of these solutions are based on known malicious domain lists that are constantly updated. However, in this way, it is only possible to block communications to known malicious domains, leaving out many others that are malicious but have not yet been added to the blocklists. This work provides a study to implement a DNS firewall solution based on ML and thereby improve the detection of malicious domain requests on the fly. For this purpose, a dataset with 34 features and 90 k records was created based on real DNS logs. The data were enriched using OSINT sources. Exploratory analysis and data preparation steps were carried out, and the final dataset was submitted to different supervised ML algorithms to accurately and quickly classify whether a domain request is malicious. The results show that the ML algorithms were able to classify benign and malicious domains with accuracy rates between 89% and 96%, and with a classification time between 0.01 and 3.37 s. The contributions of this study are twofold. In terms of research, a dataset was made public and the methodology can be used by other researchers. In terms of solution, the work provides the baseline to implement an in-band DNS firewall.
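The feature-extraction side of such a pipeline can be sketched with a few lexical features of the queried name. The features below are common illustrative examples only; the paper's dataset has 34 features, enriched with OSINT sources, and feeds full supervised ML models rather than any hand-written rule:

```python
# A minimal sketch: derive lexical features from a domain name, the kind of
# input a supervised classifier could score on the fly.
import math
from collections import Counter

def domain_features(domain):
    """Compute simple lexical features for the leftmost label of a domain."""
    name = domain.split(".")[0]
    counts = Counter(name)
    # Shannon entropy of the character distribution; DGA-style names score high
    entropy = -sum((c / len(name)) * math.log2(c / len(name))
                   for c in counts.values())
    return {
        "length": len(name),
        "digit_ratio": sum(ch.isdigit() for ch in name) / len(name),
        "entropy": entropy,
    }

feats = domain_features("xj2k9q8v.example")
```

High entropy and digit ratios are classic hints of algorithmically generated domains, which is why lexical features like these commonly appear in DNS classification datasets.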
(This article belongs to the Special Issue Machine Learning Integration with Cyber Security)

14 pages, 2151 KiB  
Article
Deepfake-Image Anti-Forensics with Adversarial Examples Attacks
by Li Fan, Wei Li and Xiaohui Cui
Future Internet 2021, 13(11), 288; https://doi.org/10.3390/fi13110288 - 17 Nov 2021
Cited by 7 | Viewed by 3558
Abstract
Many deepfake-image forensic detectors have been proposed and improved due to the development of synthetic techniques. However, recent studies show that most of these detectors are not immune to adversarial example attacks. Therefore, understanding the impact of adversarial examples on their performance is an important step towards improving deepfake-image detectors. This study developed an anti-forensics case study of two popular general deepfake detectors based on their accuracy and generalization. Herein, we propose Poisson noise DeepFool (PNDF), an improved iterative adversarial example generation method. This method can simply and effectively attack forensic detectors by adding perturbations to images in different directions. Our attacks can reduce a detector’s AUC from 0.9999 to 0.0331, and the detection accuracy for deepfake images from 0.9997 to 0.0731. Compared with state-of-the-art studies, our work provides an important defense direction for future research on deepfake-image detectors, by focusing on the generalization performance of detectors and their resistance to adversarial example attacks.
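The DeepFool idea underlying PNDF can be illustrated on the simplest possible case, a binary linear detector f(x) = w·x + b, where the minimal L2 perturbation to the decision boundary has a closed form, r = −f(x)·w/‖w‖². This toy sketch only gestures at the principle; PNDF itself iterates on deep detectors and injects Poisson noise, which is not reproduced here:

```python
# A toy DeepFool-style step for a binary linear detector f(x) = w.x + b:
# the minimal L2 perturbation to the hyperplane is r = -f(x) * w / ||w||^2.
import numpy as np

def deepfool_linear(x, w, b, overshoot=0.02):
    """Push x just across the decision boundary w.x + b = 0."""
    f = float(w @ x + b)
    r = -f * w / (w @ w)              # closed-form minimal perturbation
    return x + (1 + overshoot) * r    # small overshoot ensures the flip

w = np.array([1.0, -2.0])
b = 0.5
x = np.array([2.0, 0.5])           # f(x) = 1.5 > 0: detector says "fake detected"
x_adv = deepfool_linear(x, w, b)   # crosses the boundary: detection evaded
```

For deep detectors, DeepFool linearizes the model around the current point and iterates this step, which is the backbone the PNDF method builds on.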
(This article belongs to the Special Issue Machine Learning Integration with Cyber Security)
