Topic Editors

Dr. Feiran Huang
College of Cyber Security/College of Information Science and Technology, Jinan University, Guangzhou 510632, China
Dr. Shuyuan Lin
College of Information Science and Technology / College of Cyber Security, Jinan University, Guangzhou 510632, China
Dr. Xiaoming Zhang
School of Cyber Science and Technology, Beihang University, Beijing 100191, China
Dr. Yang Lu
School of Informatics, Xiamen University, Xiamen 361005, China

Adversarial Machine Learning: Theories and Applications

Abstract submission deadline
closed (31 January 2024)
Manuscript submission deadline
closed (31 March 2024)
Viewed by
5501

Topic Information

Dear Colleagues,

Adversarial Machine Learning has emerged as a critical and rapidly growing research area at the intersection of machine learning, cybersecurity, and artificial intelligence. It studies the vulnerabilities of machine learning models to adversarial attacks and the defenses against them. In recent years, machine learning has achieved remarkable success in applications including computer vision, natural language processing, speech recognition, and autonomous systems. However, as these models are increasingly deployed in safety-critical systems, there is growing concern about their susceptibility to adversarial attacks, which aim to deceive models into making incorrect predictions or decisions. The perturbations involved are often imperceptible to the human eye, yet they can cause significant changes in model outputs. This vulnerability raises fundamental questions about the robustness, reliability, and safety of machine learning models in real-world scenarios.

This multidisciplinary Topic aims to explore recent advancements and applications of Adversarial Machine Learning. Adversarial attacks pose significant challenges in domains such as computer vision and natural language processing, and can lead to severe consequences, including misclassified images, manipulated data, and compromised model integrity. The development of intelligent defense techniques is therefore crucial to safeguarding the integrity and reliability of machine learning models in real-world applications.

We invite researchers to submit original works that shed light on the theories and practical applications of Adversarial Machine Learning. We encourage submissions that contribute novel insights, methodologies, or empirical findings in this rapidly evolving field. Topics of interest include, but are not limited to, the following:

  • Interpretable/explainable adversarial machine learning
  • Adversarial attacks in computer vision and pattern recognition
  • Adversarial challenges in natural language processing
  • Adversarial scene understanding: object segmentation, motion segmentation, and visual tracking in video/image sequences by machine learning
  • Adversarial correspondence learning: enhancing robustness in image matching
  • Adversarial robustness in deep learning
  • Embedding adversarial learning
  • Violence/anomaly detection
  • Robustness estimation or benchmarking of machine learning models
  • Privacy and security concerns in adversarial machine learning
  • Real-world applications and case studies of adversarial machine learning
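
To make the notion of an adversarial attack concrete, the following is a minimal, self-contained sketch of the fast gradient sign method (FGSM) against a hand-built logistic-regression "model". The weights, input, and step size are illustrative choices, not taken from any of the works listed on this page:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_attack(x, y, w, b, eps):
    """One FGSM step against a logistic-regression model.

    For the binary cross-entropy loss L, the input gradient is
    dL/dx = (sigmoid(w.x + b) - y) * w, so the attack takes a single
    signed step of size eps in that direction.
    """
    grad_x = (sigmoid(w @ x + b) - y) * w
    return x + eps * np.sign(grad_x)

# Toy model: weights chosen by hand, not trained (illustrative only).
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([0.3, 0.1])  # clean input, true label y = 1
y = 1.0

p_clean = sigmoid(w @ x + b)                 # confidence on the clean input
x_adv = fgsm_attack(x, y, w, b, eps=0.4)     # small signed perturbation
p_adv = sigmoid(w @ x_adv + b)               # confidence after the attack
```

Here the clean input is classified as class 1 (p_clean > 0.5), while the perturbed input is pushed across the decision boundary (p_adv < 0.5), which is exactly the "small perturbation, large output change" phenomenon described above.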

Dr. Feiran Huang
Dr. Shuyuan Lin
Dr. Xiaoming Zhang
Dr. Yang Lu
Topic Editors

Keywords

  • adversarial attacks
  • machine learning
  • robust estimation
  • computer vision
  • natural language processing
  • deep learning
  • privacy preservation
  • correspondence learning

Participating Journals

Journal                                    Abbrev.        Impact Factor  CiteScore  Launched  First Decision (Median)  APC
Applied Sciences                           applsci        2.5            5.3        2011      17.8 days                CHF 2400
Machine Learning and Knowledge Extraction  make           4.0            6.3        2019      27.1 days                CHF 1800
Mathematics                                mathematics    2.3            4.0        2013      17.1 days                CHF 2600
Remote Sensing                             remotesensing  4.2            8.3        2009      24.7 days                CHF 2700

Preprints.org is a multidisciplinary platform providing a preprint service dedicated to sharing your research from the start and empowering your research journey.

MDPI Topics is cooperating with Preprints.org and has built a direct connection between MDPI journals and Preprints.org. Authors are encouraged to enjoy the benefits by posting a preprint at Preprints.org prior to publication:

  1. Immediately share your ideas ahead of publication and establish your research priority;
  2. Protect your idea with a time-stamped preprint article;
  3. Enhance the exposure and impact of your research;
  4. Receive feedback from your peers in advance;
  5. Have it indexed in Web of Science (Preprint Citation Index), Google Scholar, Crossref, SHARE, PrePubMed, Scilit and Europe PMC.

Published Papers (4 papers)

12 pages, 449 KiB  
Article
Polytomous Knowledge Structures Based on Entail Relations
by Zhaorong He
Mathematics 2024, 12(16), 2504; https://doi.org/10.3390/math12162504 - 14 Aug 2024
Viewed by 587
Abstract
In knowledge structure theory (KST), an individual’s knowledge state represents the items that the individual can completely solve. Based on differences in individuals’ latent cognitive competence, polytomous knowledge states can be used to represent the extent to which individuals can partially solve items. This paper explores the construction of polytomous knowledge states and polytomous knowledge structures on a polytomous knowledge domain Q×L. A quasi-ordinal polytomous knowledge space and a polytomous knowledge space can each be induced by a different entail relation. When the polytomous knowledge structure (Q,L,K) on Q×L is determined, accurately assessing an individual’s polytomous knowledge state is the key to providing learning guidance and taking remedial teaching measures for that individual. We therefore study the basic assessment procedure for a given polytomous knowledge structure, and a concrete example is designed to illustrate the method presented in this paper.
(This article belongs to the Topic Adversarial Machine Learning: Theories and Applications)

19 pages, 614 KiB  
Article
A Parallel Optimization Method for Robustness Verification of Deep Neural Networks
by Renhao Lin, Qinglei Zhou, Xiaofei Nan and Tianqing Hu
Mathematics 2024, 12(12), 1884; https://doi.org/10.3390/math12121884 - 17 Jun 2024
Viewed by 702
Abstract
Deep neural networks (DNNs) have gained considerable attention for their expressive capabilities, but unfortunately they carry serious robustness risks. Formal verification is an important technique for ensuring network reliability, yet current verification techniques are unsatisfactory in time performance, which hinders their practical application. To address this issue, we propose an efficient optimization method based on parallel acceleration with more computing resources. The method involves the speedup configuration of a partition-based verification aligned with the structures and robustness formal specifications of DNNs. A parallel verification framework is designed specifically for neural network verification systems, which integrates various auxiliary modules and accommodates diverse verification modes. The efficient parallel scheduling of verification queries within the framework enhances resource utilization and enables the system to process a substantial volume of verification tasks. We conduct extensive experiments on multiple commonly used verification benchmarks to demonstrate the rationality and effectiveness of the proposed method. The results show that higher efficiency is achieved after parallel optimization integration.
(This article belongs to the Topic Adversarial Machine Learning: Theories and Applications)
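
The partition-and-parallelize idea behind such methods can be illustrated with a toy sketch: split the input box into sub-regions and check a simple interval-bound property on each sub-region in parallel. The one-layer ReLU network, the output property, and the names `interval_bounds` and `parallel_verify` are all illustrative assumptions, not the authors' verification system:

```python
from concurrent.futures import ThreadPoolExecutor

import numpy as np

def interval_bounds(W, b, lo, hi):
    """Interval bound propagation through one affine layer + ReLU."""
    W_pos, W_neg = np.maximum(W, 0), np.minimum(W, 0)
    out_lo = W_pos @ lo + W_neg @ hi + b
    out_hi = W_pos @ hi + W_neg @ lo + b
    return np.maximum(out_lo, 0), np.maximum(out_hi, 0)

def verify_partition(args):
    """Check the property 'output 0 stays above output 1' on one sub-box."""
    W, b, lo, hi = args
    out_lo, out_hi = interval_bounds(W, b, lo, hi)
    return out_lo[0] >= out_hi[1]

def parallel_verify(W, b, lo, hi, n_parts=4, workers=4):
    """Split dimension 0 of the input box and verify the parts in parallel."""
    edges = np.linspace(lo[0], hi[0], n_parts + 1)
    tasks = []
    for i in range(n_parts):
        p_lo, p_hi = lo.copy(), hi.copy()
        p_lo[0], p_hi[0] = edges[i], edges[i + 1]
        tasks.append((W, b, p_lo, p_hi))
    with ThreadPoolExecutor(max_workers=workers) as ex:
        results = list(ex.map(verify_partition, tasks))
    return all(results)  # the property must hold on every sub-region

# Identity network: output equals input, so on the box [2,3] x [0,1]
# output 0 is always at least 2 while output 1 is at most 1.
W = np.array([[1.0, 0.0], [0.0, 1.0]])
b = np.zeros(2)
safe = parallel_verify(W, b, np.array([2.0, 0.0]), np.array([3.0, 1.0]))
```

Partitioning tightens the per-region bounds and makes the queries independent, which is what allows a scheduler to spread them across workers; real systems add branch-and-bound and much tighter relaxations on top of this skeleton.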

12 pages, 284 KiB  
Article
Analyticity of the Cauchy Problem for a Three-Component Generalization of Camassa–Holm Equation
by Cuiyun Shi, Maojun Bin and Zaiyun Zhang
Mathematics 2024, 12(7), 1085; https://doi.org/10.3390/math12071085 - 3 Apr 2024
Viewed by 704
Abstract
In this paper, we investigate the Cauchy problem for a three-component generalization of the Camassa–Holm equation (the G3CH equation henceforth) with analytic initial data. The analyticity of its solutions is proved in both variables, globally in space and locally in time.
(This article belongs to the Topic Adversarial Machine Learning: Theories and Applications)
11 pages, 945 KiB  
Article
Improving Adversarial Robustness via Distillation-Based Purification
by Inhwa Koo, Dong-Kyu Chae and Sang-Chul Lee
Appl. Sci. 2023, 13(20), 11313; https://doi.org/10.3390/app132011313 - 15 Oct 2023
Viewed by 1627
Abstract
Despite the impressive performance of deep neural networks on many different vision tasks, they are known to be vulnerable to intentionally added noise in input images. To combat these adversarial examples (AEs), improving the adversarial robustness of models has emerged as an important research topic, and research has been conducted in various directions, including adversarial training, image denoising, and adversarial purification. Among these, this paper focuses on adversarial purification, a kind of pre-processing that removes noise before AEs enter a classification model. The advantage of adversarial purification is that it can improve robustness without affecting the model itself, whereas other defense techniques, such as adversarial training, suffer from a decrease in model accuracy. Our proposed purification framework utilizes a convolutional autoencoder as a base model to capture the features of images and their spatial structure. We further aim to improve the adversarial robustness of our purification model by distilling knowledge from teacher models. To this end, we train two convolutional autoencoders (teachers), one with adversarial training and the other with normal training. Then, through ensemble knowledge distillation, we transfer their ability to denoise and restore original images to the student model (the purification model). Our extensive experiments confirm that our student model achieves high purification performance (i.e., how accurately a pre-trained classification model classifies purified images). An ablation study confirms the positive effect of our idea of ensemble knowledge distillation from two teachers on performance.
(This article belongs to the Topic Adversarial Machine Learning: Theories and Applications)
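
The ensemble-knowledge-distillation step can be sketched in a deliberately simplified linear form: two fixed "teacher" denoisers produce reconstructions, their average becomes the student's regression target, and the student is fit by least squares. The matrices `T1`, `T2`, and `S` are hypothetical linear stand-ins for the paper's convolutional autoencoders, not its actual models:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the two teacher autoencoders: each "teacher"
# here is just a fixed linear map (the paper's teachers are convolutional
# autoencoders, one adversarially trained and one normally trained).
T1 = 0.9 * np.eye(4)
T2 = 1.1 * np.eye(4)

def teacher1(x):
    return x @ T1.T

def teacher2(x):
    return x @ T2.T

# Ensemble knowledge distillation: the student's regression target is
# the average of the two teachers' reconstructions.
X_noisy = rng.normal(size=(64, 4))
targets = 0.5 * (teacher1(X_noisy) + teacher2(X_noisy))

# Fit a linear "student" purifier by least squares, i.e. the minimizer of
# the distillation loss ||S(x) - mean_t T_t(x)||^2 over this batch.
S, *_ = np.linalg.lstsq(X_noisy, targets, rcond=None)

def student(x):
    return x @ S
```

In this linear toy the student recovers the average of the two teacher maps exactly; in the paper, the same idea is realized by training the student autoencoder against the ensemble of teacher reconstructions.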
