
Privacy and Security in Machine Learning and Artificial Intelligence

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: 10 June 2025

Special Issue Editor


Dr. Mina Sheikhalishahi
Guest Editor
Department of Computer Science, Open University, 6401 DL Heerlen, The Netherlands
Interests: privacy and security; machine learning

Special Issue Information

Dear Colleagues,

The development of Artificial Intelligence (AI) and its learning techniques, such as Machine Learning (ML) and Deep Learning (DL), has revolutionized data processing and analysis. This transformation is rapidly changing human life and has enabled many practical AI-based applications, including the Internet of Things/Vehicles (IoT/IoV), smart grids and energy saving, fog/edge computing, face/image recognition, text/sentiment analysis, attack detection, and healthcare.

However, the potential benefits of AI are hindered by issues such as insecurity, bias, unreliability, and privacy violations in data processing and communication, which affect both AI applications and, as a consequence, society at large. To address these concerns, this Special Issue seeks novel ideas and findings that envision the future of private and secure machine learning and AI.

The Special Issue welcomes submissions on topics including:

  • privacy-preserving machine learning, deep learning, and federated learning
  • trustworthy machine learning
  • metrics for private, secure, and trustworthy AI
  • adversarial attacks against AI models
  • cryptography and security protocols in AI
  • privacy by design in AI-based systems
  • applications of private, secure, and trustworthy AI

Dr. Mina Sheikhalishahi
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • privacy
  • trustworthy AI
  • federated learning
  • AI security

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad-scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (4 papers)


Research

22 pages, 2285 KiB  
Article
A Privacy-Preserving Scheme for a Traffic Accident Risk Level Prediction System
by Pablo Marcillo, Gabriela Suntaxi and Myriam Hernández-Álvarez
Appl. Sci. 2024, 14(21), 9876; https://doi.org/10.3390/app14219876 - 29 Oct 2024
Abstract
Due to the expansion of Artificial Intelligence (AI), especially Machine Learning (ML), it is increasingly common to face confidentiality regulations about using sensitive data in learning models, which are generally hosted in cloud environments. Confidentiality regulations such as HIPAA and GDPR seek to guarantee the confidentiality and privacy of personal information. The input and output data of a learning model may include sensitive data that must be protected. Adversaries could intercept and exploit these data to infer more sensitive data or even to determine the structure of the prediction model. To guarantee data privacy, one option is to encrypt the data and make inferences over encrypted data. This strategy is challenging for learning models, which must then receive encrypted data, make inferences over encrypted data, and deliver encrypted data. To address this issue, this paper presents a privacy-preserving machine learning approach using Fully Homomorphic Encryption (FHE) for a model that predicts the risk level of suffering a traffic accident. Despite the limitations of experimenting with FHE on machine learning models using a low-performance computer (limitations that can be overcome with high-performance computational infrastructure), we built several encrypted models. Among the encrypted models based on Decision Trees, Random Forests, XGBoost, and Fully Connected Neural Networks (FCNN), the FCNN-based model reached the highest accuracy (80.1%) with the lowest inference time (8.476 s).
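As a concrete illustration of the encrypted-inference workflow this abstract describes, the sketch below runs a single dense layer with a polynomial activation over an encrypted feature vector using the open-source TenSEAL library (CKKS scheme). It is a minimal sketch, not the authors' implementation: the weights, input vector, and encryption parameters are illustrative placeholders.

```python
import tenseal as ts

# CKKS context: approximate arithmetic over real numbers (an FHE scheme)
ctx = ts.context(ts.SCHEME_TYPE.CKKS,
                 poly_modulus_degree=8192,
                 coeff_mod_bit_sizes=[60, 40, 40, 60])
ctx.global_scale = 2 ** 40
ctx.generate_galois_keys()

# Client side: encrypt a feature vector before sending it to the model host
features = [0.2, 0.7, 0.1, 0.5]            # placeholder, already-normalized inputs
enc_x = ts.ckks_vector(ctx, features)

# Server side: dense layer y = xW + b computed directly on the ciphertext
W = [[0.5, -0.1], [0.3, 0.8], [-0.2, 0.4], [0.1, 0.1]]   # placeholder 4x2 weights
b = [0.05, -0.02]
enc_y = enc_x.mm(W) + b

# Square activation: a polynomial, FHE-friendly stand-in for ReLU
enc_y = enc_y * enc_y

# Client side: only the secret-key holder can decrypt the prediction scores
print(enc_y.decrypt())
```

The square activation stands in for ReLU because FHE schemes evaluate polynomials natively; the circuit's multiplicative depth (two levels here, one for the matrix product and one for the square) is what the coefficient-modulus chain must accommodate.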

11 pages, 515 KiB  
Communication
A Novel Artificial General Intelligence Security Evaluation Scheme Based on an Analytic Hierarchy Process Model with a Genetic Algorithm
by Guangyong Chen, Yiqun Zhang and Rui Jiang
Appl. Sci. 2024, 14(20), 9609; https://doi.org/10.3390/app14209609 - 21 Oct 2024
Abstract
The rapid development of Artificial General Intelligence (AGI) in recent years has provided many new opportunities and challenges for human society and production. However, existing evaluation methods have problems with regard to consistency, subjectivity, and comprehensiveness. To solve these problems, in this paper we propose an Artificial General Intelligence Security Evaluation scheme (AGISE), based on analytic hierarchy process (AHP) technology with a genetic algorithm, to comprehensively evaluate AGI security across multiple security risk styles and complex indicators. Firstly, our AGISE combines AHP technology with a genetic algorithm to realize reliable, consistent, and objective evaluation of AGI security. Secondly, in our AGISE, we propose more effective AGI security evaluation classifications and indicator settings. Finally, we demonstrate the effectiveness of our AGISE through experiments.
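For readers unfamiliar with the AHP machinery underlying such a scheme, the following numpy sketch shows the standard step of deriving priority weights from a pairwise comparison matrix and computing Saaty's consistency ratio, the kind of quantity a genetic algorithm can then drive below the usual 0.1 threshold. The comparison matrix is a made-up example, and the GA coupling itself is not reproduced here.

```python
import numpy as np

# Saaty's random consistency index, indexed by matrix size n
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}

def ahp_weights(A):
    """Priority weights and consistency ratio for a pairwise comparison matrix A."""
    n = A.shape[0]
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)              # principal eigenvalue index
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()                             # normalized priority weights
    ci = (eigvals[k].real - n) / (n - 1)     # consistency index
    cr = ci / RI[n]                          # consistency ratio (want < 0.1)
    return w, cr

# Illustrative 3x3 reciprocal comparison of security indicators
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
w, cr = ahp_weights(A)
print("weights:", w.round(3), "CR:", round(cr, 3))
```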

16 pages, 383 KiB  
Article
Structure Estimation of Adversarial Distributions for Enhancing Model Robustness: A Clustering-Based Approach
by Bader Rasheed, Adil Khan and Asad Masood Khattak
Appl. Sci. 2023, 13(19), 10972; https://doi.org/10.3390/app131910972 - 5 Oct 2023
Cited by 1
Abstract
In this paper, we propose an advanced method for adversarial training that leverages the underlying structure of adversarial perturbation distributions. Unlike conventional adversarial training techniques that consider adversarial examples in isolation, our approach employs clustering algorithms in conjunction with dimensionality reduction techniques to group adversarial perturbations, effectively constructing a more intricate and structured feature space for model training. Our method incorporates density- and boundary-aware clustering mechanisms to capture the inherent spatial relationships among adversarial examples. Furthermore, we introduce a strategy for utilizing adversarial perturbations to enhance the delineation between clusters, leading to the formation of more robust and compact clusters. To substantiate the method's efficacy, we performed a comprehensive evaluation on well-established benchmarks, including the MNIST and CIFAR-10 datasets. The performance metrics employed for the evaluation encompass the adversarial vs. clean accuracy trade-off, demonstrating a significant improvement in both robust and standard test accuracy over traditional adversarial training methods. Through empirical experiments, we show that the proposed clustering-based adversarial training framework not only enhances the model's robustness against a range of adversarial attacks, such as FGSM and PGD, but also improves generalization on clean data.
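As one plausible reading of this pipeline, the sketch below generates FGSM perturbations in PyTorch, reduces their dimensionality with PCA, and clusters them with k-means from scikit-learn. The tiny linear model, random data, and hyperparameters are placeholders; the paper's density- and boundary-aware clustering and its adversarial training loop are not reproduced here.

```python
import torch
import torch.nn.functional as F
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def fgsm_perturbation(model, x, y, eps=0.1):
    """One FGSM step: perturbation = eps * sign(grad of loss w.r.t. input)."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (eps * x.grad.sign()).detach()

# Placeholder model and random data standing in for an MNIST classifier
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 10))
x = torch.rand(256, 1, 28, 28)
y = torch.randint(0, 10, (256,))

# Flatten each perturbation into a feature vector describing the attack
delta = fgsm_perturbation(model, x, y).flatten(1).numpy()

# Dimensionality reduction followed by clustering of the perturbation space
z = PCA(n_components=50).fit_transform(delta)
labels = KMeans(n_clusters=10, n_init=10).fit_predict(z)
print("cluster sizes:", [int((labels == c).sum()) for c in range(10)])
```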

19 pages, 7360 KiB  
Article
Federated Learning for Clients’ Data Privacy Assurance in Food Service Industry
by Hamed Taheri Gorji, Mahdi Saeedi, Erum Mushtaq, Hossein Kashani Zadeh, Kaylee Husarik, Seyed Mojtaba Shahabi, Jianwei Qin, Diane E. Chan, Insuck Baek, Moon S. Kim, Alireza Akhbardeh, Stanislav Sokolov, Salman Avestimehr, Nicholas MacKinnon, Fartash Vasefi and Kouhyar Tavakolian
Appl. Sci. 2023, 13(16), 9330; https://doi.org/10.3390/app13169330 - 17 Aug 2023
Cited by 1
Abstract
The food service industry must ensure that service facilities are free of foodborne pathogens hosted by organic residues and biofilms. Foodborne diseases put customers at risk and compromise the reputations of service providers. Fluorescence imaging, empowered by state-of-the-art artificial intelligence (AI) algorithms, can detect invisible residues. However, using AI requires large datasets that are most effective when collected from actual users, raising concerns about data privacy and possible leakage of sensitive information. In this study, we employed a decentralized privacy-preserving technology to address client data privacy issues. When federated learning (FL) is used, there is no need for data sharing across clients or data centralization on a server. We combined FL with a new fluorescence imaging technology and applied two deep learning models, MobileNetv3 and DeepLabv3+, to identify and segment invisible residues on food preparation equipment and surfaces. We used FedML as our FL framework and FedAvg as the aggregation algorithm. The model achieved training and testing accuracies of 95.83% and 94.94%, respectively, for classification between clean and contaminated frames, and achieved intersection over union (IoU) scores of 91.23% and 89.45% for training and testing, respectively, when segmenting the contaminated areas. The results demonstrate that federated learning combined with fluorescence imaging and deep learning algorithms can improve the performance of cleanliness auditing systems while assuring client data privacy.
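The FedAvg aggregation the authors use (via the FedML framework) reduces to a dataset-size-weighted average of client model parameters. Below is a minimal, framework-free PyTorch sketch of that server-side step; the two-client setup and float-only parameters are simplifying assumptions rather than the paper's actual configuration.

```python
import copy
import torch

def fedavg(client_states, client_sizes):
    """Server-side FedAvg: average client weights, weighted by local dataset size."""
    total = float(sum(client_sizes))
    avg = copy.deepcopy(client_states[0])
    for key in avg:
        avg[key] = sum(state[key].float() * (n / total)
                       for state, n in zip(client_states, client_sizes))
    return avg

# Two hypothetical clients holding locally trained copies of the same model
model_a = torch.nn.Linear(4, 2)
model_b = torch.nn.Linear(4, 2)
global_state = fedavg([model_a.state_dict(), model_b.state_dict()],
                      client_sizes=[600, 400])

# The server pushes the aggregated weights back out for the next round
global_model = torch.nn.Linear(4, 2)
global_model.load_state_dict(global_state)
```

Because no raw images ever leave the clients, only these weight updates cross the network, which is the privacy property the abstract emphasizes.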
