Security, Privacy and Application in New Intelligence Techniques

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: 20 June 2025 | Viewed by 8105

Special Issue Editors


Guest Editor
Software College, Northeastern University, Shenyang 110169, China
Interests: network and information security; data security and privacy protection; artificial intelligence security; software security

Guest Editor
School of Information and Software Engineering, University of Electronic Science and Technology, Chengdu 611731, China
Interests: federated learning; information security; privacy computing; blockchain

Special Issue Information

Dear Colleagues,

In recent years, intelligence techniques have attracted extensive attention from research, industry, and other fields, greatly expanding humanity's ability to perceive, understand, and control the physical world, and profoundly affecting how we produce and live. Nevertheless, their rapid and widespread deployment, along with their role in providing potentially critical services, raises numerous issues related to the security and privacy of the operations performed and the services provided. Every day, we use intelligence techniques to collect and analyze our personal, financial, and health information. Because these techniques are often open and complex, they can be subjected to malicious attacks from both insiders and outsiders, making the protection of security and privacy in these techniques a critical issue.

Consequently, research and development efforts in academia and industry have increasingly focused on security and privacy issues in intelligence techniques. Although recent advances in their security and privacy protection, such as fully homomorphic encryption, secure multiparty computation, and adversarial machine learning, are promising, more work is still needed to transform theoretical techniques into practical solutions that can be implemented efficiently in new intelligence techniques.

This Special Issue is dedicated to the most recent developments and research outcomes addressing theoretical and practical aspects of security, privacy, and applications in new intelligence techniques. Its goal is to provide researchers and practitioners worldwide with an ideal platform to develop new solutions targeting the corresponding key challenges. Original, unpublished, high-quality research results are solicited on challenging topics which include, but are not limited to, those listed below:

  • Intelligence techniques in cybersecurity;
  • New cryptographic techniques for intelligence techniques;
  • Privacy-preserving machine learning;
  • Adversarial machine learning;
  • Deep learning in security and privacy;
  • Big data intelligence in security and privacy;
  • Security and privacy in new intelligent computing technologies;
  • Security and privacy in intelligent data sharing, integration, and storage;
  • Security and privacy in the Internet of Things;
  • Blockchain in intelligent applications and services;
  • Intelligent data processing, storage, and sharing;
  • Intelligent applications;
  • Security and privacy in graph neural networks;
  • Risk assessment and prediction;
  • Prediction and early warning of security risks in intelligent systems;
  • Secure federated learning.

Prof. Dr. Jian Xu
Dr. Ruijin Wang
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • security
  • privacy
  • intelligence techniques
  • adversarial machine learning
  • blockchain
  • federated learning
  • Internet of Things

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (6 papers)


Research

21 pages, 1556 KiB  
Article
Intelligent and Secure Cloud–Edge Collaborative Industrial Information Encryption Strategy Based on Credibility Assessment
by Aiping Tan, Chenglong Dong, Yan Wang, Chang Wang and Changqing Xia
Appl. Sci. 2024, 14(19), 8812; https://doi.org/10.3390/app14198812 - 30 Sep 2024
Viewed by 615
Abstract
As industries develop and informatization accelerates, enterprise collaboration is increasing. However, current architectures face malicious attacks, data tampering, privacy issues, and security and efficiency problems in information exchange and enterprise credibility. Additionally, the complexity of cyber threats requires integrating intelligent security measures to proactively defend against sophisticated attacks. To address these challenges, this paper introduces an intelligent and secure cloud–edge collaborative industrial information encryption strategy based on credibility assessment. The proposed strategy incorporates adaptive encryption specifically designed for cloud–edge and edge–edge architectures and utilizes attribute encryption to control access to user-downloaded data, ensuring secure information exchange. A mechanism for assessing enterprise credibility over a defined period helps maintain a trusted collaborative environment, crucial for identifying and mitigating risks from potentially malicious or unreliable entities. Furthermore, integrating intelligent threat detection and response systems enhances overall security by continuously monitoring and analyzing network traffic for anomalies. Experimental analysis evaluates the security of communication paths and examines how enterprise integrity influences collaboration outcomes. Simulation results show that this approach enhances enterprise integrity, reduces losses caused by harmful actors, and promotes efficient collaboration without compromising security. This intelligent and secure strategy not only safeguards sensitive data but also ensures the resilience and trustworthiness of the collaborative network. Full article
(This article belongs to the Special Issue Security, Privacy and Application in New Intelligence Techniques)
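The periodic credibility assessment described above can be illustrated with a minimal sketch. The update rule below, an exponential moving average over per-period collaboration outcomes, is a hypothetical stand-in for the paper's actual mechanism; the function name and weighting are illustrative assumptions:

```python
# Hypothetical sketch of a periodic enterprise-credibility assessment,
# not the paper's actual scheme: each period's collaboration outcomes
# update a bounded score via an exponential moving average.

def update_credibility(score, outcomes, alpha=0.3):
    """Blend the previous score with the mean outcome of the period.

    score    -- previous credibility in [0, 1]
    outcomes -- per-interaction results for the period (1 = honest, 0 = harmful)
    alpha    -- weight given to the new period's evidence
    """
    if not outcomes:
        return score  # no evidence this period: keep the old score
    period_mean = sum(outcomes) / len(outcomes)
    return (1 - alpha) * score + alpha * period_mean

# A peer that behaved honestly all period drifts upward:
print(update_credibility(0.5, [1, 1, 1, 1]))  # 0.65
```

Bounding the score and discounting old evidence lets a trusted collaborative environment both penalize newly malicious entities and allow gradual recovery.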

17 pages, 1538 KiB  
Article
2FAKA-C/S: A Robust Two-Factor Authentication and Key Agreement Protocol for C/S Data Transmission in Federated Learning
by Chao Huang, Bin Wang, Zhaoyang Bao and Wenhao Qi
Appl. Sci. 2024, 14(15), 6664; https://doi.org/10.3390/app14156664 - 30 Jul 2024
Viewed by 976
Abstract
As a hot technology trend, federated learning (FL) cleverly combines data utilization and privacy protection by processing data locally on the client and sharing only model parameters with the server, embodying an efficient and secure collaborative learning model between clients and aggregation servers. During the process of uploading parameters in FL models, there is susceptibility to unauthorized access threats, which can result in training data leakage. To ensure data security during transmission, Authentication and Key Agreement (AKA) protocols have been proposed to authenticate legitimate users and safeguard training data. However, existing AKA protocols for client–server (C/S) architectures show security deficiencies, such as a lack of user anonymity and susceptibility to password-guessing attacks. In this paper, we propose a robust 2FAKA-C/S protocol based on ECC and hash-chain technology. Our security analysis shows that the proposed protocol ensures the session keys are semantically secure and can effectively resist various attacks. The performance analysis indicates that the proposed protocol achieves a total running time of 62.644 ms and requires only 800 bits of communication overhead, showing superior computational efficiency and lower communication costs compared to existing protocols. In conclusion, the proposed protocol securely protects the training parameters in a federated learning environment and provides a reliable guarantee for data transmission. Full article
(This article belongs to the Special Issue Security, Privacy and Application in New Intelligence Techniques)
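The hash-chain ingredient named in the abstract can be sketched generically. This is a Lamport-style one-time-value chain, not the 2FAKA-C/S protocol itself; the seed value and helper names are illustrative assumptions:

```python
# Minimal hash-chain illustration (Lamport-style one-time values).
# The client commits to h^n(seed); later it reveals preimages in
# reverse order, each of which the verifier can hash back to the
# previously accepted value.

import hashlib

def h(data: bytes) -> bytes:
    """One application of the chain's hash function (SHA-256 here)."""
    return hashlib.sha256(data).digest()

def build_chain(seed: bytes, n: int) -> list:
    """Return [h(seed), h^2(seed), ..., h^n(seed)]."""
    chain, cur = [], seed
    for _ in range(n):
        cur = h(cur)
        chain.append(cur)
    return chain

# The server stores only the anchor h^n(seed); the client authenticates
# round i by revealing h^(n-i)(seed), which hashes to the stored value.
chain = build_chain(b"client-secret", 4)
anchor = chain[-1]
assert h(chain[-2]) == anchor  # round-1 token verifies
```

Because the hash is one-way, an eavesdropper who sees a revealed token cannot derive the next one, which is what makes hash chains attractive for lightweight authentication.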

19 pages, 7921 KiB  
Article
A Dynamic Parameter Tuning Strategy for Decomposition-Based Multi-Objective Evolutionary Algorithms
by Jie Zheng, Jiaxu Ning, Hongfeng Ma and Ziyi Liu
Appl. Sci. 2024, 14(8), 3481; https://doi.org/10.3390/app14083481 - 20 Apr 2024
Cited by 2 | Viewed by 893
Abstract
The penalty-based boundary intersection (PBI) method is a common decomposition method in the MOEA/D algorithm, but using a fixed penalty parameter in the aggregation function limits the convergence of the population to a certain extent and is not conducive to maintaining the diversity of boundary solutions. To address these problems, this paper proposes a penalty boundary crossing strategy (DPA) for MOEA/D that adaptively adjusts the penalty parameter. The strategy adjusts the penalty parameter value according to how uniformly solutions are distributed around the weight vectors in the current iteration period, thus helping the optimization process balance convergence and diversity. In the experimental part, we tested the MOEA/D-DPA algorithm against several improved MOEA/D algorithms on the classical test set. The results show that MOEA/D with the DPA performs better than MOEA/D with the other decomposition strategies. Full article
(This article belongs to the Special Issue Security, Privacy and Application in New Intelligence Techniques)
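For readers unfamiliar with PBI decomposition, the fixed-penalty aggregation function that the DPA strategy makes adaptive is g(x | λ, z*) = d₁ + θ·d₂, where d₁ measures progress along the weight vector and d₂ the perpendicular deviation from it. A minimal NumPy sketch (θ = 5 is a conventional default, not the paper's tuned value):

```python
import numpy as np

def pbi(f, lam, z_star, theta=5.0):
    """Penalty-based boundary intersection value of objective vector f.

    f      -- objective values of a solution
    lam    -- weight vector of the subproblem
    z_star -- ideal point (component-wise best objectives seen)
    theta  -- penalty parameter; DPA-style strategies adapt this value
    """
    lam = lam / np.linalg.norm(lam)          # unit direction
    diff = f - z_star
    d1 = abs(diff @ lam)                     # distance along the weight vector
    d2 = np.linalg.norm(diff - d1 * lam)     # perpendicular deviation
    return d1 + theta * d2
```

A larger θ pushes solutions toward their weight vectors (diversity), while a smaller θ favors raw convergence, which is exactly the trade-off an adaptive penalty tries to balance.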

16 pages, 1419 KiB  
Article
IG-Based Method for Voiceprint Universal Adversarial Perturbation Generation
by Meng Bi, Xianyun Yu, Zhida Jin and Jian Xu
Appl. Sci. 2024, 14(3), 1322; https://doi.org/10.3390/app14031322 - 5 Feb 2024
Cited by 1 | Viewed by 1068
Abstract
In this paper, we propose an Iterative Greedy Universal Adversarial Perturbations (IG-UAP) approach based on an iterative greedy algorithm to create universal adversarial perturbations for voiceprints. A thorough, objective account of the IG-UAP method is provided, outlining its framework and approach. The method leverages a greedy iteration approach to formulate an optimization problem for solving acoustic universal adversarial perturbations, with a new objective function designed to minimize the perceptibility of the adversarial perturbation while increasing the success rate of the attack. The perturbation generation process is described in detail, and the resulting acoustic universal adversarial perturbation is evaluated in both targeted and untargeted attack scenarios. Experimental analysis and testing were carried out using comparable techniques and dissimilar target models. The findings reveal that the universal adversarial perturbation produced by the IG-UAP method achieves effective attacks even when the audio training sample size is minimal, i.e., one sample per category. Moreover, the human ear finds it difficult to detect the loss of original information or the added adversarial perturbation (for targeted attacks on the small-sample dataset, the ASR values range from 82.4% to 90.2%). The success rates for untargeted and targeted attacks average 85.8% and 84.9%, respectively. Full article
(This article belongs to the Special Issue Security, Privacy and Application in New Intelligence Techniques)
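The greedy accumulation loop at the core of universal-perturbation methods can be sketched as below. This is a generic sketch, not the paper's IG-UAP objective: `grad_fn` is a hypothetical stand-in for the gradient of an attack loss on a voiceprint model, and the clipping step enforces the imperceptibility budget:

```python
import numpy as np

def universal_perturbation(samples, grad_fn, eps=0.1, step=0.02, iters=10):
    """Greedy-iteration sketch of a universal perturbation.

    On each pass, the shared perturbation v is nudged by the sign of
    each sample's loss gradient, then projected back into the
    L-infinity ball of radius eps so it stays hard to perceive.
    """
    v = np.zeros_like(samples[0])
    for _ in range(iters):
        for x in samples:
            v = v + step * np.sign(grad_fn(x + v))  # per-sample greedy step
            v = np.clip(v, -eps, eps)               # imperceptibility budget
    return v
```

The key property illustrated is that a single perturbation v is updated against every training sample, so the result transfers across inputs rather than being input-specific.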

18 pages, 865 KiB  
Article
FLGQM: Robust Federated Learning Based on Geometric and Qualitative Metrics
by Shangdong Liu, Xi Xu, Musen Wang, Fei Wu, Yimu Ji, Chenxi Zhu and Qurui Zhang
Appl. Sci. 2024, 14(1), 351; https://doi.org/10.3390/app14010351 - 30 Dec 2023
Cited by 2 | Viewed by 1412
Abstract
Federated learning is a distributed learning method that seeks to train a shared global model by aggregating contributions from multiple clients, ensuring that each client's local data are not shared with others. However, research has revealed that federated learning is vulnerable to poisoning attacks launched by compromised or malicious clients. Many defense mechanisms have been proposed to mitigate the impact of poisoning attacks, but limitations and challenges remain. Existing defenses either remove malicious models from a geometric perspective, measuring the geometric direction of model updates, or add an additional dataset to the server for verifying local models. The former is prone to failure against advanced poisoning attacks, while the latter goes against the original intention of federated learning because it requires an independent dataset; both defenses are therefore limited. To solve these problems, we propose a robust federated learning method based on geometric and qualitative metrics (FLGQM). Specifically, FLGQM measures local models in both geometric and qualitative aspects for comprehensive defense. First, FLGQM evaluates all local models in terms of both direction and magnitude, using similarities computed from cosine and Euclidean distance, which we refer to as geometric metrics. Next, we introduce a union client set to assess the quality of all local models using the union clients' local datasets, referred to as quality metrics. By combining the results of these two metrics, FLGQM can use information from multiple views for accurate poisoning-attack identification. We conducted experimental evaluations of FLGQM on the MNIST and CIFAR-10 datasets. The results demonstrate that, under different kinds of poisoning attacks, FLGQM achieves performance similar to FedAvg in non-adversarial environments. Therefore, FLGQM offers better robustness and poisoning-attack defense performance. Full article
(This article belongs to the Special Issue Security, Privacy and Application in New Intelligence Techniques)
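The geometric metrics described above (direction via cosine similarity, size via Euclidean distance) can be sketched as follows. The choice of reference update (e.g., something the server trusts, such as a coordinate-wise median of all updates) is an assumption of this sketch, not FLGQM's exact construction:

```python
import numpy as np

def geometric_scores(updates, reference):
    """Score each flattened local model update against a reference.

    Returns (cosine, distance) pairs: cosine captures agreement in
    direction, Euclidean distance captures agreement in magnitude.
    A poisoned update tends to stand out on at least one of the two.
    """
    scores = []
    for u in updates:
        cos = u @ reference / (np.linalg.norm(u) * np.linalg.norm(reference))
        dist = np.linalg.norm(u - reference)
        scores.append((cos, dist))
    return scores
```

Combining both views is the point: a scaled-up but correctly directed update passes the cosine test yet fails the distance test, and vice versa.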

15 pages, 4371 KiB  
Article
RepVGG-SimAM: An Efficient Bad Image Classification Method Based on RepVGG with Simple Parameter-Free Attention Module
by Zengyu Cai, Xinyang Qiao, Jianwei Zhang, Yuan Feng, Xinhua Hu and Nan Jiang
Appl. Sci. 2023, 13(21), 11925; https://doi.org/10.3390/app132111925 - 31 Oct 2023
Cited by 4 | Viewed by 1957
Abstract
With the rapid development of Internet technology, the number of global Internet users is rapidly increasing, and the scale of the Internet is also expanding. This huge system has accelerated the spread of harmful information, including bad images. Bad images reflect the vulgar side of Internet culture; they not only pollute the online environment and undermine the core culture of society but also endanger the physical and mental health of young people. In addition, some criminals use bad images to induce users to download software containing computer viruses, which further endangers the security of cyberspace, so cyberspace governance faces enormous challenges. Most existing methods for classifying bad images suffer from low classification accuracy and long inference times, limitations that hinder efforts to curb the spread of bad images and reduce their harm. To address this issue, this paper proposes a classification method (RepVGG-SimAM) based on RepVGG and a simple parameter-free attention mechanism (SimAM). This method uses RepVGG as the backbone network and embeds the SimAM attention mechanism in the network so that the neural network can obtain more useful information and suppress useless information. We constructed our experimental dataset from pornographic images publicly disclosed by data scientist Alexander Kim and violent images collected from the Internet. The experimental results prove that the classification accuracy of the proposed method reaches 94.5% for bad images, that the false positive rate is only 4.3%, and that the inference speed is double that of the ResNet101 network. Our proposed method can effectively identify bad images and provide efficient, powerful support for cyberspace governance. Full article
(This article belongs to the Special Issue Security, Privacy and Application in New Intelligence Techniques)
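SimAM is parameter-free: each unit's attention weight comes from an energy function of its deviation from the channel mean, so no learnable layers are added to the backbone. A NumPy sketch of the commonly published inverse-energy formulation (λ = 1e-4 is the usual default, assumed here; the paper's exact integration into RepVGG may differ):

```python
import numpy as np

def simam(x, lam=1e-4):
    """Parameter-free SimAM attention over a (C, H, W) feature map.

    Units that deviate strongly from their channel mean get a larger
    inverse-energy score, hence a larger sigmoid gate; the gated map
    is multiplied back onto the features.
    """
    c, h, w = x.shape
    n = h * w - 1
    mu = x.mean(axis=(1, 2), keepdims=True)       # per-channel mean
    d = (x - mu) ** 2                             # squared deviation
    v = d.sum(axis=(1, 2), keepdims=True) / n     # per-channel variance estimate
    e_inv = d / (4 * (v + lam)) + 0.5             # inverse energy per unit
    return x * (1 / (1 + np.exp(-e_inv)))         # sigmoid gate, applied in place
```

Because the gate is a sigmoid of a non-negative score, every weight lies in (0.5, 1): distinctive units are emphasized and uniform ones damped, without adding any parameters to train.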
