
AI Technology for Cybersecurity and IoT Applications

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Internet of Things".

Deadline for manuscript submissions: 25 November 2024 | Viewed by 14602

Special Issue Editors


Prof. Dr. Jun Wu
Guest Editor
Graduate School of Information, Production and Systems, Waseda University, Shinjuku City 169-8050, Japan
Interests: Internet of Things; artificial intelligence; data privacy; blockchains; 5G/6G

Dr. Qianqian Pan
Guest Editor
Department of Systems Innovation, The University of Tokyo, Tokyo 113-0033, Japan
Interests: network intelligence; Internet of Things; next-generation communication; privacy preservation

Special Issue Information

Dear Colleagues,

Artificial intelligence (AI) technology is emerging in the cybersecurity and Internet of Things (IoT) areas with great promise. The continuous emergence of novel, stealthy, and complex cyber-attacks, such as advanced persistent threats (APTs), fuels the demand for the intelligent discovery and prevention of cybersecurity threats. To deal with these complex threats, AI technology for cybersecurity encompasses the construction of dynamic cyber-attack models, intelligent defense, and fine-grained privacy preservation. AI technologies for the IoT, in turn, can be clustered into intelligent environment sensing, edge computing, and communications. AI technology for the IoT supports the intelligent management and efficient control of heterogeneous IoT sensors during data collection, as well as edge computing for decentralized big data. In addition, novel communications in the IoT (e.g., terahertz in 6G) are envisioned to be implemented and deployed through AI-enabled allocation and scheduling technologies. Despite recent advances in AI technology, many applications of AI to both cybersecurity and the IoT remain open and require immediate study.

This Special Issue focuses on the new challenges, technologies, solutions, and applications in the field of AI technology for cybersecurity and IoT. Potential topics include, but are not limited to:

  1. AI architectures and models for cyber-attack/threat sensing, classification, and detection;
  2. Cybersecurity defense theory and methodology inspired by AI;
  3. AI-driven software and hardware security technologies;
  4. Privacy and learning protection for AI models;
  5. AI-driven blockchain technologies;
  6. Intelligent sensing paradigm design in IoT;
  7. AI-based organization, orchestration, and optimization for IoT;
  8. Edge computing framework for IoT using AI technology;
  9. Resource allocation and scheduling for IoT based on AI;
  10. AI algorithms for Terahertz and configurable communications in IoT.

Prof. Dr. Jun Wu
Dr. Qianqian Pan
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • artificial intelligence
  • hardware security
  • software security
  • data security
  • privacy preserving
  • blockchain
  • Internet of Things
  • edge computing and intelligence
  • terahertz
  • smart sensing

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (11 papers)


Research

18 pages, 1089 KiB  
Article
ViTDroid: Vision Transformers for Efficient, Explainable Attention to Malicious Behavior in Android Binaries
by Toqeer Ali Syed, Mohammad Nauman, Sohail Khan, Salman Jan and Megat F. Zuhairi
Sensors 2024, 24(20), 6690; https://doi.org/10.3390/s24206690 - 17 Oct 2024
Viewed by 626
Abstract
Smartphones are intricately connected to modern society. The two most widely used mobile operating systems, iOS and Android, profoundly affect the lives of millions of people, and Android currently holds a market share of close to 71% between the two. As a result, personal information that is not securely protected is at tremendous risk. At the same time, mobile malware saw a year-on-year increase of more than 42% globally as of mid-2022. Any group of human professionals would have a very tough time detecting and removing all of this malware, which is why deep learning in particular has recently been applied to the problem. Deep learning models, however, were primarily created for image analysis. Although these models have shown promising results in vision tasks, it has been challenging to fully comprehend what characteristics deep learning models recover in the malware domain. Furthermore, the true potential of deep learning for malware analysis has not yet been realized, owing to the translation-invariance trait of well-known CNN-based models. In this paper, we present ViTDroid, a novel model based on vision transformers for the deep learning-based analysis of opcode sequences of Android malware samples from large real-world datasets. We achieve a false positive rate of 0.0019, compared with the previous best of 0.0021. However, this incremental improvement is not the major contribution of our work. Our model aims to make explainable predictions: it not only classifies malware with high accuracy, but it also provides insights into the reasons for this classification, pinpointing the instructions that cause malicious behavior in the malware samples. This means that our model can aid the field of malware analysis itself by providing insights to human experts, thus leading to further improvements in this field. Full article
(This article belongs to the Special Issue AI Technology for Cybersecurity and IoT Applications)
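
The core idea above, a transformer over opcode tokens whose attention can be inspected, can be sketched in a few lines. The following is a hypothetical minimal re-implementation, not the authors' ViTDroid code; the vocabulary size, layer dimensions, and mean-pooling classification head are illustrative assumptions.

```python
# Minimal transformer-based opcode-sequence classifier (illustrative sketch).
import torch
import torch.nn as nn

class OpcodeTransformer(nn.Module):
    def __init__(self, vocab_size=256, d_model=64, n_heads=4, n_layers=2, n_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads,
                                           dim_feedforward=128, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, opcode_ids):            # (batch, seq_len) int64 opcode IDs
        h = self.encoder(self.embed(opcode_ids))
        return self.head(h.mean(dim=1))       # mean-pool tokens, then classify

model = OpcodeTransformer()
logits = model(torch.randint(0, 256, (8, 128)))  # 8 dummy opcode sequences
print(logits.shape)                              # torch.Size([8, 2])
```

The encoder's attention maps are what an explainability pass would inspect to localize the instructions driving a malicious prediction.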

21 pages, 2795 KiB  
Article
Malware Identification Method in Industrial Control Systems Based on Opcode2vec and CVAE-GAN
by Yuchen Huang, Jingwen Liu, Xuanyi Xiang, Pan Wen, Shiyuan Wen, Yanru Chen, Liangyin Chen and Yuanyuan Zhang
Sensors 2024, 24(17), 5518; https://doi.org/10.3390/s24175518 - 26 Aug 2024
Viewed by 839
Abstract
Industrial Control Systems (ICSs) have faced a significant increase in malware threats since their integration with the Internet. However, existing machine learning-based malware identification methods are not specifically optimized for ICS environments, resulting in suboptimal identification performance. In this work, we propose an innovative method explicitly tailored for ICSs to enhance the performance of malware classifiers within these systems. Our method integrates the opcode2vec method based on preprocessed features with a conditional variational autoencoder–generative adversarial network (CVAE-GAN), enabling classifiers based on Convolutional Neural Networks to identify malware more effectively and with increased stability and robustness. Extensive experiments validate the efficacy of our method, demonstrating the improved performance of malware classifiers in ICSs. Our method achieved an accuracy of 97.30%, precision of 92.34%, recall of 97.44%, and F1-score of 94.82%, the highest values reported in our experiments. Full article
(This article belongs to the Special Issue AI Technology for Cybersecurity and IoT Applications)
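
The opcode2vec step named above follows the word2vec idea of embedding opcodes from their surrounding context so that a CNN classifier can consume dense vectors rather than nominal tokens. A toy sketch, assuming gensim's standard Word2Vec API and an invented three-sample opcode corpus (the CVAE-GAN augmentation stage is omitted):

```python
# Toy opcode2vec-style embedding: each "sentence" is the opcode sequence
# of one disassembled sample; the corpus and hyperparameters are invented.
from gensim.models import Word2Vec

opcode_corpus = [
    ["mov", "push", "call", "ret"],
    ["mov", "xor", "jmp", "call", "ret"],
    ["push", "pop", "mov", "cmp", "jne"],
]
w2v = Word2Vec(opcode_corpus, vector_size=32, window=3, min_count=1, epochs=50)
print(w2v.wv["mov"].shape)          # (32,) vector fed to the downstream CNN
print(w2v.wv.most_similar("call"))  # nearest opcodes in embedding space
```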

19 pages, 3537 KiB  
Article
Integral-Valued Pythagorean Fuzzy-Set-Based Dyna Q+ Framework for Task Scheduling in Cloud Computing
by Bhargavi Krishnamurthy and Sajjan G. Shiva
Sensors 2024, 24(16), 5272; https://doi.org/10.3390/s24165272 - 14 Aug 2024
Viewed by 507
Abstract
Task scheduling is a critical challenge in cloud computing systems, greatly impacting their performance. Task scheduling is a nondeterministic polynomial-time-hard (NP-hard) problem, which complicates the search for nearly optimal solutions. Five major uncertainty parameters, i.e., security, traffic, workload, availability, and price, influence task scheduling decisions. The primary rationale for selecting these uncertainty parameters lies in the challenge of accurately measuring their values, as empirical estimations often diverge from the actual values. The integral-valued Pythagorean fuzzy set (IVPFS) is a promising mathematical framework for dealing with parametric uncertainties. The Dyna Q+ algorithm is an updated form of the Dyna Q agent designed specifically for dynamic computing environments; it provides bonus rewards to non-exploited states. In this paper, the Dyna Q+ agent is enriched with the IVPFS mathematical framework to make intelligent task scheduling decisions. The performance of the proposed IVPFS Dyna Q+ task scheduler is tested using the CloudSim 3.3 simulator. Execution time is reduced by 90%, makespan time is also reduced by 90%, operation cost is kept below 50%, and the resource utilization rate is improved by 95%, with all of these parameters meeting the desired standards. The results are further validated using an expected-value analysis methodology that confirms the good performance of the task scheduler. A better balance between exploration and exploitation through rigorous action-based learning is achieved by the Dyna Q+ agent. Full article
(This article belongs to the Special Issue AI Technology for Cybersecurity and IoT Applications)
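
The bonus-reward mechanism that distinguishes Dyna Q+ from plain Dyna Q fits in a few lines. The sketch below is a generic tabular Dyna-Q+ planning step under assumed hyperparameters; the IVPFS uncertainty modeling and the cloud-scheduling state space are not reproduced here.

```python
# Generic Dyna-Q+ planning step: the kappa * sqrt(tau) bonus favors
# state-action pairs that have not been tried for a long time.
import numpy as np

n_states, n_actions = 10, 4
alpha, gamma, kappa = 0.5, 0.95, 0.1          # assumed hyperparameters
Q = np.zeros((n_states, n_actions))
model = {}                                     # (s, a) -> (reward, next_state)
last_visit = np.zeros((n_states, n_actions))   # time step of last real visit

def planning_step(t, n_updates=5):
    """Replay remembered transitions with the Dyna-Q+ recency bonus."""
    keys = list(model)
    for _ in range(n_updates):
        s, a = keys[np.random.randint(len(keys))]
        r, s2 = model[(s, a)]
        bonus = kappa * np.sqrt(t - last_visit[s, a])   # Dyna-Q+ bonus
        Q[s, a] += alpha * (r + bonus + gamma * Q[s2].max() - Q[s, a])

# Pretend one real transition was observed at t = 3: s=0, a=1 -> r=1.0, s'=2.
model[(0, 1)] = (1.0, 2)
last_visit[0, 1] = 3
planning_step(t=10)
print(Q[0, 1])
```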

24 pages, 732 KiB  
Article
Software-Defined-Networking-Based One-versus-Rest Strategy for Detecting and Mitigating Distributed Denial-of-Service Attacks in Smart Home Internet of Things Devices
by Neder Karmous, Mohamed Ould-Elhassen Aoueileyine, Manel Abdelkader, Lamia Romdhani and Neji Youssef
Sensors 2024, 24(15), 5022; https://doi.org/10.3390/s24155022 - 3 Aug 2024
Cited by 2 | Viewed by 1317
Abstract
The number of connected devices or Internet of Things (IoT) devices has rapidly increased. According to the latest available statistics, in 2023, there were approximately 17.2 billion connected IoT devices; this number is expected to reach 25.4 billion by 2030 and to grow year over year for the foreseeable future. IoT devices share, collect, and exchange data with one another via the internet, wireless networks, or other networks. IoT interconnection technology improves and facilitates people's lives but, at the same time, poses a real threat to their security. Denial-of-Service (DoS) and Distributed Denial-of-Service (DDoS) attacks are considered the most common and threatening attacks against IoT devices' security. Such attacks are on an increasing trend, and reducing the associated risk will be a major challenge, especially in the future. In this context, this paper presents an improved framework (SDN-ML-IoT) that works as an Intrusion Detection and Prevention System (IDPS) and can help to detect DDoS attacks more efficiently and mitigate them in real time. SDN-ML-IoT uses a Machine Learning (ML) method in a Software-Defined Networking (SDN) environment in order to protect smart home IoT devices from DDoS attacks. We employed ML methods based on Random Forest (RF), Logistic Regression (LR), k-Nearest Neighbors (kNN), and Naive Bayes (NB) with a One-versus-Rest (OvR) strategy and then compared our work to other related works. Based on performance metrics such as the confusion matrix, training time, prediction time, accuracy, and Area Under the Receiver Operating Characteristic curve (AUC-ROC), it was established that SDN-ML-IoT, when applied with RF, outperforms the other ML algorithms, as well as similar approaches in related work. It achieved an impressive accuracy of 99.99% and could mitigate DDoS attacks in less than 3 s. Our comparative analysis of the models and algorithms used in related works confirmed that the proposed approach outperforms them in both detecting and mitigating DDoS attacks within SDNs. Based on these promising results, we have opted to deploy SDN-ML-IoT within the SDN, safeguarding IoT devices in smart homes against DDoS attacks within the network traffic. Full article
(This article belongs to the Special Issue AI Technology for Cybersecurity and IoT Applications)
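
The one-versus-rest strategy with a random-forest base learner, the combination the paper reports as strongest, maps directly onto scikit-learn. The sketch below uses synthetic stand-in flow features and class labels rather than real SDN traffic.

```python
# OvR random forest for multi-class DDoS detection (synthetic data sketch).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.multiclass import OneVsRestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((600, 8))            # per-flow features (packet rate, bytes, ...)
y = rng.integers(0, 3, 600)         # 0 = benign, 1 = TCP flood, 2 = UDP flood
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

ovr_rf = OneVsRestClassifier(RandomForestClassifier(n_estimators=100, random_state=0))
ovr_rf.fit(X_tr, y_tr)              # one binary forest per attack class
print(ovr_rf.score(X_te, y_te))     # held-out accuracy
```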

19 pages, 1411 KiB  
Article
Malware Detection for Internet of Things Using One-Class Classification
by Tongxin Shi, Roy A. McCann, Ying Huang, Wei Wang and Jun Kong
Sensors 2024, 24(13), 4122; https://doi.org/10.3390/s24134122 - 25 Jun 2024
Cited by 3 | Viewed by 828
Abstract
The increasing usage of interconnected devices within the Internet of Things (IoT) and Industrial IoT (IIoT) has significantly enhanced efficiency and utility in both personal and industrial settings, but it has also heightened cybersecurity vulnerabilities, particularly through IoT malware. This paper explores the use of one-class classification, an unsupervised learning method that is especially suitable for unlabeled data and dynamic environments, applied to malware detection as a form of anomaly detection. We introduce the TF-IDF method for transforming nominal features into numerical formats that avoid information loss and manage dimensionality effectively, which is crucial for enhancing pattern recognition when combined with n-grams. Furthermore, we compare the performance of multi-class and one-class classification models, including Isolation Forest and a deep autoencoder, trained with both benign and malicious NetFlow samples versus trained exclusively on benign NetFlow samples. We achieve 100% recall with precision rates above 80% and 90% across various test datasets using one-class classification. These models show the adaptability of unsupervised learning, especially one-class classification, to evolving malware threats in the IoT domain, offering insights into enhancing IoT security frameworks and suggesting directions for future research in this critical area. Full article
(This article belongs to the Special Issue AI Technology for Cybersecurity and IoT Applications)
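
The central trick of one-class training, fitting only on benign traffic and flagging anything out-of-distribution, looks roughly like this with scikit-learn's Isolation Forest. The feature vectors here are synthetic placeholders for the TF-IDF/n-gram representations described above.

```python
# One-class sketch: train on benign NetFlow vectors only, flag outliers.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
benign = rng.normal(0, 1, (500, 16))     # stand-in benign flow features
malicious = rng.normal(4, 1, (50, 16))   # unseen malware shifts the distribution

clf = IsolationForest(random_state=0).fit(benign)          # benign-only fit
pred = clf.predict(np.vstack([benign[:50], malicious]))    # +1 inlier, -1 anomaly
print((pred[50:] == -1).mean())          # fraction of malware flagged
```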

16 pages, 526 KiB  
Article
The Improved Biometric Identification of Keystroke Dynamics Based on Deep Learning Approaches
by Łukasz Wyciślik, Przemysław Wylężek and Alina Momot
Sensors 2024, 24(12), 3763; https://doi.org/10.3390/s24123763 - 9 Jun 2024
Viewed by 1197
Abstract
In an era marked by escalating concerns about digital security, biometric identification methods have gained paramount importance. Despite the increasing adoption of biometric techniques, keystroke dynamics analysis remains a less explored yet promising avenue. This study highlights the untapped potential of keystroke dynamics, emphasizing its non-intrusive nature and distinctiveness. While keystroke dynamics analysis has not achieved widespread usage, ongoing research indicates its viability as a reliable biometric identifier. This research builds upon the existing foundation by proposing an innovative deep-learning methodology for keystroke-dynamics-based identification. Leveraging open research datasets, our approach surpasses previously reported results, showcasing the effectiveness of deep learning in extracting intricate patterns from typing behavior. This article contributes to the advancement of biometric identification and demonstrates the efficacy of deep learning in enhancing the precision and reliability of identification systems. Full article
(This article belongs to the Special Issue AI Technology for Cybersecurity and IoT Applications)
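
A minimal sketch of the kind of model such a study might use: a small dense network mapping per-phrase keystroke timings to a user identity. The 31-feature input (dwell and flight times, as in one public keystroke benchmark) and the layer sizes are assumptions, not the authors' architecture.

```python
# Hypothetical keystroke-dynamics identifier over fixed-phrase timings.
import torch
import torch.nn as nn

n_users, n_timings = 20, 31            # assumed: 31 dwell/flight features
net = nn.Sequential(
    nn.Linear(n_timings, 64), nn.ReLU(),
    nn.Dropout(0.2),
    nn.Linear(64, n_users),            # one logit per enrolled user
)
timings = torch.rand(4, n_timings)     # 4 dummy typing samples
print(net(timings).argmax(dim=1))      # predicted user IDs
```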

17 pages, 9322 KiB  
Article
Research on Fault Detection by Flow Sequence for Industrial Internet of Things in Sewage Treatment Plant Case
by Dongfeng Lei, Liang Zhao and Dengfeng Chen
Sensors 2024, 24(7), 2210; https://doi.org/10.3390/s24072210 - 29 Mar 2024
Viewed by 915
Abstract
Classifying the flow subsequences of sensor networks is an effective way to perform fault detection in the Industrial Internet of Things (IIoT). Traditional fault detection algorithms identify exceptions from a single abnormal dataset and do not account for factors such as electromagnetic interference, network delay, and sensor sampling delay. This paper focuses on fault detection based on continuous abnormal points. We propose a fault detection algorithm composed of a module for sequence-state generation based on unsupervised learning (SSGBUL) and a module for integrated encoding sequence classification (IESC). First, we built a network module based on unsupervised learning to encode the flow sequences of the different network cards in the IIoT gateway and then combined the multiple code sequences into one integrated sequence. Next, we classified the integrated sequence by comparing it with the encoded fault types. The results obtained from three IIoT datasets of a sewage treatment plant show that the accuracy of the SSGBUL–IESC algorithm exceeds 90% with a subsequence length of 10, significantly higher than the accuracies of the dynamic time warping (DTW) algorithm and the time series forest (TSF) algorithm. The proposed algorithm meets the classification requirements of fault detection for the IIoT. Full article
(This article belongs to the Special Issue AI Technology for Cybersecurity and IoT Applications)
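
The unsupervised sequence-state idea can be illustrated with a toy autoencoder that compresses sliding windows of one network card's flow counts into coarse codes; concatenated codes would then be matched against fault-type encodings. All shapes and training settings below are assumptions, not the paper's configuration.

```python
# Toy autoencoder turning flow subsequences into coarse "sequence states".
import torch
import torch.nn as nn

window = 10                                     # subsequence length, as in the paper
enc = nn.Sequential(nn.Linear(window, 8), nn.ReLU(), nn.Linear(8, 2))
dec = nn.Sequential(nn.Linear(2, 8), nn.ReLU(), nn.Linear(8, window))
opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-2)

flows = torch.rand(256, window)                 # sliding windows of flow counts
for _ in range(200):                            # reconstruction training
    opt.zero_grad()
    loss = nn.functional.mse_loss(dec(enc(flows)), flows)
    loss.backward()
    opt.step()

with torch.no_grad():
    codes = enc(flows).round()                  # discrete state code per window
print(codes[:3])
```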

21 pages, 2841 KiB  
Article
Detection of Malicious Threats Exploiting Clock-Gating Hardware Using Machine Learning
by Nuri Alperen Kose, Razaq Jinad, Amar Rasheed, Narasimha Shashidhar, Mohamed Baza and Hani Alshahrani
Sensors 2024, 24(3), 983; https://doi.org/10.3390/s24030983 - 2 Feb 2024
Viewed by 1540
Abstract
Embedded system technologies are increasingly being incorporated into manufacturing, smart grid, industrial control, and transportation systems. However, the vast majority of today’s embedded platforms lack support for built-in security features, which makes such systems highly vulnerable to a wide range of cyber-attacks. Specifically, they are vulnerable to malware injection code that targets the power distribution system of an ARM Cortex-M-based microcontroller chipset (ARM, Cambridge, UK). Through hardware exploitation of the clock-gating distribution system, an attacker can disable or activate various subsystems on the chip, compromising the reliability of the system during normal operation. This paper proposes the development of an Intrusion Detection System (IDS) capable of detecting clock-gating malware deployed on ARM Cortex-M-based embedded systems. To enhance the robustness and effectiveness of our approach, we fully implemented, tested, and compared six IDSs, each employing a different methodology: k-Nearest Neighbors, Random Forest, Logistic Regression, Decision Tree, Naive Bayes, and Stochastic Gradient Descent. Each of these IDSs was designed to identify and categorize various variants of clock-gating malware deployed on the system. We analyzed the performance of these IDSs in terms of detection accuracy against various types of clock-gating malware injection code. Power consumption data collected from the chipset during normal operation and during malware injection attacks were used for model training and validation. Our simulation results showed that the proposed IDSs, particularly those based on k-Nearest Neighbors and Logistic Regression, were capable of achieving high detection rates, with some reaching a detection rate of 0.99. These results underscore the effectiveness of our IDSs in protecting ARM Cortex-M-based embedded systems against clock-gating malware. Full article
(This article belongs to the Special Issue AI Technology for Cybersecurity and IoT Applications)
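
As a rough illustration of a power-side-channel IDS of this kind, the sketch below derives summary statistics from synthetic current-draw windows and trains a k-nearest-neighbors classifier to separate normal operation from clock-gating activity. The trace statistics and magnitudes are invented, not the paper's measurements.

```python
# Toy power-trace IDS: window statistics + k-NN (synthetic data).
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(2)

def features(trace):
    """Summary statistics of one power-consumption window."""
    return [trace.mean(), trace.std(), trace.min(), trace.max()]

normal = [features(rng.normal(50, 2, 1000)) for _ in range(100)]  # nominal draw
gated = [features(rng.normal(38, 5, 1000)) for _ in range(100)]   # subsystem gated off
X = np.array(normal + gated)
y = np.array([0] * 100 + [1] * 100)                               # 0 normal, 1 malware

ids = KNeighborsClassifier(n_neighbors=5).fit(X, y)
print(ids.score(X, y))
```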

22 pages, 1017 KiB  
Article
On-Demand Centralized Resource Allocation for IoT Applications: AI-Enabled Benchmark
by Ran Zhang, Lei Liu, Mianxiong Dong and Kaoru Ota
Sensors 2024, 24(3), 980; https://doi.org/10.3390/s24030980 - 2 Feb 2024
Cited by 2 | Viewed by 1766
Abstract
The development of emerging information technologies, such as the Internet of Things (IoT), edge computing, and blockchain, has triggered a significant increase in IoT application services and data volume. Ensuring satisfactory service quality for diverse IoT application services based on limited network resources has become an urgent issue. Generalized processor sharing (GPS), functioning as a central resource scheduling mechanism guiding differentiated services, stands as a key technology for implementing on-demand resource allocation. The performance prediction of GPS is a crucial step that aims to capture the actual allocated resources using various queue metrics. Some methods (mainly analytical methods) have attempted to establish upper and lower bounds or approximate solutions. Recently, artificial intelligence (AI) methods, such as deep learning, have been designed to assess performance under self-similar traffic. However, the proposed methods in the literature have been developed for specific traffic scenarios with predefined constraints, thus limiting their real-world applicability. Furthermore, the absence of a benchmark in the literature leads to an unfair performance prediction comparison. To address the drawbacks in the literature, an AI-enabled performance benchmark with comprehensive traffic-oriented experiments showcasing the performance of existing methods is presented. Specifically, three types of methods are employed: traditional approximate analytical methods, traditional machine learning-based methods, and deep learning-based methods. Following that, various traffic flows with different settings are collected, and intricate experimental analyses at both the feature and method levels under different traffic conditions are conducted. Finally, insights from the experimental analysis that may be beneficial for the future performance prediction of GPS are derived. Full article
(This article belongs to the Special Issue AI Technology for Cybersecurity and IoT Applications)
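
Generalized processor sharing itself is easy to state: each backlogged flow receives capacity in proportion to its weight, with idle flows' shares redistributed among the active ones. The toy fluid computation below shows the per-step allocation that the benchmarked predictors try to estimate from queue metrics; the weights and backlogs are invented.

```python
# One fluid step of generalized processor sharing (GPS).
weights = [0.5, 0.3, 0.2]      # per-flow GPS weights (sum to 1)
backlog = [4.0, 1.0, 0.0]      # current queue sizes; flow 2 is idle
capacity = 1.0                 # server rate for this time step

active = [i for i, b in enumerate(backlog) if b > 0]
total_w = sum(weights[i] for i in active)
service = {i: capacity * weights[i] / total_w for i in active}
print(service)                 # {0: 0.625, 1: 0.375}; the idle flow's share is redistributed
```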

23 pages, 3362 KiB  
Article
CTSF: An Intrusion Detection Framework for Industrial Internet Based on Enhanced Feature Extraction and Decision Optimization Approach
by Guangzhao Chai, Shiming Li, Yu Yang, Guohui Zhou and Yuhe Wang
Sensors 2023, 23(21), 8793; https://doi.org/10.3390/s23218793 - 28 Oct 2023
Viewed by 1432
Abstract
The traditional Transformer model primarily employs a self-attention mechanism to capture global feature relationships, potentially overlooking local relationships within sequences and thus affecting the modeling capability of local features. The Support Vector Machine (SVM), in turn, often requires the joint use of feature selection algorithms or model optimization methods to achieve maximum classification accuracy. Addressing the issues in both models, this paper introduces a novel network framework, CTSF, specifically designed for Industrial Internet intrusion detection. CTSF effectively addresses the limitations of traditional Transformers in extracting local features while compensating for the weaknesses of the SVM. The framework comprises a pre-training component and a decision-making component. The pre-training section consists of both a CNN and an enhanced Transformer, designed to capture both local and global features from input data while reducing the feature dimensionality. The improved Transformer simultaneously decreases certain training parameters within CTSF, making it more suitable for the Industrial Internet environment. The classification section is composed of an SVM, which receives initial classification data from the pre-training phase and determines the optimal decision boundary. The proposed framework is evaluated on an imbalanced subset of the X-IIoTID dataset, which represents Industrial Internet data. Experimental results demonstrate that, with the SVM using both “linear” and “rbf” kernel functions, CTSF achieves an overall accuracy of 0.98875 and effectively discriminates minority classes, showcasing the superiority of this framework. Full article
(This article belongs to the Special Issue AI Technology for Cybersecurity and IoT Applications)
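
The CTSF split, a neural pre-training stage that hands compact features to an SVM for the final decision boundary, can be sketched as follows. The small convolutional extractor below merely stands in for the paper's CNN-plus-enhanced-Transformer; shapes, data, and labels are placeholders.

```python
# Sketch of a neural feature extractor feeding an RBF-kernel SVM.
import torch
import torch.nn as nn
from sklearn.svm import SVC

extractor = nn.Sequential(                  # placeholder for CNN + Transformer
    nn.Conv1d(1, 4, kernel_size=3, padding=1), nn.ReLU(),
    nn.Flatten(), nn.Linear(4 * 32, 16),    # compress to 16-dim codes
)
X_raw = torch.rand(300, 1, 32)              # 300 flows x 32 raw features
y = torch.randint(0, 2, (300,))

with torch.no_grad():
    feats = extractor(X_raw).numpy()        # pre-training output -> SVM input
svm = SVC(kernel="rbf").fit(feats, y.numpy())
print(svm.score(feats, y.numpy()))
```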

11 pages, 731 KiB  
Article
MCW: A Generalizable Deepfake Detection Method for Few-Shot Learning
by Lei Guan, Fan Liu, Ru Zhang, Jianyi Liu and Yifan Tang
Sensors 2023, 23(21), 8763; https://doi.org/10.3390/s23218763 - 27 Oct 2023
Cited by 2 | Viewed by 2471
Abstract
With the development of deepfake technology, deepfake detection has received widespread attention. Although some deepfake forensics techniques have been proposed, they are still very difficult to implement in real-world scenarios. This is due to the differences between deepfake technologies and to the compression or editing of videos during propagation. Considering the issue of sample imbalance in few-shot deepfake detection scenarios, we propose a multi-feature channel domain-weighted framework based on meta-learning (MCW). To obtain outstanding cross-database detection performance, the proposed framework improves a meta-learning network in two ways: it enhances the model’s feature extraction ability by combining the RGB-domain and frequency-domain information of the image, and it enhances the model’s generalization ability by assigning meta-weights to the channels of the feature map. The proposed MCW framework addresses the poor detection performance and insufficient resistance to data compression of existing algorithms on samples generated by unknown algorithms. The experiments were set in zero-shot and few-shot scenarios, simulating real-world deepfake detection environments, and nine detection algorithms were selected for comparison. The experimental results show that the MCW framework outperforms the other algorithms in cross-algorithm and cross-dataset detection. The MCW framework demonstrates the ability to generalize and resist compression with low-quality training images and across different generation-algorithm scenarios, and it has better fine-tuning potential in few-shot learning scenarios. Full article
(This article belongs to the Special Issue AI Technology for Cybersecurity and IoT Applications)
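
The RGB-plus-frequency input described above can be illustrated by stacking an image's log-magnitude FFT spectrum as an extra channel, which would then be available for channel-wise (meta-)weighting. The sketch uses random pixels and omits the meta-learning loop entirely.

```python
# Stack a frequency-domain channel onto an RGB face crop (illustrative only).
import numpy as np

img = np.random.rand(64, 64, 3)                       # stand-in RGB crop in [0, 1]
gray = img.mean(axis=2)
spectrum = np.log1p(np.abs(np.fft.fftshift(np.fft.fft2(gray))))
spectrum /= spectrum.max()                            # normalize frequency channel

stacked = np.concatenate([img, spectrum[..., None]], axis=2)
print(stacked.shape)                                  # (64, 64, 4): RGB + frequency
```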
