Current Issues and Future Directions in Multimedia Hiding and Signal Processing

A special issue of Computers (ISSN 2073-431X).

Deadline for manuscript submissions: closed (31 March 2024) | Viewed by 44500

Special Issue Editors


Guest Editor
Amrita School of Computing, Amrita Vishwa Vidyapeetham, Amaravati, Andhra Pradesh 522503, India
Interests: multi-media forensics; image watermarking and steganography; deep learning-based data hiding

Guest Editor
Computer Science Department, Faculty of Sciences and Technology, Artificial Intelligence and Information Technology Laboratory (LINATI), University of Kasdi Merbah, 30000 Ouargla, Algeria
Interests: medical image watermarking and data hiding

Guest Editor
Department of Computer Science and Engineering, SRM University-AP, Amaravati 522502, India
Interests: computer vision; machine learning; deep learning

Special Issue Information

Dear Colleagues,

Computers and computing devices have become more intelligent and more indispensable with recent developments in digital infrastructure. Anyone who wishes to communicate information can easily do so, and data communication has become much faster. At the same time, these developments have negatively impacted communication privacy. Data security techniques such as encryption, watermarking, and steganography have long been central choices for researchers in signal processing and data communication. Recently, however, traditional data hiding techniques have become less effective, and studies suggest that researchers are now turning toward intelligent data hiding techniques in addition to conventional ones. In this regard, advances in machine learning and deep learning will be beneficial for secure data communication. In addition, the detection of manipulated content and the recovery of tampered regions would be highly valuable for the network community.

This Special Issue aims to bring together researchers in multimedia forensics, image processing, data hiding, and artificial intelligence to exchange ideas and knowledge addressing these issues. The following potential research topics are covered in this Special Issue:

  • Intelligent watermarking;
  • Multi-media watermarking;
  • Copyright protection and integrity verification;
  • Forgery detection and localization;
  • Reversible data hiding;
  • Data hiding using fuzzy logic;
  • Data hiding using deep learning;
  • Deep learning-based steganalysis;
  • Deepfake detection;
  • Adversarial machine learning for security applications.
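Among the topics above, reversible data hiding admits a compact illustration. The following is a minimal sketch of Tian-style difference expansion, in which one bit is hidden in a pixel pair and the original pair is exactly recoverable (function names are ours; overflow/underflow handling, essential in practice, is omitted):

```python
def de_embed(x, y, bit):
    """Hide one bit in the pixel pair (x, y) by expanding their difference."""
    l, h = (x + y) // 2, x - y          # average and difference
    h2 = 2 * h + bit                    # expanded difference carries the bit
    return l + (h2 + 1) // 2, l - h2 // 2

def de_extract(x2, y2):
    """Recover the original pair and the hidden bit from a marked pair."""
    l, h2 = (x2 + y2) // 2, x2 - y2
    bit, h = h2 & 1, h2 >> 1            # bit is the LSB of the expanded difference
    return l + (h + 1) // 2, l - h // 2, bit
```

For example, `de_embed(100, 98, 1)` yields the marked pair `(102, 97)`, from which `de_extract` returns exactly `(100, 98, 1)`; reversibility is what distinguishes this family of schemes from plain LSB substitution.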

Dr. Aditya Kumar Sahu
Dr. Amine Khaldi
Dr. Jatindra Kumar Dash
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Computers is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • multi-media hiding
  • watermarking
  • deep learning for data hiding
  • deepfake detection
  • tamper detection and localization

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (7 papers)


Research

Jump to: Review

13 pages, 849 KiB  
Article
Audio Deep Fake Detection with Sonic Sleuth Model
by Anfal Alshehri, Danah Almalki, Eaman Alharbi and Somayah Albaradei
Computers 2024, 13(10), 256; https://doi.org/10.3390/computers13100256 - 8 Oct 2024
Viewed by 1332
Abstract
Information dissemination and preservation are crucial for societal progress, especially in the technological age. While technology fosters knowledge sharing, it also risks spreading misinformation. Audio deepfakes—convincingly fabricated audio created using artificial intelligence (AI)—exacerbate this issue. We present Sonic Sleuth, a novel AI model designed specifically for detecting audio deepfakes. Our approach utilizes advanced deep learning (DL) techniques, including a custom CNN model, to enhance detection accuracy in audio misinformation, with practical applications in journalism and social media. Through meticulous data preprocessing and rigorous experimentation, we achieved a remarkable 98.27% accuracy and a 0.016 equal error rate (EER) on a substantial dataset of real and synthetic audio. Additionally, Sonic Sleuth demonstrated 84.92% accuracy and a 0.085 EER on an external dataset. The novelty of this research lies in its integration of datasets that closely simulate real-world conditions, including noise and linguistic diversity, enabling the model to generalize across a wide array of audio inputs. These results underscore Sonic Sleuth’s potential as a powerful tool for combating misinformation and enhancing integrity in digital communications. Full article
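The equal error rate (EER) reported above is the operating point at which the false acceptance and false rejection rates coincide. A minimal sketch of how such a value can be estimated from detector scores (our own code, not the authors' pipeline; score conventions are an assumption):

```python
def equal_error_rate(genuine, impostor):
    """Approximate the EER of a binary detector.

    genuine:  scores for authentic samples (higher = more authentic, assumed)
    impostor: scores for fake samples
    Returns (eer, threshold) at the point where FAR and FRR are closest.
    """
    best_gap, eer, thr = float("inf"), 1.0, None
    for t in sorted(set(genuine) | set(impostor)):
        frr = sum(s < t for s in genuine) / len(genuine)     # real rejected
        far = sum(s >= t for s in impostor) / len(impostor)  # fake accepted
        if abs(far - frr) < best_gap:
            best_gap, eer, thr = abs(far - frr), (far + frr) / 2, t
    return eer, thr
```

With perfectly separated score sets the function returns an EER of 0.0; a reported EER of 0.016 means the two error rates cross at about 1.6%.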

21 pages, 430 KiB  
Article
Real-Time Detection of Face Mask Usage Using Convolutional Neural Networks
by Athanasios Kanavos, Orestis Papadimitriou, Khalil Al-Hussaeni, Manolis Maragoudakis and Ioannis Karamitsos
Computers 2024, 13(7), 182; https://doi.org/10.3390/computers13070182 - 22 Jul 2024
Viewed by 1078
Abstract
The widespread adoption of face masks has been a crucial strategy in mitigating the spread of infectious diseases, particularly in communal settings. However, ensuring compliance with mask-wearing directives remains a significant challenge due to inconsistencies in usage and the difficulty in monitoring adherence in real time. This paper addresses these challenges by leveraging advanced deep learning techniques within computer vision to develop a real-time mask detection system. We have designed a sophisticated convolutional neural network (CNN) model, trained on a diverse and comprehensive dataset that includes various environmental conditions and mask-wearing behaviors. Our model demonstrates a high degree of accuracy in detecting proper mask usage, thereby significantly enhancing the ability of organizations and public health authorities to enforce mask-wearing rules effectively. The key contributions of this research include the development of a robust real-time monitoring system that can be integrated into existing surveillance infrastructures to improve public health safety measures during ongoing and future health crises. Furthermore, this study lays the groundwork for future advancements in automated compliance monitoring systems, extending their applicability to other areas of public health and safety. Full article

27 pages, 6904 KiB  
Article
Deep Convolutional Generative Adversarial Networks in Image-Based Android Malware Detection
by Francesco Mercaldo, Fabio Martinelli and Antonella Santone
Computers 2024, 13(6), 154; https://doi.org/10.3390/computers13060154 - 19 Jun 2024
Cited by 3 | Viewed by 1337
Abstract
The recent advancements in generative adversarial networks have showcased their remarkable ability to create images that are indistinguishable from real ones. This has prompted both the academic and industrial communities to tackle the challenge of distinguishing fake images from genuine ones. We introduce a method to assess whether images generated by generative adversarial networks, using a dataset of real-world Android malware applications, can be distinguished from actual images. Our experiments involved two types of deep convolutional generative adversarial networks, and utilize images derived from both static analysis (which does not require running the application) and dynamic analysis (which does require running the application). After generating the images, we trained several supervised machine learning models to determine if these classifiers can differentiate between real and generated malicious applications. Our results indicate that, despite being visually indistinguishable to the human eye, the generated images were correctly identified by a classifier with an F-measure of approximately 0.8. While most generated images were accurately recognized as fake, some were not, leading them to be considered as images produced by real applications. Full article
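The images derived from static analysis referred to above are typically produced by reshaping an application's raw bytes into a grayscale matrix. A minimal sketch of that visualization step (our own code; the paper's exact procedure may differ):

```python
def bytes_to_image(data: bytes, width: int = 256):
    """Reshape a raw byte stream into a width-column grayscale matrix
    (one byte = one pixel, 0-255), zero-padding the final row."""
    rows = -(-len(data) // width)                 # ceiling division
    padded = data + bytes(rows * width - len(data))
    return [list(padded[r * width:(r + 1) * width]) for r in range(rows)]
```

The resulting matrix can be saved as a grayscale image and fed to image classifiers or, as in the study above, used as training material for a GAN.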

18 pages, 4771 KiB  
Article
Multiclass AI-Generated Deepfake Face Detection Using Patch-Wise Deep Learning Model
by Muhammad Asad Arshed, Shahzad Mumtaz, Muhammad Ibrahim, Christine Dewi, Muhammad Tanveer and Saeed Ahmed
Computers 2024, 13(1), 31; https://doi.org/10.3390/computers13010031 - 21 Jan 2024
Cited by 8 | Viewed by 7224
Abstract
In response to the rapid advancements in facial manipulation technologies, particularly facilitated by Generative Adversarial Networks (GANs) and Stable Diffusion-based methods, this paper explores the critical issue of deepfake content creation. The increasing accessibility of these tools necessitates robust detection methods to curb potential misuse. In this context, this paper investigates the potential of Vision Transformers (ViTs) for effective deepfake image detection, leveraging their capacity to extract global features. Objective: The primary goal of this study is to assess the viability of ViTs in detecting multiclass deepfake images compared to traditional Convolutional Neural Network (CNN)-based models. By framing the deepfake problem as a multiclass task, this research introduces a novel approach, considering the challenges posed by Stable Diffusion and StyleGAN2. The objective is to enhance understanding and efficacy in detecting manipulated content within a multiclass context. Novelty: This research distinguishes itself by approaching the deepfake detection problem as a multiclass task, introducing new challenges associated with Stable Diffusion and StyleGAN2. The study pioneers the exploration of ViTs in this domain, emphasizing their potential to extract global features for enhanced detection accuracy. The novelty lies in addressing the evolving landscape of deepfake creation and manipulation. Results and Conclusion: Through extensive experiments, the proposed method exhibits high effectiveness, achieving impressive detection accuracy, precision, and recall, and an F1 score of 99.90% on a multiclass-prepared dataset. The results underscore the significant potential of ViTs in contributing to a more secure digital landscape by robustly addressing the challenges posed by deepfake content, particularly in the presence of Stable Diffusion and StyleGAN2. The proposed model also outperformed state-of-the-art CNN-based models, i.e., ResNet-50 and VGG-16. Full article
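The patch-wise ViT pipeline described above begins by splitting each image into fixed-size patches that are flattened into a token sequence. A minimal sketch of that first step (our own code; the model's linear embedding and transformer layers are omitted):

```python
def extract_patches(image, patch):
    """Split an H x W image (list of rows) into non-overlapping
    patch x patch blocks, each flattened row-major, in raster order.
    Assumes H and W are multiples of patch, as ViT preprocessing does."""
    h, w = len(image), len(image[0])
    seq = []
    for r in range(0, h, patch):
        for c in range(0, w, patch):
            seq.append([image[r + i][c + j]
                        for i in range(patch) for j in range(patch)])
    return seq
```

A 224x224 image with 16x16 patches, for instance, becomes a sequence of 196 tokens, which is what lets the transformer attend to global relationships between distant image regions.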

15 pages, 2214 KiB  
Article
A Comprehensive Approach to Image Protection in Digital Environments
by William Villegas-Ch, Joselin García-Ortiz and Jaime Govea
Computers 2023, 12(8), 155; https://doi.org/10.3390/computers12080155 - 2 Aug 2023
Cited by 1 | Viewed by 1628
Abstract
Protecting the integrity of images has become a growing concern due to the ease of manipulation and unauthorized dissemination of visual content. This article presents a comprehensive approach to safeguarding images’ authenticity and reliability through watermarking techniques. The main goal is to develop effective strategies that preserve the visual quality of images and are resistant to various attacks. The work focuses on developing a watermarking algorithm in Python, implemented with embedding in the spatial domain, transformation in the frequency domain, and pixel modification techniques. A thorough evaluation of efficiency, accuracy, and robustness is performed using numerical metrics and visual assessment to validate the embedded watermarks. The results demonstrate the algorithm’s effectiveness in protecting the integrity of the images, although some attacks may cause visible degradation. Likewise, a comparison with related works is made to highlight the relevance and effectiveness of the proposed techniques. It is concluded that watermarks are presented as an additional layer of protection in applications where the authenticity and integrity of the image are essential. In addition, the importance of future research that addresses perspectives for improvement and new applications to strengthen the protection of the goodness of pictures and other digital media is highlighted. Full article
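Spatial-domain embedding of the kind described above is commonly illustrated with least-significant-bit (LSB) substitution. A minimal Python sketch (our own code, not the authors' algorithm, which also uses frequency-domain transforms):

```python
def embed_lsb(pixels, bits):
    """Write watermark bits into the least significant bit of each pixel,
    changing each carrier pixel by at most 1."""
    out = list(pixels)
    for i, b in enumerate(bits):
        out[i] = (out[i] & ~1) | b
    return out

def extract_lsb(pixels, n):
    """Read back the first n embedded bits."""
    return [p & 1 for p in pixels[:n]]
```

LSB embedding preserves visual quality well but is fragile, which is why robust schemes move the watermark into frequency-domain coefficients, as the article evaluates.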

22 pages, 9858 KiB  
Article
High-Capacity Reversible Data Hiding Based on Two-Layer Embedding Scheme for Encrypted Image Using Blockchain
by Arun Kumar Rai, Hari Om, Satish Chand and Chia-Chen Lin
Computers 2023, 12(6), 120; https://doi.org/10.3390/computers12060120 - 12 Jun 2023
Cited by 4 | Viewed by 1708
Abstract
In today’s digital age, ensuring the secure transmission of confidential data through various means of communication is crucial. Protecting the data from malicious attacks during transmission poses a significant challenge. To achieve this, reversible data hiding (RDH) and encryption methods are often used in combination to safeguard confidential data from intruders. However, existing secure reversible hybrid hiding techniques are facing challenges related to low data embedding capacity. To address these challenges, the proposed research presents a solution that utilizes block-wise encryption and a two-layer embedding scheme to enhance the embedding capacity of the cover image. Additionally, this technique incorporates a blockchain-enabled RDH method to ensure traceability and integrity by storing confidential data alongside the hash value of the stego image. The proposed work is divided into three phases. First, the cover image is encrypted. Second, the data are embedded in the encrypted cover image using a two-layer embedding scheme. Finally, the stego image along with the hash value are deployed through blockchain technology. The proposed method reduces challenges associated with traceability and integrity while increasing the embedding capacity of images compared to traditional methods. Full article
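The third phase described above, deploying the stego image's hash through blockchain technology, can be sketched as a hash-chained record (our own simplification; a real blockchain adds consensus, timestamps, and distribution):

```python
import hashlib
import json

def make_block(prev_hash, stego_bytes, payload_digest):
    """Record the SHA-256 of a stego image (plus a digest of the hidden
    data) in a block linked to its predecessor, so that later tampering
    with either the image or the chain is detectable."""
    record = {
        "prev": prev_hash,
        "stego_sha256": hashlib.sha256(stego_bytes).hexdigest(),
        "payload_sha256": payload_digest,
    }
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()
    return record
```

Verifying integrity then amounts to rehashing the received stego image and comparing against the on-chain `stego_sha256`; any modification of the image yields a mismatch.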

Review

Jump to: Research

26 pages, 2681 KiB  
Review
Deepfake Attacks: Generation, Detection, Datasets, Challenges, and Research Directions
by Amal Naitali, Mohammed Ridouani, Fatima Salahdine and Naima Kaabouch
Computers 2023, 12(10), 216; https://doi.org/10.3390/computers12100216 - 23 Oct 2023
Cited by 15 | Viewed by 28698
Abstract
Recent years have seen a substantial increase in interest in deepfakes, a fast-developing field at the nexus of artificial intelligence and multimedia. These artificial media creations, made possible by deep learning algorithms, allow for the manipulation and creation of digital content that is extremely realistic and challenging to identify from authentic content. Deepfakes can be used for entertainment, education, and research; however, they pose a range of significant problems across various domains, such as misinformation, political manipulation, propaganda, reputational damage, and fraud. This survey paper provides a general understanding of deepfakes and their creation; it also presents an overview of state-of-the-art detection techniques, existing datasets curated for deepfake research, as well as associated challenges and future research trends. By synthesizing existing knowledge and research, this survey aims to facilitate further advancements in deepfake detection and mitigation strategies, ultimately fostering a safer and more trustworthy digital environment. Full article
