Article

A Novel RDA-Based Network to Conceal Image Data and Prevent Information Leakage

1 College of Air Defense and Anti-Missile, Air Force Engineering University, Xi'an 710051, China
2 China Satellite Maritime Tracking and Control Department, Jiangyin 214430, China
3 College of Electronic Engineering, National University of Defense Technology, Hefei 230037, China
* Author to whom correspondence should be addressed.
Mathematics 2022, 10(19), 3501; https://doi.org/10.3390/math10193501
Submission received: 16 August 2022 / Revised: 18 September 2022 / Accepted: 22 September 2022 / Published: 26 September 2022

Abstract

Image data play an important role in our daily lives, and scholars have recently leveraged deep learning to design steganography networks that conceal and protect image data. However, computational complexity and running speed have been neglected in their model designs, and steganography security still has much room for improvement. To this end, this paper proposes an RDA-based network that achieves higher security with lower computational complexity and faster running speed. To improve the hidden image's quality and ensure that the hidden image and cover image are as similar as possible, a residual dense attention (RDA) module was designed to extract significant information from the cover image, thus assisting in reconstructing the salient target of the hidden image. In addition, we propose an activation removal strategy (ARS) to avoid undermining the fidelity of low-level features and to preserve more of the raw information from the input cover image and the secret image, which significantly boosts the concealing and revealing performance. Furthermore, to enable comprehensive supervision of the concealing and revealing processes, a mixed loss function was designed, which effectively improves the hidden image's visual quality and enhances the imperceptibility of the secret content. Extensive experiments were conducted to verify the effectiveness and superiority of the proposed approach.

1. Introduction

The rapid development of big data and Internet technology has greatly improved the efficiency of data transmission, management, and storage, but it has also brought serious data security problems, including unintentional data leakage and malicious data theft [1]. Among the many kinds of massive data, image data play an important, even dominant, role in our daily lives. However, some image data may contain personal private information, confidential business information, or even military information. Protecting such secret image data and preventing information leakage have aroused increasing concern in various fields. A security technology known as steganography has attracted wide attention and undergone tremendous development in recent years [2]; it aims to embed secret messages into digital media carriers in an imperceptible fashion, thus guaranteeing information security. Whereas widely used encryption technology [3] generally encodes the secret message into an incomprehensible ciphertext, steganography encodes the secret message into an imperceptible form that conceals the very existence of the secret content, thus avoiding the attention and suspicion of third-party attackers. Various digital carriers, such as audio [4], images [5], and video [6], have been adapted to implement steganography. Among these, digital imagery is the most prevalent carrier, owing to its large capacity and rich texture. Many image steganography algorithms have been proposed to date, and they can be roughly divided into two categories: traditional algorithms and deep learning-based algorithms.
Among traditional image steganography algorithms, the least significant bit (LSB) [7] algorithm is the most representative: it embeds secret data into the lowest bits of each pixel so that the carrier's visual appearance remains unchanged. Owing to its wide applicability, the LSB algorithm has been used to conceal various kinds of secret data, including plain text and image data. However, when the embedding payload is large, this algorithm leaves visible tampering artifacts in the carrier's smooth regions. Subsequently, Ge et al. [8] proposed a multi-level embedding framework built on the single-level embedding approach, achieving a much higher embedding capacity. Combining steganography with encryption technology, Qian et al. [9] proposed a novel framework for reversible data hiding in encrypted JPEG bitstreams, achieving high content security. More recently, Su et al. [10] proposed a fast algorithm to accelerate distortion calculation and the embedding process.
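To make the embedding mechanism concrete, the following is a minimal NumPy sketch of image-in-image LSB hiding, assuming 8-bit RGB arrays; the function names and the 4-bit split are illustrative choices, not taken from the cited works.

```python
import numpy as np

def lsb_conceal(cover: np.ndarray, secret: np.ndarray, k: int = 4) -> np.ndarray:
    """Replace the k least significant bits of the cover with the k most
    significant bits of the secret image (both uint8 arrays)."""
    cover_high = cover & ~np.uint8(2 ** k - 1)   # keep the top (8 - k) bits of the cover
    secret_high = secret >> np.uint8(8 - k)      # take the top k bits of the secret
    return cover_high | secret_high

def lsb_reveal(hidden: np.ndarray, k: int = 4) -> np.ndarray:
    """Recover an approximation of the secret from the k lowest bits."""
    return (hidden & np.uint8(2 ** k - 1)) << np.uint8(8 - k)

cover = np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8)
secret = np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8)
hidden = lsb_conceal(cover, secret)
recovered = lsb_reveal(hidden)
```

Raising k increases the payload but perturbs more cover bits, which is exactly why large payloads leave visible artifacts in smooth regions.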
Although the abovementioned traditional steganography algorithms have greatly improved steganography performance in terms of embedding capacity and efficiency, they continue to encounter certain problems. (1) They are always designed for concealing small quantities of bit-level messages, and the limited payload capacity greatly impedes their practical application and popularization. (2) They are usually heuristically proposed based on professional domain knowledge and manually extracted features, which is expensive, time-consuming, and laborious.
The past few years have witnessed the rapid development of deep learning in various intelligent tasks, such as machine translation [11] and computer vision [12]. Deep learning has been proven capable of automatically extracting high-level potential features. To enable a breakthrough in steganography performance and eliminate dependence on hand-crafted features, scholars have attempted to leverage deep learning to achieve image steganography.
Tang et al. [13] pioneered the combination of deep learning and steganography and designed an automatic distortion learning framework; however, their method relied heavily on traditional embedding algorithms. To address this, Zhang et al. [14] utilized generative adversarial networks (GANs) to construct an end-to-end image steganography framework called SteganoGAN. To further improve steganography security, Zhou et al. [15] leveraged a GAN to generate adversarial images for embedding secret information. However, such deep learning-based algorithms were still designed for carrying small amounts of bit-level information. Recently, scholars have identified a potential route to concealing image data using convolutional neural networks (CNNs). Specifically, Kumar et al. [16] applied CNNs to design a steganography framework for concealing image data with minimal changes to the hidden image. Sharma et al. [17] combined neural networks with cryptography techniques to improve content security. However, due to premature model designs [16,17], secret images recovered via these methods contained severe content distortions. Rehman et al. [18] designed an encoder-decoder framework to embed an image into another cover image, although the recovered secret image still exhibited some color distortion. Duan et al. [19] proposed a steganography network with a deep structure and achieved perceptually pleasing performance in secret image restoration. Subsequently, Baluja [20] considered the issue of visual security and indicated a direction for steganography performance improvement, but the quality of the synthetic hidden image was not fundamentally improved. Chen et al. [21] widened the network using various strategies and fundamentally improved the concealing performance by boosting the quality of the hidden image.
Although these deep learning-based algorithms have obtained more competitive results than traditional algorithms in terms of security and payload capacity, they remain at a developmental stage, and the study of image-to-image steganography has not yet settled on suitable unified evaluation criteria. Computational complexity and running speed have been consistently neglected in model designs, and steganography performance still has much room for improvement, especially in terms of security.
Motivated by the above issues, this paper presents a design for a low-complexity fast steganography network based on deep learning to automatically conceal and protect image data. Compared with traditional algorithms, our approach eliminates dependence on handcrafted features and requires no human labor. Compared with recent deep learning-based algorithms, our approach achieves higher security with lower computation complexity and faster running speed.
The major contributions of this paper are summarized as follows:
  • Inspired by the attention mechanism, we designed a residual dense attention (RDA) module to extract the significant information from the cover image, thus assisting in reconstructing the salient target of the hidden image and improving the concealment performance.
  • We explore and propose an activation removal strategy (ARS) to avoid undermining the fidelity of the low-level features and to preserve more of the raw information from the input cover image and secret image, significantly boosting the concealing and revealing performance.
  • Study of image-to-image steganography is still in an embryonic stage, and we introduce visual security as a unified evaluation criterion. We designed a mixed steganography loss function to further enhance steganography security.

2. Methods

In this section, we first briefly describe the overall architecture of our approach, and then give a detailed introduction and analysis of the specific designs, including the RDA module, the ARS, and the mixed steganography loss function.

2.1. Overall Architecture

To reduce memory consumption, we adopted a down–up structure to construct the backbone network, thus reducing the resolution of intermediate feature maps. Figure 1 displays the overall architecture of our approach.
As shown in Figure 1, the steganography framework consists of a concealing network and a revealing network. The sender conceals the secret image within a common cover image using the concealing network, and synthesizes a hidden image that appears the same as the cover image. The receiver is able to extract the secret image from the hidden image using the revealing network, without any prior secret information. For the concealing network, a sequence of down-sampling operations is employed to compress the input cover image and the secret image, thus reducing the resolution of intermediate feature maps and obtaining small-scale feature maps that contain rich semantic information. Then, symmetrical up-sampling operations are applied to expand the size of feature maps to the original resolution. Finally, cover information and secret information extracted by the two branches are fused through channel concatenation to produce the hidden image. The whole process of the revealing network is similar to that of the concealing network. Generally, a sequence of down-sampling operations would inevitably discard some detailed information contained in the input cover image and the secret image, thus severely deteriorating the steganography performance. To alleviate this deterioration, we have applied residual connections [22] to connect the symmetrical convolution and deconvolution layers, passing more detailed information to the upper layers.
Note that the down-sampling and up-sampling operations are achieved by convolution and deconvolution layers, respectively, with a kernel size of 4 × 4 and a stride of 2. The numbers 3, 64, and 128 in Figure 1 denote the channel numbers of the feature maps.
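As a rough illustration of this down-up structure, the following PyTorch sketch implements one two-level branch with 4 × 4 convolutions of stride 2 and the channel widths 3 → 64 → 128 mentioned above; the exact depth, normalization, and fusion head of the actual network are simplified assumptions here.

```python
import torch
import torch.nn as nn

class DownUpBranch(nn.Module):
    """One down-up branch: compress to small-scale feature maps, then expand back."""
    def __init__(self, in_ch: int = 3):
        super().__init__()
        self.down1 = nn.Sequential(nn.Conv2d(in_ch, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True))
        self.down2 = nn.Sequential(nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(inplace=True))
        self.up1 = nn.Sequential(nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True))
        self.up2 = nn.ConvTranspose2d(64, in_ch, 4, stride=2, padding=1)

    def forward(self, x):
        d1 = self.down1(x)        # H/2 resolution, 64 channels
        d2 = self.down2(d1)       # H/4 resolution, 128 channels
        u1 = self.up1(d2) + d1    # residual connection to the symmetric layer
        return self.up2(u1) + x   # pass raw input detail to the output

class ConcealingNet(nn.Module):
    """Fuses the cover and secret branches by channel concatenation."""
    def __init__(self):
        super().__init__()
        self.cover_branch = DownUpBranch()
        self.secret_branch = DownUpBranch()
        self.fuse = nn.Conv2d(6, 3, 3, padding=1)  # 3 + 3 concatenated channels -> hidden image

    def forward(self, cover, secret):
        return self.fuse(torch.cat([self.cover_branch(cover), self.secret_branch(secret)], dim=1))
```

The additions in `forward` correspond to the residual connections between the symmetric convolution and deconvolution layers described above.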
To guarantee imperceptibility, it is expected that the synthetic hidden image is as similar to the cover image as possible. For this purpose, the attention mechanism was introduced and an RDA module was designed to extract the cover image’s salient information and obtain an attention mask, thus assisting in reconstructing the hidden image’s salient target and boosting the image quality.

2.2. RDA Module

The attention mechanism plays a key role in the human visual perception system (HVPS), guiding the HVPS to focus on the salient target and to ignore useless background information [23]. In recent years, the attention mechanism has been widely applied in various fields, including person identification [24], image classification [25], and video captioning [26]. Inspired by these works, this study sought to introduce the attention mechanism into the image steganography task, aiming to improve the similarity between the hidden image and the cover image.
Recently, a novel parameter-free attention mechanism (PFAM) was proposed [27], designed on the basis of neuroscience theories to calculate attention weights by deriving an energy function. In visual neuroscience, an informative or active neuron usually displays a firing pattern distinct from that of its surrounding neurons. Thus, an informative neuron that displays spatial suppression effects is of high importance and should be given more attention in visual processing. Based on this neuroscientific finding, an energy function was defined to weigh the importance of each neuron:
$$e_t = \left(y_t - \hat{t}\right)^2 + \frac{1}{MN-1}\sum_{i=1}^{MN-1}\left(y_0 - \hat{x}_i\right)^2 + \eta w_t^2$$

where $\hat{t} = w_t t + b_t$ and $\hat{x}_i = w_t x_i + b_t$ denote the linear transform; $w_t$ and $b_t$ are the transform weight and bias; $t$ represents the target neuron; $x_i$ represents the other neurons on the same channel of the feature maps $F \in \mathbb{R}^{C \times H \times W}$; $y_t$ and $y_0$ are two distinct label values; $\eta w_t^2$ is the regularization term; and $MN$ indicates the number of neurons on that channel. For simplicity, binary labels (i.e., 1 and −1) are adopted for $y_t$ and $y_0$, and the final energy function is obtained as:
$$e_t = \frac{1}{MN-1}\sum_{i=1}^{MN-1}\left(-1 - \left(w_t x_i + b_t\right)\right)^2 + \left(1 - \left(w_t t + b_t\right)\right)^2 + \eta w_t^2$$
We can derive the minimal energy from the above equation as:
$$e_t^* = \frac{4\left(\delta^2 + \eta\right)}{\left(t - u\right)^2 + 2\delta^2 + 2\eta}$$

where $u = \frac{1}{MN-1}\sum_{i=1}^{MN-1} x_i$ and $\delta^2 = \frac{1}{MN-1}\sum_{i=1}^{MN-1}\left(x_i - u\right)^2$. The lower the energy $e_t^*$, the more distinct the target neuron $t$ is from its surrounding neurons, and hence the more important it is for visual processing. Thus, the importance of a neuron can be evaluated by $1/e_t^*$. To map these importance weights into the range (0, 1), the Sigmoid function is applied to $1/e_t^*$.
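For reference, the following PyTorch sketch computes these parameter-free attention weights, following the published SimAM formulation [27], with `eta` playing the role of the regularization coefficient $\eta$.

```python
import torch

def pfam(x: torch.Tensor, eta: float = 1e-4) -> torch.Tensor:
    """Weight feature maps x of shape (B, C, H, W) by per-neuron importance."""
    n = x.shape[2] * x.shape[3] - 1                     # MN - 1 neurons per channel
    d = (x - x.mean(dim=(2, 3), keepdim=True)).pow(2)   # (t - u)^2 at every position
    var = d.sum(dim=(2, 3), keepdim=True) / n           # channel variance, delta^2
    e_inv = d / (4 * (var + eta)) + 0.5                 # 1 / e_t* from the minimal energy
    return x * torch.sigmoid(e_inv)                     # Sigmoid maps weights into (0, 1)
```

Note that $1/e_t^* = (t-u)^2 / \left(4(\delta^2+\eta)\right) + 0.5$, which is exactly the `e_inv` term computed above.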
Based on PFAM, we designed an RDA module as an auxiliary branch to densely extract the prominent feature information from the cover image, thus reconstructing the hidden image’s salient target. The specific framework of the RDA module is presented in Figure 2.
As shown in Figure 2, to enhance the nonlinear representation ability while avoiding a dramatic increase in parameters, we adopted Conv 1 × 1 (a small-scale convolution with a kernel size of 1 × 1) followed by the ReLU function and PFAM to construct the RDA module. To fully integrate multi-level feature information, we also applied numerous residual connections [22] to densely connect multi-level features without adding any parameters. In addition, skip channel concatenation was adopted to send the original input information directly to the output, thus preserving the integrity of the raw information. The numbers 32 and 29 in Figure 2 denote the channel numbers of the feature maps.
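A minimal sketch of how such a module could be assembled is given below, reusing the `pfam` function from the previous sketch; the number of stages and the 32/29 channel split are read loosely from the Figure 2 annotations and may differ from the authors' implementation.

```python
import torch
import torch.nn as nn

class RDABlock(nn.Module):
    """One Conv 1x1 + ReLU + PFAM stage; pfam() is the sketch from above."""
    def __init__(self, ch: int = 32):
        super().__init__()
        self.conv = nn.Conv2d(ch, ch, kernel_size=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return pfam(self.act(self.conv(x)))

class RDAModule(nn.Module):
    def __init__(self, in_ch: int = 3, ch: int = 32):
        super().__init__()
        self.head = nn.Conv2d(in_ch, ch, kernel_size=1)
        self.blocks = nn.ModuleList([RDABlock(ch) for _ in range(3)])
        self.tail = nn.Conv2d(ch, 29, kernel_size=1)

    def forward(self, x):
        feat = self.head(x)
        for block in self.blocks:
            feat = feat + block(feat)   # dense residual connections, no extra parameters
        # skip channel concatenation: the raw input goes straight to the output
        return torch.cat([self.tail(feat), x], dim=1)   # 29 + 3 = 32 channels out
```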

2.3. Activation Removal Strategy (ARS)

The activation operation plays a key role in most high-level computer vision tasks, such as target recognition, object detection, and semantic segmentation, because it enhances the network's nonlinear representation ability and helps extract the high-level features and attributes conducive to those tasks. However, image steganography belongs to a mixed regime of high- and low-level tasks. Although the activation operation is conducive to the extraction of high-level features, it may modify detailed information and undermine the fidelity of the original information passed through the residual connections and the skip channel concatenation. For this reason, we propose an ARS that removes the activation operation from the last layers of the concealing network and the revealing network, respectively, preserving more detailed information, including the raw information sent through the residual connections and the skip channel concatenation, and thus further improving the concealing and revealing performance. A minimal sketch of the idea follows.
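The sketch below assumes the layer shapes from the architecture in Section 2.1; ARS simply makes the final layer of each sub-network linear.

```python
import torch.nn as nn

# Final layer of a sub-network *without* ARS: the activation can clip or
# distort low-level detail carried by the residual/skip paths.
last_layer_plain = nn.Sequential(
    nn.ConvTranspose2d(64, 3, kernel_size=4, stride=2, padding=1),
    nn.ReLU(inplace=True),
)

# Final layer *with* ARS: the activation is removed, so the raw convolution
# output (and the detail passed through residual connections) is preserved.
last_layer_ars = nn.ConvTranspose2d(64, 3, kernel_size=4, stride=2, padding=1)
```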

2.4. Mixed Loss Function

Most previous studies on image steganography have adopted the mean squared error (MSE) as the loss function for network training, while structural similarity (SSIM) has been used only as a metric to evaluate image quality. However, it has been shown that MSE alone often yields images of poor perceptual quality, which is particularly disadvantageous for the imperceptibility and security of steganography. To address this issue, we introduced SSIM into the steganography loss function, allowing the network to focus on the structural quality of the generated hidden image and the recovered secret image, thus improving visual quality and imperceptibility. In addition, we introduced the Kullback–Leibler (KL) divergence to construct a mixed loss function, comprehensively improving the quality of the generated hidden image and the recovered secret image. Given two images X and Y of size H × W, we define:
$$MSE(X, Y) = \frac{1}{HW}\sum_{i=1}^{H}\sum_{j=1}^{W}\left(X_{i,j} - Y_{i,j}\right)^2$$

$$SSIM(X, Y) = 1 - \frac{\left(2 u_X u_Y + C_1\right)\left(2 \delta_{XY} + C_2\right)}{\left(u_X^2 + u_Y^2 + C_1\right)\left(\delta_X^2 + \delta_Y^2 + C_2\right)}$$

$$KL(X, Y) = \frac{1}{HW}\sum_{i=1}^{H}\sum_{j=1}^{W}\left[\rho_{i,j}\log\left(\rho_{i,j}\right) - \rho_{i,j}\log\left(q_{i,j}\right)\right]$$

where $X_{i,j}$ and $Y_{i,j}$ represent the image pixels; $u_X$ and $u_Y$ represent the pixel means; $\delta_X^2$, $\delta_Y^2$, and $\delta_{XY}$ represent the variances and covariance; $C_1 = (k_1 L)^2$ and $C_2 = (k_2 L)^2$, with $L$ the maximum pixel value and $k_1 = 0.01$, $k_2 = 0.03$ by default [28]; and $\rho_{i,j}$ and $q_{i,j}$ represent the probability distributions of the images $X$ and $Y$, respectively. Note that the SSIM term above is written as one minus the standard structural similarity index, so that minimizing it drives the structural similarity toward 1.
Combining the abovementioned MSE, SSIM, and KL terms, we designed the mixed steganography loss function as:

$$\text{Mixed loss} = \text{Concealing loss} + \text{Revealing loss} = \alpha\, MSE\left(H_{i,j}, C_{i,j}\right) + \beta\, SSIM\left(H_{i,j}, C_{i,j}\right) + \gamma\, KL\left(H_{i,j}, C_{i,j}\right) + \lambda \left[\alpha\, MSE\left(R_{i,j}, S_{i,j}\right) + \beta\, SSIM\left(R_{i,j}, S_{i,j}\right) + \gamma\, KL\left(R_{i,j}, S_{i,j}\right)\right]$$

where $H_{i,j}$ and $C_{i,j}$ represent the generated hidden image and the original cover image; $R_{i,j}$ and $S_{i,j}$ represent the recovered secret image and the original secret image; and $\alpha$, $\beta$, $\gamma$, and $\lambda$ are hyperparameters that balance the various losses.
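A hedged PyTorch sketch of this mixed loss is given below; the global (non-windowed) SSIM, inputs assumed in [0, 1] (so L = 1), and the softmax normalization used to form pixel distributions for the KL term are simplifying assumptions rather than the authors' exact implementation.

```python
import torch
import torch.nn.functional as F

def ssim_loss(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    """1 - SSIM, computed globally per image; assumes inputs in [0, 1] (L = 1)."""
    ux = x.mean(dim=(1, 2, 3), keepdim=True)
    uy = y.mean(dim=(1, 2, 3), keepdim=True)
    vx = ((x - ux) ** 2).mean(dim=(1, 2, 3))
    vy = ((y - uy) ** 2).mean(dim=(1, 2, 3))
    cov = ((x - ux) * (y - uy)).mean(dim=(1, 2, 3))
    ux, uy = ux.flatten(), uy.flatten()
    ssim = ((2 * ux * uy + c1) * (2 * cov + c2)) / ((ux ** 2 + uy ** 2 + c1) * (vx + vy + c2))
    return (1 - ssim).mean()

def kl_loss(x, y, eps=1e-8):
    """KL divergence between softmax-normalized pixel distributions."""
    p = F.softmax(x.flatten(1), dim=1)
    q = F.softmax(y.flatten(1), dim=1)
    return (p * (torch.log(p + eps) - torch.log(q + eps))).sum(dim=1).mean()

def mixed_loss(hidden, cover, recovered, secret,
               alpha=1.0, beta=0.2, gamma=0.2, lam=0.6):
    """Mixed steganography loss: concealing term + lam * revealing term."""
    concealing = (alpha * F.mse_loss(hidden, cover)
                  + beta * ssim_loss(hidden, cover)
                  + gamma * kl_loss(hidden, cover))
    revealing = (alpha * F.mse_loss(recovered, secret)
                 + beta * ssim_loss(recovered, secret)
                 + gamma * kl_loss(recovered, secret))
    return concealing + lam * revealing
```

The default hyperparameters above match the settings reported in Section 3.1.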

3. Experimental Results and Analysis

In this section, we first briefly introduce our experimental setup, then we describe an ablation study that was conducted to evaluate the effectiveness of our designs, followed by a comparison experiment carried out to demonstrate the performance of the proposed approach.

3.1. Experimental Setup

In this study, we selected 30,000 and 1000 images from ImageNet [29] as the respective training and testing sets of cover images, and then selected 30,000 and 1000 images from NWPU-RESISC45 [30] as the respective training and testing sets of secret images.
The experiments were conducted on a workstation equipped with an Intel Xeon(R) Bronze 3104 CPU and an Nvidia Titan RTX 2080 Ti GPU, running Ubuntu 18.04 with the Python 3.6 programming language and the PyTorch 1.3.0 framework. The batch size was set to 4 (i.e., four pairs of cover and secret images). We applied the Adam optimizer with an initial learning rate of 0.001 to optimize the proposed model and used the ReduceLROnPlateau scheduler to adjust the learning rate automatically for effective optimization. The regularization term $\eta w_t^2$ should be small; we therefore set $\eta = 0.0001$. More attention should be paid to the quality of the hidden image than to that of the recovered secret image, because the former largely determines steganography security; we therefore set $\lambda = 0.6 < 1$ so that the concealing loss accounts for the larger proportion of the total loss. In addition, since a generated image's visual appearance is mainly determined by its pixel values, we set $\alpha = 1$ and $\beta = \gamma = 0.2$ so that the MSE term dominates both the concealing loss and the revealing loss.
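For concreteness, a minimal sketch of this training setup follows; `model`, `loader`, `num_epochs`, and `mixed_loss` are placeholders for the networks, the paired data loader, the epoch count, and the loss of Section 2.4.

```python
import torch

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)      # initial lr = 0.001
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, mode="min")

for epoch in range(num_epochs):
    for cover, secret in loader:                # batches of 4 cover/secret pairs
        hidden = model.conceal(cover, secret)
        recovered = model.reveal(hidden)
        loss = mixed_loss(hidden, cover, recovered, secret)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    scheduler.step(loss)                        # reduce lr when the loss plateaus
```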

3.2. Ablation Study on RDA and ARS

To evaluate the effectiveness of our proposed RDA module and ARS strategy, we conducted an ablation study to analyze their influence on network performance. For simplicity, here we assessed only MSE loss. The following abbreviations denote the corresponding frameworks: (1) BL: The baseline of our approach; (2) BL-RDA: The baseline with the RDA module; (3) BL-RDA-ARS: The baseline with the RDA module and ARS strategy. Figure 3 shows the network performances of different frameworks.
By observing Figure 3, we can draw the following conclusions:
(1) Comparing BL and BL-RDA demonstrates that the proposed RDA module significantly reduced the concealing loss, meaning that it greatly boosted the hidden image's quality.
(2) Comparing BL-RDA and BL-RDA-ARS demonstrates that the proposed ARS strategy effectively reduced both the concealing loss and the revealing loss, thus improving the quality of the generated hidden image as well as that of the recovered secret image.
(3) Owing to their elaborate designs, the proposed RDA module and ARS strategy significantly improved the concealing and revealing performance with almost no increase in parameters.

3.3. Ablation Study on the Mixed Loss Function

To further evaluate the effectiveness of our mixed steganography loss function design (composed of MSE, SSIM, and KL), we conducted an ablation study on the mixed loss function to analyze its influence on the concealing and revealing performances. The results are shown in Figure 4.
Observing Figure 4, we can obtain the following conclusions:
(1) Comparing the MSE loss and the MSE-SSIM loss shows that introducing SSIM into the steganography loss function effectively improved the SSIM values of the hidden image and the recovered secret image, meaning that their structural quality was greatly improved. Introducing SSIM also improved the hidden image's PSNR value, but slightly reduced the recovered image's PSNR value. We consider this trade-off worthwhile, for the following reasons.
Unlike previous single-objective tasks, our image steganography task is a double-objective task that pursues two goals: to make the hidden image's quality as high as possible, and to make the recovered image's quality as high as possible. In practice, it is difficult to achieve both simultaneously. Greater attention should be paid to the hidden image's quality, because it largely determines steganography security. More attention should also be paid to SSIM values, because an image's information is mainly expressed through its visual appearance, and a higher SSIM value means higher visual quality.
(2) Comparing the MSE-SSIM loss and the mixed loss demonstrates that the proposed mixed loss function further improved the quality of both the hidden image and the recovered secret image; in particular, it significantly boosted the SSIM value of the recovered secret image.

3.4. Comparison Results

To further demonstrate the superiority of our approach, we compared it with a typical representative of traditional steganography algorithms, namely LSB, and deep learning-based image-to-image steganography algorithms [18,19,20,21]. Note that other algorithms [16,17] were not included because their recovered secret images had severe content distortions. To guarantee fairness of comparison, we used the same training set and batch size described in this paper to pre-train the abovementioned methods, and then we verified the well-trained models with the same testing set used previously in this study. The average results are shown in Table 1.
From Table 1, we can see that the traditional LSB method performed poorly when applied to large image data: its generated hidden images and recovered secret images were of poor quality. Relying on powerful feature extraction and self-learning abilities, the deep learning-based methods [18,19,20,21] not only eliminate dependence on human labor and hand-crafted features, but also greatly boost the concealing and revealing performance, i.e., the synthesis quality of the hidden image and the restoration quality of the secret image. As shown in the last row of Table 1, our proposed approach achieved the best concealing and revealing performance, with the highest PSNR and SSIM values for both the generated hidden images and the recovered secret images. More importantly, compared with the advanced method [21], the computational complexity of our approach was reduced by about 75%, demonstrating that it consumes less memory in practical application. The last column reports the computation time (per batch) of the different methods; our approach was roughly 10× faster than the advanced method [21].
To demonstrate intuitively the performance difference between the abovementioned methods, we further conducted a visualization comparison between different methods, as shown in Figure 5. To guarantee fairness of comparison, we selected the same cover image and secret image for visual comparison. Due to limited article space, only two visualization examples are displayed in Figure 5.
The four columns on the left of Figure 5 display the original cover image, the original secret image, the generated hidden image, and the recovered secret image, respectively. The three columns on the right display the residual image, obtained from the pixel errors between the hidden image and the cover image, together with its 10× and 30× enhanced versions. Note that if attackers can obtain the original cover image (e.g., from the public Internet), they can compute exactly this residual between the hidden image and the cover image in an attempt to expose the embedded secret content, which is why residual exposure matters for security. From Figure 5, we can see that the traditional LSB algorithm attained poor imperceptibility when applied to image data: it produced obvious modification traces in the hidden image (highlighted by the red box), failing to meet the imperceptibility requirement, and its recovered secret image also displayed obvious content distortion (highlighted by the red box). Although the deep learning-based algorithm [18] improved the concealing performance, its recovered secret image still had color distortions. The recent algorithms [19,20,21] achieved high steganography performance: their generated hidden images and recovered secret images were of high visual quality, and it was difficult to distinguish their performance differences by observing only the generated hidden images and recovered secret images, or even the residual images. However, once the residual image was magnified 10 or 30 times, an interesting phenomenon occurred: algorithms [19,20] clearly exposed the secret content in their 10× enhanced residual images, and algorithm [21] clearly exposed the secret content in its 30× enhanced residual image. As shown in the final two rows of Figure 5, our proposed approach greatly reduced the exposure of secret content in the enhanced residual images and achieved higher visual security than the other algorithms, even when trained with the plain MSE loss function. As shown in the last row, thanks to the mixed loss design, our approach completely eliminated the secret content, displaying only the cover image's contour in the enhanced residual images. This indicates that our proposed approach can meet the security requirements of practical application: even attackers who obtain the original cover image remain unable to discover the secret content.
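The residual visualizations above can be reproduced with a few lines; the following sketch assumes uint8 `hidden` and `cover` arrays and simply scales and clips the absolute pixel error, which is our reading of the enhancement step rather than the authors' published code.

```python
import numpy as np

def enhanced_residual(hidden: np.ndarray, cover: np.ndarray, scale: int) -> np.ndarray:
    """Return |hidden - cover| amplified `scale` times, clipped to uint8 range."""
    residual = np.abs(hidden.astype(np.int16) - cover.astype(np.int16))
    return np.clip(residual * scale, 0, 255).astype(np.uint8)

res_10x = enhanced_residual(hidden, cover, 10)   # level at which [19,20] leak content
res_30x = enhanced_residual(hidden, cover, 30)   # level at which [21] leaks content
```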

4. Generalization and Application Analysis

To demonstrate that our proposed method has excellent generalization ability and broad application prospects, we tested and verified the pre-trained model on two new datasets, namely VOC2007 [31] and HRSC2016 [32]. We selected cover images from the VOC2007 dataset and secret images from the HRSC2016 dataset. The HRSC2016 dataset is particularly different from the above-mentioned NWPU-RESISC45 dataset in terms of data type and style, which presented severe challenges to our approach. Due to the limitation of article space, we present only the results of three groups, shown in Figure 6.
Figure 6 clearly shows that our approach obtained excellent steganography performance on the new datasets. The generated hidden image and the extracted secret image both had high visual quality. To further verify that our method offers high security, the residual image between the hidden image and the cover image was enhanced 50 times, as shown in the last column of Figure 6. Even this dramatically enhanced residual image displays only the cover image's contour and exposes no secret content, verifying that our approach has excellent generalization ability for real application.

5. Conclusions

This paper proposes a novel RDA-based network to protect important or secret image data. The method can automatically embed a secret image into another common cover image to prevent secret information leakage. The designed RDA module can effectively extract significant information from the cover image for reconstructing the hidden image’s salient target, and can make the hidden image appear similar to the cover image, thus improving the concealing performance. The ARS strategy is proposed to avoid undermining the fidelity of low-level features and to preserve more raw input information, which significantly boosts concealing and revealing performance. Furthermore, the designed mixed loss function can further boost the hidden image’s quality and completely eliminate the exposure of secret content in the residual image, thus effectively preventing information leakage and guaranteeing the security of secret content.
According to the above results, it can be concluded that an appropriate network structure and loss function can greatly improve performance. In future work, we will improve the network structure and loss function to enable further performance gains, and we will extend the algorithm to video and other carriers, thus broadening its range of application.

Author Contributions

Conceptualization, F.C. and H.L.; methodology, F.C. and X.Y.; validation, F.C., Q.X. and B.S.; writing—original draft preparation, F.C. and X.Y.; writing—review and editing, B.S. and H.L.; supervision, Q.X. and B.S.; funding acquisition, Q.X. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (Grant Numbers 72071209 and 72001214).

Data Availability Statement

Not applicable.

Acknowledgments

The authors would like to thank the anonymous reviewers for their valuable comments.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Hauer, B. Data and Information Leakage Prevention within the Scope of Information Security. IEEE Access 2015, 3, 2554–2565. [Google Scholar] [CrossRef]
  2. Zhou, H.; Chen, K.; Zhang, W.; Yu, N. Comments on "Steganography Using Reversible Texture Synthesis". IEEE Trans. Image Process. 2017, 26, 1623–1625. [Google Scholar] [CrossRef] [PubMed]
  3. Xiong, H.; Yao, T.; Wang, H.; Feng, J.; Yu, S. A Survey of Public-Key Encryption with Search Functionality for Cloud-Assisted IoT. IEEE Internet Things J. 2022, 9, 401–418. [Google Scholar] [CrossRef]
  4. Yi, X.; Yang, K.; Zhao, X.; Wang, Y.; Yu, H. AHCM: Adaptive Huffman Code Mapping for Audio Steganography Based on Psychoacoustic Model. IEEE Trans. Inf. Forensics Secur. 2019, 14, 2217–2231. [Google Scholar] [CrossRef]
  5. Su, W.; Ni, J.; Hu, X.; Fridrich, J. Image Steganography with Symmetric Embedding Using Gaussian Markov Random Field Model. IEEE Trans. Circuits Syst. Video Technol. 2021, 31, 1001–1015. [Google Scholar] [CrossRef]
  6. Rabie, T.; Baziyad, M. The Pixogram: Addressing High Payload Demands for Video Steganography. IEEE Access 2019, 7, 21948–21962. [Google Scholar] [CrossRef]
  7. Mielikainen, J. LSB matching revisited. IEEE Signal Process. Lett. 2006, 13, 285–287. [Google Scholar] [CrossRef]
  8. Ge, H.; Chen, Y.; Qian, Z.; Wang, J. A High Capacity Multi-Level Approach for Reversible Data Hiding in Encrypted Images. IEEE Trans. Circuits Syst. Video Technol. 2019, 29, 2285–2295. [Google Scholar] [CrossRef]
  9. Qian, Z.; Xu, H.; Luo, X.; Zhang, X. New Framework of Reversible Data Hiding in Encrypted JPEG Bitstreams. IEEE Trans. Circuits Syst. Video Technol. 2019, 29, 351–362. [Google Scholar] [CrossRef]
  10. Su, A.; Ma, S.; Zhao, X. Fast and Secure Steganography Based on J-UNIWARD. IEEE Signal Process. Lett. 2020, 27, 221–225. [Google Scholar] [CrossRef]
  11. Chen, K.; Wang, R.; Utiyama, M.; Sumita, E.; Zhao, T. Neural Machine Translation with Sentence-Level Topic Context. IEEE/ACM Trans. Audio Speech, Lang. Process. 2019, 27, 1970–1984. [Google Scholar] [CrossRef]
  12. Ding, C.; Tao, D. Trunk-Branch Ensemble Convolutional Neural Networks for Video-Based Face Recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2018, 40, 1002–1014. [Google Scholar] [CrossRef] [PubMed]
  13. Tang, W.; Tan, S.; Li, B.; Huang, J. Automatic Steganographic Distortion Learning Using a Generative Adversarial Network. IEEE Signal Process. Lett. 2017, 24, 1547–1551. [Google Scholar] [CrossRef]
  14. Zhang, K.A.; Cuesta, A.; Infante, L.X.; Veeramachaneni, K. SteganoGAN: High Capacity Image Steganography with GANs. arXiv 2019, arXiv:1901.03892. [Google Scholar]
  15. Zhou, L.; Feng, G.; Shen, L.; Zhang, X. On Security Enhancement of Steganography via Generative Adversarial Image. IEEE Signal Process. Lett. 2020, 27, 166–170. [Google Scholar] [CrossRef]
  16. Kumar, V.; Laddha, S.; Aniket, N.D. Steganography Techniques Using Convolutional Neural Networks. Rev. Comput. Eng. Stud. 2020, 7, 66–73. [Google Scholar] [CrossRef]
  17. Sharma, K.; Aggarwal, A.; Singhania, T.; Gupta, D.; Khanna, A. Hiding Data in Images Using Cryptography and Deep Neural Network. arXiv 2019, arXiv:1912.10413. [Google Scholar] [CrossRef]
  18. Rahim, R.; Nadeem, S. End-to-end trained CNN encoder-decoder networks for image steganography. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 723–729. [Google Scholar]
  19. Duan, X.; Jia, K.; Li, B.; Guo, D.; Zhang, E.; Qin, C. Reversible Image Steganography Scheme Based on a U-Net Structure. IEEE Access 2019, 7, 9314–9323. [Google Scholar] [CrossRef]
  20. Baluja, S. Hiding Images within Images. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 42, 1685–1697. [Google Scholar] [CrossRef]
  21. Chen, F.; Xing, Q.; Liu, F. Technology of Hiding and Protecting the Secret Image Based on Two-Channel Deep Hiding Network. IEEE Access 2020, 8, 21966–21979. [Google Scholar] [CrossRef]
  22. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 26 June–1 July 2016; pp. 770–778. [Google Scholar]
  23. Corbetta, M.; Shulman, G.L. Control of goal-directed and stimulus driven attention in the brain. Nat. Rev. Neurosci. 2002, 3, 201–215. [Google Scholar] [CrossRef] [PubMed]
  24. Huang, Y.; Lian, S.; Zhang, S.; Hu, H.; Chen, D.; Su, T. Three-Dimension Transmissible Attention Network for Person Re-Identification. IEEE Trans. Circuits Syst. Video Technol. 2020, 30, 4540–4553. [Google Scholar] [CrossRef]
  25. Zhu, M.; Jiao, L.; Liu, F.; Yang, S.; Wang, J. Residual Spectral–Spatial Attention Network for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2021, 59, 449–462. [Google Scholar] [CrossRef]
  26. Deng, J.; Li, L.; Zhang, B.; Wang, S.; Zha, Z.; Huang, Q. Syntax-Guided Hierarchical Attention Network for Video Captioning. IEEE Trans. Circuits Syst. Video Technol. 2022, 32, 880–892. [Google Scholar] [CrossRef]
  27. Yang, L.; Zhang, R.; Li, L.; Xie, X. SimAM: A Simple, Parameter-Free Attention Module for Convolutional Neural Networks. In Proceedings of the International Conference on Machine Learning, Virtual, 18–24 July 2021; pp. 11863–11874. [Google Scholar]
  28. Wang, Z.; Bovik, A.; Sheikh, H.; Simoncelli, E. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef]
  29. Russakovsky, O.; Deng, J.; Su, H.; Krause, J.; Satheesh, S.; Ma, S.; Huang, Z.; Karpathy, A.; Khosla, A.; Bernstein, M. Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 2015, 115, 211–252. [Google Scholar] [CrossRef]
  30. Cheng, G.; Han, J.; Lu, X. Remote Sensing Image Scene Classification: Benchmark and State of the Art. Proc. IEEE 2017, 105, 1865–1883. [Google Scholar] [CrossRef]
  31. Everingham, M.; Van Gool, L.; Williams, C.K.; Winn, J.; Zisserman, A. The Pascal visual object classes (VOC) challenge. Int. J. Comput. Vis. 2010, 88, 303–338. [Google Scholar] [CrossRef]
  32. Liu, Z.; Wang, H.; Weng, L.; Yang, Y. Ship Rotated Bounding Box Space for Ship Extraction from High-Resolution Optical Satellite Images with Complex Backgrounds. IEEE Trans. Geosci. Remote Sens. 2016, 13, 1074–1078. [Google Scholar] [CrossRef]
Figure 1. Overall architecture of our approach.
Figure 2. Specific framework of the RDA module.
Figure 3. Network performance of different frameworks. (a) Concealing loss, (b) revealing loss, (c) total parameters of different frameworks.
Figure 4. Influence of different loss functions. (a) PSNR values of the hidden images, (b) SSIM values of the hidden images, (c) PSNR values of the recovered secret images, (d) SSIM values of the recovered secret images.
Figure 5. Visual performance of different methods.
Figure 6. Steganography performance using the new datasets.
Table 1. Average quantitative indicators of different methods.

| Methods | PSNR ($H_{i,j}$, $C_{i,j}$) | SSIM ($H_{i,j}$, $C_{i,j}$) | PSNR ($R_{i,j}$, $S_{i,j}$) | SSIM ($R_{i,j}$, $S_{i,j}$) | Computation Complexity/GMac | Computation Time/s |
|---|---|---|---|---|---|---|
| LSB | 33.571 | 0.9047 | 28.126 | 0.9073 | -- | -- |
| Rehman [18] | 34.275 | 0.9481 | 28.437 | 0.9408 | 3.1 | 0.05 |
| Duan [19] | 36.975 | 0.9543 | 36.152 | 0.9704 | 66.8 | 0.18 |
| Baluja [20] | 38.483 | 0.9674 | 36.462 | 0.9736 | 30.9 | 0.13 |
| Chen [21] | 44.573 | 0.9921 | 40.027 | 0.9901 | 87.6 | 0.74 |
| Ours | 44.602 | 0.9957 | 40.110 | 0.9935 | 21.3 | 0.07 |