Article

Covert Communication for Dual Images with Two-Tier Bits Flipping

1 Department of Information Engineering and Computer Science, Feng Chia University, Taichung 407102, Taiwan
2 Information and Communication Security Research Center, Feng Chia University, Taichung 407102, Taiwan
* Author to whom correspondence should be addressed.
Mathematics 2024, 12(14), 2219; https://doi.org/10.3390/math12142219
Submission received: 15 June 2024 / Revised: 9 July 2024 / Accepted: 15 July 2024 / Published: 16 July 2024

Abstract

Data hiding in digital images is a potent solution for covert communication, embedding sensitive data into cover images. However, most existing methods are tailored for one-to-one scenarios, which present security risks. To mitigate this vulnerability, we introduce an innovative one-to-two data hiding scheme that employs a two-tier bit-flipping strategy to embed sensitive data in dual images. This process produces two stego images, which are then transmitted to two distinct recipients, neither of whom can extract any sensitive data alone. The sensitive data can be extracted only when the two recipients trust each other, so the embedded data remain secure even if one stego image is compromised. The experimental results illustrate that our method achieves an excellent data payload while maintaining high visual quality.

1. Introduction

With the proliferation of sensitive data being shared online, covert communication becomes imperative to ensure secure transmission. In response, data hiding technology has emerged as a solution to embed sensitive data within cover media [1]. However, traditional data hiding methods often cause irreversible damage to the cover media. To address this issue, reversible data hiding (RDH) schemes have been developed to allow the restoration of the original cover media after extracting sensitive data.
As digital images are ubiquitous in multimedia applications, numerous RDH techniques have emerged in recent years. These techniques can be categorized into four types based on the processing domain: spatial domain-based methods, transform domain-based methods, compressed domain-based methods, and encryption domain-based methods. The spatial domain-based techniques [2,3,4,5,6,7,8,9,10,11,12,13] involve modifying pixel values directly to embed secret data into images. Transform domain-based techniques [14,15,16,17] first convert images into frequency coefficients and embed secret data by modifying these coefficient values. Compressed domain-based approaches [18,19,20,21,22] leverage the compressed codes of digital images as carriers for data embedding. With the rapid evolution of cloud technology, encryption domain-based methods have emerged. These techniques encrypt the cover image before transmission through a public channel, ensuring the confidentiality of images while concealing sensitive data [23,24,25].
However, the methods mentioned above are designed for a one-to-one model, where there is only one sender and one recipient throughout the entire process. One significant concern with these approaches is that they grant the recipient considerable influence over the stego image, potentially compromising the confidentiality and integrity of the embedded data. To address this problem, the dual-image RDH method has been introduced. In 2015, Qin et al. [26] proposed a dual-image RDH method based on the exploiting modification direction (EMD) technique. In their method, secret data are embedded into the pixels of the first cover image using the EMD technique, while the pixels of the second cover image are adaptively adjusted according to the modifications made to the first image. Later, Lu et al. [27] introduced a method to mitigate image distortion, where the value of the secret data is reduced using the center folding strategy. The reduced secret data are then embedded into two cover images using the averaging method. On this basis, Yao et al. [28] further enhanced the embedding capacity without introducing any additional distortion. In 2018, Jana et al. [29] introduced a novel RDH method for dual images utilizing the (7, 4) Hamming code. Their approach involves adjusting the redundant Least Significant Bits (LSBs) of a block of seven pixels with an odd parity. The secret data are then embedded by introducing deliberate errors at suitable positions, leveraging the error-correction properties of the Hamming code. Based on the characteristics of the EMD matrix, Chen et al. [30] embedded one secret bit and a base-5 digit into a cover pixel to generate a pixel pair and assigned the pair to two stego images. In 2023, Kim et al. [31] employed a (3, 1) Hamming code and an optimal pixel adjustment process (OPAP) to embed secret data into dual images. In this one-to-two model, two stego images are generated and transmitted to two recipients.
In this case, the two recipients can extract the embedded sensitive data only if they establish trust in each other. By distributing sensitive data across two stego images and requiring mutual trust between the two recipients for extraction, the dual-image RDH method enhances security in covert communication scenarios. However, existing methods often have limitations in embedding capacity, which may not be sufficient for applications requiring large data payloads. Therefore, there is an urgent need for advancements in dual-image reversible data hiding techniques to accommodate these high-capacity application demands.
Given the necessity for greater embedding capacity, we present a novel two-tier bit-flipping embedding strategy within the dual-image RDH framework. The original image is first duplicated to form two cover images. Then, pixels from the corresponding positions of the cover images are paired. Afterward, sensitive data are embedded using a two-tier bit-flipping strategy, in which the bits of the pixel pairs undergo a bit-flipping process. The resulting stego images are transmitted to two recipients, neither of whom can extract the sensitive data from a single stego image. The sensitive data can be extracted only when the recipients trust each other. Our method is applicable to common public communication channels such as email and Facebook. An application example is given in Figure 1. The contributions of the proposed scheme are outlined below:
(1) We introduce a two-tier bit-flipping embedding strategy for covert communication.
(2) The proposed dual-image RDH method is better suited for covert communication scenarios than single-image RDH methods.
(3) Our scheme surpasses existing methods in embedding capacity while maintaining high image quality.
The subsequent sections of this paper are structured as follows: Section 2 introduces a state-of-the-art dual-image-based RDH method. Section 3 elaborates on the proposed two-tier bit-flipping embedding strategy. Experimental results and discussions are provided in Section 4. Finally, Section 5 concludes this paper.

2. Related Works

In this section, we introduce a reversible data hiding scheme based on dual images proposed by Kim et al. [31]. Their scheme employs a (3, 1) Hamming code and an optimal pixel adjustment process (OPAP) [32] in data embedding. It consists of two essential components: cover image generation and data embedding.

2.1. Generation of Two Cover Images

Given a grayscale image I of size M × N, whose pixels are denoted as p(i), i = 1, 2, …, M × N, two cover images CI1 and CI2 are generated from this grayscale image as detailed below. Equations (1)–(5) are all derived from [31].
Step 1: Read a pixel p in the grayscale image I and assign p to pixels p1 and p2, where p1 = (p1^1 p1^2 p1^3 p1^4 p1^5 p1^6 p1^7 p1^8) and p2 = (p2^1 p2^2 p2^3 p2^4 p2^5 p2^6 p2^7 p2^8), with the superscript indexing the bits from the most significant (1) to the least significant (8).
Step 2: Construct a 3-bit codeword γ from the pixel pair (p1, p2) by
γ = (γ1, γ2, γ3) = (p1 mod 2, ⌊p2/2⌋ mod 2, p2 mod 2). (1)
Step 3: Compute the syndrome δ of the codeword γ using
δ = (γ1, γ2, γ3) × H^T, (2)
where H = [0 1 1; 1 0 1] is the parity-check matrix and δ is read as a decimal index.
Step 4: When δ ≠ 0, flip the δ-th bit of the codeword γ = (γ1, γ2, γ3) to obtain the error-corrected codeword γ̃ = (γ̃1, γ̃2, γ̃3), whose syndrome is δ = 0.
Step 5: Update the pixel pair (p1, p2) to the cover pixel pair (p1′, p2′) by
p1′ = p1 − (p1 mod 2) + γ̃1, (3)
p2′ = { p2 − 1, if (p2^7 = 0 & p2^8 = 0) & γ̃2 = 1
        p2 + 1, if (p2^7 = 0 & p2^8 = 1) & γ̃2 = 1
        p2 − 1, if (p2^7 = 1 & p2^8 = 0) & γ̃2 = 0
        p2 + 1, if (p2^7 = 1 & p2^8 = 1) & γ̃2 = 0, (4)
and
p2′ = { p2 − 1, if (p2^7 = 1 & p2^8 = 1) & γ̃3 = 0
        p2 + 1, if (p2^7 = 1 & p2^8 = 0) & γ̃3 = 1
        p2 − 1, if (p2^7 = 0 & p2^8 = 1) & γ̃3 = 0
        p2 + 1, if (p2^7 = 0 & p2^8 = 0) & γ̃3 = 1. (5)
Step 6: Backfill the updated pixel pair (p1′, p2′) to their corresponding positions.
Step 7: Repeat Steps 1–6 until all pixel pairs have been processed, resulting in the generation of two cover images, CI1 and CI2.
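As a concrete illustration, Steps 2–4 (codeword construction, syndrome computation, and error correction) can be sketched in Python as follows. This is a minimal sketch of Kim et al.'s construction as summarized above; the function names `codeword`, `syndrome`, and `correct` are our own illustrative choices, not taken from [31].

```python
# Parity-check matrix of the (3, 1) Hamming-style code: column i is the
# binary representation of i, so a single flipped bit at position i
# produces the syndrome value i.
H = ((0, 1, 1),
     (1, 0, 1))

def codeword(p1, p2):
    """Equation (1): build the 3-bit codeword from the pixel pair's LSBs."""
    return (p1 % 2, (p2 // 2) % 2, p2 % 2)

def syndrome(gamma):
    """Equation (2): delta = gamma x H^T (mod 2), read as a decimal index."""
    d1 = sum(h * g for h, g in zip(H[0], gamma)) % 2
    d2 = sum(h * g for h, g in zip(H[1], gamma)) % 2
    return 2 * d1 + d2

def correct(gamma):
    """Step 4: flip the delta-th bit so the syndrome becomes 0."""
    delta = syndrome(gamma)
    g = list(gamma)
    if delta != 0:
        g[delta - 1] ^= 1
    return tuple(g)
```

With this indexing, the only codewords with syndrome 0 are 000 and 111, so `correct` always moves a 3-bit word to the nearer of the two.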

2.2. Secret Data Embedding

After obtaining these two cover images C I 1 and C I 2 , they are utilized for embedding secret data. The details of this process are explained below.
Step 1: Read pixels from the corresponding positions of the two cover images CI1 and CI2 to form the pixel pair (p1, p2), where p1 = (p1^1 p1^2 p1^3 p1^4 p1^5 p1^6 p1^7 γ̃1) and p2 = (p2^1 p2^2 p2^3 p2^4 p2^5 p2^6 γ̃2 γ̃3).
Step 2: Divide the secret data into 2-bit segments.
Step 3: Construct a 3-bit codeword γ of the pixel pair (p1, p2) by Equation (1).
Step 4: Compute the syndrome δ of the codeword γ using Equation (2).
Step 5: Perform an exclusive-OR operation between the syndrome δ and a secret segment to obtain a re-computed syndrome δ̃.
Step 6: Flip the δ̃-th bit of the codeword γ = (γ1, γ2, γ3) to obtain the error-corrected codeword γ̃ = (γ̃1, γ̃2, γ̃3).
Step 7: Update the pixel pair (p1, p2) to the stego pixel pair (p1′, p2′) by Equations (3)–(5).
Step 8: Backfill the updated pixel pair (p1′, p2′) to their corresponding positions.
Step 9: Repeat Steps 1–8 until all pixel pairs have been processed, resulting in the generation of two stego images, SI1 and SI2.
Essentially, this method reconstructs the least significant bits (LSBs) of original pixel pairs into the error correction codeword of (3, 1) Hamming code to generate cover pixel pairs. During the embedding process, the error correction codeword is adjusted based on the secret segment, resulting in the creation of stego pixel pairs. Additionally, the image distortion is reduced by modifying pixel pairs using OPAP technology.

3. Proposed Scheme

In this section, the proposed covert communication scheme for dual images is described in detail. Its flowchart is given in Figure 2. In this scheme, the image owner embeds secret data into a grayscale image through a two-tier bit-flipping embedding strategy to generate two stego images. Subsequently, these two stego images are independently transmitted to two separate recipients. In cases where mutual trust is established between the two recipients, they can accurately extract the secret data and losslessly recover the cover image.

3.1. Two-Tier Bit-Flipping Embedding Strategy

Given a grayscale image I of size M × N, the pixels are represented by p(i, j), i = 1, 2, …, M, j = 1, 2, …, N. As the grayscale image allows 256 distinct intensities, each pixel p(i, j) can be expressed as an 8-bit binary bitstream p^k(i, j), k = 1, 2, …, 8, where k = 1 denotes the most significant bit and k = 8 the least significant bit. The secret data are embedded into the grayscale image using the two-tier bit-flipping embedding strategy as follows:
Step 1: Duplicate the grayscale image I into two cover images I1 and I2.
Step 2: Divide the secret data into two-bit secret segments.
Step 3: Read pixels from the corresponding locations of the two cover images I1 and I2 to form the pixel pairs (p1, p2).
Step 4: Embed two secret segments into each pixel pair using the two-tier bit-flipping embedding strategy.
Step 4.1: Embed the first secret segment s1 into the pixel pair (p1, p2) by the first-tier embedding, resulting in the updated pixel pair (p̃1, p̃2).
Step 4.2: Embed the second secret segment s2 into the updated pixel pair (p̃1, p̃2) by the second-tier embedding, resulting in the stego pixel pair (p̌1, p̌2).
Step 5: Repeat Step 4 until all pixel pairs are processed.
Step 6: Backfill the stego pixel pairs to their corresponding positions in the two stego images, Ǐ1 and Ǐ2.
The rules for the first-tier and the second-tier embeddings are provided in Algorithms 1 and 2, respectively. Here, F(α) flips the bit α, L(α, β) sets the bit α to the value β, (α ← β) returns the updated value of α after processing β, and ⊕ represents the exclusive-or operation. In essence, our two-tier bit-flipping embedding strategy executes bit flips within a pixel pair to embed secret data. In the first-tier embedding, we may flip bit p1^7, bit p1^8, or both bits p2^7 and p2^8, depending on the values of the secret bits, as illustrated in Figure 3a. Consequently, the updated pixel pair (p̃1, p̃2) is obtained. Moving to the second-tier embedding, bit flips occur among bits p̃1^6, p̃1^7, p̃2^6, and p̃2^7, as depicted in Figure 3b. Note that the 7-th bits of the pixel pair may undergo bit flipping in both tiers. To record the state of the 7-th bits after the first-tier embedding, p̃2^5 serves as a label before the bit flipping in the second tier, indicating whether p̃1^7 and p̃2^7 are the same. That is, p̃2^5 = 0 when p̃1^7 = p̃2^7, and p̃2^5 = 1 when p̃1^7 ≠ p̃2^7. The second-tier embedding results in a stego pixel pair (p̌1, p̌2). After processing all pixel pairs, the resulting stego pixel pairs are backfilled into their corresponding positions in the stego images Ǐ1 and Ǐ2. The two stego images are then transmitted separately to two recipients R1 and R2, each of whom is aware of the specific stego image they receive (see Algorithms 1 and 2).
Algorithm 1: The first-tier embedding.
Input: Original pixel pair (p1, p2) and the first secret segment s1.
Output: Updated pixel pair (p̃1, p̃2).
1: If s1 == 00
2:   (p̃1, p̃2) = (p1, p2).
3: Else if s1 == 01
4:   (p̃1, p̃2) = ((p1 ← F(p1^8)), p2).
5: Else if s1 == 10
6:   (p̃1, p̃2) = ((p1 ← F(p1^7)), p2).
7: Else if s1 == 11
8:   (p̃1, p̃2) = (p1, (p2 ← F(p2^7) & F(p2^8))).
9: End
Algorithm 2: The second-tier embedding.
Input: Updated pixel pair (p̃1, p̃2) and the second secret segment s2.
Output: Stego pixel pair (p̌1, p̌2).
1: If s2 == 00
2:   (p̌1, p̌2) = (p̃1, (p̃2 ← L(p̃2^5, (p̃1^7 ⊕ p̃2^7)))).
3: Else if s2 == 01
4:   (p̌1, p̌2) = (p̃1, (p̃2 ← L(p̃2^5, (p̃1^7 ⊕ p̃2^7)) & F(p̃2^7))).
5: Else if s2 == 10
6:   (p̌1, p̌2) = (p̃1, (p̃2 ← L(p̃2^5, (p̃1^7 ⊕ p̃2^7)) & F(p̃2^6))).
7: Else if s2 == 11
8:   (p̌1, p̌2) = ((p̃1 ← F(p̃1^6) & F(p̃1^7)), (p̃2 ← L(p̃2^5, (p̃1^7 ⊕ p̃2^7)))).
9: End
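The two embedding tiers above can be sketched in Python as follows. This is a minimal sketch assuming the bit convention used throughout (bit k = 1 is the MSB, k = 8 the LSB); the helper names `flip`, `bit`, `set_bit`, `tier1`, and `tier2` are our own, not from the authors' implementation.

```python
def flip(p, k):
    """Flip the k-th bit of an 8-bit pixel (k = 1 is the MSB, k = 8 the LSB)."""
    return p ^ (1 << (8 - k))

def bit(p, k):
    """Read the k-th bit of an 8-bit pixel."""
    return (p >> (8 - k)) & 1

def set_bit(p, k, v):
    """Set the k-th bit of an 8-bit pixel to v (the L operator)."""
    return (p & ~(1 << (8 - k))) | (v << (8 - k))

def tier1(p1, p2, s1):
    """Algorithm 1: embed the first 2-bit segment s1 ('00'..'11')."""
    if s1 == "00":
        return p1, p2
    if s1 == "01":
        return flip(p1, 8), p2
    if s1 == "10":
        return flip(p1, 7), p2
    return p1, flip(flip(p2, 7), 8)                 # s1 == "11"

def tier2(q1, q2, s2):
    """Algorithm 2: set the label bit, then embed the second segment s2."""
    q2 = set_bit(q2, 5, bit(q1, 7) ^ bit(q2, 7))    # L(q2^5, q1^7 XOR q2^7)
    if s2 == "00":
        return q1, q2
    if s2 == "01":
        return q1, flip(q2, 7)
    if s2 == "10":
        return q1, flip(q2, 6)
    return flip(flip(q1, 6), 7), q2                 # s2 == "11"
```

Applying `tier1` and then `tier2` to a duplicated pixel embeds four secret bits per pixel pair, which is the 2 bpp payload reported later.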
To illustrate the process of the two-tier bit-flipping embedding, we present two examples in Figure 4. In Figure 4a, the given pixel pair (p1, p2) = (115, 115) is utilized to embed the secret segments “01” and “10”. We first embed the secret segment “01” into the pixel pair by flipping p1^8. The first-tier embedding leads to the updated pixel pair (p̃1, p̃2) = (114, 115). Subsequently, we set p̃2^5 = 0, indicating that p̃1^7 = p̃2^7. Following this, the secret segment “10” is embedded by flipping p̃2^6. The second-tier embedding yields the stego pixel pair (p̌1, p̌2) = (114, 119). In Figure 4b, the given pixel pair (p1, p2) = (67, 67) is employed to embed the secret segments “10” and “11”. The embedding process begins by flipping p1^7 to embed the secret segment “10”. This operation results in the updated pixel pair (p̃1, p̃2) = (65, 67). Since p̃1^7 ≠ p̃2^7, we then set p̃2^5 = 1. Finally, the stego pixel pair (p̌1, p̌2) = (71, 75) is attained by flipping p̃1^6 and p̃1^7.

3.2. Data Extraction and Image Recovering

Although each recipient receives a stego image, neither can discern any information about the secret data from it alone. Only when mutual trust is established between the two recipients can they extract the secret segments accurately and recover the original grayscale image losslessly. In this section, it is assumed that the two recipients R1 and R2 have established mutual trust. The process of data extraction and image recovery is as follows:
Step 1: Read pixels from the corresponding positions of the two stego images Ǐ1 and Ǐ2 to form pixel pairs (p̌1, p̌2).
Step 2: Extract two secret segments from each stego pixel pair and retrieve the original pixel pair.
Step 2.1: Extract the second secret segment s2 from the stego pixel pair (p̌1, p̌2) and retrieve the updated pixel pair (p̃1, p̃2).
Step 2.2: Extract the first secret segment s1 from the updated pixel pair (p̃1, p̃2) and retrieve the original pixel pair (p1, p2).
Step 3: Repeat Step 2 until all pixel pairs have been processed.
Step 4: Concatenate all the extracted secret segments to obtain the secret data.
Step 5: Backfill the original pixel pairs (p1, p2) into their corresponding positions to generate two cover images I1 and I2.
Step 6: Choose either of the cover images to serve as the original grayscale image I.
The rules of data extraction and image recovery are provided in Algorithms 3 and 4, respectively. Essentially, data extraction and image recovery form the inverse process of data embedding. Therefore, we first extract the secret segment and retrieve the pixel pair from the second tier, followed by the first tier. In the second tier, we begin by evaluating the pattern of p̌1^6, p̌1^7, p̌2^5, p̌2^6, and p̌2^7 to determine the flipped bits within p̌1^6, p̌1^7, p̌2^6, and p̌2^7, as illustrated in Figure 5a. Note that p̌2^5 indicates the state of the 7-th bits before the second-tier embedding. After confirming the pattern, we extract the embedded secret segment s2, flip back the flipped bits, and set p̌2^5 to the value of p̌1^5. These steps lead to the updated pixel pair (p̃1, p̃2). In the first tier, we extract the embedded secret segment s1 and flip back the flipped bits among p̃1^7, p̃1^8, p̃2^7, and p̃2^8 to obtain the original pixel pair (p1, p2), as shown in Figure 5b. After processing all pixel pairs, the resulting original pixel pairs are backfilled into their corresponding positions, thus generating the cover images I1 and I2. Finally, the secret segments are concatenated to form the secret data, and either of the cover images I1 and I2 is used as the original cover image I.
It is important to emphasize that neither recipient can independently extract the secret data or recover the original image. Both recipients must collaborate and share their stego images ˇ 1 and ˇ 2 to accurately retrieve the embedded secret segments and restore the original pixel pairs. Additionally, any attempt by either recipient to alter the pixel values within their respective stego images will lead to failures of the data extraction and image recovery processes. This interdependent mechanism significantly enhances the reliability and security of our method, ensuring that the embedded secret data remain protected and the original image can only be accurately recovered through the cooperation and mutual trust of both recipients (see Algorithms 3 and 4).
Algorithm 3: The second-tier secret data extraction and pixel pair retrieval.
Input: Stego pixel pair (p̌1, p̌2).
Output: Updated pixel pair (p̃1, p̃2) and the second secret segment s2.
1: If (p̌2^5 = 0 & p̌1^6 = p̌2^6 & p̌1^7 = p̌2^7) || (p̌2^5 = 1 & p̌1^6 = p̌2^6 & p̌1^7 ≠ p̌2^7)
2:   s2 = 00.
3:   (p̃1, p̃2) = (p̌1, (p̌2 ← L(p̌2^5, p̌1^5))).
4: Else if (p̌2^5 = 0 & p̌1^6 = p̌2^6 & p̌1^7 ≠ p̌2^7) || (p̌2^5 = 1 & p̌1^6 = p̌2^6 & p̌1^7 = p̌2^7)
5:   s2 = 01.
6:   (p̃1, p̃2) = (p̌1, (p̌2 ← L(p̌2^5, p̌1^5) & F(p̌2^7))).
7: Else if (p̌2^5 = 0 & p̌1^6 ≠ p̌2^6 & p̌1^7 = p̌2^7) || (p̌2^5 = 1 & p̌1^6 ≠ p̌2^6 & p̌1^7 ≠ p̌2^7)
8:   s2 = 10.
9:   (p̃1, p̃2) = (p̌1, (p̌2 ← L(p̌2^5, p̌1^5) & F(p̌2^6))).
10: Else if (p̌2^5 = 0 & p̌1^6 ≠ p̌2^6 & p̌1^7 ≠ p̌2^7) || (p̌2^5 = 1 & p̌1^6 ≠ p̌2^6 & p̌1^7 = p̌2^7)
11:   s2 = 11.
12:   (p̃1, p̃2) = ((p̌1 ← F(p̌1^6) & F(p̌1^7)), (p̌2 ← L(p̌2^5, p̌1^5))).
13: End
Algorithm 4: The first-tier secret data extraction and pixel pair retrieval.
Input: Updated pixel pair (p̃1, p̃2).
Output: Original pixel pair (p1, p2) and the first secret segment s1.
1: If p̃1^8 = p̃2^8 & p̃1^7 = p̃2^7
2:   s1 = 00.
3:   (p1, p2) = (p̃1, p̃2).
4: Else if p̃1^8 ≠ p̃2^8 & p̃1^7 = p̃2^7
5:   s1 = 01.
6:   (p1, p2) = ((p̃1 ← F(p̃1^8)), p̃2).
7: Else if p̃1^8 = p̃2^8 & p̃1^7 ≠ p̃2^7
8:   s1 = 10.
9:   (p1, p2) = ((p̃1 ← F(p̃1^7)), p̃2).
10: Else if p̃1^8 ≠ p̃2^8 & p̃1^7 ≠ p̃2^7
11:   s1 = 11.
12:   (p1, p2) = (p̃1, (p̃2 ← F(p̃2^7) & F(p̃2^8))).
13: End
To illustrate the process of data extraction and image recovery, we present two examples in Figure 6. In Figure 6a, we extract the secret segments and retrieve the original pixel pair from the given stego pixel pair (p̌1, p̌2) = (114, 119). Since p̌2^5 = 0, p̌1^6 ≠ p̌2^6, and p̌1^7 = p̌2^7, we extract the secret segment “10” and flip p̌2^6. Subsequently, we set p̌2^5 to the value of p̌1^5. Through these steps, we obtain the updated pixel pair (p̃1, p̃2) = (114, 115). Based on p̃1^7 = p̃2^7 and p̃1^8 ≠ p̃2^8, we then extract the secret segment “01”, flip p̃1^8, and recover the original pixel pair (p1, p2) = (115, 115). For the stego pixel pair (p̌1, p̌2) = (71, 75) in Figure 6b, we first extract the secret segment “11”, flip p̌1^6 and p̌1^7, and set p̌2^5 to the value of p̌1^5. After these procedures, we obtain the updated pixel pair (p̃1, p̃2) = (65, 67). Based on p̃1^7 ≠ p̃2^7 and p̃1^8 = p̃2^8, we extract the secret segment “10”, flip p̃1^7, and recover the original pixel pair (p1, p2) = (67, 67).
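The extraction rules of Algorithms 3 and 4 can be sketched in the same way. This is a minimal Python sketch under the same bit convention (k = 1 is the MSB); `untier2` and `untier1` are our own illustrative names, and the two functions are applied in that order to invert the two embedding tiers.

```python
def bit(p, k):
    """Read the k-th bit of an 8-bit pixel (k = 1 is the MSB, k = 8 the LSB)."""
    return (p >> (8 - k)) & 1

def flip(p, k):
    return p ^ (1 << (8 - k))

def set_bit(p, k, v):
    return (p & ~(1 << (8 - k))) | (v << (8 - k))

def untier2(r1, r2):
    """Algorithm 3: recover s2 and the tier-one pair from a stego pixel pair."""
    # The label bit r2^5 records whether the 7-th bits differed after tier
    # one; 'consistent' is true when they still match that label, i.e. the
    # second tier did not flip a 7-th bit.
    consistent = (bit(r1, 7) != bit(r2, 7)) == bool(bit(r2, 5))
    same6 = bit(r1, 6) == bit(r2, 6)
    if same6 and consistent:
        s2 = "00"
    elif same6:
        s2, r2 = "01", flip(r2, 7)
    elif consistent:
        s2, r2 = "10", flip(r2, 6)
    else:
        s2, r1 = "11", flip(flip(r1, 6), 7)
    r2 = set_bit(r2, 5, bit(r1, 5))        # restore the label bit: L(r2^5, r1^5)
    return s2, r1, r2

def untier1(q1, q2):
    """Algorithm 4: recover s1 and the original pixel pair."""
    same8 = bit(q1, 8) == bit(q2, 8)
    same7 = bit(q1, 7) == bit(q2, 7)
    if same8 and same7:
        return "00", q1, q2
    if same7:
        return "01", flip(q1, 8), q2
    if same8:
        return "10", flip(q1, 7), q2
    return "11", q1, flip(flip(q2, 7), 8)
```

Because each branch simply re-flips the bits that the corresponding embedding branch flipped, the recovered pair is bit-exact, which is what makes the scheme reversible.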

3.3. Distortion Discussion

In essence, our approach accomplishes data embedding by flipping the 6th, 7th, and 8th bits of the first pixel and the 5th through 8th bits of the second pixel in the pixel pairs obtained from the cover images I1 and I2. In our embedding strategy, we assume a uniform distribution of 0s and 1s in the secret data, implying that the probabilities of our secret segments being “00”, “01”, “10”, and “11” are equal. For our method, the maximum distortion for the pixel p1 is 7, occurring when “01” is embedded in the first tier, followed by “11” in the second tier with non-canceling flips; in this case, the distortion for p2 is 0. Likewise, the maximum distortion for p2 is 15, occurring when “11” is embedded in the first tier, followed by “10” in the second tier with non-canceling flips and a change of the label bit; in this case, the distortion for p1 is 0. In all other cases, the embedding flips result in minor distortions. Therefore, we believe that our approach results in an acceptable level of distortion. Moreover, we suggest re-evaluating the flipping strategy in cases in which the 0s and 1s of the secret data are not equally distributed, with one value occurring more frequently. Such a redesign could further reduce the impact of bit flipping on image quality.
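These two worst-case figures can be checked exhaustively. The sketch below (a self-contained re-statement of the embedding rules of Algorithms 1 and 2 under our MSB-first bit convention; all names are illustrative) enumerates every pixel value and every pair of secret segments, and records the largest per-pixel change.

```python
def flip(p, k):
    return p ^ (1 << (8 - k))

def bit(p, k):
    return (p >> (8 - k)) & 1

def set_bit(p, k, v):
    return (p & ~(1 << (8 - k))) | (v << (8 - k))

def embed(p, s1, s2):
    """Embed two 2-bit segments into the duplicated pixel pair (p, p)."""
    p1, p2 = p, p
    # First tier (Algorithm 1).
    if s1 == "01":
        p1 = flip(p1, 8)
    elif s1 == "10":
        p1 = flip(p1, 7)
    elif s1 == "11":
        p2 = flip(flip(p2, 7), 8)
    # Label bit, then second tier (Algorithm 2).
    p2 = set_bit(p2, 5, bit(p1, 7) ^ bit(p2, 7))
    if s2 == "01":
        p2 = flip(p2, 7)
    elif s2 == "10":
        p2 = flip(p2, 6)
    elif s2 == "11":
        p1 = flip(flip(p1, 6), 7)
    return p1, p2

segments = ("00", "01", "10", "11")
d1 = max(abs(embed(p, s1, s2)[0] - p)
         for p in range(256) for s1 in segments for s2 in segments)
d2 = max(abs(embed(p, s1, s2)[1] - p)
         for p in range(256) for s1 in segments for s2 in segments)
print(d1, d2)
```

The enumeration confirms the bounds stated above: the first pixel can move by at most 4 + 2 + 1 = 7 and the second by at most 8 + 4 + 2 + 1 = 15.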

4. Experimental Results

To evaluate the proposed method, we conducted an extensive set of experiments. Firstly, we assessed the visual effectiveness of the proposed approach. Secondly, we discussed the data payload. Furthermore, we performed a security analysis. Lastly, we provided comparisons with several state-of-the-art schemes to demonstrate the advantages of our proposed approach.
In our experiments, we selected four images from the “UCID” [33] dataset, as illustrated in Figure 7, to showcase the outcomes. Additionally, we employed the “Kodak” [34] dataset for a general evaluation. Finally, we used four classic grayscale images for performance comparisons. The sizes of all test images used in the experiment were adjusted to 512 × 512 grayscale images. Moreover, to comprehensively illustrate the performance of the proposed scheme, we also evaluated the performance of the one-tier bit-flipping strategy in our experiments, which refers to a method without the second-tier embedding.

4.1. Visual Effectiveness

To showcase the visual quality of the stego images, we present the results obtained by applying the one-tier and two-tier bit-flipping strategies to the test images in Figure 8 and Figure 9, respectively. From a visual perspective, neither stego image Ǐ1 nor Ǐ2 can be distinguished from the original image.
In addition to visual observation, we also quantify the visual quality using the Peak Signal-to-Noise Ratio (PSNR) [35] and the Structural Similarity Index (SSIM) [36]. The corresponding results are provided in Figure 8 and Figure 9. As Figure 8 shows, when employing the one-tier bit-flipping strategy, the PSNR values of the stego images Ǐ1 exceed 47 dB, and the SSIM values surpass 0.997. With the two-tier bit-flipping strategy, the PSNR values remain above 46 dB, and the SSIM values still exceed 0.997. These findings indicate that introducing the second-tier embedding does not significantly degrade the quality of the stego images Ǐ1. Examining Figure 9 for the stego images Ǐ2, the one-tier bit-flipping strategy yields PSNR values exceeding 47 dB and SSIM values exceeding 0.997. With the two-tier bit-flipping strategy, however, the PSNR values drop to approximately 32.7 dB and the SSIM values are about 0.95. Clearly, there are noticeable numerical decreases, but they remain acceptable. We also applied both strategies to the 24 images of the “Kodak” dataset for a general evaluation; the PSNR and SSIM results are presented in Figure 10 and Figure 11, respectively. The outcomes indicate that the proposed method maintains stable PSNR and SSIM values within an acceptable range, demonstrating its reliability.
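For reference, the PSNR of an 8-bit grayscale stego image against its original can be computed as follows. This is a generic sketch of the standard definition, not the authors' evaluation code; images are passed as flat sequences of pixel values.

```python
import math

def psnr(original, stego):
    """Peak Signal-to-Noise Ratio in dB between two equal-size 8-bit images,
    each given as a flat sequence of pixel values in [0, 255]."""
    mse = sum((a - b) ** 2 for a, b in zip(original, stego)) / len(original)
    if mse == 0:
        return float("inf")   # identical images have infinite PSNR
    return 10 * math.log10(255 ** 2 / mse)
```

A per-pixel error of ±1 everywhere, for example, gives an MSE of 1 and hence a PSNR of about 48.13 dB, which is the scale on which the values above should be read.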

4.2. Data Payload

To evaluate the data payload of our proposed scheme, we quantify it in terms of embedding rate (ER) measured in bits per pixel (bpp). Our method involves duplicating a grayscale image into two cover images to embed secret data. Each pixel pair can hold 4 bits of secret data, resulting in an average data payload of 2 bits per pixel. Importantly, this data payload remains consistent across all tested images, regardless of their complexity, demonstrating the adaptability of our method to various image scenarios. However, such a large data payload may not always be necessary in practical applications. Therefore, we suggest opting for the one-tier bit-flipping strategy in such scenarios, which achieves a data payload of 1 bpp.

4.3. Security Analysis

We also discuss the security aspects of the proposed method, focusing on the imperceptibility of the stego images. We first employ the histogram to measure pixel changes; the results are provided in Figure 12. It can be observed that after the first-tier embedding, the histograms exhibit negligible changes, indicating that the first tier of embedding has minimal impact on the images. After the second-tier embedding, there are noticeable changes in the histograms, but these remain within acceptable limits. In addition, we introduce image entropy [37] as a crucial metric; the results are provided in Table 1. From the data, it can be observed that when employing the one-tier bit-flipping strategy, the image entropy of both stego images Ǐ1 and Ǐ2 closely aligns with that of the original grayscale image. With the two-tier bit-flipping strategy, stego images Ǐ1 maintain high similarity to the original grayscale images in terms of image entropy, while stego images Ǐ2 exhibit slight, albeit minimal, differences. This close numerical proximity suggests that the embedded secret data do not significantly alter the original image, demonstrating the imperceptibility of the stego images. Furthermore, we illustrate the general entropy distribution results in Figure 13, where the medians of the five violin plots closely align, indicating similarity in the data distributions among these images.
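Image entropy here is the Shannon entropy of the gray-level histogram; a minimal sketch (generic, not the authors' code) is:

```python
import math
from collections import Counter

def image_entropy(pixels):
    """Shannon entropy (in bits) of the gray-level distribution of an image,
    given as a flat sequence of pixel values."""
    counts = Counter(pixels)
    n = len(pixels)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())
```

An image using all 256 gray levels equally often attains the maximum of 8 bits, while a constant image has entropy 0; stego and cover images with near-identical histograms therefore yield near-identical entropy values.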

4.4. Time Efficiency Analysis

To evaluate the performance of our proposed method, we conducted a time efficiency analysis using various test images. The analysis focused on two primary processes: data embedding and data extraction with image recovery. The results are presented in Table 2. The results demonstrate a reasonable time efficiency for both data embedding and extraction processes, making the proposed scheme suitable for practical applications requiring efficient data hiding and retrieval without significant delay.

4.5. Comparisons

To showcase the effectiveness of our approach, we compared the proposed method with existing methodologies [22,23,24,25,26,27]. In our comparison, we selected four classic grayscale images, “Lena”, “Peppers”, “Barbara”, and “Goldhill” as test images. The comparison results are provided in Table 3. Upon examining Table 3, we observed that our proposed method achieved a significantly higher embedding rate (ER) across all test images compared to existing methods. Although some of the PSNR and SSIM values are slightly lower than those of other methods, the visual quality remains within an acceptable range. This demonstrates that our method effectively balances a high embedding capacity with reasonable visual quality.

5. Conclusions

In this paper, we utilize a two-tier embedding strategy to embed secret data. Specifically, we perform specific bit flips on paired pixels in dual images to embed the secret data. The resulting stego images are then distributed to two different recipients, and extraction of the embedded secret data requires the recipients to establish mutual trust. Experimental results demonstrate that our method achieves a data payload of 2 bits per pixel. Additionally, we have evaluated the visual fidelity of the stego images on four classic grayscale images, “Lena”, “Peppers”, “Barbara”, and “Goldhill”: Ǐ1 achieves an average PSNR exceeding 46 dB with SSIM greater than 0.99, while Ǐ2 averages above 32 dB with SSIM values surpassing 0.94. This capability is crucial for applications requiring efficient data transmission without perceptible degradation in image quality. In our future work, we plan to explore further enhancements in the visual quality of stego images while maintaining a high data payload.

Author Contributions

Conceptualization, C.-C.C. (Chin-Chen Chang) and C.-C.C. (Ching-Chun Chang); methodology, C.-C.C. (Chin-Chen Chang) and C.-C.C. (Ching-Chun Chang); software, S.X.; validation, S.X.; formal analysis, S.X.; investigation, S.X.; resources, C.-C.C. (Chin-Chen Chang); data curation, S.X.; writing—original draft preparation, S.X.; writing—review and editing, J.-C.L., C.-C.C. (Chin-Chen Chang) and C.-C.C. (Ching-Chun Chang); visualization, S.X. and J.-C.L.; supervision, C.-C.C. (Chin-Chen Chang); project administration, C.-C.C. (Chin-Chen Chang) and C.-C.C. (Ching-Chun Chang); funding acquisition, C.-C.C. (Chin-Chen Chang). All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Bender, W.; Gruhl, D.; Morimoto, N.; Lu, A. Techniques for data hiding. IBM Syst. J. 1996, 35, 313–336.
  2. Chang, C.; Lin, M.; Hu, Y. A fast and secure image hiding scheme based on LSB substitution. Int. J. Pattern Recognit. Artif. Intell. 2002, 16, 399–416.
  3. Chan, C.; Cheng, L. Hiding data in images by simple LSB substitution. Pattern Recognit. 2004, 37, 469–474.
  4. Yang, C. Inverted pattern approach to improve image quality of information hiding by LSB substitution. Pattern Recognit. 2008, 41, 2674–2683.
  5. Tian, J. Reversible data embedding using a difference expansion. IEEE Trans. Circuits Syst. Video Technol. 2003, 13, 890–896.
  6. Zhang, Z.; Wang, W.; Zhao, Z.; Wang, E. PVO-based reversible data hiding using bit plane segmentation and pixel expansion. J. Inf. Secur. Appl. 2023, 79, 103649.
  7. Chen, Y.; Sun, W.; Li, L.; Chang, C.; Wang, X. An efficient general data hiding scheme based on image interpolation. J. Inf. Secur. Appl. 2020, 54, 102584.
  8. Kim, H.; Sachnev, V.; Shi, Y.; Nam, J.; Choo, H. A novel difference expansion transform for reversible data embedding. IEEE Trans. Inf. Forensics Secur. 2008, 3, 456–465.
  9. Dragoi, I.; Coltuc, D. Local-prediction-based difference expansion reversible watermarking. IEEE Trans. Image Process. 2014, 23, 1779–1790.
  10. Coatrieux, G.; Pan, W.; Cuppens-Boulahia, N.; Cuppens, F.; Roux, C. Reversible watermarking based on invariant image classification and dynamic histogram shifting. IEEE Trans. Inf. Forensics Secur. 2012, 8, 111–120.
  11. Li, X.; Li, B.; Yang, B.; Zeng, T. General framework to histogram-shifting-based reversible data hiding. IEEE Trans. Image Process. 2013, 22, 2181–2191.
  12. Peng, F.; Li, X.; Yang, B. Improved PVO-based reversible data hiding. Digit. Signal Process. 2014, 25, 255–265.
  13. Qu, X.; Kim, H. Pixel-based pixel value ordering predictor for high-fidelity reversible data hiding. Signal Process. 2015, 111, 249–260.
  14. Akhtarkavan, E.; Majidi, B.; Salleh, M.; Patra, J. Fragile high-capacity data hiding in digital images using integer-to-integer DWT and lattice vector quantization. Multimed. Tools Appl. 2020, 79, 13427–13447.
  15. Thanki, R.; Borra, S. A color image steganography in hybrid FRT–DWT domain. J. Inf. Secur. Appl. 2018, 40, 92–102.
  16. Wang, X.; Chang, C.; Lin, C.; Chang, C. Privacy-preserving reversible data hiding based on quad-tree block encoding and integer wavelet transform. J. Vis. Commun. Image Represent. 2021, 79, 103203.
  17. Meng, L.; Liu, L.; Wang, X.; Tian, G. Reversible data hiding in encrypted images based on IWT and chaotic system. Multimed. Tools Appl. 2022, 81, 16833–16861.
  18. Xiao, M.; Li, X.; Ma, B.; Zhang, X.; Zhao, Y. Efficient reversible data hiding for JPEG images with multiple histograms modification. IEEE Trans. Circuits Syst. Video Technol. 2020, 31, 2535–2546.
  19. Weng, S.; Zhou, Y.; Zhang, T.; Xiao, M.; Zhao, Y. General framework to reversible data hiding for JPEG images with multiple two-dimensional histograms. IEEE Trans. Multimed. 2022, 25, 5747–5762.
  20. Zhang, X.; Pan, Z.; Zhou, Q.; Fan, G.; Dong, J. A reversible data hiding method based on bitmap prediction for AMBTC compressed hyperspectral images. J. Inf. Secur. Appl. 2024, 81, 103697.
  21. Lin, J.; Lin, C.C.; Chang, C.C. Reversible steganographic scheme for AMBTC-compressed image based on (7, 4) Hamming code. Symmetry 2019, 11, 1236.
  22. Wang, X.; Liu, J.C.; Chang, C.C. A novel reversible data hiding scheme for VQ codebooks. Secur. Priv. 2023, 6, e315.
  23. Xu, S.; Horng, J.; Chang, C.; Chang, C. Reversible data hiding with hierarchical block variable length coding for cloud security. IEEE Trans. Dependable Secur. Comput. 2023, 20, 4199–4213.
  24. Yao, Y.; Wang, K.; Chang, Q.; Weng, S. Reversible data hiding in encrypted images using global compression of zero-valued high bit-planes and block rearrangement. IEEE Trans. Multimed. 2023, 26, 3701–3714.
  25. Zhang, Q.; Chen, K. Reversible data hiding in encrypted images based on two-round image interpolation. Mathematics 2023, 12, 32.
  26. Qin, C.; Chang, C.; Hsu, T. Reversible data hiding scheme based on exploiting modification direction with two steganographic images. Multimed. Tools Appl. 2015, 74, 5861–5872.
  27. Lu, T.; Wu, J.; Huang, C. Dual-image-based reversible data hiding method using center folding strategy. Signal Process. 2015, 115, 195–213.
  28. Yao, H.; Qin, C.; Tang, Z.; Tian, Y. Improved dual-image reversible data hiding method using the selection strategy of shiftable pixels’ coordinates with minimum distortion. Signal Process. 2017, 135, 26–35.
  29. Jana, B.; Giri, D.; Mondal, S. Dual image based reversible data hiding scheme using (7, 4) Hamming code. Multimed. Tools Appl. 2018, 77, 763–785.
  30. Chen, X.; Hong, C. An efficient dual-image reversible data hiding scheme based on exploiting modification direction. J. Inf. Secur. Appl. 2021, 58, 102702.
  31. Kim, C.; Yang, C.; Zhou, Z.; Jung, K. Dual efficient reversible data hiding using Hamming code and OPAP. J. Inf. Secur. Appl. 2023, 76, 103544.
  32. Kim, C.; Yang, C.N. Self-embedding fragile watermarking scheme to detect image tampering using AMBTC and OPAP approaches. Appl. Sci. 2023, 11, 1146.
  33. Schaefer, G.; Stich, M. UCID: An uncompressed color image database. In Storage and Retrieval Methods and Applications for Multimedia; SPIE Proceedings: San Jose, CA, USA, 2004; pp. 472–480.
  34. Kodak, E. Kodak Lossless True Color Image Suite (PhotoCD PCD0992). Volume 6. Available online: http://r0k.us/graphics/kodak (accessed on 15 November 1999).
  35. Smith, S. Digital Signal Processing: A Practical Guide for Engineers and Scientists; Newnes: San Francisco, CA, USA, 2003; ISBN 9780750674447.
  36. Lowe, D. Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 2004, 60, 91–110.
  37. Tsai, D.Y.; Lee, Y.; Matsuyama, E. Information entropy measure for evaluation of image quality. J. Digit. Imaging 2008, 21, 338–347.
Figure 1. The application example of the proposed scheme.
Figure 2. The flowchart of the proposed scheme.
Figure 3. Schematic of the two-tier bit-flipping embedding strategy. (a) The first tier. (b) The second tier, where “1” and “2” are two distinct steps.
Figure 4. Two numerical examples of the two-tier bit-flipping embedding strategy. (a) The first numerical example. (b) The second numerical example.
Figure 5. Schematic of the data extraction and image recovery. (a) The second tier, where “1” and “2” are two distinct steps. (b) The first tier.
Figure 6. Two numerical examples of the data extraction and image recovery. (a) The first numerical example. (b) The second numerical example.
Figure 7. The four test images selected from the “UCID” dataset. (a) Toys, (b) Lego, (c) Sahara, and (d) Snoopy.
Figure 8. The stego images Ǐ1 for the test images in Figure 7 with full data payload. (a–d) Obtained by the one-tier bit-flipping strategy. (e–h) Obtained by the two-tier bit-flipping strategy.
Figure 9. The stego images Ǐ2 for the test images in Figure 7 with full data payload. (a–d) Obtained by the one-tier bit-flipping strategy. (e–h) Obtained by the two-tier bit-flipping strategy.
Figure 10. The PSNR results for images from the “Kodak” dataset. (a) Stego images Ǐ1. (b) Stego images Ǐ2.
Figure 11. The SSIM results for images from the “Kodak” dataset. (a) Stego images Ǐ1. (b) Stego images Ǐ2.
Figure 12. The histograms for the test images in Figure 7 and the corresponding stego images. (a–d) The histograms of the four original images in Figure 7. (e–h) The histograms of the corresponding stego images Ǐ1. (i–l) The histograms of the corresponding stego images Ǐ2.
Figure 13. The distribution of image entropy for images from the “Kodak” dataset.
Table 1. Information entropy of images at different stages.

| Strategy              | Image          | Figure 7a | Figure 7b | Figure 7c | Figure 7d |
|-----------------------|----------------|-----------|-----------|-----------|-----------|
| No bit-flipping       | Original image | 7.8334    | 7.7054    | 7.0206    | 7.6427    |
| One-tier bit-flipping | Stego image Ǐ1 | 7.8387    | 7.7213    | 7.0377    | 7.6461    |
| One-tier bit-flipping | Stego image Ǐ2 | 7.8377    | 7.7172    | 7.0358    | 7.6453    |
| Two-tier bit-flipping | Stego image Ǐ1 | 7.8390    | 7.7289    | 7.0386    | 7.6463    |
| Two-tier bit-flipping | Stego image Ǐ2 | 7.8679    | 7.7766    | 7.1776    | 7.6903    |
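The entropy values in Table 1 are the Shannon entropy of the 8-bit pixel histogram. A minimal sketch of that computation (the uniform toy image below is our own illustration, included only to show the 8 bits/pixel upper bound):

```python
import numpy as np

def image_entropy(img):
    """Shannon entropy (bits/pixel) of an 8-bit grayscale image."""
    hist = np.bincount(img.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    p = p[p > 0]  # drop empty bins; 0 * log2(0) is defined as 0
    return float(-np.sum(p * np.log2(p)))

# A uniformly distributed 8-bit image attains the maximum of 8 bits/pixel.
uniform = np.repeat(np.arange(256, dtype=np.uint8), 4).reshape(32, 32)
print(image_entropy(uniform))  # 8.0
```

Values near 7–8 bits/pixel, as in Table 1, indicate that the stego images retain histograms statistically close to natural images.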
Table 2. Time efficiency of test images.

| Time Efficiency                    | Figure 7a | Figure 7b | Figure 7c | Figure 7d |
|------------------------------------|-----------|-----------|-----------|-----------|
| Data embedding                     | 0.6174 s  | 0.5821 s  | 0.5576 s  | 0.6192 s  |
| Data extraction and image recovery | 0.4826 s  | 0.3872 s  | 0.34293 s | 0.4523 s  |
Table 3. Comparisons with the existing methods and our proposed method.

| Methods | Measure | Lena (Ǐ1) | Lena (Ǐ2) | Peppers (Ǐ1) | Peppers (Ǐ2) | Barbara (Ǐ1) | Barbara (Ǐ2) | Goldhill (Ǐ1) | Goldhill (Ǐ2) |
|---------|---------|-----------|-----------|--------------|--------------|--------------|--------------|---------------|---------------|
| [22]    | ER      | 1.07      | 1.07      | 1.07         | 1.07         | 1.07         | 1.07         | 1.07          | 1.07          |
|         | PSNR    | 49.76     | 50.42     | 49.75        | 50.34        | 49.75        | 50.44        | 49.77         | 50.54         |
|         | SSIM    | 0.9982    | 0.9982    | 0.9989       | 0.9984       | 0.9979       | 0.9987       | 0.9994        | 0.9993        |
| [23]    | ER      | 1.0       | 1.0       | 1.0          | 1.0          | 1.0          | 1.0          | 1.0           | 1.0           |
|         | PSNR    | 51.13     | 51.15     | 51.15        | 51.14        | 49.22        | 49.20        | 49.23         | 49.18         |
|         | SSIM    | 0.9994    | 0.9988    | 0.9988       | 0.9994       | 0.9983       | 0.9978       | 0.9988        | 0.9969        |
| [24]    | ER      | 1.56      | 1.56      | 1.56         | 1.56         | 1.56         | 1.56         | 1.56          | 1.56          |
|         | PSNR    | 46.36     | 46.37     | 46.38        | 46.38        | 46.38        | 46.38        | 46.37         | 46.36         |
|         | SSIM    | 0.9981    | 0.9982    | 0.9982       | 0.9982       | 0.9982       | 0.9983       | 0.9981        | 0.9981        |
| [25]    | ER      | 0.21      | 0.21      | 0.21         | 0.21         | 0.21         | 0.21         | 0.21          | 0.21          |
|         | PSNR    | 52.71     | 52.81     | 52.67        | 52.72        | 52.70        | 52.76        | 52.73         | 52.78         |
|         | SSIM    | 0.9994    | 0.9987    | 0.9967       | 0.9989       | 0.9991       | 0.9987       | 0.9983        | 0.9948        |
| [26]    | ER      | 1.56      | 1.56      | 1.56         | 1.56         | 1.56         | 1.56         | 1.56          | 1.56          |
|         | PSNR    | 37.26     | 37.28     | 37.25        | 37.27        | 37.25        | 37.28        | 37.26         | 37.26         |
|         | SSIM    | 0.9959    | 0.9460    | 0.9974       | 0.9404       | 0.9980       | 0.9530       | 0.9957        | 0.9612        |
| [27]    | ER      | 1.0       | 1.0       | 1.0          | 1.0          | 1.0          | 1.0          | 1.0           | 1.0           |
|         | PSNR    | 54.14     | 48.13     | 54.15        | 48.12        | 54.12        | 48.14        | 54.16         | 48.11         |
|         | SSIM    | 0.9989    | 0.9956    | 0.9982       | 0.9960       | 0.9964       | 0.9944       | 0.9976        | 0.9950        |
| Ours    | ER      | 2.0       | 2.0       | 2.0          | 2.0          | 2.0          | 2.0          | 2.0           | 2.0           |
|         | PSNR    | 46.75     | 32.77     | 46.73        | 32.74        | 46.75        | 32.76        | 46.76         | 32.77         |
|         | SSIM    | 0.9959    | 0.9460    | 0.9974       | 0.9404       | 0.9980       | 0.9530       | 0.9957        | 0.9612        |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Citation

Xu, S.; Liu, J.-C.; Chang, C.-C.; Chang, C.-C. Covert Communication for Dual Images with Two-Tier Bits Flipping. Mathematics 2024, 12, 2219. https://doi.org/10.3390/math12142219
