Article

An Authentication Method for AMBTC Compressed Images Using Dual Embedding Strategies

1 School of Ocean Information Engineering, Jimei University, Xiamen 361021, China
2 Department of Computer Science and Information Engineering, National Taichung University of Science and Technology, Taichung City 404, Taiwan
* Authors to whom correspondence should be addressed.
Appl. Sci. 2023, 13(3), 1402; https://doi.org/10.3390/app13031402
Submission received: 24 October 2022 / Revised: 6 January 2023 / Accepted: 11 January 2023 / Published: 20 January 2023
(This article belongs to the Special Issue Computational Intelligence in Image and Video Analysis)

Abstract

In this paper, we propose an efficient authentication method with dual embedding strategies for absolute moment block truncation coding (AMBTC) compressed images. Prior authentication works did not take the smoothness of blocks into account and used a single embedding strategy, thereby limiting image quality. In the proposed method, blocks are classified as either smooth or complex, and dual embedding strategies are used: authentication codes are carried by the bitmaps and by the quantized values. Because embedding in the bitmaps of complex blocks results in higher distortion than in those of smooth blocks, the authentication codes are embedded into the bitmaps of smooth blocks and into the quantized values of complex blocks. In addition, our method exploits the to-be-protected contents to generate the authentication codes, thereby providing satisfactory detection results. Experimental results show that some special tampering, undetected by prior works, is detected by the proposed method, and the average image quality is improved by at least 1.39 dB.

1. Introduction

With the progress and development of the Internet, digital images are increasingly easy to transmit over networks. However, these images may be tampered with, either maliciously or otherwise, resulting in unauthentic images being received. Therefore, image authentication has become an important issue. Fragile watermarking [1,2,3,4,5] is a common technique for image authentication, in which watermark data are embedded into images. If pixels are tampered with, the embedded data can be retrieved to authenticate the image.
The fragile watermarking technique can be applied to images in the spatial [6,7,8,9,10] or compressed [11,12,13] domain. In the spatial domain, data are embedded directly by modifying pixel values. Since spatial-domain images have many redundancies available for embedding, higher payloads are expected. However, in recent years, most images have been stored and transmitted in compressed formats, because compressed images require less storage space and transmission bandwidth. At present, vector quantization (VQ) [14,15,16], the joint photographic experts group (JPEG) standard [17,18,19], and absolute moment block truncation coding (AMBTC) [20,21] are commonly used compression techniques. Among these techniques, AMBTC requires the least computational cost. Therefore, several authentication techniques for AMBTC images [22,23,24,25,26,27] have been proposed.
AMBTC was proposed by Lema and Mitchell [28]; it compresses blocks into quantized values and bitmaps. In 2013, ref. [22] proposed an AMBTC image authentication method with joint image coding. This method used a pseudo-random sequence to generate authentication codes, which were embedded into the bitmaps. The embedded bitmaps, together with the quantized values, were then losslessly compressed to obtain the final bitstream. The advantages of this method were that it required less storage and achieved good detection accuracy. To improve image quality, ref. [23] in 2016 proposed a novel image authentication method based on a reference matrix. The matrix was employed to embed the authentication codes into the quantized values, and the length of the authentication codes was determined by the size of the matrix. Ref. [23] achieved higher image quality compared to [22]. However, this method could not detect tampering in the bitmaps, because the authentication codes were generated independently of them. To improve security, ref. [24] proposed an authentication method that used the bitmaps to generate the authentication codes. The resulting image had quality equivalent to [23] and higher detection accuracy.
In 2018, ref. [25] proposed a tamper detection method for AMBTC images. The most significant bits (MSBs) of the quantized values and the bitmaps were hashed to generate the authentication codes, which were embedded into the least significant bits (LSBs) of the quantized values using LSB embedding. To obtain higher image quality, the MSBs of the quantized values were perturbed within a small range and used to generate a set of authentication codes; the code with the smallest embedding error was selected for embedding, providing higher image quality than [22,23,24].
In 2018, ref. [26] also proposed an AMBTC authentication method using adaptive pixel pair matching (APPM). The bitmaps and the position information were employed to generate the authentication codes, which were embedded into the quantized values using APPM. A threshold was used to classify blocks into edge and nonedge ones: if the difference between the quantized values was larger than the threshold, the block was considered an edge block; otherwise, it was a nonedge block. Edges in an image were considered more informative than nonedges; therefore, more bits were embedded in edge blocks than in nonedge ones to provide more protection for edges.
In 2019, ref. [27] proposed a high-precision authentication method for AMBTC images using matrix encoding (ME). The bitmaps and position information were used to generate 6-bit authentication codes. The generated codes were divided into two equal parts of 3 bits and embedded into the bitmaps using matrix encoding. To avoid damage to the bitmaps caused by embedding, the positions of the to-be-flipped bits in the bitmap were recorded and embedded into the quantized values. This method achieved higher image quality as well as higher detection accuracy.
Methods [25,26,27] can detect most tampering and provide satisfactory image quality. However, some special tampering may escape detection by these methods. For example, the generation of authentication codes in [25] is independent of the block position information; therefore, tampering cannot be detected when two blocks are interchanged. In addition, the embedding techniques of these methods do not take the smoothness of blocks into account, which limits image quality. To improve image quality and security, this paper proposes an authentication method for AMBTC images using dual embedding strategies. Blocks are classified into smooth and complex ones according to a predefined threshold, and appropriate embedding strategies are employed based on their smoothness: a block is classified as smooth if the difference between its quantized values is less than the threshold, and as complex otherwise. Both the quantized values and the bitmaps are used to carry authentication codes. However, the difference between the quantized values of a complex block can be large, and flipping bits of its bitmap can result in significant distortion. Therefore, the authentication codes of complex blocks are embedded into the quantized values. In contrast, smooth blocks suffer little distortion from flipping bits of their bitmaps, and thus their authentication codes are embedded into the bitmaps. Since the dual embedding strategies are based on block smoothness, higher image quality is obtained. Moreover, the generation of authentication codes in the proposed method is related to the to-be-protected contents, which increases the probability of detecting some special tampering.
The rest of this paper is organized as follows. Section 2 describes related works and Section 3 presents the proposed method. Section 4 and Section 5 show the experimental results and conclusions, respectively.

2. Related Works

This section briefly introduces the concept of the AMBTC compression technique. The APPM and matrix encoding techniques used in the proposed method are also presented. The specific procedures are described in the following three subsections.

2.1. AMBTC Compression Technique

AMBTC is a lossy compression technique [28] that uses a low and a high quantized value and a bitmap to represent a block. The proposed method embeds the authentication codes into the quantized values and bitmaps to protect AMBTC images; a limitation of AMBTC is that both its compression ratio and image quality are relatively low. Let $I$ be the original image. Divide $I$ into $N$ blocks $I = \{I_i\}_{i=0}^{N-1}$ of size $n \times n$, where $I_i = \{I_{i,j}\}_{j=0}^{n \times n - 1}$ and $I_{i,j}$ represents the $j$-th pixel of $I_i$. Then, scan each block $I_i$ of $\{I_i\}_{i=0}^{N-1}$. To compress block $I_i$, calculate its mean value $m_i$. Average the pixels in $I_i$ that are smaller than $m_i$ and record the result as the low quantized value $a_i$; the high quantized value $b_i$ is the average of the pixels greater than or equal to $m_i$. Compare $m_i$ with each pixel $I_{i,j}$ to obtain the bitmap $B_i$ of block $I_i$: if $I_{i,j} \ge m_i$, then $B_{i,j} = 1$; otherwise, $B_{i,j} = 0$, where $B_{i,j}$ represents the $j$-th bit of $B_i$. Thus, the compressed code of $I_i$ is $C_i = (a_i, b_i, B_i)$. All blocks are compressed in the same way, and the result is the AMBTC compressed code of image $I$, denoted by $C = \{a_i, b_i, B_i\}_{i=0}^{N-1}$. To decompress $C_i = (a_i, b_i, B_i)$, prepare an empty block $D_i$ of the same size as $I_i$, where $D_{i,j}$ is the $j$-th pixel of $D_i$. If $B_{i,j} = 0$, then $D_{i,j} = a_i$; otherwise, $D_{i,j} = b_i$. All codes in $C = \{a_i, b_i, B_i\}_{i=0}^{N-1}$ are decompressed by the same procedure, and the decompressed image is $D = \{D_i\}_{i=0}^{N-1}$.
A simple example illustrates the AMBTC compression procedure. Let $I_i$ = [36, 34, 41, 42; 37, 35, 43, 46; 35, 39, 44, 41; 36, 41, 42, 48] be an original block of size $4 \times 4$. Its mean value is $m_i = 40$. The pixels in $I_i$ less than 40 are 36, 34, 37, 35, 35, 39, and 36, and their average is $a_i = 36$. Similarly, $b_i = 43$ is calculated from the remaining pixels. To obtain $B_i$, if $I_{i,j} \ge 40$, then $B_{i,j} = 1$; otherwise, $B_{i,j} = 0$. Therefore, $B_i$ = [0011; 0011; 0011; 0111]. Finally, the AMBTC compressed code $C_i$ = (36, 43, [0011; 0011; 0011; 0111]) is obtained. To decompress $C_i$, bits 0 and 1 in $B_i$ are decoded as the quantized values $a_i = 36$ and $b_i = 43$, respectively, giving $D_i$ = [36, 36, 43, 43; 36, 36, 43, 43; 36, 36, 43, 43; 36, 43, 43, 43].
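To make the procedure concrete, the following short Python sketch compresses and decompresses one block as described above. The function names and the rounding of the quantized values are our own choices, not from the paper; a uniform block would additionally need a guard for the empty pixel set.

```python
import numpy as np

def ambtc_compress(block):
    """Compress one n x n block into an AMBTC code (low, high, bitmap)."""
    m = block.mean()
    bitmap = (block >= m).astype(np.uint8)        # 1 for pixels >= mean, 0 otherwise
    low = int(round(block[bitmap == 0].mean()))   # average of pixels below the mean
    high = int(round(block[bitmap == 1].mean()))  # average of pixels >= the mean
    return low, high, bitmap

def ambtc_decompress(low, high, bitmap):
    """Rebuild the block: 0-bits become the low value, 1-bits the high value."""
    return np.where(bitmap == 1, high, low).astype(np.uint8)

# The 4x4 block from the example above compresses to (36, 43, bitmap).
block = np.array([[36, 34, 41, 42],
                  [37, 35, 43, 46],
                  [35, 39, 44, 41],
                  [36, 41, 42, 48]])
low, high, bitmap = ambtc_compress(block)
print(low, high)                                  # 36 43
print(ambtc_decompress(low, high, bitmap))
```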

2.2. The Adaptive Pixel Pair Matching (APPM) Technique

In the proposed method, we use the APPM embedding technique to embed the authentication codes into the quantized values of complex blocks. A limitation of APPM is that it can embed at most 8 bits into a pair of quantized values; fortunately, only 6 bits are required in our method. The principle of APPM [29] is to embed a digit in base $\lambda$ into a pixel pair by referring to a reference table $R_\lambda$, where $R_\lambda$ is a table of size $256 \times 256$ filled with integers in the range $[0, \lambda - 1]$. Let $R_\lambda(a, b)$ be the element located in the $a$-th row and $b$-th column of $R_\lambda$; it can be calculated by the following equation:
$R_\lambda(a, b) = (c_\lambda \times a + b) \bmod \lambda,$  (1)
where $c_\lambda$ is a constant, and $c_{64} = 14$ in this paper. To embed a digit $d_\lambda$ into $(a, b)$, first locate a pixel pair $(\hat{a}, \hat{b})$ in the vicinity of $(a, b)$ that satisfies $R_\lambda(\hat{a}, \hat{b}) = d_\lambda$ and has the minimum distance to $(a, b)$. The located $(\hat{a}, \hat{b})$ then replaces $(a, b)$, and a marked pixel pair is obtained. When extracting the embedded digit $d_\lambda$, since $R_\lambda$ and $(\hat{a}, \hat{b})$ are known, $d_\lambda$ can be calculated by $d_\lambda = R_\lambda(\hat{a}, \hat{b})$. The schematic diagram of APPM is shown in Figure 1a.
The following simple example illustrates the APPM embedding procedure. Figure 1b shows part of a reference table $R_{64}$. Let $(a, b) = (53, 64)$ be the original pixel pair used to carry the digit $d_{64} = 26$ in base 64. Since $(52, 66)$ satisfies $R_{64}(52, 66) = 26$ and has the minimum distance to $(53, 64)$, the marked pixel pair $(\hat{a}, \hat{b}) = (52, 66)$ is located. To extract the embedded digit, we only need to look up the coordinate $(52, 66)$ in $R_{64}$, and $d_{64} = R_{64}(52, 66) = 26$ is obtained (see Figure 1b).
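A brute-force sketch of the APPM embedding and extraction steps is given below. The search simply scans a small neighborhood of the original pair; the search radius and the tie-breaking rule are assumptions of this sketch, not details from the paper.

```python
import itertools

C64 = 14  # the constant c_64 used in this paper

def r64(a, b):
    """Reference table entry R_64(a, b) = (14*a + b) mod 64, Equation (1)."""
    return (C64 * a + b) % 64

def appm_embed(a, b, digit, radius=8):
    """Replace (a, b) by the nearest pair whose table entry equals `digit`."""
    best, best_dist = None, None
    for da, db in itertools.product(range(-radius, radius + 1), repeat=2):
        ah, bh = a + da, b + db
        if 0 <= ah <= 255 and 0 <= bh <= 255 and r64(ah, bh) == digit:
            dist = da * da + db * db
            if best is None or dist < best_dist:
                best, best_dist = (ah, bh), dist
    return best

def appm_extract(ah, bh):
    """Extraction is a simple table lookup at the marked pair."""
    return r64(ah, bh)

# Example from the text: embedding the digit 26 into (53, 64) yields (52, 66).
print(appm_embed(53, 64, 26))   # (52, 66)
print(appm_extract(52, 66))     # 26
```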

2.3. The Matrix Encoding

Matrix encoding is used to embed the authentication codes into the bitmaps of smooth blocks. A limitation of matrix encoding is that only 3 bits can be embedded at a time; since our method requires 6 bits to be embedded, each block is embedded twice. Matrix encoding $(\beta, k)$ is an efficient embedding method [30] based on the Hamming code, a linear error-correcting code proposed by Richard Wesley Hamming [31]. $(\beta, k)$ means that $k$ secret bits $s$ are embedded into a vector $V = \{V_i\}_{i=0}^{\beta-1}$ of length $\beta$. Matrix encoding $(\beta, k)$ embeds data based on a parity matrix $H$. To embed $s$ into $V$, $p = ((H \times V^{T}) \bmod 2) \oplus s^{T}$ is calculated, where $T$, $\oplus$, and $\bmod$ represent the transpose, exclusive-or, and modulo-2 operations, respectively. Note that $p$ is a column vector of length $k$. Let $(p)_{10}$ be the decimal value of $p$, and initialize $V_M = V$ as the marked vector of $V$. If $(p)_{10} = 0$, no bit needs to be flipped in $V_M$; otherwise, flip the $((p)_{10} - 1)$-th bit of $V_M$. The secret bits can be extracted directly by calculating $s = ((H \times V_M^{T})^{T}) \bmod 2$.
Next, an example with $(\beta, k) = (7, 3)$ illustrates the embedding and extraction procedures of matrix encoding. The following equation shows $H$ for $(7, 3)$:
$H = \begin{bmatrix} 0 & 0 & 0 & 1 & 1 & 1 & 1 \\ 0 & 1 & 1 & 0 & 0 & 1 & 1 \\ 1 & 0 & 1 & 0 & 1 & 0 & 1 \end{bmatrix}.$  (2)
Let $V$ = [1, 1, 1, 1, 1, 0, 0] be the original vector used to embed the secret bits $s = [1, 0, 0]$. Calculating $p = ((H \times V^{T}) \bmod 2) \oplus s^{T}$ gives $p = [1, 0, 1]^{T}$. Converting $p$ to its decimal value gives $(p)_{10} = 5$. Then, flip the $(5 - 1) = 4$-th bit of the initialized marked vector $V_M$, and $V_M$ = [1, 1, 1, 1, 0, 0, 0]. The embedded secret bits can be extracted by $s = ((H \times V_M^{T})^{T}) \bmod 2 = [1, 0, 0]$.
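The (7, 3) matrix encoding above can be sketched in a few lines of Python; the helper names are ours, and only the flip-at-most-one-bit rule described in the text is implemented.

```python
import numpy as np

# Parity matrix H of the (7, 3) matrix encoding, Equation (2).
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]])

def me_embed(v, s):
    """Embed 3 secret bits s into the 7-bit vector v by flipping at most one bit."""
    v, s = np.array(v), np.array(s)
    p = ((H @ v) % 2) ^ s                # syndrome of v, xor-ed with the secret bits
    vm = v.copy()
    idx = int(''.join(map(str, p)), 2)   # decimal value (p)_10
    if idx != 0:
        vm[idx - 1] ^= 1                 # flip the ((p)_10 - 1)-th bit
    return vm

def me_extract(vm):
    """Recover the 3 embedded bits from the marked vector."""
    return (H @ np.array(vm)) % 2

# Example from the text: V = [1,1,1,1,1,0,0], s = [1,0,0] -> flip the bit at index 4.
vm = me_embed([1, 1, 1, 1, 1, 0, 0], [1, 0, 0])
print(vm)               # [1 1 1 1 0 0 0]
print(me_extract(vm))   # [1 0 0]
```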

3. The Proposed Method

In methods [25,26,27], authentication codes are generated independently of the position information and the MSBs of the quantized values, so some tampering cannot be detected. Moreover, their embedding techniques are not designed based on block smoothness, leading to relatively high image distortion. In this paper, the to-be-protected contents, such as the quantized values, are used to generate the authentication codes to enhance security. In addition, different embedding strategies based on block smoothness are used to improve image quality. The smoothness is determined by a predefined threshold $T$. Given the AMBTC compressed code $C_i = (a_i, b_i, B_i)$ of block $I_i$, if $|b_i - a_i| < T$, the block $I_i$ is smooth; otherwise, it is complex. In our method, the authentication codes of smooth blocks are embedded into the bitmaps using matrix encoding, while those of complex blocks are embedded into the quantized values using APPM. The detailed embedding and authentication procedures are presented in the following subsections. Note that the detection result is related to the length of the authentication codes; in the proposed method, smooth and complex blocks are embedded with authentication codes of the same length, so they have identical detection performance.

3.1. The Embedment Algorithm of Smooth Blocks

Let $C_i = (a_i, b_i, B_i)$ be the AMBTC compressed code of a smooth block $I_i$. The embedding algorithm for a smooth block $I_i$ is given below.
  • Step 1: Divide $B_i$ into $\{B_{i,j}\}_{j=0}^{1}$ and $\{B_{i,j}\}_{j=2}^{15}$, which are used to generate and to carry $ac_i$, respectively.
  • Step 2: Use the bitmap bits $\{B_{i,j}\}_{j=0}^{1}$, the low quantized value $a_i$, the high quantized value $b_i$, and the position information $i$ to generate $ac_i$ using the following equation:
    $ac_i = \mathrm{hash}_6(\{B_{i,j}\}_{j=0}^{1}, a_i, b_i, i),$  (3)
    where $\mathrm{hash}_6(x)$ is a function that hashes $x$ using MD5 [32] and reduces the hashed result to the 6-bit $ac_i$ using the xor operation.
  • Step 3: The 6-bit $ac_i$ is divided into 2 groups of 3 bits, denoted $ac_i^0$ and $ac_i^1$. The matrix encoding described in Section 2.3 is then employed to embed $ac_i^0$ and $ac_i^1$ into $\{B_{i,j}\}_{j=2}^{8}$ and $\{B_{i,j}\}_{j=9}^{15}$, and we obtain $\{\hat{B}_{i,j}\}_{j=2}^{8}$ and $\{\hat{B}_{i,j}\}_{j=9}^{15}$, respectively.
  • Step 4: Concatenate $\{B_{i,j}\}_{j=0}^{1}$, $\{\hat{B}_{i,j}\}_{j=2}^{8}$, and $\{\hat{B}_{i,j}\}_{j=9}^{15}$ to obtain the marked bitmap $\hat{B}_i$. Finally, the marked compressed code $\hat{C}_i = (\hat{a}_i, \hat{b}_i, \hat{B}_i)$ is output, where $(\hat{a}_i, \hat{b}_i) = (a_i, b_i)$.
A simple example illustrates the generation of an authentication code using the hash function $\mathrm{hash}_6(x)$. Suppose the hashed result of $x$ is the 32-bit string ‘00110110101101001100010000101100’. The first 16 bits are xor-ed with the last 16 bits to produce the 16-bit string ‘1111001010011000’. Repeating this xor procedure once more gives the 8-bit string ‘01101010’. Since our method requires only 6 bits, the last 2 bits are discarded. Therefore, the authentication code ‘011010’ is obtained.
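A possible implementation of hash_6 is sketched below. The paper does not specify how the inputs are serialized before hashing, so the byte encoding used here is an assumption; only the MD5-then-xor-folding structure follows the text.

```python
import hashlib

def hash6(*parts):
    """MD5 the serialized inputs, then xor-fold the digest down to 6 bits."""
    data = '|'.join(str(p) for p in parts).encode()        # serialization: our assumption
    x = int.from_bytes(hashlib.md5(data).digest(), 'big')  # 128-bit digest as an integer
    bits = 128
    while bits > 8:                                         # fold the two halves with xor
        bits //= 2
        x = (x >> bits) ^ (x & ((1 << bits) - 1))
    return x >> 2                                           # keep the first 6 of the 8 bits

# e.g. the code of a smooth block, cf. Equation (3): hash6(first two bitmap bits, a_i, b_i, i)
ac = hash6([0, 0], 36, 43, 17)
print(format(ac, '06b'))
```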

3.2. The Embedment Algorithm of Complex Blocks

For a compressed code $C_i = (a_i, b_i, B_i)$, if $|b_i - a_i| \ge T$, the block $I_i$ is a complex one, and the authentication code $ac_i$ is embedded into the quantized values $(a_i, b_i)$ using APPM with the following algorithm.
  • Step 1: Use the following equation to construct the reference table $R_{64}^{k_i}$:
    $R_{64}^{k_i}(a_i, b_i) = ((c_{64} \times a_i + b_i) + k_i) \bmod 64,$  (4)
    where $k_i$ is a random integer generated by a key $K$, and $c_{64} = 14$.
  • Step 2: Use the bitmap $B_i$ and the position information $i$ to generate the 6-bit $ac_i$ by
    $ac_i = \mathrm{hash}_6(B_i, i).$  (5)
  • Step 3: Once $ac_i$ is obtained, its decimal value $(ac_i)_{10}$, regarded as a digit in base 64, is embedded into the quantized values $(a_i, b_i)$ using APPM to obtain the marked quantized values $\hat{a}_i$ and $\hat{b}_i$.
In comparison to Equation (1), Equation (4) adds an additional integer $k_i$ to generate the reference table. The image quality obtained by referring to $R_{64}^{k_i}$ is equal to that of $R_{64}$. However, if $ac_i$ were embedded based on $R_{64}$, it could be extracted publicly using Equation (1); an attacker could then tamper with $(\hat{a}_i, \hat{b}_i)$ by finding an alternative pixel pair $(\dot{a}_i, \dot{b}_i)$ that satisfies $(c_{64} \times \dot{a}_i + \dot{b}_i) \bmod 64 = (c_{64} \times \hat{a}_i + \hat{b}_i) \bmod 64 = ac_i$ to escape detection. In contrast, $ac_i$ in Equation (4) can be obtained only if $k_i$ is known. Thus, the embedding using $R_{64}^{k_i}$ is not only more secure than using $R_{64}$ alone, but also maintains the same image quality.
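The keyed table of Equation (4) and the embedding of a 6-bit code into a quantized pair can be sketched as follows. The nearest-pair search and the way k_i is drawn from the key K are assumptions of this sketch, not specified by the paper.

```python
import random

def keyed_digit(a, b, k_i):
    """Keyed reference table entry R_64^{k_i}(a, b) of Equation (4)."""
    return (14 * a + b + k_i) % 64

def embed_complex_pair_code(a, b, ac, k_i, radius=8):
    """Embed the 6-bit code ac (a base-64 digit) into (a, b) via the keyed table."""
    best, best_dist = None, None
    for da in range(-radius, radius + 1):
        for db in range(-radius, radius + 1):
            ah, bh = a + da, b + db
            if 0 <= ah <= 255 and 0 <= bh <= 255 and keyed_digit(ah, bh, k_i) == ac:
                dist = da * da + db * db
                if best is None or dist < best_dist:
                    best, best_dist = (ah, bh), dist
    return best

rng = random.Random("secret key K")            # k_i drawn from a PRNG seeded with K (assumption)
k_i = rng.randrange(64)
a_hat, b_hat = embed_complex_pair_code(67, 73, 26, k_i)
assert keyed_digit(a_hat, b_hat, k_i) == 26    # extraction recovers the embedded code
```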

3.3. Embedding of Smoothness-Changed Blocks

Since the $ac_i$ of a complex block $I_i$ is embedded into the quantized values $(a_i, b_i)$, the block may change into a smooth one after embedding if $|\hat{b}_i - \hat{a}_i| < T$. In this case, it is processed with the following procedure. Firstly, alter the values of $a_i$ and $b_i$ by at most one, giving the candidates $a_i' \in \{a_i - 1, a_i, a_i + 1\}$ and $b_i' \in \{b_i - 1, b_i, b_i + 1\}$. If $|b_i' - a_i'| < T$, use the embedding technique described in Section 3.1 to perform the embedding on $(a_i', b_i', B_i)$ and obtain $(a_i^*, b_i^*, B_i^*)$, where $(a_i^*, b_i^*) = (a_i', b_i')$. Otherwise, the embedding of $(a_i', b_i', B_i)$ is performed using the complex-block technique described in Section 3.2 to obtain $(a_i^*, b_i^*, B_i^*)$, where $B_i^* = B_i$. From all candidate codes $(a_i^*, b_i^*, B_i^*)$, the code with the smallest embedding error is selected as the final output $(\hat{a}_i, \hat{b}_i, \hat{B}_i)$. Figure 2 shows the embedding framework of the proposed method.
The following simple example illustrates the case where a complex block changes into a smooth one after embedding. Let $(a_i, b_i, B_i)$ = (67, 73, [0011; 0011; 0011; 0101]) be the original AMBTC code with $n = 4$ and $T = 6$. Since $|b_i - a_i| = |73 - 67| = 6 \ge T$, block $I_i$ is classified as complex. Then, $B_i$ and $i$ are used to generate the authentication code $ac_i$ using Equation (5), and APPM is employed to embed $ac_i$ into $(a_i, b_i)$ to obtain $(\hat{a}_i, \hat{b}_i)$. Suppose the marked quantized values are $(\hat{a}_i, \hat{b}_i) = (68, 73)$. Since $\hat{b}_i - \hat{a}_i = 73 - 68 = 5 < T$, the smoothness of $(\hat{a}_i, \hat{b}_i)$ differs from that of $(a_i, b_i)$, so the embedding of $(a_i, b_i, B_i)$ requires additional processing. Altering the values of $(a_i, b_i)$ by at most one unit gives the candidates $(a_i', b_i')$ = (66, 72), (66, 73), (66, 74), (67, 72), (67, 73), (67, 74), (68, 72), (68, 73), and (68, 74). The candidates whose difference is less than $T = 6$ are (67, 72), (68, 72), and (68, 73), and the technique described in Section 3.1 is used to embed them. The candidates whose difference is larger than or equal to 6 are (66, 72), (66, 73), (66, 74), (67, 73), (67, 74), and (68, 74), and the technique described in Section 3.2 is employed to embed them. Let (67, 72, [0011; 0111; 0011; 0001]), (68, 72, [0011; 0010; 0011; 0111]), and (68, 73, [0010; 0011; 0011; 1101]) be the embedded codes whose differences of quantized values are less than 6, where the flipped bitmap bits are produced by the matrix encoding. Suppose that (65, 73, $B_i$), (67, 76, $B_i$), (66, 73, $B_i$), (67, 75, $B_i$), (66, 74, $B_i$), and (68, 75, $B_i$) are the codes whose differences of quantized values are larger than or equal to 6. Decompress these codes and the original code $(a_i, b_i, B_i)$ using AMBTC to obtain the decompressed blocks $D_i^*$ and $D_i$. Then, calculate the squared differences between $D_i^*$ and $D_i$, which are 68, 64, 68, 32, 72, 8, 32, 16, and 40. Since the code (66, 73, $B_i$) has the least embedding distortion 8, it is selected and output as the final code $(\hat{a}_i, \hat{b}_i, \hat{B}_i) = (66, 73, B_i)$.
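The candidate search of this subsection can be sketched as below. The helpers embed_smooth() and embed_complex_pair() stand for the embedding routines of Sections 3.1 and 3.2; they are hypothetical names used only for this sketch, and each is assumed to return a complete candidate code (a, b, bitmap).

```python
import numpy as np

def embed_with_smoothness_fixup(a, b, bitmap, i, T=6):
    """Try every quantized pair altered by at most one, embed each candidate with
    the strategy matching its smoothness, and keep the result whose decompressed
    block is closest to the original (smallest squared error)."""
    original = np.where(bitmap == 1, b, a).astype(int)       # AMBTC decompression of (a, b, bitmap)
    best, best_err = None, None
    for ap in (a - 1, a, a + 1):
        for bp in (b - 1, b, b + 1):
            if abs(bp - ap) < T:
                cand = embed_smooth(ap, bp, bitmap, i)        # hypothetical helper, Section 3.1
            else:
                cand = embed_complex_pair(ap, bp, bitmap, i)  # hypothetical helper, Section 3.2
            a_s, b_s, bm_s = cand
            rebuilt = np.where(bm_s == 1, b_s, a_s).astype(int)
            err = int(((rebuilt - original) ** 2).sum())      # squared embedding error
            if best is None or err < best_err:
                best, best_err = cand, err
    return best
```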

3.4. The Embedding Procedures

In this section, the embedding procedure of the proposed method is described by the following algorithm. Let $C = \{a_i, b_i, B_i\}_{i=0}^{N-1}$ be the AMBTC codes used to embed the authentication codes. The embedding procedure is listed below.
  • Input: AMBTC compressed codes $C = \{a_i, b_i, B_i\}_{i=0}^{N-1}$, key $K$, and parameters $n$ and $T$.
  • Output: Marked AMBTC codes $\{\hat{a}_i, \hat{b}_i, \hat{B}_i\}_{i=0}^{N-1}$.
  • Step 1: Scan each code $(a_i, b_i, B_i)$ in $\{a_i, b_i, B_i\}_{i=0}^{N-1}$ and calculate the difference between $a_i$ and $b_i$.
  • Step 2: If $|b_i - a_i| < T$, use Equation (3) to hash $\{B_{i,j}\}_{j=0}^{1}$, $a_i$, $b_i$, and $i$ to generate the 6-bit $ac_i$. Embed $ac_i$ into $\{B_{i,j}\}_{j=2}^{15}$ using the matrix encoding described in Section 2.3 to obtain $\{\hat{B}_{i,j}\}_{j=2}^{15}$. Concatenate $\{B_{i,j}\}_{j=0}^{1}$ and $\{\hat{B}_{i,j}\}_{j=2}^{15}$ to obtain the marked bitmap $\hat{B}_i$. Then, we have the marked code $(\hat{a}_i, \hat{b}_i, \hat{B}_i)$, where $(\hat{a}_i, \hat{b}_i) = (a_i, b_i)$.
  • Step 3: If $|b_i - a_i| \ge T$, use the key $K$ to generate $k_i$ and construct the reference table $R_{64}^{k_i}$ using Equation (4). Then, use APPM to embed $ac_i$ into $(a_i, b_i)$, giving $(\hat{a}_i, \hat{b}_i)$. If $|\hat{b}_i - \hat{a}_i| \ge T$, the marked code $(\hat{a}_i, \hat{b}_i, \hat{B}_i)$ is output, where $\hat{B}_i = B_i$. Otherwise, $(a_i, b_i, B_i)$ is embedded using the technique described in Section 3.3 to obtain $(\hat{a}_i, \hat{b}_i, \hat{B}_i)$.
  • Step 4: Repeat Steps 1–3 until all codes are embedded, then output the marked codes $\{\hat{a}_i, \hat{b}_i, \hat{B}_i\}_{i=0}^{N-1}$, the key $K$, and the parameters $n$ and $T$.
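Putting the pieces together, a top-level embedding loop corresponding to Steps 1–4 might look like the following sketch, again using the hypothetical helpers named above rather than the paper's actual implementation.

```python
def embed_all(codes, T=6):
    """Top-level loop of Steps 1-4, dispatching on block smoothness."""
    marked = []
    for i, (a, b, bitmap) in enumerate(codes):
        if abs(b - a) < T:                                    # smooth: code goes into the bitmap
            marked.append(embed_smooth(a, b, bitmap, i))
        else:                                                 # complex: code goes into (a, b)
            a_hat, b_hat, bm = embed_complex_pair(a, b, bitmap, i)
            if abs(b_hat - a_hat) < T:                        # smoothness changed: Section 3.3
                marked.append(embed_with_smoothness_fixup(a, b, bitmap, i, T))
            else:
                marked.append((a_hat, b_hat, bm))
    return marked
```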

3.5. The Authentication Procedures

To authenticate whether the codes $\{\tilde{a}_i, \tilde{b}_i, \tilde{B}_i\}_{i=0}^{N-1}$ have been tampered with, we first regenerate the authentication code $\tilde{ac}_i$ using either Equation (3) or (5) according to the difference between $\tilde{a}_i$ and $\tilde{b}_i$. Then, the embedded code $\tilde{eac}_i$ is extracted from $(\tilde{a}_i, \tilde{b}_i, \tilde{B}_i)$ and compared with $\tilde{ac}_i$. If $\tilde{eac}_i = \tilde{ac}_i$, the code $(\tilde{a}_i, \tilde{b}_i, \tilde{B}_i)$ is untampered; otherwise, it is tampered with. The detailed authentication procedure is listed as follows.
  • Input: To-be-authenticated codes $\{\tilde{a}_i, \tilde{b}_i, \tilde{B}_i\}_{i=0}^{N-1}$, key $K$, and parameters $n$ and $T$.
  • Output: The detection result.
  • Step 1: Scan each code $(\tilde{a}_i, \tilde{b}_i, \tilde{B}_i)$ in $\{\tilde{a}_i, \tilde{b}_i, \tilde{B}_i\}_{i=0}^{N-1}$ and calculate the difference between $\tilde{a}_i$ and $\tilde{b}_i$.
  • Step 2: If $|\tilde{b}_i - \tilde{a}_i| < T$, use Equation (3) to hash $\{\tilde{B}_{i,j}\}_{j=0}^{1}$, $\tilde{a}_i$, $\tilde{b}_i$, and $i$ to regenerate the 6-bit $\tilde{ac}_i$. The matrix encoding is employed to extract $\tilde{eac}_i$ from $\{\tilde{B}_{i,j}\}_{j=2}^{15}$.
  • Step 3: If $|\tilde{b}_i - \tilde{a}_i| \ge T$, employ $K$ to generate $k_i$ and construct $R_{64}^{k_i}$ by Equation (4). In addition, $\tilde{B}_i$ and $i$ are employed to regenerate $\tilde{ac}_i$ by Equation (5). Then, use APPM to extract $\tilde{eac}_i$ embedded in $(\tilde{a}_i, \tilde{b}_i)$.
  • Step 4: Compare $\tilde{ac}_i$ and $\tilde{eac}_i$ to judge whether the code $(\tilde{a}_i, \tilde{b}_i, \tilde{B}_i)$ has been tampered with. If $\tilde{ac}_i = \tilde{eac}_i$, the code is untampered; otherwise, it is tampered with.
  • Step 5: Repeat Steps 1–4 until all blocks have been examined; this constitutes the coarse detection of our method.
  • Step 6: The refined detection, described here, is used to improve detection accuracy. If the top and bottom, left and right, top-left and bottom-right, or top-right and bottom-left neighbors of an untampered block have been determined to be tampered with, the untampered block is re-marked as tampered. Repeat this procedure until no further blocks are re-marked, which completes the authentication procedure.
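The coarse and refined detection passes of Steps 1–6 can be sketched as follows. regenerate_code() and extract_code() are hypothetical stand-ins for the regeneration and extraction rules of Equations (3)–(5), and a square grid of blocks is assumed.

```python
import numpy as np

def authenticate(codes, T=6):
    """Coarse block-wise check followed by a neighborhood refinement pass."""
    n_blocks = len(codes)
    tampered = np.zeros(n_blocks, dtype=bool)
    for i, (a, b, bitmap) in enumerate(codes):                  # Steps 1-5: coarse detection
        ac = regenerate_code(a, b, bitmap, i, T)                # hypothetical helper, Eq. (3)/(5)
        eac = extract_code(a, b, bitmap, T)                     # hypothetical helper
        tampered[i] = (ac != eac)

    side = int(round(np.sqrt(n_blocks)))                        # assumes a square grid of blocks
    grid = tampered.reshape(side, side)
    changed = True                                              # Step 6: refined detection
    while changed:
        changed = False
        for r in range(1, side - 1):
            for c in range(1, side - 1):
                if grid[r, c]:
                    continue
                pairs = [((r - 1, c), (r + 1, c)), ((r, c - 1), (r, c + 1)),
                         ((r - 1, c - 1), (r + 1, c + 1)), ((r - 1, c + 1), (r + 1, c - 1))]
                if any(grid[p] and grid[q] for p, q in pairs):  # opposite neighbors both tampered
                    grid[r, c] = True
                    changed = True
    return grid
```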

4. Experimental Results

To evaluate the effectiveness of our method, we perform several experiments on a set of grayscale images. Eight images of size 512 × 512, namely Lena, Jet, Baboon, Tiffany, Sailboat, Splash, Peppers, and House, are used as test images, as shown in Figure 3. These test images can be obtained from the USC-SIPI image database [33]. Comparisons of image quality and detectability between prior works [25,26,27] and the proposed method are also given in this section. In the experiments, the peak signal-to-noise ratio (PSNR) is employed to measure the marked image quality:
$\mathrm{PSNR} = 10 \times \log_{10} \dfrac{255^2}{\mathrm{MSE}},$  (6)
where MSE is the mean square error between the marked image and the original image. Equation (6) shows that the smaller the MSE, the larger the PSNR. The structural similarity (SSIM) metric [34] is also employed to measure the similarity between the marked and original images. The SSIM is calculated by:
$\mathrm{SSIM}(x, y) = \dfrac{(2\mu_x \mu_y + C_1)(2\sigma_{xy} + C_2)}{(\mu_x^2 + \mu_y^2 + C_1)(\sigma_x^2 + \sigma_y^2 + C_2)},$  (7)
where $x$ and $y$ represent the original and marked images, $\mu_x$, $\mu_y$ and $\sigma_x$, $\sigma_y$ are the mean values and standard deviations of $x$ and $y$, respectively, $\sigma_{xy}$ is the covariance of $x$ and $y$, and $C_1$ and $C_2$ are constants. The value of SSIM lies within the range [0, 1], and the larger the SSIM, the higher the visual quality of the marked image.
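For reference, PSNR as defined in Equation (6) can be computed with a few lines of NumPy; for SSIM, a windowed implementation such as skimage.metrics.structural_similarity is typically used in practice rather than applying the single global formula of Equation (7).

```python
import numpy as np

def psnr(original, marked):
    """PSNR in dB between the original and marked images, Equation (6)."""
    diff = original.astype(np.float64) - marked.astype(np.float64)
    mse = np.mean(diff ** 2)
    return float('inf') if mse == 0 else 10 * np.log10(255 ** 2 / mse)
```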

4.1. The Performance of the Proposed Method

In this paper, we use a threshold $T$ to classify blocks into smooth and complex ones and embed them using different techniques. The PSNR and SSIM are related to the value of $T$. Table 1 shows the PSNR and SSIM of the test images when the threshold $T$ is set from 0 to 10. As shown in the table, the highest PSNR is achieved when $T = 6$ or $T = 7$; as $T$ becomes smaller or larger, the PSNR decreases gradually. For example, the Jet image has a PSNR of 43.32 dB for $T = 6$, which is higher than for $T = 5$ or $T = 7$. In addition, the peak SSIM values are achieved when $T = 6$. Thus, $T = 6$ is adopted in the proposed method. The experiments were implemented in the Python programming language, and the average time required to embed an image is less than one second when $T = 6$.
Figure 4 shows the plot of threshold $T$ versus PSNR for the test images. When $T = 0$, the PSNRs of the eight test images are around 41 dB. Among these images, at $T = 6$, Jet has the highest PSNR, while Baboon has the lowest, which is about 2 dB lower than that of Jet. This is because, for a given threshold, the image quality depends on the ratio of smooth blocks, as demonstrated in the following experiments.
Table 2 shows the ratio of smooth blocks and the PSNR of each image when $T = 6$. We observe that, under the same threshold, the larger the ratio of smooth blocks, the higher the PSNR, implying that the proposed method is more effective for such images. The reason is that the quantized values of smooth blocks are close to each other, so the error caused by flipping bits of the bitmaps is small. For example, Lena, Jet, Tiffany, Splash, and House contain more than 40% smooth blocks, and the PSNRs of these images exceed 42 dB. In contrast, the PSNR of Baboon is the lowest (41.04 dB) because it has only 6.99% smooth blocks. PSNR is a metric for measuring an embedding method, and a higher PSNR means that the embedding method is more effective. For these eight test images, the PSNRs are all higher than 41 dB, implying the effectiveness of our method.
The following experiment demonstrates the detectability of the proposed method. The marked codes of Lena are tampered with such that the tampered decompressed image shows a daisy on Lena's hat, as shown in Figure 5a. Figure 5b shows the tampered region indicated by black blocks, while Figure 5c,d present the coarse and refined detection results of Figure 5a. The image contains a total of (512 × 512)/(4 × 4) = 16,384 blocks, and this experiment tampers with 2176 blocks, a tampering rate of 13.28%. Figure 5c shows that a few scattered blocks are not detected by the coarse detection. Nevertheless, those undetected blocks are detected by the refined detection, as shown in Figure 5d.

4.2. PSNR Comparisons with Prior Works

In this section, we compare the PSNR of [25,26,27] and the proposed method, as shown in Table 3. In this experiment, a 6-bit authentication code is embedded by all methods, and T = 6 is used in the proposed method. As shown in the table, the PSNR of the proposed method is the highest among the compared methods. The average PSNR of our method is 42.36 dB, which is 42.36 − 39.71 = 2.65, 42.36 − 40.97 = 1.39, and 42.36 − 40.35 = 2.01 dB higher than those of [25,26,27], respectively. Moreover, if an image contains a larger ratio of smooth blocks, the PSNR improvement obtained by our method is larger. For example, compared with [27], the improvement for Baboon is 41.04 − 40.69 = 0.35 dB, while that for Splash is 43.33 − 40.13 = 3.20 dB, reflecting the effectiveness of the proposed method.
An additional 200 images of size 512 × 512 from the BOWS-2 image database [35] are used to explore how image smoothness influences the compared methods. To better evaluate the performance, the 200 images are sorted in ascending order of their number of smooth blocks, as shown in Figure 6. The figure shows that when the number of smooth blocks is close to 0, the PSNR improvement of our method over [25,26,27] is not significant. However, as the number of smooth blocks increases, the improvement becomes more obvious. From this analysis, the proposed method is more suitable for smooth images than for complex ones.

4.3. Detectability Comparisons with Prior Works

This section compares the detectability of [25,26,27] and the proposed method. In this experiment, the marked compressed codes of Sailboat are tampered with by adding a bird. Figure 7a,b show the tampered image and the tampered region, while Figure 7c–f present the detection results of [25,26,27] and the proposed method. The results show that all methods achieve satisfactory detection results.
Different metrics are used to measure the detection results of Figure 7, as shown in Table 4. The number of tampered pixels (NTB) in Figure 7a is 19,712, and the tampering rate is 19,712/(512 × 512) = 7.52%. True positive (TP) represents the number of tampered pixels detected as tampered, while false negative (FN) is the number of tampered pixels incorrectly detected as untampered. The refined detection rate is calculated by TP/NTB. In all methods, the length of the authentication code embedded in each block is 6; therefore, the collision probability is 1/2^6 = 1.56%, i.e., the expected coarse detection rate is 100% − 1.56% = 98.44%. The table shows that the refined detection rates of [25,26,27] and the proposed method all exceed 99%, which is better than the coarse detection rate and consistent with the theoretical values.
A good detection method should be able to detect any kind of tampering. The following experiments apply some special tampering to further compare the detectability of [25,26,27] and the proposed method. The special tampering modifies the marked codes of Peppers by adding bananas, an orange, and a watermelon; Figure 8a,b present the tampered image and regions. Let $\{\hat{a}_i, \hat{b}_i, \hat{B}_i\}_{i=0}^{N-1}$ be the marked codes of Peppers. Suppose $\{a_i^B, b_i^B, B_i^B\}_{i=0}^{N_B-1}$, $\{a_i^O, b_i^O, B_i^O\}_{i=0}^{N_O-1}$, and $\{a_i^W, b_i^W, B_i^W\}_{i=0}^{N_W-1}$ are the codes of the bananas, orange, and watermelon, where $N_B$, $N_O$, and $N_W$ represent the numbers of blocks of these images, respectively. For the banana tampering, the codes $\{a_i^B, b_i^B, B_i^B\}_{i=0}^{N_B-1}$ are used to replace $\{\hat{a}_i, \hat{b}_i, \hat{B}_i\}_{i=0}^{N_B-1}$. Then, from $\{\hat{a}_i, \hat{b}_i, \hat{B}_i\}_{i=0}^{N-1}$, the codes $\{\hat{a}_i^B, \hat{b}_i^B, \hat{B}_i^B\}_{i=0}^{N_B-1}$ that are closest to $\{a_i^B, b_i^B, B_i^B\}_{i=0}^{N_B-1}$ are found and used to replace $\{a_i^B, b_i^B, B_i^B\}_{i=0}^{N_B-1}$. For the orange tampering, the codes $\{a_i^O, b_i^O, B_i^O\}_{i=0}^{N_O-1}$ are used to replace $\{\hat{a}_i, \hat{b}_i, \hat{B}_i\}_{i=0}^{N_O-1}$. Then, the pixel pair $(\dot{a}_i^O, \dot{b}_i^O)$ that satisfies $(14 \times \dot{a}_i^O + \dot{b}_i^O) \bmod 64 = (14 \times \hat{a}_i + \hat{b}_i) \bmod 64$ and has the minimum distance to $(a_i^O, b_i^O)$ is found. Finally, $(\dot{a}_i^O, \dot{b}_i^O)$ is used to replace $(a_i^O, b_i^O)$ for $0 \le i \le N_O - 1$. To achieve the watermelon tampering, the 5 MSBs of $\{a_i^W, b_i^W\}_{i=0}^{N_W-1}$ are used to replace those of $\{\hat{a}_i, \hat{b}_i\}_{i=0}^{N_W-1}$.
Figure 9 shows the refined detection results of [25,26,27] and the proposed method. Figure 9a–c show that [25,26,27] fail to detect the tampering of the bananas, orange, and watermelon, respectively. This is because the authentication codes in [25,26,27] are generated independently of some of the protected contents, e.g., the position information and the MSBs of the quantized values. In contrast, the proposed method detects all of this tampering, as shown in Figure 9d, demonstrating its superiority over the other works.
Table 5 summarizes the components used for generating authentication codes, the embedding techniques, and the detectability of methods [25,26,27] and the proposed method. The performance of the recovery-domain methods [20,21] is also compared. "Detection of bananas", "Detection of orange", and "Detection of watermelon" refer to the collage tampering with the bananas, orange, and watermelon, respectively. The table shows that methods [20,25,26,27] cannot detect all of these types of tampering, because the components used for generating their authentication codes do not cover all of the to-be-protected contents. In contrast, method [21] and the proposed method use the contents that need to be protected to generate the authentication codes, and are thus able to detect all of this tampering.

5. Conclusions

In this paper, we proposed an efficient authentication method with high image quality for AMBTC images. Blocks are classified into smooth and complex ones according to their smoothness. To enhance security, the to-be-protected contents, including the position information, bitmaps, and quantized values, are employed to generate the authentication codes. Moreover, a key is used to construct the reference table of APPM to further protect the authentication codes. According to the smoothness of blocks, the authentication codes are embedded into the bitmaps using matrix encoding for smooth blocks and into the quantized values using APPM for complex blocks. Experimental results show that, compared with prior works, the proposed method achieves better detection results and higher image quality.

Author Contributions

X.Z., J.C. and W.H. contributed to the conceptualization, methodology, and writing of this paper. Z.-F.L. and G.Y. conceived the simulation setup, formal analysis and conducted the investigation. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Jasra, B.; Moon, A.H. Color image encryption and authentication using dynamic DNA encoding and hyper chaotic system. Expert Syst. Appl. 2022, 206, 117861.
  2. Hossain, M.S.; Islam, M.T.; Akhtar, Z. Incorporating deep learning into capacitive images for smartphone user authentication. J. Inf. Secur. Appl. 2022, 69, 103290.
  3. Qin, C.; Ji, P.; Zhang, X.; Dong, J.; Wang, J. Fragile image watermarking with pixel-wise recovery based on overlapping embedding strategy. Signal Process. 2017, 138, 280–293.
  4. You, C.; Zheng, H.; Guo, Z.; Wang, T.; Wu, X. Tampering detection and localization base on sample guidance and individual camera device convolutional neural network features. Expert Syst. 2022, 40, e13102.
  5. Hussan, M.; Parah, S.A.; Jan, A.; Qureshi, G.J. Hash-based image watermarking technique for tamper detection and localization. Health Technol. 2022, 12, 385–400.
  6. Zhao, D.; Tian, X. A Multiscale Fusion Lightweight Image-Splicing Tamper-Detection Model. Electronics 2022, 11, 2621.
  7. Hussan, M.; Parah, S.A.; Jan, A.; Qureshi, G.J. Self-embedding framework for tamper detection and restoration of color images. Multimed. Tools Appl. 2022, 81, 18563–18594.
  8. Zhou, X.; Hong, W.; Weng, S.; Chen, T.S.; Chen, J. Reversible and recoverable authentication method for demosaiced images using adaptive coding technique. J. Inf. Secur. Appl. 2020, 55, 102629.
  9. Wang, Q.; Xiong, D.; Alfalou, A.; Brosseau, C. Optical image authentication scheme using dual polarization decoding configuration. Opt. Lasers Eng. 2019, 112, 151–161.
  10. Molina, J.; Ponomaryov, V.; Reyes, R.; Sadovnychiy, S.; Cruz, C. Watermarking framework for authentication and self-recovery of tampered colour images. IEEE Lat. Am. Trans. 2020, 18, 631–638.
  11. Wu, X.; Yang, C. Invertible secret image sharing with steganography and authentication for AMBTC compressed images. Signal Process. Image 2019, 78, 437–447.
  12. Zhang, X.; Wang, S.; Qian, Z.; Feng, G. Reversible fragile watermarking for locating tampered blocks in JPEG images. Signal Process. 2010, 90, 3026–3036.
  13. Hong, W.; Wu, J.; Lou, D.C.; Zhou, X.; Chen, J. An AMBTC authentication scheme with recoverability using matrix encoding and side match. IEEE Access 2021, 9, 133746–133761.
  14. Zhang, T.; Weng, S.; Wu, Z.; Lin, J.; Hong, W. Adaptive encoding based lossless data hiding method for VQ compressed images using tabu search. Inform. Sci. 2022, 602, 128–142.
  15. Pan, Z.; Wang, L. Novel reversible data hiding scheme for two-stage VQ compressed images based on search-order coding. J. Vis. Commun. Image R. 2018, 50, 186–198.
  16. Li, Y.; Chang, C.C.; Mingxing, H. High capacity reversible data hiding for VQ-compressed images based on difference transformation and mapping technique. IEEE Access 2020, 8, 32226–32245.
  17. Battiato, S.; Giudice, O.; Guarnera, F.; Puglisi, G. CNN-based first quantization estimation of double compressed JPEG images. J. Vis. Commun. Image R. 2022, 89, 103635.
  18. Yao, H.; Mao, F.; Qin, C.; Tang, Z. Dual-JPEG-image reversible data hiding. Inform. Sci. 2021, 563, 130–149.
  19. Cogranne, R.; Giboulot, Q.; Bas, P. Efficient steganography in JPEG images by minimizing performance of optimal detector. IEEE Trans. Inf. Foren. Sec. 2022, 17, 1328–1343.
  20. Chen, T.S.; Zhou, X.; Chen, R.; Hong, W.; Chen, K. A high fidelity authentication scheme for AMBTC compressed image using reference table encoding. Mathematics 2021, 9, 2610.
  21. Lin, C.C.; Liu, X.; Zhou, J.; Tang, C.Y. An image authentication and recovery scheme based on turtle Shell algorithm and AMBTC-compression. Multimed. Tools Appl. 2022, 81, 39431–39452.
  22. Hu, Y.C.; Lo, C.C.; Chen, W.L.; Wen, C.H. Joint image coding and image authentication based on absolute moment block truncation coding. J. Electron. Imaging 2013, 22, 013012.
  23. Li, W.; Lin, C.C.; Pan, J.S. Novel image authentication scheme with fine image quality for BTC-based compressed images. Multimed. Tools Appl. 2016, 75, 4771–4793.
  24. Chen, T.H.; Chang, T.C. On the security of a BTC-based-compression image authentication scheme. Multimed. Tools Appl. 2018, 77, 12979–12989.
  25. Hong, W.; Zhou, X.Y.; Lou, D.C.; Huang, X.Q.; Peng, C. Detectability improved tamper detection scheme for absolute moment block truncation coding compressed images. Symmetry 2018, 10, 318.
  26. Hong, W.; Chen, M.J.; Chen, T.S.; Huang, C.C. An efficient authentication method for AMBTC compressed images using adaptive pixel pair matching. Multimed. Tools Appl. 2018, 77, 4677–4695.
  27. Su, G.D.; Chang, C.C.; Lin, C.C. High-precision authentication scheme based on matrix encoding for AMBTC-compressed images. Symmetry 2019, 11, 996.
  28. Lema, M.; Mitchell, O. Absolute moment block truncation coding and its application to color image. IEEE Trans. Commun. 1984, 32, 1148–1157.
  29. Hong, W.; Chen, T.S. A novel data embedding method using adaptive pixel pair matching. IEEE Trans. Inf. Foren. Sec. 2012, 7, 176–184.
  30. Liu, S.; Fu, Z.; Yu, B. Rich QR codes with three-layer information using Hamming code. IEEE Access 2019, 7, 78640–78651.
  31. Hamming, R.W. Error detecting and error correcting codes. Bell Labs Tech. J. 1950, 29, 147–160.
  32. Menezes, A.J.; Van Oorschot, P.C.; Vanstone, S.A. Handbook of Applied Cryptography; CRC Press: Boca Raton, FL, USA, 1996.
  33. The USC-SIPI Image Database. Available online: http://sipi.usc.edu/database/ (accessed on 10 January 2023).
  34. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612.
  35. BOWS-2 Image Database. Available online: http://bows2.ec-lille.fr/ (accessed on 10 January 2023).
Figure 1. Examples of the APPM embedding method. (a) The schematic diagram of APPM; (b) An example of R 64.
Figure 2. The embedding framework of the proposed method.
Figure 3. Eight test images. (a) Lena; (b) Jet; (c) Baboon; (d) Tiffany; (e) Sailboat; (f) Splash; (g) Peppers; (h) House.
Figure 4. The plot of threshold T versus PSNR for various test images.
Figure 5. The tampered image and the detection results. (a) The tampered image; (b) The tampered region; (c) The coarse detection result; (d) The refined detection result.
Figure 6. Comparisons between PSNR and the number of smooth blocks for 200 images.
Figure 7. The tampered image and the detection results of [25,26,27] and the proposed method. (a) The tampered image; (b) The tampered region; (c) Detection result of [25]; (d) Detection result of [26]; (e) Detection result of [27]; (f) Detection result of the proposed method.
Figure 8. The tampering of Peppers image. (a) The tampered image; (b) The tampered regions.
Figure 9. Detectability comparisons between [25,26,27] and the proposed method. (a) Detection result of [25]; (b) Detection result of [26]; (c) Detection result of [27]; (d) Detection result of the proposed method.
Table 1. PSNR and SSIM comparisons for various values of T.
T | Metric | Lena | Jet | Baboon | Tiffany | Sailboat | Splash | Peppers | House
T = 0 | PSNR | 41.02 | 41.02 | 41.00 | 41.01 | 41.05 | 41.02 | 41.02 | 41.06
T = 0 | SSIM | 0.9564 | 0.9505 | 0.9867 | 0.9612 | 0.9723 | 0.9501 | 0.9613 | 0.9654
T = 1 | PSNR | 41.08 | 41.14 | 41.01 | 41.09 | 41.07 | 41.11 | 41.06 | 41.12
T = 1 | SSIM | 0.9574 | 0.9523 | 0.9867 | 0.9622 | 0.9727 | 0.9514 | 0.9618 | 0.9664
T = 2 | PSNR | 41.31 | 41.80 | 41.01 | 41.45 | 41.21 | 41.63 | 41.18 | 41.84
T = 2 | SSIM | 0.9613 | 0.9638 | 0.9867 | 0.9666 | 0.9746 | 0.9580 | 0.9637 | 0.9788
T = 3 | PSNR | 41.62 | 42.39 | 41.02 | 41.80 | 41.34 | 42.04 | 41.34 | 42.02
T = 3 | SSIM | 0.9660 | 0.9727 | 0.9868 | 0.9709 | 0.9764 | 0.9626 | 0.9657 | 0.9810
T = 4 | PSNR | 42.03 | 42.94 | 41.03 | 42.32 | 41.49 | 42.61 | 41.57 | 42.22
T = 4 | SSIM | 0.9710 | 0.9784 | 0.9870 | 0.9756 | 0.9780 | 0.9679 | 0.9683 | 0.9828
T = 5 | PSNR | 42.31 | 43.19 | 41.04 | 42.64 | 41.59 | 43.06 | 41.79 | 42.39
T = 5 | SSIM | 0.9738 | 0.9805 | 0.9870 | 0.9780 | 0.9792 | 0.9714 | 0.9707 | 0.9844
T = 6 | PSNR | 42.48 | 43.32 | 41.05 | 42.78 | 41.64 | 43.31 | 41.93 | 42.48
T = 6 | SSIM | 0.9748 | 0.9808 | 0.9871 | 0.9786 | 0.9797 | 0.9727 | 0.9716 | 0.9849
T = 7 | PSNR | 42.49 | 43.31 | 41.04 | 42.77 | 41.63 | 43.33 | 41.94 | 42.47
T = 7 | SSIM | 0.9741 | 0.9806 | 0.9870 | 0.9783 | 0.9793 | 0.9722 | 0.9711 | 0.9847
T = 8 | PSNR | 42.39 | 43.26 | 40.99 | 42.70 | 41.53 | 43.24 | 41.83 | 42.43
T = 8 | SSIM | 0.9730 | 0.9798 | 0.9865 | 0.9776 | 0.9783 | 0.9711 | 0.9696 | 0.9843
T = 9 | PSNR | 42.24 | 43.17 | 40.90 | 42.55 | 41.37 | 43.09 | 41.65 | 42.35
T = 9 | SSIM | 0.9719 | 0.9793 | 0.9855 | 0.9766 | 0.9765 | 0.9698 | 0.9675 | 0.9837
T = 10 | PSNR | 42.06 | 43.02 | 40.78 | 42.33 | 41.13 | 42.93 | 41.41 | 42.22
T = 10 | SSIM | 0.9705 | 0.9786 | 0.9843 | 0.9753 | 0.9747 | 0.9688 | 0.9654 | 0.9830
Table 2. The relation between the ratio of smooth blocks and the PSNR of images.
Image | Number of Smooth Blocks | Ratio of Smooth Blocks | PSNR (dB)
Lena | 8695 | 53.07% | 42.49
Jet | 9758 | 59.56% | 43.31
Baboon | 1146 | 6.99% | 41.04
Tiffany | 9518 | 58.09% | 42.78
Sailboat | 3638 | 22.20% | 41.64
Splash | 11,820 | 72.14% | 43.33
Peppers | 5524 | 33.71% | 41.91
House | 6616 | 40.38% | 42.47
Table 3. PSNR comparisons with [25,26,27] (in dB).
Method | Lena | Jet | Baboon | Tiffany | Sailboat | Splash | Peppers | House | Average
[25] | 39.71 | 39.69 | 39.73 | 39.76 | 39.67 | 39.67 | 39.70 | 39.70 | 39.71
[26] | 40.98 | 41.08 | 40.96 | 40.89 | 40.96 | 40.98 | 40.99 | 40.93 | 40.97
[27] | 40.32 | 40.07 | 40.69 | 40.22 | 40.55 | 40.13 | 40.49 | 40.25 | 40.35
Proposed | 42.49 | 43.31 | 41.04 | 42.78 | 41.64 | 43.33 | 41.94 | 42.47 | 42.36
Table 4. Detectability comparisons using different metrics.
Method | [25] | [26] | [27] | Proposed
Number of tampered pixels (NTB) | 19,712 | 19,712 | 19,712 | 19,712
Tampering rate | 7.52% | 7.52% | 7.52% | 7.52%
True positive (TP) | 19,680 | 19,696 | 19,696 | 19,712
False negative (FN) | 32 | 16 | 16 | 0
Refined detection rate | 99.83% | 99.91% | 99.91% | 100%
Table 5. Comparisons between prior works and the proposed method.
Method | Components for Generating Authentication Codes | Embedding Techniques | Detection of Bananas | Detection of Orange | Detection of Watermelon
[20] | Bitmaps and position | APPM | Yes | No | Yes
[21] | Recovery codes and position | Turtle shell | Yes | Yes | Yes
[25] | MSBs of quantized values and bitmaps | LSB | No | Yes | Yes
[26] | Bitmaps and position | APPM | Yes | No | Yes
[27] | Bitmaps and position | Matrix encoding | Yes | Yes | No
Proposed | Quantized values, bitmaps, and position | APPM and matrix encoding | Yes | Yes | Yes

