Article

Comparative Study of Three Steganographic Methods Using a Chaotic System and Their Universal Steganalysis Based on Three Feature Vectors

1
LASTRE Laboratory, Lebanese University, 210 Tripoli, Lebanon
2
Institut d’Electronique et des Télécommunications de Rennes (IETR), UMR CNRS 6164, Université de Nantes—Polytech Nantes, Rue Christian Pauc CS 50609, CEDEX 3, 44306 Nantes, France
3
School of Electronics and Telecommunications, Hanoi University of Science and Technology, 1 Dai Co Viet, Hai Ba Trung, Hanoi, Vietnam
4
INSA de Rennes, CNRS, IETR, CEDEX 7, 35708 Rennes, France
*
Author to whom correspondence should be addressed.
Entropy 2019, 21(8), 748; https://doi.org/10.3390/e21080748
Submission received: 27 June 2019 / Revised: 21 July 2019 / Accepted: 24 July 2019 / Published: 30 July 2019
(This article belongs to the Special Issue Entropy Based Data Hiding)

Abstract

In this paper, we first study the security enhancement of three steganographic methods by using a proposed chaotic system. The first method, namely the Enhanced Edge Adaptive Image Steganography Based on LSB Matching Revisited (EEALSBMR), operates in the spatial domain. The two other methods, the Enhanced Discrete Cosine Transform (EDCT) and the Enhanced Discrete Wavelet Transform (EDWT), operate in the frequency domain. The chaotic system is extremely robust and consists of a strong chaotic generator and a 2-D Cat map. Its main role is to secure the content of a message in case the message is detected. Second, three blind steganalysis methods, based on multi-resolution wavelet decomposition, are used to detect whether an embedded message is hidden in the tested image (stego image) or not (cover image). The steganalysis approach is based on the hypothesis that message-embedding schemes leave statistical evidence or structure in images that can be exploited for detection. The simulation results show that the Support Vector Machine (SVM) classifier and the Fisher Linear Discriminant (FLD) cannot distinguish between cover and stego images if the message size is smaller than 20% for the EEALSBMR steganographic method and smaller than 15% for the EDCT steganographic method. However, SVM and FLD can distinguish between cover and stego images with reasonable accuracy for the EDWT steganographic method, irrespective of the message size.

1. Introduction

Steganography is an increasingly important security domain; it aims to hide a message (secret information) in digital cover media without causing perceptual degradation (in this study, we use images as cover media). It should be noted that many steganographic methods have been proposed in the spatial and frequency domains. In the spatial domain, pixels are directly used to hide secret messages; these techniques are normally easy to implement and have a high capacity. However, they are not generally robust against statistical attacks [1,2]. In the transform domain, coefficients of frequency transforms, such as DCT (Discrete Cosine Transform), FFT (Fast Fourier Transform), and DWT (Discrete Wavelet Transform), are used to hide secret data. Generally, these techniques are complex, but they are more robust against steganalysis (to noise and to image processing).
The main steganographic methods in the spatial domain [3,4,5,6,7,8,9,10,11,12,13,14,15,16,17] are LSB-based (Least Significant Bit). Recently, entropy has also been extensively used to support data-hiding algorithms [18,19,20]. The LSB methods replace the least significant bits of pixels with bits of the secret data. Among these methods, the EALSBMR method [3] is an edge adaptive scheme with respect to the message size and embeds data according to the difference between two consecutive pixels in the cover image. To the best of our knowledge, this method performs best (good PSNR, high embedding capacity, and, above all, adaptivity), but it suffers from low security in terms of message detection. For this reason, we have enhanced its security.
Frequency domain steganography, as a watermarking domain [21,22,23,24,25,26,27,28,29], is widely based on the DCT and DWT transforms. The DCT usually transforms an image representation into a frequency representation by grouping pixels into 8 × 8 pixel blocks and transforming each block, using the DCT transform, into 64 DCT coefficients. A message is then embedded into the DCT coefficients. The Forward Discrete Wavelet Transform is, in general, suitable for identifying areas in the cover image where a secret message can be effectively embedded due to excellent space-frequency localization properties. In particular, these properties allow exploiting the masking effect of a human visual system so that if a DWT coefficient is modified, it modifies only the region that corresponds to that coefficient. The Haar wavelet is the simplest possible wavelet that can achieve the DWT.
However, the aforementioned steganographic methods are not secure in terms of message detection. To protect the content of messages, chaos can be used. Indeed, chaotic sequences play an important role in information hiding and in security domains, such as cryptography, steganography, and watermarking, because of properties such as sensitivity to the initial conditions and parameters of the system, ergodicity, uniformity, and pseudo-randomness. Steganography generally leaves traces that can be detected in stego images. This can allow an adversary, using steganalysis techniques, to uncover a hidden secret message. There are two types of opponents: passive and active. A passive adversary only examines the communication to detect whether it contains hidden messages; in this case, the content of the communication is not modified by the adversary. An active adversary can intentionally cause disruption, distortion, or destruction of the communication, even in the absence of evidence of secret communication. The main steganographic methods have been designed for the case of a passive adversary. In general, there are two kinds of steganalysis: specific and universal. Specific steganalysis is designed to attack a particular steganography algorithm. It can generally produce more accurate results, but it fails to produce satisfactory results if the secret messages are inserted with a modified algorithm. Universal steganalysis, on the other hand, can be regarded as a generic technique to detect various types of steganography. Moreover, it can be used to detect new steganographic techniques for which specific steganalysis does not yet exist. In other words, universal steganalysis is an irreplaceable tool for detection when the embedding algorithm is unknown or secret.
In this paper, we first integrate an efficient chaotic system into the three steganographic methods mentioned above to make them more secure. The chaotic system pseudo-chaotically chooses the pixel positions in the cover image where the bits of the secret message will be embedded. Thus, the inserted bits of the secret message become secure against message-recovery attacks because their positions are unknown.
Second, we study and apply three universal steganalysis methods to the aforementioned chaos-based steganographic methods. The first steganalysis method, developed by Farid [30], uses higher-order statistics of high-frequency wavelet sub-bands and their prediction errors to form the feature vectors. In the second steganalysis method, as formulated by Shi et al. [31], the statistical moments of the characteristic functions of the prediction-error image, the test image, and their wavelet sub-bands are selected as the feature vectors. The third steganalysis method, introduced by Wang et al. [32], uses the features that are extracted from both the empirical probability density function (PDF) moments and the normalized absolute characteristic function (CF). For the three steganalysis algorithms, we applied FLD analysis and the SVM method with the RBF kernel as classifiers between cover images and stego images.
The paper has been organized as follows: In Section 2, we describe the proposed chaotic system. In Section 3, we present the three enhanced steganographic algorithms. In Section 4, we illustrate the experimental results and analyze the enhanced algorithms. In Section 5, we develop, in detail, the steganalysis techniques for the previous algorithms. In Section 6, we report the results of the steganalysis, and in the last section, we conclude our work.

2. Description of the Proposed Chaotic System

This system is made of a perturbed chaotic generator and a 2-D Cat map. The chaotic generator supplies the dynamic keys K_p for the permutation process and provides the position of each new pseudo-random pixel (see Figure 1). The chaotic system allows inserting a message in both a secretive and a uniform manner [33,34,35,36,37,38,39,40].
The generator of discrete chaotic sequences exhibits orbits with very large lengths. It is based on two connected non-linear digital IIR filters (cells). The discrete PWLCM and SKEW TENT maps (non-linear functions) are used. A linear feedback shift register (m-LFSR) is then used to disturb each cell (Figure 2). The disturbing technique is associated with the cascading technique, which allows controlling and increasing the length of the orbits that are produced. The minimum orbit length of the generator output is calculated using Equation (1):
o_{min} = \operatorname{lcm}\left[ \Delta_1 \times (2^{k_1} - 1),\; \Delta_2 \times (2^{k_2} - 1) \right]
In the above equation, \operatorname{lcm} is the least common multiple, k_1 = 23 and k_2 = 21 are the degrees of the LFSRs' primitive polynomials, and \Delta_1 and \Delta_2 are the lengths of the output sequences s_1 and s_2 of the cells, respectively, without disturbance. The equations of the chaotic generator are formulated as follows:
\begin{cases}
s_i(n) = NLF_i\big(u_i(n-1),\, p_i\big), & i = 1, 2\\
u_i(n-1) = \operatorname{mod}\big(s_i(n-1) \times c_{i,1} + s_i(n-2) \times c_{i,2},\; 2^N\big), & i = 1, 2\\
s(n) = s_1(n) + s_2(n)
\end{cases}
The two previously mentioned functions, PWLCM map and Skew map, are defined according to the following relations:
s_1(n) = NLF_1\big(u_1(n-1), p_1\big) =
\begin{cases}
2^N \times \dfrac{u_1(n-1)}{p_1} & \text{if } 0 \le u_1(n-1) < p_1\\[4pt]
2^N \times \dfrac{2^{N-1} - u_1(n-1)}{2^{N-1} - p_1} & \text{if } p_1 \le u_1(n-1) < 2^{N-1}\\[4pt]
NLF_1\big(2^N - u_1(n-1),\, p_1\big) & \text{otherwise}
\end{cases}
s_2(n) = NLF_2\big(u_2(n-1), p_2\big) =
\begin{cases}
2^N \times \dfrac{u_2(n-1)}{p_2} & \text{if } 0 \le u_2(n-1) < p_2\\[4pt]
2^N \times \dfrac{2^N - u_2(n-1)}{2^N - p_2} + 1 & \text{if } p_2 \le u_2(n-1) < 2^N
\end{cases}
The control parameter p_1 is used for the PWLCM map and ranges from 1 to 2^{N-1} - 1, and p_2 is the control parameter used for the Skew map, ranging from 1 to 2^N - 1. N = 32 is the word length used in the simulations. The size of the secret key K, formed by all the initial conditions and parameters of the chaotic generator, is (6 × 32 + 5 × 32 + 31 + 23 + 21) = 427 bits. It is large enough to resist a brute-force attack.
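As an illustration, the two non-linear maps and the second-order IIR feedback above can be sketched as follows. This is a minimal sketch with illustrative coefficients and initial conditions (not values from the paper); it omits the m-LFSR perturbation and the cascading technique, so it does not reproduce the guaranteed orbit lengths.

```python
N = 32
MOD = 1 << N  # 2^N

def pwlcm(u, p):
    # Discrete PWLCM map on N-bit integers (NLF_1)
    half = MOD >> 1
    if u >= half:                                      # symmetry branch: NLF_1(2^N - u, p)
        u = MOD - u
    if u < p:
        return (u * MOD) // p
    if u < half:
        return ((half - u) * MOD // (half - p)) % MOD  # keep in N-bit range
    return 0                                           # u == 2^(N-1) maps to 0

def skew_tent(u, p):
    # Discrete skew tent map on N-bit integers (NLF_2)
    if u < p:
        return (u * MOD) // p
    return (((MOD - u) * MOD) // (MOD - p) + 1) % MOD  # keep in N-bit range

def generate(n_samples, p1=123456789, p2=987654321,
             seed1=(11, 22), seed2=(33, 44), c=(3, 5)):
    # Each cell is a second-order IIR filter closed over its non-linear map:
    # u_i(n-1) = (s_i(n-1)*c_{i,1} + s_i(n-2)*c_{i,2}) mod 2^N, s_i(n) = NLF_i(u_i, p_i)
    s1, s2 = seed1, seed2
    out = []
    for _ in range(n_samples):
        u1 = (s1[0] * c[0] + s1[1] * c[1]) % MOD
        u2 = (s2[0] * c[0] + s2[1] * c[1]) % MOD
        x1, x2 = pwlcm(u1, p1), skew_tent(u2, p2)
        s1, s2 = (x1, s1[0]), (x2, s2[0])
        out.append(x1 + x2)                            # s(n) = s1(n) + s2(n)
    return out
```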

Description of the Cat Map Used

The permutation process is based on the modified Cat map and is calculated in a very efficient manner using the equation below [37]:
\begin{bmatrix} M_{cn} \\ M_{ln} \end{bmatrix} =
\operatorname{mod}\left( \begin{bmatrix} 1 & u \\ v & 1 + uv \end{bmatrix}
\begin{bmatrix} M_l \\ M_c \end{bmatrix} +
\begin{bmatrix} r_l + r_c \\ r_c \end{bmatrix},
\begin{bmatrix} M \\ M \end{bmatrix} \right) +
\begin{bmatrix} 1 \\ 1 \end{bmatrix}
In the above equation, (M_l, M_c) and (M_{ln}, M_{cn}) are the original and permuted square matrices of size M × M, from which we calculate the Ind matrix as follows:
M_l = \begin{bmatrix} 1 & 1 & \cdots & 1 \\ 2 & 2 & \cdots & 2 \\ \vdots & \vdots & & \vdots \\ M & M & \cdots & M \end{bmatrix};\qquad
M_c = \begin{bmatrix} 1 & 2 & \cdots & M \\ 1 & 2 & \cdots & M \\ \vdots & \vdots & & \vdots \\ 1 & 2 & \cdots & M \end{bmatrix}
Ind = (M_{ln} - 1) + (M_{cn} - 1) \times M + 1
The dynamic key K p is structured as follows:
K_p = [k_{p_1}, k_{p_2}, \ldots, k_{p_r}]
k_{p_i} = \{u_i, v_i, r_{l_i}, r_{c_i}\}, \quad i = 1, 2, \ldots, r
In the above equations, 0 \le u_i, v_i, r_{l_i}, r_{c_i} \le M - 1 are the parameters of the Cat map, and r is the number of rounds.
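One round of this permutation can be sketched as follows. This is a minimal sketch assuming, from the Cat map equation above, that the first matrix row produces M_cn and the second produces M_ln; the parameter values in the test are illustrative, not from the paper.

```python
import numpy as np

def cat_map_indices(M, u, v, rl, rc):
    # 1-based row (Ml) and column (Mc) index matrices, as defined above
    Ml, Mc = np.meshgrid(np.arange(1, M + 1), np.arange(1, M + 1), indexing="ij")
    # One round of the modified 2-D Cat map:
    # [Mcn; Mln] = mod([1 u; v 1+uv][Ml; Mc] + [rl+rc; rc], M) + 1
    Mcn = (Ml + u * Mc + rl + rc) % M + 1
    Mln = (v * Ml + (1 + u * v) * Mc + rc) % M + 1
    # Flatten the permuted coordinates into the Ind matrix
    return (Mln - 1) + (Mcn - 1) * M + 1
```

Because the map matrix [1, u; v, 1+uv] has determinant 1, the affine map is a bijection on the M × M grid for any parameter choice, so Ind is always a permutation of 1, ..., M².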

3. Enhanced Steganographic Algorithms

In this section, we describe three enhanced steganographic algorithms by using an efficient chaotic system.

3.1. Enhanced EALSBMR (EEALSBMR)

Below, we present the insertion procedure and the extraction procedure of the proposed enhancement of the EALSBMR method (EEALSBMR) [41].

3.1.1. Insertion Procedure

The flow diagram of the embedding scheme can be found in Figure 3.
The detailed embedding steps for this algorithm have been explained as follows:
Step 1:
Capacity estimation
  • To estimate the insertion capacity, we arrange the cover image into a 1-D vector V, and we divide its content into non-overlapping embedding units (blocks) of two consecutive pixels (p_i, p_{i+1}). We then calculate the difference between the pixels of each block and accumulate the counts in a difference vector VD of 31 elements, t ∈ {1, 2, ..., 31}, in which element t contains |EU_t|, the number of blocks in the set EU_t of pixel pairs whose absolute difference is greater than or equal to t:
    EU_t = \left\{ (p_i, p_{i+1}) \;\middle|\; |p_i - p_{i+1}| \ge t,\; \forall (p_i, p_{i+1}) \in V \right\}
  • For a given secret message M of size |M| bits, the threshold T used in the embedding process is determined by the following expression and pseudo-code (Algorithm 1):
    T = \operatorname{arg\,max}_t \left\{ 2 \times |EU_t| \ge |M| \right\}
    Algorithm 1 Pseudo-code determining the value of the threshold T
      procedure
          number_pixels = 0
          for t = 31:-1:1 do
              number_pixels = number_pixels + VD(t)
              if (2 * number_pixels >= |M|) then
                  T = t
                  break
              end if
          end for
      end procedure
Step 2:
Embedding process
  • The embedding process is achieved as follows: we divide the cover image into two sub-images; one includes the odd columns, and the other includes the even columns.
  • Following this, the chaotic system chooses a pixel position (Ind) from the odd sub-image; the second pixel of the corresponding block must have the same Ind in the even sub-image. If a pixel pair (p_i, p_{i+1}) satisfies Equation (8), then a 2-bit message can be hidden (one bit per pixel); otherwise, the chaotic system chooses another Ind.
    |p_i - p_{i+1}| \ge T, \quad \forall (p_i, p_{i+1}) \in V
  • For each unit p i , p i + 1 , we perform data-hiding based on the following four cases [42]:
    Case 1:
    if LSB(p_i) = m_i and f(p_i, p_{i+1}) = m_{i+1}: \; (p_i', p_{i+1}') = (p_i, p_{i+1})
    Case 2:
    if LSB(p_i) = m_i and f(p_i, p_{i+1}) \ne m_{i+1}: \; (p_i', p_{i+1}') = (p_i, p_{i+1} + r)
    Case 3:
    if LSB(p_i) \ne m_i and f(p_i - 1, p_{i+1}) = m_{i+1}: \; (p_i', p_{i+1}') = (p_i - 1, p_{i+1})
    Case 4:
    if LSB(p_i) \ne m_i and f(p_i - 1, p_{i+1}) \ne m_{i+1}: \; (p_i', p_{i+1}') = (p_i + 1, p_{i+1})
    In the above equations, m_i and m_{i+1} are the i-th and (i+1)-th secret bits of the message to be embedded; r is a random value belonging to \{-1, +1\}, and (p_i', p_{i+1}') denotes the pixel pair after data-hiding. The function f is defined as follows:
    f(a, b) = LSB\big(\lfloor a/2 \rfloor + b\big)
  • Readjustment if necessary: after hiding, (p_i', p_{i+1}') may be out of the range [0, 255], or the new difference |p_i' - p_{i+1}'| may be less than the threshold T. In these cases, we need to readjust p_i' and p_{i+1}'; the readjusted values p_i'' and p_{i+1}'' are calculated as follows [3]:
    (p_i'', p_{i+1}'') = \operatorname{arg\,min}_{(e_1, e_2)} \left\{ |e_1 - p_i'| + |e_2 - p_{i+1}'| \right\}
    with:
    e_1 = p_i' + 4k_1, \quad e_2 = p_{i+1}' + 2k_2, \quad k_1, k_2 \in \mathbb{Z}
    where k_1, k_2 are two arbitrary integers from \mathbb{Z}; when:
    0 \le e_1, e_2 \le 255 \quad \text{and} \quad |e_1 - e_2| \ge T
    then:
    p_i'' = e_1, \quad p_{i+1}'' = e_2
    The sequence follows as such for each new block position.
  • Finally, we embed the parameter T of the stego image into the first five pixels or the last five pixels, for example.
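The four embedding cases above can be sketched as follows. This is a minimal sketch that handles only the per-pair bit hiding and extraction; the chaotic position selection and the readjustment step are left aside.

```python
def f(a, b):
    # f(a, b) = LSB(floor(a/2) + b), as defined above
    return (a // 2 + b) & 1

def lsbmr_embed(p1, p2, m1, m2, r):
    # Embed the two secret bits (m1, m2) into the pixel pair (p1, p2);
    # r is a random value in {-1, +1}.
    if (p1 & 1) == m1 and f(p1, p2) == m2:   # Case 1: nothing changes
        return p1, p2
    if (p1 & 1) == m1:                       # Case 2: move p2 by r
        return p1, p2 + r
    if f(p1 - 1, p2) == m2:                  # Case 3: decrement p1
        return p1 - 1, p2
    return p1 + 1, p2                        # Case 4: increment p1

def lsbmr_extract(p1, p2):
    # Recover the two bits: m1 = LSB(p1), m2 = f(p1, p2)
    return p1 & 1, f(p1, p2)
```

For the worked example given later in this section, (p1, p2) = (141, 129) with (m1, m2) = (1, 0) and r = 1 falls into Case 2 and yields (141, 130), from which both bits extract correctly.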

3.1.2. Extraction Procedure

  • Extract the parameter T from the stego image.
  • Divide the stego image into two sub-images; one includes the odd columns, and the other includes the even columns.
  • Generate the pseudo-chaotic positions (using the same secret key K), as done in the insertion procedure, to obtain the same order of pixel-unit positions in the odd sub-image. The second pixel of each block has the same Ind in the even sub-image.
  • Verify that |p_i^s - p_{i+1}^s| \ge T, and then extract the two secret bits m_i, m_{i+1} of M as follows:
    m_i = LSB(p_i^s); \quad m_{i+1} = f(p_i^s, p_{i+1}^s)
    with: p_i^s = p_i' \text{ or } p_i''
    Otherwise, the chaotic system chooses another pseudo-chaotic position. The sequence follows as such for each unit position until all messages have been extracted.
  • Example of insertion:
    The cover image is the "Peppers" image shown in Figure 4.
    The embedded message is a 40 × 40 pixel image, shown in Figure 5.
    The corresponding message bit sequence is as follows:
    M = 10001000100011001000110001100111001001111010010110
    11101011000110101011101000000110100010110010
    The length of the binary message is 13,120 bits.
    Capacity estimation produces the threshold T = 12.
    Suppose that the pseudo-chaotic positions of a block used to embed the two message bits m_1 = 1 and m_2 = 0 are (354, 375) and (354, 376), which correspond to the gray values 141 and 129 (see Figure 6).
    Hiding the message bits:
    LSB(141) = 1 = m_1
    f(p_1, p_2) = LSB\big(\lfloor p_1/2 \rfloor + p_2\big) = LSB(70 + 129) = 1 \ne m_2
    We are in Case 2:
    LSB(p_i) = m_i; \quad f(p_i, p_{i+1}) \ne m_{i+1}
    Therefore, the new pixel values are as follows:
    (p_1', p_2') = (p_1, p_2 + r) = (141, 130), \text{ with } r = 1
    The difference between the new pixel values is:
    d = |p_1' - p_2'| = |141 - 130| = 11 < T
    Then we need to adjust the new pixel values:
    We test the values -50 < k_1 < 50 and -50 < k_2 < 50 until we obtain the smallest difference between the values p_1' and p_2' and the corresponding values e_1 and e_2 obtained using Equations (12) and (13). In our example, we find k_1 = 0 and k_2 = -1, and then: p_1'' = 141, p_2'' = 128.
  • Extraction of the bits message in the previous insertion example:
    The extraction is performed using the following equation:
    m_1 = LSB(p_1'') = LSB(141) = 1
    m_2 = f(p_1'', p_2'') = LSB\big(\lfloor p_1''/2 \rfloor + p_2''\big) = LSB(70 + 128) = LSB(198) = 0
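The readjustment search used in the example above (Equations (12) and (13)) can be sketched as a brute-force scan; the ±50 bounds follow the example, and the tie-breaking order is an assumption.

```python
def readjust(p1, p2, T):
    # Find e1 = p1 + 4*k1 and e2 = p2 + 2*k2 minimising |e1 - p1| + |e2 - p2|,
    # subject to 0 <= e1, e2 <= 255 and |e1 - e2| >= T.
    best = None
    for k1 in range(-50, 51):
        for k2 in range(-50, 51):
            e1, e2 = p1 + 4 * k1, p2 + 2 * k2
            if 0 <= e1 <= 255 and 0 <= e2 <= 255 and abs(e1 - e2) >= T:
                cost = abs(e1 - p1) + abs(e2 - p2)
                if best is None or cost < best[0]:
                    best = (cost, e1, e2)
    return best[1], best[2]
```

For the example above, `readjust(141, 130, 12)` returns (141, 128), i.e., k_1 = 0 and k_2 = -1. Note that steps of 4 on p_1 and 2 on p_2 preserve both LSB(p_1) and f(p_1, p_2), so the embedded bits survive the readjustment.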

3.2. Enhanced DCT Steganographic Method (EDCT)

The DCT transforms a signal or image from the spatial domain into the frequency domain [43,44]. A DCT expresses a sequence of finitely many data points in terms of a sum of cosine functions, oscillating at different frequencies. The 2D DCT is calculated as follows:
DCT(i, j) = \alpha_i\, \alpha_j \sum_{m=0}^{M-1} \sum_{n=0}^{N-1} C_{mn} \cos\!\left[\frac{\pi (2m+1) i}{2M}\right] \cos\!\left[\frac{\pi (2n+1) j}{2N}\right]
where:
\alpha_i = \begin{cases} \sqrt{1/M} & i = 0\\ \sqrt{2/M} & 1 \le i \le M - 1 \end{cases} \qquad
\alpha_j = \begin{cases} \sqrt{1/N} & j = 0\\ \sqrt{2/N} & 1 \le j \le N - 1 \end{cases}
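The 2-D DCT can be sketched directly from the definition above; a minimal numpy sketch. With these α coefficients the transform is orthonormal, so the block energy is preserved.

```python
import numpy as np

def dct2(X):
    # Orthonormal 2-D DCT-II of an M x N block, computed from the definition
    M, N = X.shape
    i, m = np.meshgrid(np.arange(M), np.arange(M), indexing="ij")
    Cm = np.cos(np.pi * (2 * m + 1) * i / (2 * M))   # row cosine basis
    j, n = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    Cn = np.cos(np.pi * (2 * n + 1) * j / (2 * N))   # column cosine basis
    a = np.where(np.arange(M) == 0, np.sqrt(1 / M), np.sqrt(2 / M))
    b = np.where(np.arange(N) == 0, np.sqrt(1 / N), np.sqrt(2 / N))
    return np.outer(a, b) * (Cm @ X @ Cn.T)
```

For an 8 × 8 block of ones, only the DC coefficient DCT(0, 0) is non-zero (it equals 8 under this normalization).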
The block diagram of the proposed enhanced steganographic-based DCT transform has been shown in Figure 7.

3.2.1. Insertion Procedure

The embedding process consists of the following steps:
  • Read the cover image and the secret message.
  • Convert the secret message into a 1-D binary vector.
  • Divide the cover image into 8 × 8 blocks. Then apply the 2D DCT transformation to each block (from left to right, top to bottom).
  • Use the same chaotic system to generate a pseudo-chaotic I n d .
  • Replace the LSB of each pseudo-chaotically located DCT coefficient with one bit of the secret message.
  • Apply the 2D Inverse DCT transform to produce the stego image.

3.2.2. Extraction Procedure

The extraction procedure consists of the following steps:
  • Read the stego image.
  • Divide the stego image into 8 × 8 blocks and then apply the 2D DCT to each block.
  • Use the same chaotic system to generate pseudo-chaotic I n d .
  • Extract the LSB of each pseudo-located coefficient.
  • Construct the secret image.

3.3. Enhanced DWT Steganographic Method (EDWT)

Embedding the secret image in the low-frequency sub-band (A) is generally more robust than embedding in the other sub-bands, but it significantly decreases the visual quality of the image since, normally, most of the image energy is stored in this sub-band. In contrast, the human eye is generally not sensitive to changes in the edges and textures of the image carried by the high-frequency sub-band (D); this allows secret information to be embedded there without being perceived. However, the sub-band (D) is not robust against active attacks (filtering, compression, etc.). The compromise adopted by many DWT-based algorithms to achieve acceptable imperceptibility and robustness is to embed the secret image in the middle-frequency sub-bands (H) or (V). In the block diagram of the proposed steganographic EDWT method shown in Figure 8, we embed the secret image in the sub-band (H) of the cover image (the size of the secret message must be, at most, equal to the size of the sub-band (H) of the cover image).

3.3.1. Insertion Procedure

The embedding process consists of the following steps:
  • Read the cover image and the secret image.
  • Transform the cover image into one level of decomposition using Haar Wavelet.
  • Permute the secret image in a pseudo-chaotic manner.
  • Fuse the DWT coefficients ( H ) of the cover image and the permuted secret image P S I as follows [45]:
    X' = \alpha X + \beta \times PSI, \quad \alpha + \beta = 1,\; \alpha \gg \beta
    In the above equation, X' is the modified DWT coefficient (H), and X is the original DWT coefficient (H). \alpha and \beta are the embedding strength factors; they are chosen such that the resulting stego image has a large PSNR. In our experiments, we tested several values of \beta, and the best value was found to be approximately 0.01.
  • Apply Inverse Discrete Wavelet Transform (IDWT) to produce the stego image in the spatial domain.
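The insertion and extraction steps above can be sketched with an explicit one-level Haar transform. This is a minimal sketch assuming even image dimensions and skipping the chaotic permutation of the secret image; the sub-band naming (H vs. V) follows one common convention and is an assumption.

```python
import numpy as np

def haar2(X):
    # One level of the 2-D Haar DWT: returns the (A, H, V, D) sub-bands.
    s = np.sqrt(2)
    L = (X[:, 0::2] + X[:, 1::2]) / s      # low-pass along columns
    G = (X[:, 0::2] - X[:, 1::2]) / s      # high-pass along columns
    A = (L[0::2, :] + L[1::2, :]) / s      # approximation
    H = (L[0::2, :] - L[1::2, :]) / s      # detail sub-band used for embedding
    V = (G[0::2, :] + G[1::2, :]) / s
    D = (G[0::2, :] - G[1::2, :]) / s
    return A, H, V, D

def ihaar2(A, H, V, D):
    # Inverse of haar2 (exact reconstruction, orthonormal filters)
    s = np.sqrt(2)
    L = np.empty((2 * A.shape[0], A.shape[1]))
    G = np.empty_like(L)
    L[0::2, :], L[1::2, :] = (A + H) / s, (A - H) / s
    G[0::2, :], G[1::2, :] = (V + D) / s, (V - D) / s
    X = np.empty((L.shape[0], 2 * L.shape[1]))
    X[:, 0::2], X[:, 1::2] = (L + G) / s, (L - G) / s
    return X

def embed(cover, secret, alpha=0.99, beta=0.01):
    # Fuse the secret image into sub-band H: X' = alpha*X + beta*PSI
    A, H, V, D = haar2(cover)
    return ihaar2(A, alpha * H + beta * secret, V, D)

def extract(stego, cover, alpha=0.99, beta=0.01):
    # Non-blind inverse fusion: PSI = (X' - alpha*X) / beta
    H_s = haar2(stego)[1]
    H_c = haar2(cover)[1]
    return (H_s - alpha * H_c) / beta
```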

3.3.2. Extraction Procedure

The extraction procedure involves the following steps:
  • Read the stego image.
  • Transform the stego image into one level of decomposition using Haar Wavelet.
  • Apply inverse fusion transform to extract the permuted secret image as follows:
    PSI = (X' - \alpha X) / \beta
    The extraction procedure is not blind, as we need the cover image to extract the permuted secret message.
  • Apply the inverse permutation procedure using the same chaotic system to obtain the secret image.

4. Experimental Results and Analysis

In the experiments, we first create the stego images by using the implemented steganographic methods applied to the standard gray-level cover images "Lena", "Peppers", and "Baboon" of 512 × 512 pixels, using "Boat" as the secret message with different sizes (embedding rates ranging from 5% to 40%). The six criteria used to evaluate the quality of the stego images are as follows: Peak Signal-to-Noise Ratio (PSNR) [46], Image Fidelity (IF), Structural Similarity (SSIM), the entropy (E), the redundancy (R), and the image redundancy (IR). They are given by the following equations:
PSNR = 10 \times \log_{10}\left( \frac{\max_{i,j} p_c^2(i, j)}{\frac{1}{M \times N} \sum_{i=0}^{M-1} \sum_{j=0}^{N-1} \big[ p_c(i, j) - p_s(i, j) \big]^2} \right)
IF = 1 - \frac{\sum_{i=0}^{M-1} \sum_{j=0}^{N-1} \big[ p_c(i, j) - p_s(i, j) \big]^2}{\sum_{i=0}^{M-1} \sum_{j=0}^{N-1} \big[ p_c(i, j) \big]^2}
SSIM = \frac{(2 \mu_c \mu_s + c_1)(2 \sigma_{cs} + c_2)}{(\mu_c^2 + \mu_s^2 + c_1)(\sigma_c^2 + \sigma_s^2 + c_2)}
In the above equations, p_c(i, j) and p_s(i, j) are the pixel values at the i-th row and j-th column of the cover and stego images; M and N are the width and height of the considered cover image.
\mu_c, \mu_s are the means of the cover and stego images; \sigma_c^2, \sigma_s^2 are their variances; \sigma_{cs} is the covariance of the cover and stego images; c_1 = (k_1 L)^2 and c_2 = (k_2 L)^2 are two variables used to stabilize the division when the denominator is weak; L is the dynamic range of the pixel values, and k_1, k_2 are two constants much smaller than 1. We considered k_1 = k_2 = 0.05.
The higher the PSNR, IF, and SSIM, the better the quality of the stego image. PSNR values below 40 dB indicate fairly low quality; a high-quality stego image should therefore be above 40 dB.
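The first two quality metrics can be sketched as follows for 8-bit images (taking the peak value as 255):

```python
import numpy as np

def psnr(cover, stego, peak=255.0):
    # Peak Signal-to-Noise Ratio in dB
    mse = np.mean((cover.astype(float) - stego.astype(float)) ** 2)
    return 10 * np.log10(peak ** 2 / mse)

def image_fidelity(cover, stego):
    # IF = 1 - (energy of the error) / (energy of the cover)
    c, s = cover.astype(float), stego.astype(float)
    return 1 - np.sum((c - s) ** 2) / np.sum(c ** 2)
```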
Additionally, we used three other parameters to estimate the qualities of the stego images. These parameters have been listed as follows:
-
The Entropy E, given by the following relation:
E = - \sum_{i=0}^{2^L - 1} p(P_i)\, \log_2\big(p(P_i)\big)
L is already defined. p ( P i ) is the probability of the pixel value P i .
-
The Redundancy R is usually represented by the following formula:
R = \frac{E_{max} - E}{E}
Here, E_{max} = 8. However, this relationship is problematic because the minimal value of the entropy is not known. For this reason, Tasnime [47] proposed using the following relationship, which is more precise:
IR = \frac{\sum_{i=1}^{2^L} |R_i - R_{opt}|}{R_{opt} \times (2^L - 1) + (S - R_{opt})}
This quantity is called the Image Redundancy (IR), with:
  • S being the size of the image under test;
  • R i being the number of occurrences of each pixel value;
  • R o p t being the optimal number of occurrences that each pixel value should have to get a non-redundant image.
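The entropy and the image redundancy can be sketched as follows for 8-bit gray-level images; a minimal sketch assuming the IR sum runs over all 2^L gray levels and R_opt = S / 2^L:

```python
import numpy as np

def entropy(img, L=8):
    # Shannon entropy of the pixel histogram
    counts = np.bincount(img.ravel(), minlength=2 ** L)
    p = counts[counts > 0] / img.size
    return float(-np.sum(p * np.log2(p)))

def image_redundancy(img, L=8):
    # IR is 0 for a perfectly flat histogram and 1 for a constant image
    counts = np.bincount(img.ravel(), minlength=2 ** L)
    S = img.size
    r_opt = S / 2 ** L
    return float(np.sum(np.abs(counts - r_opt))
                 / (r_opt * (2 ** L - 1) + (S - r_opt)))
```

A uniform histogram gives E = 8 and IR = 0; a constant image gives E = 0 and IR = 1, which is the normalization behind the denominator.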
In the following section, we present and compare the performance of the three implemented steganographic methods.

4.1. Enhanced EALSBMR

The results obtained from the parameters P S N R , I F , and S S I M for the algorithm have been presented in Table 1; their values indicate the high quality of the stego images, even with a high embedding rate of 40%. We observe that the P S N R , I F , and S S I M values decrease, as expected, when the size of the secret message increases.
In Figure 9a–c, we show the “Baboon” cover image and the corresponding stego images for 5% and 40% embedding rates, respectively. The visual quality obtained from the “Baboon” stego images is very high because visually, it is impossible to discriminate between the cover and stego images.
To fix ideas, using the "Lena" image as the cover and approximately identical capacities, we globally compared the PSNR obtained by the EEALSBMR method with that obtained by the following methods: [4,5,6,17]. We observed that only the method proposed by Borislav et al. [17] produces a better PSNR than the EEALSBMR method. However, this method is not adaptive.

4.2. Enhanced DCT Steganographic Method

The results obtained from this method, as presented in Table 2, indicate the high quality of the stego images, even with a high embedding rate. Additionally, even the visual quality obtained is very high, as shown in Figure 10.

4.3. Enhanced DWT Steganographic Method

Table 3 presents the results obtained from the EDWT algorithm, which indicate that the steganographic algorithm exhibits good performance. Furthermore, no visual trace can be found in the resulting stego images, as shown in Figure 11a–c.

4.4. Performance Comparison of the Three Steganographic Methods

Table 1, Table 2 and Table 3 of P S N R , I F , and S S I M of the three methods show that the EEALSBMR and EDCT methods, in comparison with the EDWT method, ensure better quality of the stego images at different embedding rates. There is approximately a 10-dB difference in P S N R s at a 5% embedding rate and a 5 to 8 dB difference in P S N R s at a 40% embedding rate.

4.5. Performance Using Parameters E, R and I R

The results obtained from the parameters E, R, and IR for the three algorithms on the stego images with different embedding rates are presented in Table 4, Table 5 and Table 6. As we can see, these values, given in Table 7, are very close to the values obtained over the original images. This is consistent with the previous results obtained from the parameters PSNR, IF, and SSIM regarding the high quality of the stego images.

5. Universal Steganalysis

A good steganographic method should be imperceptible not only to human vision but also to computer analysis. Steganalysis is the art and science of detecting whether a given image has a message hidden in it [1,48]. The extensive range of natural images and the wide range of data-embedding algorithms make steganalysis a difficult task. In this work, we consider universal steganalysis based on statistical analysis.
Universal (blind) steganalysis attempts to detect hidden information without any knowledge of the steganographic algorithm. The idea is to extract features from cover images and from stego images and then use them as the feature vectors of a supervised classifier (SVM, FLD, neural networks, etc.) that decides whether the image under test is a stego image. This procedure is illustrated in Figure 12. The left side of the flowchart displays the steps of the learning process, while the right side illustrates the steps of the testing process.

5.1. Multi-Resolution Wavelet Decomposition

The DWT uses a sub-band coding algorithm to compute the Wavelet Transform quickly. Furthermore, it is easy to implement and reduces both the computation time and the resources required. The DWT analyzes the signal in different frequency bands with different resolutions by decomposing it into a coarse approximation and detail information. The decomposition into different frequencies is achieved by applying separable low-pass \hat{g}(n) and high-pass \hat{h}(n) filters along the image axes. The DWT computes the approximation coefficients matrix A and the detail coefficients matrices H, V, and D (horizontal, vertical, and diagonal, respectively) of the input matrix X, as illustrated in Figure 13.

5.2. Feature Vector Extraction

As the amount of image data is enormous, it is not feasible to directly use the complete image data for analysis. Therefore, for steganalysis, it is useful to extract a certain amount of useful data features that represent the image instead of the image itself. The addition of a message to a cover image may not affect the visual appearance of the image, but it will affect some statistics. The features required for steganalysis should be able to detect these minor statistical disorders that are created during the data-hiding process.
Three feature-extraction techniques are used in this paper to detect the presence of a secret message; these methods calculate the statistical properties of the images by employing multi-resolution wavelet decomposition.

5.2.1. Method 1: Feature Vectors Extracted from the Empirical Moments of the PDF-Based Multi-Resolution Coefficients and Their Prediction Error

The multi-resolution wavelet decomposition employed here is based on separable quadrature mirror filters (QMFs). This decomposition splits the frequency space into multiple scales and orientations. This is accomplished by applying separable low-pass and high-pass filters along the image axes, generating a vertical, horizontal, diagonal, and low-pass sub-band. The horizontal, vertical, and diagonal sub-bands at scale m = 1, 2, ..., n are denoted as H m , V m and D m .
In our work, the first set of features is extracted from the statistics of the coefficients S_m(x, y) of each sub-band at scales m = 1 to n = 3. These characteristics are the mean \mu, variance \sigma^2, skewness \xi, and kurtosis \kappa. They can be represented as follows:
\mu = \frac{1}{N_x N_y} \sum_{x,y} S_m(x, y) \qquad
\sigma^2 = \frac{1}{N_x N_y} \sum_{x,y} \big(S_m(x, y) - \mu\big)^2
\xi = \frac{1}{N_x N_y\, \sigma^3} \sum_{x,y} \big(S_m(x, y) - \mu\big)^3 \qquad
\kappa = \frac{1}{N_x N_y\, \sigma^4} \sum_{x,y} \big(S_m(x, y) - \mu\big)^4 - 3
From Equation (24), we can build the first feature vector Z_s of N_m × N_{bd} × n = 4 × 3 × 3 = 36 elements, where N_m, N_{bd}, and n are the number of moments, sub-bands, and scales, respectively. The feature vector Z_s is represented as follows:
Z s = [ Z 1 , Z 2 , Z 3 ]
where:
$$Z_1 = [\mu_{H_1}, \mu_{V_1}, \mu_{D_1} \mid \sigma_{H_1}, \sigma_{V_1}, \sigma_{D_1} \mid \xi_{H_1}, \xi_{V_1}, \xi_{D_1} \mid \kappa_{H_1}, \kappa_{V_1}, \kappa_{D_1}]$$
$$Z_2 = [\mu_{H_2}, \mu_{V_2}, \mu_{D_2} \mid \sigma_{H_2}, \sigma_{V_2}, \sigma_{D_2} \mid \xi_{H_2}, \xi_{V_2}, \xi_{D_2} \mid \kappa_{H_2}, \kappa_{V_2}, \kappa_{D_2}]$$
$$Z_3 = [\mu_{H_3}, \mu_{V_3}, \mu_{D_3} \mid \sigma_{H_3}, \sigma_{V_3}, \sigma_{D_3} \mid \xi_{H_3}, \xi_{V_3}, \xi_{D_3} \mid \kappa_{H_3}, \kappa_{V_3}, \kappa_{D_3}]$$
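As a concrete illustration, the construction of $Z_s$ can be sketched as follows. This is a minimal numpy sketch that substitutes a one-level Haar decomposition for the paper's QMF filters (so the coefficient values differ from the original implementation), and it groups the four moments per sub-band rather than per moment type:

```python
import numpy as np

def haar_subbands(img):
    """One-level 2-D Haar decomposition, a simple stand-in for the separable
    QMF filters of the paper. Returns the low-pass sub-band A and the
    horizontal, vertical, and diagonal detail sub-bands."""
    lo = (img[0::2, :] + img[1::2, :]) / 2.0   # low-pass along columns
    hi = (img[0::2, :] - img[1::2, :]) / 2.0   # high-pass along columns
    A = (lo[:, 0::2] + lo[:, 1::2]) / 2.0
    H = (lo[:, 0::2] - lo[:, 1::2]) / 2.0
    V = (hi[:, 0::2] + hi[:, 1::2]) / 2.0
    D = (hi[:, 0::2] - hi[:, 1::2]) / 2.0
    return A, H, V, D

def pdf_moments(S):
    """Mean, variance, skewness, and kurtosis of one sub-band (Equation (24))."""
    mu = S.mean()
    var = ((S - mu) ** 2).mean()
    sigma = np.sqrt(var)
    skew = ((S - mu) ** 3).mean() / sigma ** 3
    kurt = ((S - mu) ** 4).mean() / sigma ** 4 - 3.0
    return [mu, var, skew, kurt]

def feature_vector_Zs(img, n_levels=3):
    """36-element vector Z_s: 4 moments x 3 detail sub-bands x 3 scales."""
    Z, A = [], np.asarray(img, dtype=float)
    for _ in range(n_levels):
        A, H, V, D = haar_subbands(A)   # decompose the previous low-pass band
        for S in (H, V, D):
            Z.extend(pdf_moments(S))
    return np.array(Z)
```

Each call to `haar_subbands` halves the image size, mirroring the three-scale decomposition.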
The second set of statistics is based on the errors of an optimal linear predictor of the coefficients $S_m(x,y)$. The sub-band coefficients are correlated with their spatial, orientation, and scale neighbors. Several techniques may be used to predict the coefficients $S^p_{H_m}(x,y)$, $S^p_{V_m}(x,y)$, and $S^p_{D_m}(x,y)$ (m = 1, 2, 3). In this work, we used the linear predictor proposed by Farid in [30], as shown below:
$$S^p_{H_m}(x,y) = w_1 S_{H_m}(x-1,y) + w_2 S_{H_m}(x+1,y) + w_3 S_{H_m}(x,y-1) + w_4 S_{H_m}(x,y+1) + w_5 S_{H_{m+1}}\!\left(\tfrac{x}{2},\tfrac{y}{2}\right) + w_6 S_{D_m}(x,y) + w_7 S_{D_{m+1}}\!\left(\tfrac{x}{2},\tfrac{y}{2}\right)$$
$$S^p_{V_m}(x,y) = w_1 S_{V_m}(x-1,y) + w_2 S_{V_m}(x+1,y) + w_3 S_{V_m}(x,y-1) + w_4 S_{V_m}(x,y+1) + w_5 S_{V_{m+1}}\!\left(\tfrac{x}{2},\tfrac{y}{2}\right) + w_6 S_{D_m}(x,y) + w_7 S_{D_{m+1}}\!\left(\tfrac{x}{2},\tfrac{y}{2}\right)$$
$$S^p_{D_m}(x,y) = w_1 S_{D_m}(x-1,y) + w_2 S_{D_m}(x+1,y) + w_3 S_{D_m}(x,y-1) + w_4 S_{D_m}(x,y+1) + w_5 S_{D_{m+1}}\!\left(\tfrac{x}{2},\tfrac{y}{2}\right) + w_6 S_{H_m}(x,y) + w_7 S_{V_{m+1}}\!\left(\tfrac{x}{2},\tfrac{y}{2}\right)$$
For more clarity, in Figure 14, we provide the block diagram for the prediction of coefficient S V 1 p ( x , y ) .
The parameters $w_i$ (scalar weighting values) of the predictor of each sub-band at a given level m are adjusted to minimize the quadratic prediction-error function, as shown below:
$$E(w) = \left\| S_m - Q w \right\|^2$$
The columns of the matrix Q contain the neighboring coefficient magnitudes, as specified in Equations (25)–(27). The quadratic error function is minimized analytically as follows:
$$\frac{dE(w)}{dw} = -2\, Q^T \left( S_m - Q w \right) = 0$$
Then, we obtain:
$$w_{opt} = \left( Q^T Q \right)^{-1} Q^T S_m$$
For the optimal predictor, we use the log error given by the following equation to predict error coefficients of each sub-band for a given level m:
$$\epsilon^p_m = \log_2 S_m - \log_2\!\left( \left| Q\, w_{opt} \right| \right)$$
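The least-squares fit of Equations (28)–(31) can be sketched as follows; `optimal_log_error` is a hypothetical helper name, and a small constant is added before the logarithms (not part of the original formulation) to avoid taking the log of zero:

```python
import numpy as np

def optimal_log_error(S, Q):
    """Optimal linear predictor and log error of Equations (28)-(31).
    S: vector of sub-band coefficient magnitudes (length N).
    Q: N x 7 matrix whose columns hold the neighbour magnitudes of
    Equations (25)-(27), one row per coefficient."""
    # w_opt = (Q^T Q)^-1 Q^T S, computed via least squares for stability
    w_opt, *_ = np.linalg.lstsq(Q, S, rcond=None)
    eps = 1e-12  # guard against log2(0); an assumption, not in the paper
    return np.log2(np.abs(S) + eps) - np.log2(np.abs(Q @ w_opt) + eps)
```

When the coefficients are perfectly predictable from their neighbours, the log error vanishes, which is a quick sanity check for the fit.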
By using Equation (31), additional statistics are collected, namely the mean, variance, skewness, and kurtosis (see Equation (24)). The feature vector Z ϵ p is similar to Z s ; it is represented as follows:
Z ϵ p = [ Z 1 ϵ p , Z 2 ϵ p , Z 3 ϵ p ]
where:
$$Z_1^{\epsilon^p} = [\mu_{\epsilon^p_{H_1}}, \mu_{\epsilon^p_{V_1}}, \mu_{\epsilon^p_{D_1}} \mid \sigma_{\epsilon^p_{H_1}}, \sigma_{\epsilon^p_{V_1}}, \sigma_{\epsilon^p_{D_1}} \mid \xi_{\epsilon^p_{H_1}}, \xi_{\epsilon^p_{V_1}}, \xi_{\epsilon^p_{D_1}} \mid \kappa_{\epsilon^p_{H_1}}, \kappa_{\epsilon^p_{V_1}}, \kappa_{\epsilon^p_{D_1}}]$$
$$Z_2^{\epsilon^p} = [\mu_{\epsilon^p_{H_2}}, \mu_{\epsilon^p_{V_2}}, \mu_{\epsilon^p_{D_2}} \mid \sigma_{\epsilon^p_{H_2}}, \sigma_{\epsilon^p_{V_2}}, \sigma_{\epsilon^p_{D_2}} \mid \xi_{\epsilon^p_{H_2}}, \xi_{\epsilon^p_{V_2}}, \xi_{\epsilon^p_{D_2}} \mid \kappa_{\epsilon^p_{H_2}}, \kappa_{\epsilon^p_{V_2}}, \kappa_{\epsilon^p_{D_2}}]$$
$$Z_3^{\epsilon^p} = [\mu_{\epsilon^p_{H_3}}, \mu_{\epsilon^p_{V_3}}, \mu_{\epsilon^p_{D_3}} \mid \sigma_{\epsilon^p_{H_3}}, \sigma_{\epsilon^p_{V_3}}, \sigma_{\epsilon^p_{D_3}} \mid \xi_{\epsilon^p_{H_3}}, \xi_{\epsilon^p_{V_3}}, \xi_{\epsilon^p_{D_3}} \mid \kappa_{\epsilon^p_{H_3}}, \kappa_{\epsilon^p_{V_3}}, \kappa_{\epsilon^p_{D_3}}]$$
Finally, the feature vector that will be used for the learning classifier is represented by Z = [ Z s | Z ϵ p ] . It contains 72 components.

5.2.2. Method 2: Feature Vectors Extracted from the Empirical Moments of the CF-Based Multi-Resolution Decomposition

The first set of feature vectors Z s is extracted based on the CF and the wavelet decomposition, as proposed by Shi et al. [31]. The statistical moments of the characteristic function ϕ ( k ) of order n = 1 to 3 are represented for each sub-band ( A m , H m , V m , D m ) at different levels m = 1, 2, and 3 of the wavelet decomposition as follows:
$$M^n_{S_m} = \frac{\sum_{k=1}^{N/2} \left|\phi(k)\right| \, k^n}{\sum_{k=1}^{N/2} \left|\phi(k)\right|}$$
$$\phi(k) = \sum_{i=1}^{N} h(i) \exp\!\left( -\frac{j 2\pi i k}{K} \right), \qquad 1 \le k \le K$$
where $\phi(k)$ is a component of the characteristic function at frequency k, calculated from the histogram h of the sub-band $S_m$, and N is the total number of points of the histogram. Equation (32) allows us to build the first feature vector $Z_s$ from the 12 × 3 = 36 sub-band moments plus the 3 moments of the initial image, i.e., 39 components in total. The feature vector $Z_s$ is listed as follows:
$$Z_s = \left[\, M_I^1, M_I^2, M_I^3 \mid M_{A_m}^1, M_{A_m}^2, M_{A_m}^3 \mid M_{H_m}^1, M_{H_m}^2, M_{H_m}^3 \mid M_{V_m}^1, M_{V_m}^2, M_{V_m}^3 \mid M_{D_m}^1, M_{D_m}^2, M_{D_m}^3 \,\right], \quad m = 1, 2, 3$$
In the above equation, M I 1 , M I 2 , M I 3 are the moments of the initial image.
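The CF moments of Equation (32) can be sketched with numpy as follows; the histogram bin count (256) is an assumption, and the CF is taken as the DFT of the sub-band histogram with the DC component excluded:

```python
import numpy as np

def cf_moments(subband, n_bins=256, orders=(1, 2, 3)):
    """CF moments of Equation (32). The characteristic function phi is the
    DFT of the sub-band histogram; the moments use |phi(k)| for k = 1..N/2,
    excluding the DC component k = 0."""
    h, _ = np.histogram(np.asarray(subband).ravel(), bins=n_bins)
    phi = np.fft.fft(h)                    # characteristic-function samples
    mag = np.abs(phi[1:n_bins // 2 + 1])   # |phi(k)|, k = 1..N/2
    k = np.arange(1, n_bins // 2 + 1)
    return [float((mag * k ** n).sum() / mag.sum()) for n in orders]
```

Because the frequency index k is at least 1, the moments are monotonically non-decreasing with the order n.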
The second category of features is calculated from the moments of the prediction-error image and its wavelet decomposition.
Prediction-error image:
In steganalysis, we only care about the distortion caused by data hiding. This distortion may be rather weak and, hence, masked by other kinds of noise, including that due to the peculiarities of the image itself. To make steganalysis more effective, it is necessary to preserve the embedding noise while eliminating most of the other noise sources. For this purpose, we calculate the moments of the characteristic functions of order n = 1 to 3 of the prediction-error image and of its wavelet decomposition at levels m = 1, 2, and 3 (see Equation (32)). The prediction-error image is obtained by subtracting the predicted image (in which each pixel grayscale value is predicted from the grayscale values of its neighboring pixels in the cover image; see Equation (34)) from the cover image. Such features make steganalysis more efficient because the hidden data are usually unrelated to the cover media. The predicted pixel is expressed as follows:
$$\hat{x} = \begin{cases} \max(a,b) & \text{if } c \le \min(a,b) \\ \min(a,b) & \text{if } c \ge \max(a,b) \\ a + b - c & \text{otherwise} \end{cases}$$
In the above equation, a, b, and c form the context of the pixel x under consideration, and $\hat{x}$ is the predicted value of x. The locations of a, b, and c are illustrated in Figure 15.
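A vectorized sketch of this predictor is shown below; the neighbour layout (a to the right, b below, c diagonal) is an assumption standing in for Figure 15, which fixes the actual positions:

```python
import numpy as np

def prediction_error_image(img):
    """Prediction-error image of Equation (34): each pixel x is predicted
    from its context a, b, c, and the prediction is subtracted from x.
    The neighbour layout (a right, b below, c diagonal) is an assumption."""
    x = img[:-1, :-1].astype(int)   # cast to int to avoid uint8 overflow in a+b-c
    a = img[:-1, 1:].astype(int)
    b = img[1:, :-1].astype(int)
    c = img[1:, 1:].astype(int)
    pred = np.where(c <= np.minimum(a, b), np.maximum(a, b),
           np.where(c >= np.maximum(a, b), np.minimum(a, b), a + b - c))
    return x - pred
```

On flat regions and smooth ramps the predictor is exact, so the error image concentrates on edges and embedding noise.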
The feature vector Z ϵ p is represented as follows:
$$Z_{\epsilon^p} = \left[\, M_{\epsilon^p}^1, M_{\epsilon^p}^2, M_{\epsilon^p}^3 \mid M_{A_m}^1, M_{A_m}^2, M_{A_m}^3 \mid M_{H_m}^1, M_{H_m}^2, M_{H_m}^3 \mid M_{V_m}^1, M_{V_m}^2, M_{V_m}^3 \mid M_{D_m}^1, M_{D_m}^2, M_{D_m}^3 \,\right], \quad m = 1, 2, 3$$
In the above equation, $M_{A_1}^1$, $M_{A_1}^2$, and $M_{A_1}^3$ are the first-, second-, and third-order moments of the corresponding CFs, computed from the sub-band $A_1$ of the first-level decomposition of the prediction-error image.
Finally, the feature vector that will be used for learning classification is Z = [ Z s | Z ϵ p ] , containing 78 components.

5.2.3. Method 3: Feature Vector Extracted from Empirical Moments Based on the CF and the PDF of the Image Prediction Error and the Different Sub-Bands of the Multi-Resolution Decomposition

The first feature vector $Z_s$ combines two types of normalized moments: moments based on the probability density function and moments based on the characteristic function of the various sub-bands of the three-level multi-resolution decomposition of the gray image. We use the expression of Wang and Moulin [32] to calculate the moments of order n = 1 to 6 of the initial image and its sub-bands $(A_m, H_m, V_m, D_m)$ of the three-level (m = 1 to 3) wavelet decomposition, as shown below:
$$M^n_{S_m} = \frac{\sum_{k=1}^{N/2} \left|\phi(k)\right| \, \sin^n\!\left(\frac{\pi k}{K}\right)}{\sum_{k=1}^{N/2} \left|\phi(k)\right|}$$
$$\phi(k) = \sum_{i=1}^{N} h(i) \exp\!\left( -\frac{j 2\pi i k}{K} \right), \qquad 1 \le k \le K$$
where $\phi(k)$ is a component of the characteristic function at frequency k, estimated from the histogram. Equation (35) already yields a feature vector of 6 × 1 + 6 × (4 × 3) = 78 components. In addition, to improve the performance of the learning system, we calculate the moments of the sub-bands $A'_2, H'_2, V'_2, D'_2$ obtained from the decomposition of the diagonal sub-band $D_1$. Therefore, the total size of the vector $Z_s$ is 78 + (6 × 4) = 102 components.
$$Z_s = \left[\, M_I^i \mid M_{A_1}^i \mid M_{H_1}^i \mid M_{V_1}^i \mid M_{D_1}^i \mid M_{A_2}^i \mid M_{H_2}^i \mid M_{V_2}^i \mid M_{D_2}^i \mid M_{A_3}^i \mid M_{H_3}^i \mid M_{V_3}^i \mid M_{D_3}^i \mid M_{A'_2}^i \mid M_{H'_2}^i \mid M_{V'_2}^i \mid M_{D'_2}^i \,\right], \quad i = 1, 2, \ldots, 6$$
where the primed sub-bands are those obtained from the decomposition of $D_1$.
For example, $M_I^i = [M_I^1, M_I^2, M_I^3, M_I^4, M_I^5, M_I^6]$ denotes the first six moments of the original image.
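Equation (35) differs from Equation (32) only in its frequency weighting, which can be sketched as follows; because $\sin(\pi k / K) \in (0, 1]$ for $k \le N/2$, these moments are normalized to lie in [0, 1]:

```python
import numpy as np

def wm_cf_moments(subband, n_bins=256, max_order=6):
    """Normalized CF moments of Equation (35): the frequency weight is
    sin^n(pi*k/K) instead of k^n (K = number of histogram bins, an
    assumed value here), so every moment lies in [0, 1]."""
    h, _ = np.histogram(np.asarray(subband).ravel(), bins=n_bins)
    mag = np.abs(np.fft.fft(h))[1:n_bins // 2 + 1]   # |phi(k)|, k = 1..N/2
    k = np.arange(1, n_bins // 2 + 1)
    w = np.sin(np.pi * k / n_bins)
    return [float((mag * w ** n).sum() / mag.sum()) for n in range(1, max_order + 1)]
```

Since the weight is at most 1, the moments decrease with the order n, in contrast to the k^n weighting of Equation (32).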
The second category of features consists of the first six moments of the prediction error $\epsilon^p_m = \log_2 S_m - \log_2(|Q\, w_{opt}|)$ of the coefficients of each sub-band at a given level m, as shown below:
$$m^n_{\epsilon^p_m} = \frac{1}{N}\sum_{i=1}^{N} \left( \epsilon^p_m(i) \right)^n, \qquad n = 1, 2, \ldots, 6$$
The vector of the second category is defined by Z ϵ p , as shown below:
$$Z_{\epsilon^p} = \left[\, m^i_{\epsilon^p_{H_m}} \mid m^i_{\epsilon^p_{V_m}} \mid m^i_{\epsilon^p_{D_m}} \,\right], \qquad m = 1, 2, 3; \; i = 1, 2, \ldots, 6$$
The size of $Z_{\epsilon^p}$ is 3 × 6 × 3 = 54 components.
Finally, the feature vector to be used for classification by learning is Z = [ Z s | Z ϵ p ] . It has 156 components.

5.3. Classification

The last stage of the learning and testing process of the universal steganalysis is classification (see Figure 12). Its objective is to separate the images into two classes, cover images and stego images, according to their feature vectors. We adopt the Fisher linear discriminant (FLD) and the support vector machine (SVM) for training and testing.

5.3.1. FLD Classifier

Below, we reformulate the FLD classifier for our two-class application. Let $Z = \{Z_1, Z_2, \ldots, Z_N\}$ be a set of feature vectors, each of dimension $n_d$. Among these vectors, $N_1$ vectors $Z_c$, labeled 1, correspond to cover images, and $N_2$ vectors $Z_s$, labeled 2, correspond to stego images, with $N = N_1 + N_2$. We form the projected values $Z^p = \{Z^p_1, Z^p_2, \ldots, Z^p_N\}$ through linear combinations of the feature vectors as follows:
Z p = W t Z
In the above equation, W is an orientation vector of dimension n d .
In our study, the feature vector Z is projected into a space of two classes. This projection tends to maximize the distance between the projected class means ( M c p , M s p ) while minimizing projected class scatters S c p , S s p .
  • Learning process
    The learning process involves optimizing the following expression:
$$J(W) = \frac{\left| M^p_c - M^p_s \right|^2}{S^p_c + S^p_s}$$
    where:
$$M^p_c = \frac{1}{N_1}\sum_{Z^p \in Z^p_c} Z^p = \frac{1}{N_1}\sum_{Z \in Z_c} W^t Z = W^t M_c$$
    is the mean feature vector of cover class after projection, and
$$M_c = \frac{1}{N_1}\sum_{Z \in Z_c} Z$$
    is the mean feature vector of cover class of dimension n d .
    The mean feature vector of stego class after projection is represented as follows:
$$M^p_s = \frac{1}{N_2}\sum_{Z^p \in Z^p_s} Z^p = \frac{1}{N_2}\sum_{Z \in Z_s} W^t Z = W^t M_s$$
    where:
$$M_s = \frac{1}{N_2}\sum_{Z \in Z_s} Z$$
    is the mean feature vector of a stego class of dimension n d .
    The scatter matrix of the cover class after projection has been shown as follows:
$$S^p_c = \sum_{Z^p \in Z^p_c} \left( Z^p - M^p_c \right)^2 = \sum_{Z \in Z_c} \left( W^t Z - W^t M_c \right)^2 = \sum_{Z \in Z_c} W^t (Z - M_c)(Z - M_c)^t W = W^t S_c W$$
    where:
$$S_c = \sum_{Z \in Z_c} (Z - M_c)(Z - M_c)^t$$
    is the scatter matrix (of dimension n d × n d ) of a cover class.
    The scatter matrix of the projected samples of a stego class has been shown as follows:
$$S^p_s = \sum_{Z^p \in Z^p_s} \left( Z^p - M^p_s \right)^2 = \sum_{Z \in Z_s} \left( W^t Z - W^t M_s \right)^2 = \sum_{Z \in Z_s} W^t (Z - M_s)(Z - M_s)^t W = W^t S_s W$$
    where:
$$S_s = \sum_{Z \in Z_s} (Z - M_s)(Z - M_s)^t$$
    is a scatter matrix (of dimension n d × n d ) for the samples in the original feature space of a stego class.
    The within-class scatter matrix after projection is defined as follows:
$$S^p_c + S^p_s = W^t (S_c + S_s) W = W^t S_w W$$
    where:
    S w = S c + S s
    The difference between the projected means is expressed as follows:
$$\left( M^p_c - M^p_s \right)^2 = \left( W^t M_c - W^t M_s \right)^2 = W^t (M_c - M_s)(M_c - M_s)^t W = W^t S_B W$$
    where:
$$S_B = (M_c - M_s)(M_c - M_s)^t$$
    We can finally express the Fisher criterion (Equation (39)) in terms of S B and S W as follows:
$$J(W) = \frac{W^t S_B W}{W^t S_w W}$$
    The solution of Equation (52) is given by [49].
$$W_{opt} = S_w^{-1} (M_c - M_s)$$
  • Testing process
    The testing process (classification step) is conducted as follows:
    Let Z be the matrix containing the feature vectors of covers and stegos.
    The projection of Z on the orientation vector W o p t gives all projected values Z p .
$$Z^p(j) = \sum_{i=1}^{n_d} W_{opt}(i)\, Z(i,j) - b, \qquad j = 1, 2, \ldots, N$$
b is a discrimination threshold between the two classes; it can be fixed halfway between the projected means of the cover and stego classes:
    b = 0.5 × ( M c p + M s p )
    with:
$$M^p_c = W^t_{opt} \times M_c, \qquad M^p_s = W^t_{opt} \times M_s$$
In the above equations, $W^t_{opt}$ is the transpose of $W_{opt}$.
    The result Z p ( j ) , j = 1 , , N determines the cover or stego class of every test image.
Indeed, if $Z^p(j) \ge 0$, then the image under test is a cover image; otherwise, it is a stego image.
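The whole FLD learning and testing procedure above reduces to a few lines of linear algebra. The following numpy sketch implements $W_{opt} = S_w^{-1}(M_c - M_s)$ and the midpoint threshold b, writing the decision as $W^t Z - b \ge 0$ for the cover class:

```python
import numpy as np

def fld_train(Zc, Zs):
    """FLD training (Equations (39)-(53)). Zc: (N1, nd) cover features,
    Zs: (N2, nd) stego features. Returns the orientation vector W_opt and
    the midpoint threshold b."""
    Mc, Ms = Zc.mean(axis=0), Zs.mean(axis=0)
    Sc = (Zc - Mc).T @ (Zc - Mc)           # scatter matrix of the cover class
    Ss = (Zs - Ms).T @ (Zs - Ms)           # scatter matrix of the stego class
    W = np.linalg.solve(Sc + Ss, Mc - Ms)  # W_opt = Sw^-1 (Mc - Ms)
    b = 0.5 * (W @ Mc + W @ Ms)            # halfway between projected means
    return W, b

def fld_classify(Z, W, b):
    """True where the projection lands on the cover side (W^t Z - b >= 0)."""
    return Z @ W - b >= 0
```

Because W points from the stego mean toward the cover mean, cover images project above the midpoint threshold and stego images below it.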

5.3.2. SVM Classifier

According to numerous recent studies, the SVM classification method outperforms other data classification algorithms in terms of classification accuracy [50]. SVM performs classification by constructing a hyperplane that separates the data into two categories in an optimal way.
Let $(Z_i, y_i)$, $1 \le i \le N$, be a set of training examples, where each example $Z_i \in \mathbb{R}^{n_d}$ ($n_d$ being the dimension of the input space) belongs to a class labeled $y_i \in \{-1, 1\}$. SVM classification constructs a hyperplane $W^T Z + b = 0$ that best separates the data by solving the following minimization problem:
$$\min_{w, b, \zeta} \; \frac{1}{2}\left\| w \right\|^2 + C \sum_{i=1}^{N} \zeta_i \qquad \text{subject to: } y_i \left( w \cdot Z_i + b \right) \ge 1 - \zeta_i$$
Variables ζ i are called slack variables, and they measure the error made at point ( Z i , y i ) .
Parameter C can be viewed as a way to control overfitting.
Here, $\zeta_i \ge 0$, and $C > 0$ sets the trade-off between regularization and constraint violation.
Quadratic optimization problems are a well-known class of mathematical programming problems, and many (rather intricate) algorithms exist for solving them. The solution involves constructing a dual problem in which a Lagrange multiplier $\alpha_i$ is associated with every constraint of the primal problem, as shown below:
$$L(\alpha) = \sum_i \alpha_i - \frac{1}{2}\sum_i \sum_j \alpha_i \alpha_j y_i y_j Z_i^T Z_j \qquad \text{subject to: } \sum_i \alpha_i y_i = 0, \quad 0 \le \alpha_i \le C$$
The Lagrange multipliers $\alpha_i$ are also known as support values.
The linear classifier presented above is rather limited: in most cases, classes not only overlap, but the genuine separating functions are non-linear hyper-surfaces. The motivation for an extension is that an SVM that can create a non-linear decision hyper-surface is able to classify non-linearly separable data.
The idea is that the input space can always be mapped on to a higher dimensional feature space where the training set is separable.
The linear classifier relies on the dot product between vectors, $K(Z_i, Z_j) = Z_i^T Z_j$. If every data point is mapped onto a high-dimensional space via some transformation $\Phi : Z \mapsto \varphi(Z)$, the dot product becomes $K(Z_i, Z_j) = \varphi(Z_i)^T \varphi(Z_j)$. Then, in the dual formulation, we maximize the following:
$$L(\alpha) = \sum_{i=1}^{N} \alpha_i - \frac{1}{2}\sum_i \sum_j \alpha_i \alpha_j y_i y_j K(Z_i, Z_j) \qquad \text{subject to: } \sum_i \alpha_i y_i = 0, \quad 0 \le \alpha_i \le C$$
Subsequently, the decision function turns into the following:
$$f(Z) = \operatorname{sgn}\!\left( \sum_{i=1}^{N} \alpha_i y_i K(Z_i, Z) + b \right)$$
It should be noted that the dual formulation only requires access to the kernel function and not the features Φ ( . ) , allowing one to solve the formulation in very high-dimensional feature spaces efficiently. This is also called the kernel trick.
There are many kernel functions in SVM. Therefore, determining how to select a good kernel function is also a research issue. However, for general purposes, there are some popular kernel functions [50,51], which have been listed as follows:
  • Linear Kernel:
    $$K(Z_i, Z_j) = Z_i^T Z_j$$
  • Polynomial Kernel:
    $$K(Z_i, Z_j) = \left( \gamma Z_i^T Z_j + r \right)^d, \quad \gamma > 0$$
  • RBF Kernel:
    $$K(Z_i, Z_j) = \exp\!\left( -\gamma \left\| Z_i - Z_j \right\|^2 \right), \quad \gamma > 0$$
  • Sigmoid Kernel:
    $$K(Z_i, Z_j) = \tanh\!\left( \gamma Z_i^T Z_j + r \right)$$
Here, γ , r, and d are kernel parameters.
In our work, we used the RBF kernel function.
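Assuming scikit-learn is available, the classification stage can be sketched as follows; the class labels (+1 for cover, −1 for stego) and the default `gamma="scale"` setting are our choices for illustration, not the paper's reported parameters:

```python
import numpy as np
from sklearn.svm import SVC

def train_svm(Z_cover, Z_stego, C=1.0, gamma="scale"):
    """Soft-margin SVM (Equation (55)) with the RBF kernel (Equation (62)).
    C sets the regularization / constraint-violation trade-off; the labels
    +1 (cover) / -1 (stego) are our convention, not the paper's."""
    X = np.vstack([Z_cover, Z_stego])
    y = np.r_[np.ones(len(Z_cover)), -np.ones(len(Z_stego))]
    return SVC(C=C, kernel="rbf", gamma=gamma).fit(X, y)
```

The fitted classifier's `predict` method then assigns each test feature vector to the cover or stego class.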

6. Experimental Results of Steganalysis

In this section, we present some experimental results that were obtained from the studied steganalysis system that was applied to the enhanced steganographic methods in the spatial and frequency domain. For this purpose, the image dataset UCID [52,53] is used, which includes 1338 uncompressed color images, and all the images were converted to grayscale before conducting the experiments.
In our experiments, we first created the stego images using the following steganographic methods: Enhanced EALSBMR (EEALSBMR), Enhanced DCT steganography (EDCT), and Enhanced DWT steganography (EDWT), with embedding rates of 5%, 10%, and 20%. We then extracted the image features of both the cover and stego images using the three feature-extraction techniques described above (the Farid, Shi, and Moulin techniques). Finally, we employed the FLD and SVM classifiers to decide whether each image contains a hidden message. The classification (binary) and steganalysis performance (and, indirectly, the efficiency of the insertion methods) is evaluated by calculating the following parameters: the sensitivity, specificity, and precision of the confusion matrix, and the Kappa coefficient (see Table 8 and Equation (64)):
$$\text{Kappa} = \frac{P_0 - P_a}{1 - P_a}$$
with:
$$P_0 = TP + TN; \qquad P_a = (TP + FP) \times (TP + FN) + (FN + TN) \times (FP + TN)$$
In the above equation, $P_0$ is the total agreement probability (related to the accuracy), and $P_a$ is the agreement probability that arises by chance; here, TP, TN, FP, and FN denote the true-positive, true-negative, false-positive, and false-negative rates expressed as proportions of the test set.
Here is one possible interpretation of Kappa values:
  • Poor agreement = Less than 0.20
  • Fair agreement = 0.20 to 0.40
  • Moderate agreement = 0.40 to 0.60
  • Good agreement = 0.60 to 0.80
  • Very good agreement = 0.80 to 1.00
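The evaluation metrics of Equations (64) and (65) can be sketched as follows, computing $P_0$ and $P_a$ as proportions of the confusion-matrix counts so that raw counts need not be pre-normalized:

```python
def steganalysis_metrics(TP, TN, FP, FN):
    """Sensitivity, specificity, precision, and the Kappa coefficient
    (Equation (64)) from confusion-matrix counts. P0 and Pa are computed
    as proportions (counts divided by N and N^2)."""
    N = TP + TN + FP + FN
    Se = TP / (TP + FN)   # sensitivity (true-positive rate)
    Sp = TN / (TN + FP)   # specificity (true-negative rate)
    Pr = TP / (TP + FP)   # precision
    P0 = (TP + TN) / N    # total agreement probability (accuracy)
    Pa = ((TP + FP) * (TP + FN) + (FN + TN) * (FP + TN)) / N ** 2
    kappa = (P0 - Pa) / (1 - Pa)
    return Se, Sp, Pr, kappa
```

A perfect classifier yields Kappa = 1, while a classifier at chance level (all four counts equal) yields Kappa = 0, matching the interpretation scale above.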

6.1. Classification Results Applied to the Steganographic Method EEALSBMR

In Table 9, Table 10, Table 11, Table 12, Table 13 and Table 14, we present the classification (steganalysis) results based on the FLD and SVM classifiers and the Farid, Shi, and Moulin features for the EEALSBMR insertion method with insertion rates of 5%, 10%, and 20%. The results show that steganalysis is not effective at any of these insertion rates. Indeed, the values of $S_e$, $S_p$, and $P_r$ hover around 50%, so they are uninformative and give no indication of the nature of the data. The value of the Kappa coefficient (lower than 0.2) confirms this. The EEALSBMR steganographic method is therefore robust against statistical steganalysis techniques.

6.2. Classification Results Applied to the Steganographic Method EDCT

The classification (steganalysis) results provided in Table 15, Table 16, Table 17, Table 18, Table 19 and Table 20 for the EDCT insertion method show that, with the FLD classifier, steganalysis is very effective with the Shi and Moulin features when the insertion rate is equal to or higher than 20%, but less effective with the Farid features. With the SVM classifier, except for the Shi features at an insertion rate of 20%, the results are quite similar to those obtained with the EEALSBMR algorithm, and steganalysis is therefore not effective. It should be noted that the FLD classifier is more effective than the SVM classifier for high-dimensional feature vectors.

6.3. Classification Results Applied to the Steganographic Method EDWT

With respect to the EDWT method, the results are provided in Table 21, Table 22, Table 23, Table 24, Table 25 and Table 26. These results, obtained with the FLD and SVM classifiers, indicate that the values of the parameters $S_e$, $S_p$, $P_r$, $A_c$, and Kappa are high for all insertion rates and all feature vectors (Farid, Shi, and Moulin). These results readily reveal the presence of hidden information; steganalysis is therefore very effective, and the insertion method is not robust. It should be noted that steganalysis is very effective here because both the steganographic method and the feature vectors are based on multi-resolution wavelet decomposition.

6.4. Discussion

The enhanced steganographic methods in the spatial domain (EEALSBMR) and in the frequency domain (EDCT and EDWT) provide stego images with good visual quality up to an embedding rate of 40%: the $PSNR$ is over 50 dB, and the distortion is not visible to the naked eye. The security of the message contents, in case they are detected by an opponent, is ensured by the chaotic system. On the other hand, we applied a universal steganalysis method that can work with all known and unknown steganography algorithms. Universal steganalysis methods exploit the changes in certain inherent features of the cover images when a message is embedded. The classification accuracy of the system (discrimination between the cover and stego classes) greatly relies on several factors, such as the choice of the right feature vectors, the classifier, and its parameters.

7. Conclusions

In this work, we first improved the structure and security of three steganographic methods, studied in the spatial and frequency domains, by integrating them with a robust proposed chaotic system. We then built a statistical steganalysis system to evaluate the robustness of the three enhanced steganographic methods. In this system, we selected three different feature vectors: higher-order statistics of the high-frequency wavelet sub-bands and their prediction errors; statistical moments of the characteristic functions of the prediction-error image, the test image, and their wavelet sub-bands; and both empirical PDF moments and normalized absolute CF moments. We then applied two types of classifiers, FLD and SVM, the latter with the RBF kernel.
Extensive experimental work has demonstrated that the proposed steganalysis system based on the multi-dimensional feature vectors can detect messages hidden with the EDWT steganographic method, irrespective of the message size. However, it cannot distinguish between cover and stego images produced by the EEALSBMR and EDCT methods if the message size is smaller than 20% and 15%, respectively.

Author Contributions

Funding acquisition, T.M.H.; Supervision, B.B., O.D. and M.K.; Writing—original draft preparation, D.B.; Writing—review & editing, S.E.A., T.M.H.

Funding

This work is supported by the National Foundation for Science and Technology Development (NAFOSTED) of Vietnam through the grant number 102.04-2018.06.

Acknowledgments

The authors thank the anonymous reviewers for useful comments.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Xia, Z.; Wang, X.; Sun, X.; Liu, Q.; Xiong, N. Steganalysis of LSB matching using differences between nonadjacent pixels. Multimed. Tools Appl. 2016, 75, 1947–1962. [Google Scholar] [CrossRef]
  2. Mohammadi, F.G.; Abadeh, M.S. Image steganalysis using a bee colony based feature selection algorithm. Eng. Appl. Artif. Intell. 2014, 31, 35–43. [Google Scholar] [CrossRef]
  3. Luo, W.; Huang, F.; Huang, J. Edge Adaptive Image Steganography Based on LSB Matching Revisited. IEEE Trans. Inf. Forensics Secur. 2010, 5, 201–214. [Google Scholar]
  4. Chan, C.K.; Cheng, L. Hiding data in images by simple LSB substitution. Pattern Recognit. 2004, 37, 469–474. [Google Scholar] [CrossRef]
  5. Wu, H.C.; Wu, N.I.; Tsai, C.S.; Hwang, M.S. Image steganographic scheme based on pixel-value differencing and LSB replacement methods. IEE Proc.-Vis. Image Signal Process. 2005, 152, 611–615. [Google Scholar] [CrossRef] [Green Version]
  6. Jung, K.; Ha, K.; Yoo, K. Image Data Hiding Method Based on Multi-Pixel Differencing and LSB Substitution Methods. In Proceedings of the 2008 International Conference on Convergence and Hybrid Information Technology, Daejeon, Korea, 28–30 August 2008; pp. 355–358. [Google Scholar] [CrossRef]
  7. Huang, Q.; Ouyang, W. Protect fragile regions in steganography LSB embedding. In Proceedings of the 2010 Third International Symposium on Knowledge Acquisition and Modeling, Wuhan, China, 20–21 October 2010; pp. 175–178. [Google Scholar]
  8. Xi, L.; Ping, X.; Zhang, T. Improved LSB matching steganography resisting histogram attacks. In Proceedings of the 2010 3rd International Conference on Computer Science and Information Technology, Chengdu, China, 9–11 July 2010; Volume 1, pp. 203–206. [Google Scholar]
  9. Swain, G.; Lenka, S.K. Steganography using two sided, three sided, and four sided side match methods. CSI Trans. ICT 2013, 1, 127–133. [Google Scholar] [CrossRef] [Green Version]
  10. Islam, S.; Modi, M.R.; Gupta, P. Edge-based image steganography. EURASIP J. Inf. Secur. 2014, 2014, 1–14. [Google Scholar] [CrossRef]
  11. Mungmode, S.; Sedamkar, R.; Kulkarni, N. A Modified High Frequency Adaptive Security Approach using Steganography for Region Selection based on Threshold Value. Procedia Comput. Sci. 2016, 79, 912–921. [Google Scholar] [CrossRef] [Green Version]
  12. Akhter, F. A Novel Approach for Image Steganography in Spatial Domain. arXiv 2015, arXiv:1506.03681. [Google Scholar]
  13. Iranpour, M.; Rahmati, M. An efficient steganographic framework based on dynamic blocking and genetic algorithm. Multimed. Tools Appl. 2015, 74, 11429–11450. [Google Scholar] [CrossRef]
  14. Kumar, R.; Chand, S. A reversible high capacity data hiding scheme using pixel value adjusting feature. Multimed. Tools Appl. 2016, 75, 241–259. [Google Scholar] [CrossRef]
  15. Muhammad, K.; Ahmad, J.; Farman, H.; Jan, Z. A new image steganographic technique using pattern based bits shuffling and magic LSB for grayscale images. arXiv 2016, arXiv:1601.01386. [Google Scholar]
  16. Kordov, K.; Stoyanov, B. Least Significant Bit Steganography using Hitzl-Zele Chaotic Map. Int. J. Electron. Telecommun. 2017, 63, 417–422. [Google Scholar] [CrossRef] [Green Version]
  17. Stoyanov, B.P.; Zhelezov, S.K.; Kordov, K.M. Least significant bit image steganography algorithm based on chaotic rotation equations. C. R. L’Academie Bulgare Sci. 2016, 69, 845–850. [Google Scholar]
  18. Taleby Ahvanooey, M.; Li, Q.; Hou, J.; Rajput, A.R.; Chen, Y. Modern Text Hiding, Text Steganalysis, and Applications: A Comparative Analysis. Entropy 2019, 21, 355. [Google Scholar] [CrossRef]
  19. Sadat, E.S.; Faez, K.; Saffari Pour, M. Entropy-Based Video Steganalysis of Motion Vectors. Entropy 2018, 20, 244. [Google Scholar] [CrossRef]
  20. Yu, C.; Li, X.; Chen, X.; Li, J. An Adaptive and Secure Holographic Image Watermarking Scheme. Entropy 2019, 21, 460. [Google Scholar] [CrossRef]
  21. Hashad, A.; Madani, A.S.; Wahdan, A.E.M.A. A robust steganography technique using discrete cosine transform insertion. In Proceedings of the 2005 International Conference on Information and Communication Technology, Cairo, Egypt, 5–6 December 2005; pp. 255–264. [Google Scholar]
  22. Fard, A.M.; Akbarzadeh-T, M.R.; Varasteh-A, F. A new genetic algorithm approach for secure JPEG steganography. In Proceedings of the 2006 IEEE International Conference on Engineering of Intelligent Systems, Islamabad, Pakistan, 22–23 April 2006; pp. 1–6. [Google Scholar]
  23. McKeon, R.T. Strange Fourier steganography in movies. In Proceedings of the 2007 IEEE International Conference on Electro/Information Technology, Chicago, IL, USA, 17–20 May 2007; pp. 178–182. [Google Scholar]
  24. Abdelwahab, A.; Hassaan, L. A discrete wavelet transform based technique for image data hiding. In Proceedings of the 2008 National Radio Science Conference, Tanta, Egypt, 18–20 March 2008; pp. 1–9. [Google Scholar]
  25. Singh, I.; Khullar, S.; Laroiya, D.S. DFT based image enhancement and steganography. Int. J. Comput. Sci. Commun. Eng. 2013, 2, 5–7. [Google Scholar]
  26. Samata, R.; Parghi, N.; Vekariya, D. An Enhanced Image Steganography Technique using DCT, Jsteg and Data Mining Bayesian Classification Algorithm. Int. J. Sci. Technol. Eng. (IJSTE) 2015, 2, 9–13. [Google Scholar]
  27. Karri, S.; Sur, A. Steganographic algorithm based on randomization of DCT kernel. Multimed. Tools Appl. 2015, 74, 9207–9230. [Google Scholar] [CrossRef]
  28. Pan, J.S.; Li, W.; Yang, C.S.; Yan, L.J. Image steganography based on subsampling and compressive sensing. Multimed. Tools Appl. 2015, 74, 9191–9205. [Google Scholar] [CrossRef]
  29. Ali, M.; Ahn, C.W.; Siarry, P. Differential evolution algorithm for the selection of optimal scaling factors in image watermarking. Eng. Appl. Artif. Intell. 2014, 31, 15–26. [Google Scholar] [CrossRef]
  30. Farid, H. Detecting hidden messages using higher-order statistical models. In Proceedings of the International Conference on Image Processing, Rochester, NY, USA, 22–25 September 2002; Volume 2. [Google Scholar]
  31. Shi, Y.Q.; Zou, D.; Chen, W.; Chen, C. Image steganalysis based on moments of characteristic functions using wavelet decomposition, prediction-error image, and neural network. In Proceedings of the 2005 IEEE International Conference on Multimedia and Expo, Amsterdam, The Netherlands, 6 July 2005; p. 4. [Google Scholar]
  32. Wang, Y.; Moulin, P. Optimized Feature Extraction for Learning-Based Image Steganalysis. IEEE Trans. Inf. Forensics Secur. 2007, 2, 31–45. [Google Scholar] [CrossRef]
  33. Abutaha, M. Real-Time and Portable Chaos-Based Crypto-Compression Systems for Efficient Embedded Architectures. Ph.D. Thesis, University of Nantes, Nantes, France, 2017. [Google Scholar]
  34. Abu Taha, M.; El Assad, S.; Queudet, A.; Deforges, O. Design and efficient implementation of a chaos-based stream cipher. Int. J. Internet Technol. Secur. Trans. 2017, 7, 89–114. [Google Scholar] [CrossRef]
  35. El Assad, S. Chaos based information hiding and security. In Proceedings of the 2012 International Conference for Internet Technology and Secured Transactions, London, UK, 10–12 December 2012; pp. 67–72. [Google Scholar]
  36. Song, C.Y.; Qiao, Y.L.; Zhang, X.Z. An image encryption scheme based on new spatiotemporal chaos. Opt.-Int. J. Light Electron Opt. 2013, 124, 3329–3334. [Google Scholar] [CrossRef]
Figure 1. Proposed chaotic generator.
Figure 2. Chaotic generator.
Figure 3. EEALSBMR insertion procedure.
Figure 4. “Peppers” as cover image.
Figure 5. “Bike” as embedded message.
Figure 6. Pseudo-chaotic block selection and its corresponding gray value.
Figure 7. Diagram of the enhanced steganographic-based DCT transform.
Figure 8. Diagram of the EDWT algorithm.
Figure 9. (a) Cover image, (b) Stego image with embedding rate of 5%, (c) Stego image with embedding rate of 40%.
Figure 10. (a) Cover image, (b) Stego image with embedding rate of 5%, (c) Stego image with embedding rate of 40%.
Figure 11. (a) Cover image, (b) Stego image with embedding rate of 5%, (c) Stego image with embedding rate of 40%.
Figure 12. Flowchart of the blind steganalysis process.
Figure 13. Multi-resolution wavelet decomposition.
Figure 14. Block diagram for the prediction of coefficient S V 1 p ( x , y ) .
Figure 15. Prediction context of a pixel x.
Table 1. PSNR, IF, and SSIM values for the EEALSBMR method.

| Embedding Rate | Cover Image | PSNR (dB) | IF | SSIM |
|---|---|---|---|---|
| 5% | Baboon | 68.3810 | 0.9999 | 0.9999 |
| 5% | Lena | 68.1847 | 0.9999 | 0.9999 |
| 5% | Peppers | 67.7160 | 0.9999 | 0.9999 |
| 10% | Baboon | 65.5986 | 0.9999 | 0.9999 |
| 10% | Lena | 65.2821 | 0.9999 | 0.9999 |
| 10% | Peppers | 64.7763 | 0.9999 | 0.9999 |
| 20% | Baboon | 62.3551 | 0.9999 | 0.9999 |
| 20% | Lena | 62.3559 | 0.9999 | 0.9996 |
| 20% | Peppers | 61.7066 | 0.9999 | 0.9995 |
| 30% | Baboon | 60.6902 | 0.9998 | 0.9999 |
| 30% | Lena | 60.5630 | 0.9998 | 0.9990 |
| 30% | Peppers | 59.9585 | 0.9998 | 0.9992 |
| 40% | Baboon | 59.4245 | 0.9997 | 0.9999 |
| 40% | Lena | 59.2608 | 0.9997 | 0.9985 |
| 40% | Peppers | 58.6662 | 0.9997 | 0.9988 |
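The PSNR and IF values in Tables 1–3 are consistent with the standard definitions for 8-bit grayscale images, PSNR = 10·log10(255²/MSE) and IF = 1 − Σ(x − y)²/Σx². A minimal sketch under that assumption (the authors' exact implementation is not shown in this section):

```python
import numpy as np

def psnr(cover, stego):
    """Peak signal-to-noise ratio (dB) between two 8-bit grayscale images."""
    mse = np.mean((cover.astype(np.float64) - stego.astype(np.float64)) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(255.0 ** 2 / mse)

def image_fidelity(cover, stego):
    """Image fidelity: IF = 1 - sum((x - y)^2) / sum(x^2)."""
    x = cover.astype(np.float64)
    y = stego.astype(np.float64)
    return 1.0 - np.sum((x - y) ** 2) / np.sum(x ** 2)
```

PSNR values around 60–70 dB with IF ≈ 0.9999, as reported in Table 1, indicate that embedding alters the cover image only marginally.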
Table 2. PSNR, IF, and SSIM values for the EDCT method.

| Embedding Rate | Cover Image | PSNR (dB) | IF | SSIM |
|---|---|---|---|---|
| 5% | Baboon | 71.2372 | 0.9999 | 0.9999 |
| 5% | Lena | 71.1769 | 0.9999 | 0.9999 |
| 5% | Peppers | 70.4866 | 0.9999 | 0.9999 |
| 10% | Baboon | 64.8846 | 0.9999 | 0.9999 |
| 10% | Lena | 64.9487 | 0.9999 | 0.9998 |
| 10% | Peppers | 64.1426 | 0.9999 | 0.9998 |
| 20% | Baboon | 59.6895 | 0.9997 | 0.9999 |
| 20% | Lena | 59.6225 | 0.9997 | 0.9992 |
| 20% | Peppers | 58.9535 | 0.9997 | 0.9993 |
| 30% | Baboon | 57.4212 | 0.9995 | 0.9998 |
| 30% | Lena | 57.3421 | 0.9995 | 0.9989 |
| 30% | Peppers | 56.7406 | 0.9995 | 0.9988 |
| 40% | Baboon | 56.3421 | 0.9994 | 0.9997 |
| 40% | Lena | 56.2265 | 0.9994 | 0.9987 |
| 40% | Peppers | 55.4876 | 0.9994 | 0.9985 |
Table 3. PSNR, IF, and SSIM values for the EDWT method.

| Embedding Rate | Cover Image | PSNR (dB) | IF | SSIM |
|---|---|---|---|---|
| 5% | Baboon | 59.1876 | 0.9999 | 0.9999 |
| 5% | Lena | 58.7673 | 0.9997 | 0.9999 |
| 5% | Peppers | 58.1699 | 0.9997 | 0.9999 |
| 10% | Baboon | 56.2224 | 0.9997 | 0.9999 |
| 10% | Lena | 55.8085 | 0.9994 | 0.9999 |
| 10% | Peppers | 55.2086 | 0.9993 | 0.9999 |
| 20% | Baboon | 53.3463 | 0.9988 | 0.9999 |
| 20% | Lena | 52.8205 | 0.9988 | 0.9999 |
| 20% | Peppers | 52.2269 | 0.9987 | 0.9999 |
| 30% | Baboon | 52.0465 | 0.9984 | 0.9999 |
| 30% | Lena | 51.6471 | 0.9984 | 0.9999 |
| 30% | Peppers | 51.0509 | 0.9983 | 0.9999 |
| 40% | Baboon | 51.3450 | 0.9982 | 0.9999 |
| 40% | Lena | 50.9536 | 0.9981 | 0.9999 |
| 40% | Peppers | 50.3417 | 0.9980 | 0.9999 |
Table 4. E, R, and IR values for the EEALSBMR method.

| Embedding Rate | Cover Image | E | R | IR |
|---|---|---|---|---|
| 5% | Baboon | 7.3586 | 0.0802 | 0.3805 |
| 5% | Lena | 7.4455 | 0.0693 | 0.3261 |
| 5% | Peppers | 7.5715 | 0.0536 | 0.2975 |
| 10% | Baboon | 7.3586 | 0.0802 | 0.3805 |
| 10% | Lena | 7.4456 | 0.0693 | 0.3261 |
| 10% | Peppers | 7.5715 | 0.0535 | 0.2976 |
| 20% | Baboon | 7.3585 | 0.0802 | 0.3805 |
| 20% | Lena | 7.4457 | 0.0693 | 0.3261 |
| 20% | Peppers | 7.5717 | 0.0535 | 0.2977 |
| 30% | Baboon | 7.3584 | 0.0802 | 0.3805 |
| 30% | Lena | 7.4457 | 0.0693 | 0.3261 |
| 30% | Peppers | 7.5718 | 0.0535 | 0.2975 |
| 40% | Baboon | 7.3578 | 0.0803 | 0.3806 |
| 40% | Lena | 7.4454 | 0.0693 | 0.3260 |
| 40% | Peppers | 7.5722 | 0.0535 | 0.2973 |
Table 5. E, R, and IR values for the EDCT method.

| Embedding Rate | Cover Image | E | R | IR |
|---|---|---|---|---|
| 5% | Baboon | 7.3585 | 0.0802 | 0.3804 |
| 5% | Lena | 7.4456 | 0.0693 | 0.3261 |
| 5% | Peppers | 7.5716 | 0.0536 | 0.2976 |
| 10% | Baboon | 7.3585 | 0.0802 | 0.3805 |
| 10% | Lena | 7.4456 | 0.0693 | 0.3262 |
| 10% | Peppers | 7.5717 | 0.0535 | 0.2976 |
| 20% | Baboon | 7.3585 | 0.0802 | 0.3804 |
| 20% | Lena | 7.4457 | 0.0693 | 0.3263 |
| 20% | Peppers | 7.5725 | 0.0534 | 0.2973 |
| 30% | Baboon | 7.3584 | 0.0802 | 0.3802 |
| 30% | Lena | 7.4459 | 0.0693 | 0.3261 |
| 30% | Peppers | 7.5730 | 0.0534 | 0.2969 |
| 40% | Baboon | 7.3578 | 0.0803 | 0.3806 |
| 40% | Lena | 7.4462 | 0.0692 | 0.3257 |
| 40% | Peppers | 7.5734 | 0.0533 | 0.2973 |
Table 6. E, R, and IR values for the EDWT method.

| Embedding Rate | Cover Image | E | R | IR |
|---|---|---|---|---|
| 5% | Baboon | 7.3581 | 0.0802 | 0.3805 |
| 5% | Lena | 7.4455 | 0.0693 | 0.3261 |
| 5% | Peppers | 7.5715 | 0.0536 | 0.2975 |
| 10% | Baboon | 7.3580 | 0.0802 | 0.3806 |
| 10% | Lena | 7.4456 | 0.0693 | 0.3261 |
| 10% | Peppers | 7.5717 | 0.0535 | 0.2974 |
| 20% | Baboon | 7.3580 | 0.0802 | 0.3806 |
| 20% | Lena | 7.4456 | 0.0693 | 0.3261 |
| 20% | Peppers | 7.5718 | 0.0535 | 0.2975 |
| 30% | Baboon | 7.3580 | 0.0802 | 0.3805 |
| 30% | Lena | 7.4456 | 0.0693 | 0.3261 |
| 30% | Peppers | 7.5718 | 0.0535 | 0.2974 |
| 40% | Baboon | 7.3580 | 0.0803 | 0.3806 |
| 40% | Lena | 7.4457 | 0.0693 | 0.3261 |
| 40% | Peppers | 7.5721 | 0.0533 | 0.2973 |
Table 7. E, R, and IR values for the cover images.

| Cover Image | E | R | IR |
|---|---|---|---|
| Baboon | 7.3585 | 0.0802 | 0.3805 |
| Lena | 7.4455 | 0.0693 | 0.3261 |
| Peppers | 7.5715 | 0.0536 | 0.2976 |
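The E and R columns of Tables 4–7 match the Shannon entropy of the 8-bit gray-level histogram and its redundancy R = 1 − E/8 (e.g., for Baboon, 1 − 7.3585/8 ≈ 0.0802). A sketch under that assumption; the IR measure is defined in the paper and is not reproduced here:

```python
import numpy as np

def shannon_entropy(img):
    """Shannon entropy (bits/pixel) of an 8-bit grayscale image's histogram."""
    hist = np.bincount(img.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    p = p[p > 0]                      # empty bins contribute 0 * log(0) = 0
    return -np.sum(p * np.log2(p))

def redundancy(entropy, max_bits=8):
    """R = 1 - E / E_max for a source of max_bits bits per symbol."""
    return 1.0 - entropy / max_bits
```

Because the embedding rates barely change the histogram, E and R stay essentially constant across rows in Tables 4–6.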
Table 8. Confusion matrix.

| | H0: Stego Image | H1: Cover Image | |
|---|---|---|---|
| Test outcome positive | True Positive (TP) | False Positive (FP) | Positive predictive value (PPV), or Precision: Pr = TP / (TP + FP) |
| Test outcome negative | False Negative (FN) | True Negative (TN) | Negative predictive value (NPV): NPV = TN / (TN + FN) |
| | True positive rate (TPR), or Sensitivity: Se = TP / (TP + FN) | True negative rate (TNR), or Specificity: Sp = TN / (TN + FP) | Accuracy: Ac = (TP + TN) / (TP + FN + FP + TN) |
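The metrics defined in Table 8 can be computed directly from the four confusion-matrix counts; a minimal sketch:

```python
def classification_metrics(tp, fp, fn, tn):
    """Derive the Table 8 metrics from confusion-matrix counts (or fractions)."""
    return {
        'precision':   tp / (tp + fp),                  # Pr  = TP / (TP + FP)
        'npv':         tn / (tn + fn),                  # NPV = TN / (TN + FN)
        'sensitivity': tp / (tp + fn),                  # Se  = TP / (TP + FN)
        'specificity': tn / (tn + fp),                  # Sp  = TN / (TN + FP)
        'accuracy':    (tp + tn) / (tp + fn + fp + tn),
    }
```

Plugging in the 20% fractions of Table 9 (TP = 0.2745, FP = 0.2459, FN = 0.2255, TN = 0.2541) reproduces Pr = 0.5275, Se = 0.5490, and Ac = 0.5286.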
Table 9. FLD classification evaluation of EEALSBMR algorithm using Farid features.

| Rate | Predicted | H0: Stego Images | H1: Cover Images | |
|---|---|---|---|---|
| 5% | H0 | 0.2744 | 0.2714 | Pr = 0.5027 |
| 5% | H1 | 0.2256 | 0.2286 | NPV = 0.5033 |
| 5% | | Se = 0.5487 | Sp = 0.4572 | Ac = 0.5030, Kappa = 0.0060 |
| 10% | H0 | 0.2690 | 0.2645 | Pr = 0.5042 |
| 10% | H1 | 0.2310 | 0.2355 | NPV = 0.5048 |
| 10% | | Se = 0.5380 | Sp = 0.4710 | Ac = 0.5045, Kappa = 0.0090 |
| 20% | H0 | 0.2745 | 0.2459 | Pr = 0.5275 |
| 20% | H1 | 0.2255 | 0.2541 | NPV = 0.5298 |
| 20% | | Se = 0.5490 | Sp = 0.5082 | Ac = 0.5286, Kappa = 0.0572 |
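The Kappa values in Tables 9–26 are consistent with Cohen's kappa, κ = (Po − Pe) / (1 − Pe), computed on each confusion matrix normalized to sum to 1 (the test set is assumed to contain equal numbers of cover and stego images). A sketch:

```python
def cohen_kappa(p_matrix):
    """Cohen's kappa from a 2x2 confusion matrix normalized to sum to 1.

    p_matrix[i][j] = fraction of samples predicted as class i
    whose true class is j (0 = stego, 1 = cover).
    """
    po = p_matrix[0][0] + p_matrix[1][1]           # observed agreement
    row = [p_matrix[0][0] + p_matrix[0][1],        # predicted-class masses
           p_matrix[1][0] + p_matrix[1][1]]
    col = [p_matrix[0][0] + p_matrix[1][0],        # true-class masses
           p_matrix[0][1] + p_matrix[1][1]]
    pe = row[0] * col[0] + row[1] * col[1]         # chance agreement
    return (po - pe) / (1.0 - pe)
```

For the 5% matrix of Table 9, Po = 0.2744 + 0.2286 = 0.5030 and Pe = 0.5, giving κ = 0.0060: the classifier performs at chance level, as the abstract states for small payloads.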
Table 10. FLD classification evaluation of EEALSBMR algorithm using Shi features.

| Rate | Predicted | H0: Stego Images | H1: Cover Images | |
|---|---|---|---|---|
| 5% | H0 | 0.2612 | 0.2405 | Pr = 0.5207 |
| 5% | H1 | 0.2387 | 0.2595 | NPV = 0.5208 |
| 5% | | Se = 0.5225 | Sp = 0.5190 | Ac = 0.5208, Kappa = 0.0415 |
| 10% | H0 | 0.2504 | 0.2448 | Pr = 0.5057 |
| 10% | H1 | 0.2496 | 0.2552 | NPV = 0.5056 |
| 10% | | Se = 0.5008 | Sp = 0.5105 | Ac = 0.5056, Kappa = 0.0112 |
| 20% | H0 | 0.3191 | 0.1946 | Pr = 0.6212 |
| 20% | H1 | 0.1809 | 0.3054 | NPV = 0.6280 |
| 20% | | Se = 0.6382 | Sp = 0.6108 | Ac = 0.6245, Kappa = 0.2490 |
Table 11. FLD classification evaluation of EEALSBMR algorithm using Moulin features.

| Rate | Predicted | H0: Stego Images | H1: Cover Images | |
|---|---|---|---|---|
| 5% | H0 | 0.2489 | 0.2476 | Pr = 0.5013 |
| 5% | H1 | 0.2511 | 0.2524 | NPV = 0.5012 |
| 5% | | Se = 0.4977 | Sp = 0.5048 | Ac = 0.5012, Kappa = 0.0025 |
| 10% | H0 | 0.2559 | 0.2299 | Pr = 0.5268 |
| 10% | H1 | 0.2441 | 0.2701 | NPV = 0.5253 |
| 10% | | Se = 0.5117 | Sp = 0.5403 | Ac = 0.5260, Kappa = 0.0520 |
| 20% | H0 | 0.2990 | 0.1985 | Pr = 0.6010 |
| 20% | H1 | 0.2010 | 0.3015 | NPV = 0.6000 |
| 20% | | Se = 0.5980 | Sp = 0.6030 | Ac = 0.6005, Kappa = 0.2010 |
Table 12. SVM classification evaluation of EEALSBMR algorithm using Farid features.

| Rate | Predicted | H0: Stego Images | H1: Cover Images | |
|---|---|---|---|---|
| 5% | H0 | 0.3438 | 0.3431 | Pr = 0.5005 |
| 5% | H1 | 0.1562 | 0.1569 | NPV = 0.5011 |
| 5% | | Se = 0.6876 | Sp = 0.3137 | Ac = 0.5007, Kappa = 0.0013 |
| 10% | H0 | 0.4006 | 0.3977 | Pr = 0.5018 |
| 10% | H1 | 0.0994 | 0.1023 | NPV = 0.5071 |
| 10% | | Se = 0.8011 | Sp = 0.2046 | Ac = 0.5029, Kappa = 0.0057 |
| 20% | H0 | 0.3251 | 0.3199 | Pr = 0.5041 |
| 20% | H1 | 0.1749 | 0.1801 | NPV = 0.5074 |
| 20% | | Se = 0.6503 | Sp = 0.3602 | Ac = 0.5052, Kappa = 0.0105 |
Table 13. SVM classification evaluation of EEALSBMR algorithm using Shi features.

| Rate | Predicted | H0: Stego Images | H1: Cover Images | |
|---|---|---|---|---|
| 5% | H0 | 0.2220 | 0.2188 | Pr = 0.5037 |
| 5% | H1 | 0.2780 | 0.2812 | NPV = 0.5029 |
| 5% | | Se = 0.4440 | Sp = 0.5625 | Ac = 0.5032, Kappa = 0.0065 |
| 10% | H0 | 0.2189 | 0.2161 | Pr = 0.5032 |
| 10% | H1 | 0.2811 | 0.2839 | NPV = 0.5024 |
| 10% | | Se = 0.4377 | Sp = 0.5678 | Ac = 0.5028, Kappa = 0.0055 |
| 20% | H0 | 0.2282 | 0.1999 | Pr = 0.5330 |
| 20% | H1 | 0.2718 | 0.3001 | NPV = 0.5247 |
| 20% | | Se = 0.4564 | Sp = 0.6002 | Ac = 0.5283, Kappa = 0.0566 |
Table 14. SVM classification evaluation of EEALSBMR algorithm using Moulin features.

| Rate | Predicted | H0: Stego Images | H1: Cover Images | |
|---|---|---|---|---|
| 5% | H0 | 0.2275 | 0.2264 | Pr = 0.5013 |
| 5% | H1 | 0.2725 | 0.2736 | NPV = 0.5010 |
| 5% | | Se = 0.4550 | Sp = 0.5472 | Ac = 0.5011, Kappa = 0.0023 |
| 10% | H0 | 0.2412 | 0.2380 | Pr = 0.5034 |
| 10% | H1 | 0.2588 | 0.2620 | NPV = 0.5031 |
| 10% | | Se = 0.4825 | Sp = 0.5240 | Ac = 0.5032, Kappa = 0.0065 |
| 20% | H0 | 0.2922 | 0.2684 | Pr = 0.5212 |
| 20% | H1 | 0.2078 | 0.2316 | NPV = 0.5271 |
| 20% | | Se = 0.5844 | Sp = 0.4632 | Ac = 0.5238, Kappa = 0.0476 |
Table 15. FLD classification evaluation of EDCT algorithm using Farid features.

| Rate | Predicted | H0: Stego Images | H1: Cover Images | |
|---|---|---|---|---|
| 5% | H0 | 0.2524 | 0.2454 | Pr = 0.5070 |
| 5% | H1 | 0.2476 | 0.2546 | NPV = 0.5069 |
| 5% | | Se = 0.5048 | Sp = 0.5091 | Ac = 0.5070, Kappa = 0.0139 |
| 10% | H0 | 0.2617 | 0.2238 | Pr = 0.5390 |
| 10% | H1 | 0.2383 | 0.2762 | NPV = 0.5368 |
| 10% | | Se = 0.5234 | Sp = 0.5524 | Ac = 0.5379, Kappa = 0.0758 |
| 20% | H0 | 0.3104 | 0.1719 | Pr = 0.6436 |
| 20% | H1 | 0.1896 | 0.3281 | NPV = 0.6337 |
| 20% | | Se = 0.6208 | Sp = 0.6562 | Ac = 0.6385, Kappa = 0.2770 |
Table 16. FLD classification evaluation of EDCT algorithm using Shi features.

| Rate | Predicted | H0: Stego Images | H1: Cover Images | |
|---|---|---|---|---|
| 5% | H0 | 0.2548 | 0.2343 | Pr = 0.5209 |
| 5% | H1 | 0.2452 | 0.2657 | NPV = 0.5200 |
| 5% | | Se = 0.5095 | Sp = 0.5314 | Ac = 0.5205, Kappa = 0.0410 |
| 10% | H0 | 0.3242 | 0.1893 | Pr = 0.6313 |
| 10% | H1 | 0.1758 | 0.3107 | NPV = 0.6386 |
| 10% | | Se = 0.6484 | Sp = 0.6213 | Ac = 0.6349, Kappa = 0.2697 |
| 20% | H0 | 0.4409 | 0.0635 | Pr = 0.8741 |
| 20% | H1 | 0.0591 | 0.4365 | NPV = 0.8807 |
| 20% | | Se = 0.8817 | Sp = 0.8730 | Ac = 0.8773, Kappa = 0.7547 |
Table 17. FLD classification evaluation of EDCT algorithm using Moulin features.

| Rate | Predicted | H0: Stego Images | H1: Cover Images | |
|---|---|---|---|---|
| 5% | H0 | 0.2611 | 0.2499 | Pr = 0.5110 |
| 5% | H1 | 0.2389 | 0.2501 | NPV = 0.5115 |
| 5% | | Se = 0.5223 | Sp = 0.5002 | Ac = 0.5112, Kappa = 0.0225 |
| 10% | H0 | 0.2780 | 0.2136 | Pr = 0.5655 |
| 10% | H1 | 0.2220 | 0.2864 | NPV = 0.5633 |
| 10% | | Se = 0.5560 | Sp = 0.5728 | Ac = 0.5644, Kappa = 0.1288 |
| 20% | H0 | 0.3739 | 0.1243 | Pr = 0.7505 |
| 20% | H1 | 0.1261 | 0.3757 | NPV = 0.7487 |
| 20% | | Se = 0.7478 | Sp = 0.7514 | Ac = 0.7496, Kappa = 0.4992 |
Table 18. SVM classification evaluation of EDCT algorithm using Farid features.

| Rate | Predicted | H0: Stego Images | H1: Cover Images | |
|---|---|---|---|---|
| 5% | H0 | 0.0653 | 0.0591 | Pr = 0.5249 |
| 5% | H1 | 0.4347 | 0.4409 | NPV = 0.5035 |
| 5% | | Se = 0.1307 | Sp = 0.8817 | Ac = 0.5062, Kappa = 0.0124 |
| 10% | H0 | 0.0848 | 0.0644 | Pr = 0.5683 |
| 10% | H1 | 0.4152 | 0.4356 | NPV = 0.5120 |
| 10% | | Se = 0.1695 | Sp = 0.8712 | Ac = 0.5204, Kappa = 0.0408 |
| 20% | H0 | 0.1734 | 0.0843 | Pr = 0.6729 |
| 20% | H1 | 0.3266 | 0.4157 | NPV = 0.5600 |
| 20% | | Se = 0.3469 | Sp = 0.8314 | Ac = 0.5891, Kappa = 0.1783 |
Table 19. SVM classification evaluation of EDCT algorithm using Shi features.

| Rate | Predicted | H0: Stego Images | H1: Cover Images | |
|---|---|---|---|---|
| 5% | H0 | 0.3156 | 0.3138 | Pr = 0.5014 |
| 5% | H1 | 0.1844 | 0.1862 | NPV = 0.5024 |
| 5% | | Se = 0.6312 | Sp = 0.3724 | Ac = 0.5018, Kappa = 0.0036 |
| 10% | H0 | 0.3572 | 0.3266 | Pr = 0.5224 |
| 10% | H1 | 0.1428 | 0.1734 | NPV = 0.5485 |
| 10% | | Se = 0.7145 | Sp = 0.3469 | Ac = 0.5307, Kappa = 0.0613 |
| 20% | H0 | 0.4217 | 0.2220 | Pr = 0.6551 |
| 20% | H1 | 0.0783 | 0.2780 | NPV = 0.7803 |
| 20% | | Se = 0.8434 | Sp = 0.5560 | Ac = 0.6997, Kappa = 0.3994 |
Table 20. SVM classification evaluation of EDCT algorithm using Moulin features.

| Rate | Predicted | H0: Stego Images | H1: Cover Images | |
|---|---|---|---|---|
| 5% | H0 | 0.3053 | 0.3020 | Pr = 0.5027 |
| 5% | H1 | 0.1947 | 0.1980 | NPV = 0.5042 |
| 5% | | Se = 0.6107 | Sp = 0.3960 | Ac = 0.5033, Kappa = 0.0067 |
| 10% | H0 | 0.3021 | 0.2924 | Pr = 0.5082 |
| 10% | H1 | 0.1979 | 0.2076 | NPV = 0.5120 |
| 10% | | Se = 0.6042 | Sp = 0.4152 | Ac = 0.5097, Kappa = 0.0194 |
| 20% | H0 | 0.3264 | 0.2427 | Pr = 0.5736 |
| 20% | H1 | 0.1736 | 0.2573 | NPV = 0.5971 |
| 20% | | Se = 0.6528 | Sp = 0.5147 | Ac = 0.5837, Kappa = 0.1674 |
Table 21. FLD classification evaluation of EDWT algorithm using Farid features.

| Rate | Predicted | H0: Stego Images | H1: Cover Images | |
|---|---|---|---|---|
| 5% | H0 | 0.4786 | 0.0150 | Pr = 0.9695 |
| 5% | H1 | 0.0214 | 0.4850 | NPV = 0.9577 |
| 5% | | Se = 0.9571 | Sp = 0.9699 | Ac = 0.9635, Kappa = 0.9270 |
| 10% | H0 | 0.4941 | 0.0056 | Pr = 0.9888 |
| 10% | H1 | 0.0059 | 0.4944 | NPV = 0.9882 |
| 10% | | Se = 0.9882 | Sp = 0.9888 | Ac = 0.9885, Kappa = 0.9770 |
| 20% | H0 | 0.4993 | 0.0005 | Pr = 0.9990 |
| 20% | H1 | 0.0007 | 0.4995 | NPV = 0.9987 |
| 20% | | Se = 0.9987 | Sp = 0.9990 | Ac = 0.9989, Kappa = 0.9977 |
Table 22. FLD classification evaluation of EDWT algorithm using Shi features.

| Rate | Predicted | H0: Stego Images | H1: Cover Images | |
|---|---|---|---|---|
| 5% | H0 | 0.4048 | 0.0470 | Pr = 0.8961 |
| 5% | H1 | 0.0952 | 0.4530 | NPV = 0.8263 |
| 5% | | Se = 0.8095 | Sp = 0.9061 | Ac = 0.8578, Kappa = 0.7156 |
| 10% | H0 | 0.4536 | 0.0311 | Pr = 0.9358 |
| 10% | H1 | 0.0464 | 0.4689 | NPV = 0.9100 |
| 10% | | Se = 0.9072 | Sp = 0.9377 | Ac = 0.9225, Kappa = 0.8450 |
| 20% | H0 | 0.4753 | 0.0232 | Pr = 0.9534 |
| 20% | H1 | 0.0247 | 0.4768 | NPV = 0.9508 |
| 20% | | Se = 0.9507 | Sp = 0.9535 | Ac = 0.9521, Kappa = 0.9042 |
Table 23. FLD classification evaluation of EDWT algorithm using Moulin features.

| Rate | Predicted | H0: Stego Images | H1: Cover Images | |
|---|---|---|---|---|
| 5% | H0 | 0.3946 | 0.0650 | Pr = 0.8587 |
| 5% | H1 | 0.1054 | 0.4350 | NPV = 0.8049 |
| 5% | | Se = 0.7891 | Sp = 0.8701 | Ac = 0.8296, Kappa = 0.6592 |
| 10% | H0 | 0.4394 | 0.0387 | Pr = 0.9191 |
| 10% | H1 | 0.0606 | 0.4613 | NPV = 0.8839 |
| 10% | | Se = 0.8789 | Sp = 0.9227 | Ac = 0.9008, Kappa = 0.8015 |
| 20% | H0 | 0.4603 | 0.0321 | Pr = 0.9348 |
| 20% | H1 | 0.0397 | 0.4679 | NPV = 0.9218 |
| 20% | | Se = 0.9206 | Sp = 0.9358 | Ac = 0.9282, Kappa = 0.8564 |
Table 24. SVM classification evaluation of EDWT algorithm using Farid features.

| Rate | Predicted | H0: Stego Images | H1: Cover Images | |
|---|---|---|---|---|
| 5% | H0 | 0.4770 | 0.0230 | Pr = 0.9541 |
| 5% | H1 | 0.0230 | 0.4770 | NPV = 0.9541 |
| 5% | | Se = 0.9541 | Sp = 0.9541 | Ac = 0.9541, Kappa = 0.9082 |
| 10% | H0 | 0.4893 | 0.0058 | Pr = 0.9883 |
| 10% | H1 | 0.0107 | 0.4942 | NPV = 0.9789 |
| 10% | | Se = 0.9787 | Sp = 0.9884 | Ac = 0.9835, Kappa = 0.9670 |
| 20% | H0 | 0.4984 | 0.0084 | Pr = 0.9835 |
| 20% | H1 | 0.0016 | 0.4916 | NPV = 0.9967 |
| 20% | | Se = 0.9968 | Sp = 0.9832 | Ac = 0.9900, Kappa = 0.9800 |
Table 25. SVM classification evaluation of EDWT algorithm using Shi features.

| Rate | Predicted | H0: Stego Images | H1: Cover Images | |
|---|---|---|---|---|
| 5% | H0 | 0.3366 | 0.1658 | Pr = 0.6700 |
| 5% | H1 | 0.1634 | 0.3342 | NPV = 0.6716 |
| 5% | | Se = 0.6731 | Sp = 0.6684 | Ac = 0.6708, Kappa = 0.3415 |
| 10% | H0 | 0.4107 | 0.1371 | Pr = 0.7497 |
| 10% | H1 | 0.0893 | 0.3629 | NPV = 0.8024 |
| 10% | | Se = 0.8213 | Sp = 0.7257 | Ac = 0.7735, Kappa = 0.5470 |
| 20% | H0 | 0.4605 | 0.1175 | Pr = 0.7967 |
| 20% | H1 | 0.0395 | 0.3825 | NPV = 0.9063 |
| 20% | | Se = 0.9210 | Sp = 0.7650 | Ac = 0.8430, Kappa = 0.6859 |
Table 26. SVM classification evaluation of EDWT algorithm using Moulin features.

| Rate | Predicted | H0: Stego Images | H1: Cover Images | |
|---|---|---|---|---|
| 5% | H0 | 0.3707 | 0.1108 | Pr = 0.7699 |
| 5% | H1 | 0.1293 | 0.3892 | NPV = 0.7506 |
| 5% | | Se = 0.7413 | Sp = 0.7785 | Ac = 0.7599, Kappa = 0.5198 |
| 10% | H0 | 0.4332 | 0.0725 | Pr = 0.8567 |
| 10% | H1 | 0.0668 | 0.4275 | NPV = 0.8649 |
| 10% | | Se = 0.8665 | Sp = 0.8550 | Ac = 0.8608, Kappa = 0.7215 |
| 20% | H0 | 0.4672 | 0.0724 | Pr = 0.8659 |
| 20% | H1 | 0.0328 | 0.4276 | NPV = 0.9288 |
| 20% | | Se = 0.9345 | Sp = 0.8552 | Ac = 0.8949, Kappa = 0.7897 |

Share and Cite

MDPI and ACS Style

Battikh, D.; El Assad, S.; Hoang, T.M.; Bakhache, B.; Deforges, O.; Khalil, M. Comparative Study of Three Steganographic Methods Using a Chaotic System and Their Universal Steganalysis Based on Three Feature Vectors. Entropy 2019, 21, 748. https://doi.org/10.3390/e21080748
