1. Introduction
The protection of digital images is one of the pressing security issues that must be addressed today, and digital image watermarking technology provides an effective solution. Digital image watermarking embeds watermark information into a multimedia carrier without degrading its perceived quality while still resisting common attacks. The technology must satisfy requirements on robustness, imperceptibility and watermark capacity [1]. In past decades, digital image watermarking has been widely studied for grayscale images, whereas color images have received much less attention even though they constitute most of the displayed multimedia content. Color information is also regarded as a significant feature in many fields of image processing. If handled correctly, color information can lead to more effective watermarking schemes, especially for achieving a good trade-off between imperceptibility and robustness [2]. Exploiting color information in digital image watermarking has therefore become a hot research topic.
At present, most color image watermarking algorithms extract the luminance information of a color image or process only a single color channel, for example: (1) the color image is converted from the RGB color space to the YCbCr (or YUV) color space, and the luminance component Y is then selected to embed the watermark; (2) exploiting the insensitivity of the human visual system (HVS) to changes in the blue component, the watermark is embedded by modifying the blue component of the color image [3]; (3) the three color channels are processed separately, and watermark embedding is carried out on each of the three color components. Therefore, how to make better use of the correlation between the three channels of a color image is an issue that cannot be ignored.
In order to realize a better trade-off between robustness and invisibility, the watermark strength can be determined by the just noticeable difference (JND), which is the maximum distortion that cannot be perceived by the HVS. The most well-known JND model was proposed by Watson et al. [4]; it consists of a sensitivity function and two masking components based on luminance masking and contrast masking. Lihong et al. [5] proposed robust algorithms that incorporate Watson's model to compute the quantization steps, and the perceptual model was shown to bring a significant improvement in robustness against common attacks. In the past few years, the JND model has been a focus of research because of its excellent performance in the field of digital image analysis, with examples including Kim's model [6], Zhang's model [7] and Wan's model [8]. Based on these developments in JND modeling, several JND model-based watermarking algorithms have been proposed [9,10,11]. In addition, visual saliency (VS) has also been considered to improve JND metrics. However, these existing JND models process each color channel separately or treat the vector representation of the three color channels with a traditional monochromatic model, and therefore cannot make full use of the high correlation among the RGB channels. To account for this, a quaternion perceptual JND model is needed.
Quaternions, which have been increasingly used in color image processing over the past two decades, offer a way to achieve this goal. They represent an image by encoding its three color channels in the imaginary parts of quaternion numbers. Compared with traditional color image processing techniques, the main advantage of such a representation is that a color image can be processed holistically as a vector field, exploiting the correlation between the three color components; the same holds for color image watermarking [12].
Recently, many color image watermarking algorithms based on the Quaternion Discrete Fourier Transform (QDFT) have been proposed. Bas et al. [13] first proposed a non-blind color image watermarking algorithm in the QDFT domain using quantization index modulation, but the algorithm has a low peak signal-to-noise ratio (PSNR) and a poor ability to resist attacks. Ma et al. [14] proposed a watermarking scheme for color images based on local quaternion Fourier spectral analysis (LQFSA); they introduced an invariant feature transform (IFT) and a geometric correction scheme to enhance robustness against geometric attacks. Jiang et al. [15] pointed out that Bas et al. [13] did not consider that the real part of the quaternion matrices obtained by the inverse QDFT should be equal to zero, a problem that can lead to a loss of watermark energy; they therefore selected the real part of the QDFT coefficient matrices to insert the watermark and modified the real-part coefficients symmetrically. Based on this constraint of symmetric distortion, Chen et al. [16] provided a full 4-D quaternion discrete Fourier transform watermarking framework and illustrated the overall gain in imperceptibility, capacity and robustness that can be achieved compared with other quaternion Fourier transform based algorithms.
Furthermore, other quaternion-based algorithms have been proposed, such as those using the Quaternion Singular Value Decomposition (QSVD). In [17], a blind color image watermarking algorithm based on QSVD is proposed, in which QSVD and rotation are employed to embed and extract the watermark. Liu et al. [18] first performed QSVD to obtain the U matrix and then inserted the watermark into optimally selected coefficients of the quaternion elements in the first column of the U matrix to enhance invisibility. Recently, because the Discrete Cosine Transform (DCT) is compatible with the JPEG image compression standard, watermarking in the QDCT domain has received considerable attention [19]. It is therefore meaningful to study how to introduce the Quaternion Discrete Cosine Transform (QDCT) into watermarking algorithms.
In this paper, a robust quaternion JND model for color image watermarking (QuatJND) is proposed, together with a novel and efficient robust quantization watermarking framework for color images that exploits a QuatJND model built in the quaternion DCT domain. In our method, the watermark is embedded into the QDCT domain by spread transform dither modulation (STDM). First, the colorfulness computed in the QDCT domain is introduced as a new impact factor of the QuatJND model. Furthermore, the QuatJND model is incorporated to derive the optimum quantization step for embedding.
In summary, our main contributions are listed as follows:
- (1)
We propose a perceptual unit pure quaternion for the QDCT watermarking scheme, which gives the proposed scheme better performance.
- (2)
A quaternion perceptual JND model (QuatJND) is calculated in the QDCT domain.
- (3)
The color information and the pattern-guided contrast masking effect in the quaternion domain are considered in the QuatJND model.
- (4)
A logarithmic STDM watermarking scheme incorporating the QuatJND model is proposed. The proposed watermarking scheme achieves better performance in terms of Peak Signal to Noise Ratio (PSNR) and Quaternion Structural Similarity Index (QSSIM).
The rest of this paper is organized as follows. Section 2 introduces the basic definitions, including quaternions and the QDCT of color images. Section 3 presents the QuatJND model used in the scheme together with the colorfulness masking effect in the quaternion DCT domain, and then describes the proposed watermarking scheme based on QDCT combined with the QuatJND model. Experimental results and comparisons are provided in Section 4 to demonstrate the superior performance of the proposed scheme. Finally, conclusions are drawn in Section 5.
2. Quaternion DCT Definition
Quaternions were introduced by the mathematician Hamilton in 1843 [20]. For easy reading, the main abbreviations and symbols used in this paper are listed in Table 1.
A quaternion is an extension of the real and complex numbers; it has one real part and three imaginary parts, given by
$$q = a + b\,i + c\,j + d\,k,$$
where $i$, $j$ and $k$ are three imaginary units which obey the following rules:
$$i^2 = j^2 = k^2 = ijk = -1, \quad ij = -ji = k, \quad jk = -kj = i, \quad ki = -ik = j.$$
If the real part $a = 0$, $q$ is called a pure quaternion.
Pei et al. [21] first applied quaternions to color images and proposed the quaternion model of a color image, which regards the three color components R, G and B as the three imaginary parts of a quaternion. Let $f(x,y)$ be an RGB image with the quaternion representation (QR); each pixel can then be represented as a pure quaternion as
$$f(x,y) = R(x,y)\,i + G(x,y)\,j + B(x,y)\,k,$$
where $R(x,y)$, $G(x,y)$ and $B(x,y)$ are the pixel values of the R, G and B color components at position $(x,y)$, respectively.
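To make the representation concrete, the following minimal Python sketch (an illustration, not code from the paper) stores a color image as an array of pure quaternions: the real part is set to zero and the R, G, B values are placed on the three imaginary components.

```python
import numpy as np

def to_quaternion_image(rgb):
    """Represent an H x W x 3 RGB image as an H x W x 4 array of quaternion
    components (real, i, j, k); the real part is zero, so every pixel is a
    pure quaternion as in the QR model."""
    h, w, _ = rgb.shape
    q = np.zeros((h, w, 4), dtype=np.float64)
    q[..., 1:] = rgb          # i <- R, j <- G, k <- B
    return q

# toy usage with a random 8 x 8 color block
block = np.random.randint(0, 256, (8, 8, 3)).astype(np.float64)
q_block = to_quaternion_image(block)
assert np.all(q_block[..., 0] == 0)   # pure quaternion: zero real part
```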
Because of the non-commutative multiplication rule for quaternions, the QDCT has two forms, a left-handed form and a right-handed form [19]. Without loss of generality, only the left-handed form is considered in this paper, which satisfies
$$W(p,s) = \alpha_p\,\alpha_s \sum_{x=0}^{M-1}\sum_{y=0}^{N-1} \mu\, f(x,y)\, \cos\frac{(2x+1)p\pi}{2M}\, \cos\frac{(2y+1)s\pi}{2N}.$$
Corresponding to the QDCT, the inverse Quaternion Discrete Cosine Transform (IQDCT) of $W(p,s)$ is defined as
$$\tilde f(x,y) = \sum_{p=0}^{M-1}\sum_{s=0}^{N-1} \alpha_p\,\alpha_s\, \bar\mu\, W(p,s)\, \cos\frac{(2x+1)p\pi}{2M}\, \cos\frac{(2y+1)s\pi}{2N},$$
where $\alpha_p = \sqrt{1/M}$ for $p = 0$ and $\alpha_p = \sqrt{2/M}$ otherwise, $\alpha_s$ is defined analogously with $N$, $\bar\mu = -\mu$ is the conjugate of $\mu$, and $\mu$ is a unit pure quaternion which meets the constraint that $\mu^2 = -1$.
In order to reduce the computational complexity and to make full use of existing real-valued DCT code, this subsection describes the relationship between the QDCT and the DCT. This relationship provides not only an efficient way to compute the QDCT but also a means to analyse the constraints for watermark embedding.

Considering the general unit pure quaternion $\mu = a\,i + b\,j + c\,k$ and substituting Equation (4) into Equation (5), we have
$$W = W_r + W_i\,i + W_j\,j + W_k\,k,$$
with
$$W_r = -\big(a\,\mathrm{DCT}(R) + b\,\mathrm{DCT}(G) + c\,\mathrm{DCT}(B)\big),$$
$$W_i = b\,\mathrm{DCT}(B) - c\,\mathrm{DCT}(G), \quad W_j = c\,\mathrm{DCT}(R) - a\,\mathrm{DCT}(B), \quad W_k = a\,\mathrm{DCT}(G) - b\,\mathrm{DCT}(R),$$
where $\mathrm{DCT}(R)$, $\mathrm{DCT}(G)$ and $\mathrm{DCT}(B)$ are, respectively, the conventional DCT matrices of the red, green and blue channels, and $\mathrm{DCT}(\cdot)$ denotes the conventional discrete cosine transform.
Similarly, applying the IQDCT, we obtain the reconstructed image
$$\tilde f = \tilde f_r + \tilde f_i\,i + \tilde f_j\,j + \tilde f_k\,k,$$
where
$$\tilde f_r = \mathrm{IDCT}\big(a\,W_i + b\,W_j + c\,W_k\big),\qquad \tilde f_i = \mathrm{IDCT}\big(c\,W_j - b\,W_k - a\,W_r\big),$$
$$\tilde f_j = \mathrm{IDCT}\big(a\,W_k - c\,W_i - b\,W_r\big),\qquad \tilde f_k = \mathrm{IDCT}\big(b\,W_i - a\,W_j - c\,W_r\big).$$
Here, $\mathrm{IDCT}(\cdot)$ is the conventional inverse discrete cosine transform.

For a color image signal, it can be drawn from Equation (12) that the IQDCT result must remain a pure quaternion matrix after some QDCT coefficients have been modified to insert the watermark. Otherwise, taking only the three imaginary parts of this quaternion matrix to obtain the watermarked image would discard non-null real-part data and result in a loss of watermark energy. Based on the above relationships, Equations (10) and (12), and depending on the unit pure quaternion considered, one can identify the constraint to respect when modifying QDCT coefficients so as to avoid watermark energy loss. After the watermark embedding process, $\tilde f$ should be a pure quaternion, or more clearly
$$\tilde f_r = \mathbf{0},$$
where $\mathbf{0}$ is a zero matrix.

In order to respect this constraint (Equation (14)), as can be seen from the expression of the real part of the IQDCT result in Equation (15), $\tilde f_r$ is not related to the component $W_r$. Therefore, if we modify $W_r$ to insert the watermark, the precondition $\tilde f_r = \mathbf{0}$ is satisfied.
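The relationship above suggests that the QDCT of a pure-quaternion block can be computed with three real DCTs. The following Python sketch (my own illustration, assuming the left-handed form with a constant unit pure quaternion $\mu = a\,i + b\,j + c\,k$, the conjugate $\bar\mu$ in the inverse as reconstructed above, and SciPy's orthonormal 2-D DCT) computes the four QDCT component matrices of an RGB block in this way; the function names are mine, not from the paper.

```python
import numpy as np
from scipy.fft import dctn, idctn

def qdct_left(block_rgb, mu):
    """Left-handed QDCT of an N x N x 3 pure-quaternion block via three real
    2-D DCTs.  mu = (a, b, c) are the i, j, k weights of the unit pure
    quaternion.  Returns (W_r, W_i, W_j, W_k)."""
    a, b, c = mu
    R, G, B = (dctn(block_rgb[..., ch], norm='ortho') for ch in range(3))
    W_r = -(a * R + b * G + c * B)          # real part (used for embedding)
    W_i = b * B - c * G
    W_j = c * R - a * B
    W_k = a * G - b * R
    return W_r, W_i, W_j, W_k

def iqdct_left(W, mu):
    """Inverse transform (uses the conjugate of mu); returns the
    reconstructed (real, R, G, B) planes."""
    a, b, c = mu
    W_r, W_i, W_j, W_k = W
    real = idctn(a * W_i + b * W_j + c * W_k, norm='ortho')
    R = idctn(c * W_j - b * W_k - a * W_r, norm='ortho')
    G = idctn(a * W_k - c * W_i - b * W_r, norm='ortho')
    B = idctn(b * W_i - a * W_j - c * W_r, norm='ortho')
    return real, R, G, B

# round-trip check on a random 8 x 8 block with mu = (i + j + k) / sqrt(3)
mu = np.ones(3) / np.sqrt(3.0)
block = np.random.rand(8, 8, 3)
real, R, G, B = iqdct_left(qdct_left(block, mu), mu)
assert np.allclose(real, 0.0)                         # pure quaternion preserved
assert np.allclose(np.stack([R, G, B], -1), block)    # perfect reconstruction
```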
3. Proposed Method
3.1. Perceptual Unit Pure Quaternion
To avoid watermark energy loss, the real part $W_r$ obtained after the QDCT is selected for embedding the watermark. As can be seen from Equation (16), for different unit pure quaternions the coefficients of the $W_r$ part are different, and the corresponding schemes for modifying coefficients to embed the watermark also differ. Hence, the choice of the unit pure quaternion and its weights affects the performance of the watermarking algorithm.

Since $\mathrm{DCT}(R)$, $\mathrm{DCT}(G)$ and $\mathrm{DCT}(B)$ are the conventional DCT matrices of the red, green and blue channels, the $W_r$ part can be regarded as a weighted aggregate of the R, G and B color components. Although embedding the watermark information into the $W_r$ part changes the distribution of its values, for the whole image in the spatial domain the differences spread over the three R, G and B color components.

During the QDCT transformation, the most commonly used unit pure quaternion is $\mu = (i + j + k)/\sqrt{3}$, whose three weights are equal. This unit pure quaternion causes the same amount of change in the three color components R, G and B. However, because the color sensitivity of the human eye to R, G and B differs, this equal change degrades the invisibility of the watermarking method. In order to improve the invisibility of the watermarking scheme, we propose the perceptual unit pure quaternion.
In the process of exploring the weights $a$, $b$ and $c$, we note that Zhu et al. [22] pointed out that the RGB input signal can be converted into the YCbCr signal to remove the redundancies across the three color channels and obtain good experimental results. The luminance component Y can be represented using the R, G and B color components with weights of 0.299, 0.587 and 0.114, respectively. Moreover, some color image watermarking algorithms working in the YCbCr (or YUV) space [23,24] modify the luminance component to insert the watermark, and their experimental results show good invisibility.

Therefore, to obtain good imperceptibility of the watermarked image, the weights of the unit pure quaternion are chosen according to the relative relationship between the color channels R, G and B, i.e., 0.299, 0.587 and 0.114, respectively, while the unit pure quaternion must still meet the constraint that $a^2 + b^2 + c^2 = 1$. The perceptual unit pure quaternion is then
$$\mu_p = a\,i + b\,j + c\,k, \qquad a : b : c = 0.299 : 0.587 : 0.114,$$
and substituting this relative relationship into Equation (18) under the unit-norm constraint yields the values of $a$, $b$ and $c$. The experimental results provided in Section 4.3.1 show that the perceptual unit pure quaternion $\mu_p$ achieves the better performance.
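As a small illustration (my own sketch, not the paper's code), normalizing the luminance weights to satisfy $a^2 + b^2 + c^2 = 1$ gives the components of the perceptual unit pure quaternion; the numeric values shown are simply the result of this assumed normalization.

```python
import numpy as np

# luminance weights of R, G, B used for the Y component
w = np.array([0.299, 0.587, 0.114])

# scale so that a^2 + b^2 + c^2 = 1 while keeping the ratio 0.299:0.587:0.114
a, b, c = w / np.linalg.norm(w)
print(a, b, c)            # approximately 0.447, 0.878, 0.171
assert np.isclose(a**2 + b**2 + c**2, 1.0)
```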
3.2. Proposed Quaternionic JND Model
For an image, a high-precision perceptual JND profile usually accounts for several perceptual effects, including the spatial contrast sensitivity function (CSF), the luminance adaptation (LA) effect and the contrast masking (CM) effect. For color images, the color sensitivity also needs to be considered in a perceptual JND profile. The JND in the QDCT domain is typically expressed as the product of a base threshold and several modulation factors. In this paper, the real part $W_r$ obtained after the QDCT is selected for embedding the watermark. To obtain the JND threshold of the modified coefficients in $W_r$, a novel contrast masking effect that considers colorfulness is introduced in Equation (19), in which $t$ is the index of a QDCT block and $(p,s)$ is the position of the coefficient within the QDCT block. The exponent that accounts for the summation effect of individual JND thresholds over a spatial neighborhood of the visual system is set to 0.14, and $N$ is the dimension of the QDCT block (8 in this case). $T_{Base}$ is the base CSF threshold, $F_{LA}$ is the LA factor and $F_{CM}$ is the CM factor [8,9,25,26], while $F_{C}$ is an important factor reflecting the colorfulness.
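Since Equation (19) combines the factors listed above, the following sketch shows how a per-coefficient QuatJND threshold could be assembled once each factor has been estimated for a block. The plain multiplicative combination and the symbols $T_{Base}$, $F_{LA}$, $F_{CM}$ and $F_{C}$ are assumptions introduced here for illustration, not the paper's exact formula.

```python
import numpy as np

def quatjnd_block(t_base, f_la, f_cm, f_c):
    """Combine the per-coefficient base CSF threshold (8 x 8) with the block's
    luminance-adaptation, contrast-masking and colorfulness factors.
    A simple multiplicative model is assumed here."""
    return t_base * f_la * f_cm * f_c

# toy example: an 8 x 8 base threshold modulated by scalar block factors
t_base = np.full((8, 8), 12.0)       # placeholder base thresholds
jnd = quatjnd_block(t_base, f_la=1.2, f_cm=1.5, f_c=0.9)
```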
3.2.1. Spatial CSF in Quaternion Domain
$T_{Base}(p,s)$ is the quaternion-domain JND threshold of the component $W_r$ generated by the spatial CSF on a uniform background image [6]; it is obtained by taking the oblique effect in the QDCT domain into account, as given in Equation (20), where the oblique-effect term is formulated from the QDCT coefficients. Here, $\omega_{p,s}$ denotes the spatial frequency in cycles per degree (cpd) of the $(p,s)$-th QDCT coefficient, given in Equation (22) as a function of the horizontal/vertical length of a pixel in degrees of visual angle, which in turn depends on the ratio of the viewing distance to the screen height and on $H$, the number of pixels in the screen height. $\varphi_{p,s}$ stands for the direction angle of the corresponding QDCT component.
3.2.2. Luminance Adaptation in Quaternion Domain
The luminance adaptation factor $F_{LA}$, which employs both the spatial frequency $\omega_{p,s}$ in cycles per degree (cpd) and the average intensity value of the block, is formulated in Equation (25), where the model parameters are set empirically, $\omega_{p,s}$ is expressed as in Equation (22), and the average intensity value of the $t$-th block is computed from the QDCT coefficient at position $(0,0)$ of that block, called the Q-DC coefficient (Quaternion DC coefficient). $E_t$ denotes the maximum directional energy of the image block, given in Equation (28), and $C$ is a fixed constant chosen to ensure the invariance and stability of $F_{LA}$. Therefore, the proposed formula can resist the fixed-gain attack, as it varies linearly with amplitude changes.

In Equation (28), the Q-AC coefficients (Quaternion AC coefficients) are low-frequency QDCT coefficients of the $t$-th block. Similar to the DCT [27], the Q-AC coefficients obtained after the QDCT reflect the directional energy of the image block. In our work, three Q-AC coefficients are selected to reflect the directional energy of the block in the horizontal, vertical and diagonal directions, respectively, and $E_t$ is the maximum of their energies.
3.2.3. Pattern Guided Contrast Masking in Quaternion Domain
$F_{CM}$ is modeled to boost the JND threshold according to the local spatial texture complexity (e.g., smoothness, edge or texture), as given in Equation (29), in which the boosting term is modeled in a gamma pdf form and driven by the contrast masking effect $CM_t$ of the $t$-th QDCT block. In this paper, both pattern complexity and luminance contrast are considered to construct the contrast masking effect, and $CM_t$ is defined in terms of the pattern complexity $P_t$ and the luminance contrast $L_t$ of the $t$-th QDCT block.

The pattern complexity measurement proposed by Wan et al. [9] is the ratio of the maximum directional energy to the DC coefficient of each block, which can measure energy in different directions while keeping the measurement insensitive to the changes caused by the watermarking process. However, this method ignores the relationship between the directional energy of a DCT block and that of its neighboring DCT blocks. Therefore, we propose a new pattern complexity representation that combines the directional energy within a QDCT block with the directional energy of its neighboring QDCT blocks, which represents the complexity relationship of image patterns more effectively.

Firstly, a neighborhood is chosen for each QDCT block. If the directional location of the maximum directional energy of a neighboring block is the same as that of this QDCT block, the neighboring block is marked. The ratio of the number of marked neighboring blocks to the total number of neighboring blocks of this QDCT block is taken as the pattern complexity.
Therefore, the pattern complexity $P_t$ of the image block is represented by the correlation between the block and its neighbors defined in Equation (34), where $n$ is the number of neighboring blocks of the $t$-th QDCT block, $E_t$ is the maximum directional energy of the $t$-th QDCT block, and the maximum directional energies of its neighboring blocks are compared against it.
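To illustrate the neighbor-marking idea described above, here is a small Python sketch (my own illustration; the 3×3 neighborhood, the candidate Q-AC positions (0, 1), (1, 0), (1, 1) and the function names are assumptions, not taken from the paper). For each block it records which of the three candidate directions carries the maximum energy, and the pattern complexity of a block is then the fraction of its neighbors that share the same dominant direction.

```python
import numpy as np

# candidate Q-AC positions assumed to represent horizontal, vertical and
# diagonal directional energy (illustrative choice, not from the paper)
DIR_POS = [(0, 1), (1, 0), (1, 1)]

def dominant_direction(block_wr):
    """Index of the direction whose Q-AC coefficient has maximum energy."""
    energies = [abs(block_wr[p]) for p in DIR_POS]
    return int(np.argmax(energies))

def pattern_complexity(wr_blocks):
    """wr_blocks: grid of 8 x 8 W_r coefficient blocks, shape (Bh, Bw, 8, 8).
    Returns P, shape (Bh, Bw): for each block, the fraction of its 3 x 3
    neighbors whose dominant direction matches the block's own."""
    bh, bw = wr_blocks.shape[:2]
    dom = np.array([[dominant_direction(wr_blocks[r, c]) for c in range(bw)]
                    for r in range(bh)])
    P = np.zeros((bh, bw))
    for r in range(bh):
        for c in range(bw):
            neigh = [dom[rr, cc]
                     for rr in range(max(0, r - 1), min(bh, r + 2))
                     for cc in range(max(0, c - 1), min(bw, c + 2))
                     if (rr, cc) != (r, c)]
            P[r, c] = np.mean([d == dom[r, c] for d in neigh])
    return P

# toy usage: a 4 x 4 grid of random 8 x 8 coefficient blocks
blocks = np.random.randn(4, 4, 8, 8)
print(pattern_complexity(blocks))
```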
Since the pattern complexity of irregular regions in the image is stronger, the diminishing effect of $P_t$ follows a non-linear transducer. The luminance contrast $L_t$ is obtained from the three selected Q-AC coefficients after a normalization operation, and, following a logarithmic form, its increasing effect is modeled accordingly.
Figure 1 shows the resulting masking measure for three types of image blocks: smooth, edge and texture. The yellow image block is smooth, and its value is less than 0.15. The blue image block is an edge block, whose value is greater than 0.15 and less than 0.2. An image block whose value is greater than 0.2 is a texture block, such as the green image block in Figure 1.
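For illustration, the three-way block classification described above can be written as a simple thresholding step (a sketch under the assumption that the thresholds 0.15 and 0.2 are applied directly to the per-block masking value; the function name is mine).

```python
def classify_block(cm_value):
    """Label a block from its masking value using the 0.15 / 0.2 thresholds."""
    if cm_value < 0.15:
        return "smooth"
    elif cm_value < 0.2:
        return "edge"
    else:
        return "texture"

print([classify_block(v) for v in (0.05, 0.17, 0.31)])  # smooth, edge, texture
```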
3.2.4. Colorfulness Masking in Quaternion Domain
In this part, we propose a new masking function that considers the colorfulness masking effect derived from the $W_i$, $W_j$ and $W_k$ parts. For color images, when human eyes observe different colors, the interaction between them interferes with the judgment of color. Colorfulness is the attribute of chrominance information that humans perceive. Hasler and Susstrunk [28] have shown that colorfulness can be represented effectively with combinations of image statistics (the variance and mean values), and Panetta et al. [29] pointed out that, like the human visual system (HVS), human eyes capture color information in opponent color spaces such as the red-green (R-G) and yellow-blue (Y-B) spaces. In a word, colorfulness can be formulated using image statistics in opponent color spaces.

In this paper, we select the $W_i$, $W_j$ and $W_k$ parts after the QDCT to calculate the colorfulness of an image block. In the QDCT domain, these parts are first transformed into the opponent red-green and yellow-blue color space. Then, for an 8 × 8 QDCT block, the image colorfulness is defined in Equation (39) in terms of the variance and mean values along the two opponent color axes, which are expressed directly from the coefficients of the QDCT block.
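The following Python sketch illustrates the Hasler–Susstrunk style colorfulness measure on which this factor is based. It works on the spatial R, G, B values of a block rather than on the paper's QDCT-domain $W_i$, $W_j$, $W_k$ parts, and the combination weight 0.3 is the value from Hasler and Susstrunk [28], so treat it as a reference illustration rather than the paper's exact Equation (39).

```python
import numpy as np

def block_colorfulness(rgb_block):
    """Hasler-Susstrunk colorfulness of one block: statistics of the
    opponent red-green and yellow-blue components."""
    R = rgb_block[..., 0].astype(np.float64)
    G = rgb_block[..., 1].astype(np.float64)
    B = rgb_block[..., 2].astype(np.float64)
    rg = R - G                    # red-green opponent axis
    yb = 0.5 * (R + G) - B        # yellow-blue opponent axis
    sigma = np.hypot(rg.std(), yb.std())
    mu = np.hypot(rg.mean(), yb.mean())
    return sigma + 0.3 * mu

block = np.random.randint(0, 256, (8, 8, 3))
print(block_colorfulness(block))
```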
Figure 2 shows a comparison of the colorfulness metric. Figure 2a,b are from the TID2008 database [30], and their colorfulness values are 0.9462 and 0.4563, respectively. The results indicate that the colorfulness metric correlates well with human color perception. Inspired by this, a factor obtained from the colorfulness is used to make the JND a better match to human perception. The colorfulness masking factor $F_{C}$ is defined in Equation (42) after a normalization operation.
3.3. QuatJND-Based Watermarking
In this section, the proposed watermarking scheme based on the QuatJND model is briefly introduced.
3.3.1. Adaptive Quantization Step
In this paper, some of the QDCT coefficients are collected into the host vector $X$, and the maximum imperceptible change along the random projection direction $v$ is given by the perceptual slack. To ensure the independence between the quantization compensation and the original signal during the watermarking process, the host vector is first transformed into the logarithmic domain, where $v$ is the random projection vector, $E_t$ is the maximum directional energy of the image block in Equation (28), introduced to resist linear amplitude variation, and $z$ is used as a secret key.

In this arrangement, the transformed vector $Y$ is quantized according to the watermark bit, where the dither signal corresponds to the message bit $w$ and the proposed JND model is used as the slack $S$ to calculate the adaptive quantization step $\Delta$. Thus, when the image is scaled by a fixed gain, the coefficients to be watermarked and the estimated quantization step remain stable.
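The following Python sketch shows the generic STDM embedding step that this arrangement builds on: project the host vector onto the secret direction $v$, quantize the projection with a dithered uniform quantizer, and spread the change back over the host vector. It is a simplified illustration under my own assumptions (no logarithmic transform, dithers at $0$ and $\Delta/2$, and the step passed in directly); the paper's logarithmic transform and exact step formula are not reproduced here.

```python
import numpy as np

def stdm_embed(x, v, delta, bit):
    """Spread-transform dither modulation: project the host vector x onto the
    secret direction v, quantize the projection with step delta and a dither
    that depends on the bit, then spread the correction back along v."""
    v = v / np.linalg.norm(v)
    y = float(x @ v)                          # projection onto the secret direction
    d = 0.0 if bit == 0 else delta / 2.0      # per-bit dither (illustrative choice)
    y_w = delta * np.round((y - d) / delta) + d
    return x + (y_w - y) * v

# toy usage: a 6-coefficient host vector; in the paper delta is derived from QuatJND
rng = np.random.default_rng(42)
x = rng.normal(size=6) * 10
v = rng.normal(size=6)
x_w = stdm_embed(x, v, delta=4.0, bit=1)
```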
3.3.2. Watermark Embedding Procedure
The proposed watermarking scheme includes two parts: the embedding procedure and the extraction procedure. Figure 3 illustrates the embedding steps of the watermarking scheme. Taking the Lena image as an example, the watermark embedding procedure is as follows:
Step 1: The original image is first divided into non-overlapping blocks of size 8 × 8, and each block is converted to the quaternion representation by Equation (4).
Step 2: The QDCT with the perceptual unit pure quaternion $\mu_p$ is applied to each block, and the QDCT spectrum coefficients are obtained by Equation (11).
Step 3: The QuatJND factors, including the spatial CSF effect, luminance adaptation and contrast masking in the quaternion domain, are estimated by Equations (20), (25) and (29), respectively.
Step 4: The colorfulness feature is extracted from $W_i$, $W_j$ and $W_k$ by Equation (39). The colorfulness masking of each 8 × 8 block for the QuatJND profile is then quantized and calculated by Equation (42).
Step 5: The final QuatJND value of each block, combined with the colorfulness masking, is determined by Equation (19). The proposed QuatJND value serves as the perceptual redundancy vector $S$.
Step 6: The coefficients from the fourth to the tenth, except the fifth, after the zigzag scan are selected to form a host vector $X$. The host vector $X$ and the perceptual redundancy vector $S$ are used to obtain the transformed vector $Y$ and the adaptive quantization step $\Delta$.
Step 7: One bit of the watermark message $w$, after the Arnold transformation, is embedded into the transformed vector $Y$.
Step 8: The modulated vector is transformed back to obtain the watermarked QDCT coefficients.
Step 9: Finally, the inverse QDCT of each block is performed, and the watermarked image is obtained.
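As an illustration of Step 6, the snippet below builds the host vector from an 8 × 8 coefficient block using the standard JPEG zigzag order, keeping the fourth through tenth coefficients and dropping the fifth (the zigzag table and the 1-based counting used here are my assumptions about the intended convention).

```python
import numpy as np

def zigzag_indices(n=8):
    """Standard JPEG zigzag traversal order of an n x n block as (row, col) pairs."""
    return sorted(((r, c) for r in range(n) for c in range(n)),
                  key=lambda rc: (rc[0] + rc[1],
                                  rc[0] if (rc[0] + rc[1]) % 2 else rc[1]))

def host_vector(wr_block):
    """Coefficients 4..10 (1-based) of the zigzag scan, excluding the 5th."""
    order = zigzag_indices(8)
    keep = [order[k] for k in range(3, 10) if k != 4]   # 0-based positions
    return np.array([wr_block[r, c] for r, c in keep])

block = np.arange(64, dtype=float).reshape(8, 8)
print(host_vector(block))   # six selected coefficients
```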
3.3.3. Watermark Extracting Procedure
The extraction algorithm is the inverse of the embedding algorithm. Figure 4 illustrates the extraction steps of the watermarking scheme, which proceed as follows:
Step 1: The watermarked image is first divided into non-overlapping blocks of size 8 × 8, and each block is converted to the quaternion representation by Equation (4).
Step 2: The QDCT with the perceptual unit pure quaternion $\mu_p$ is applied to each block, and the QDCT spectrum coefficients are obtained by Equation (11).
Step 3: The QuatJND factors, including the spatial CSF effect, luminance adaptation and contrast masking in the quaternion domain, are estimated by Equations (20), (25) and (29), respectively.
Step 4: The colorfulness feature is extracted from $W_i$, $W_j$ and $W_k$ by Equation (39). The colorfulness masking of each 8 × 8 block for the QuatJND profile is then quantized and calculated by Equation (42).
Step 5: The final QuatJND value of each block, combined with the colorfulness masking, is determined by Equation (19). The QuatJND value serves as the perceptual redundancy vector $S'$.
Step 6: The coefficients from the fourth to the tenth, except the fifth, after the zigzag scan are selected to form a host vector $X'$. The host vector $X'$ and the perceptual redundancy vector $S'$ are used to obtain the transformed vector $Y'$ and the adaptive quantization step $\Delta'$.
Step 7: The watermark bit is detected according to the minimum-distance detector.
Step 8: The final watermark image is obtained by the inverse Arnold transform.
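A minimum-distance detector consistent with the embedding sketch given in Section 3.3.1 can be written as follows (again an illustration with the assumed dither convention of $0$ and $\Delta/2$ for bits 0 and 1, not the paper's exact formulation).

```python
import numpy as np

def stdm_detect(x_w, v, delta):
    """Minimum-distance STDM detector: project onto the secret direction and
    choose the bit whose dithered quantization lattice lies closest."""
    v = v / np.linalg.norm(v)
    y = float(x_w @ v)
    dists = []
    for d in (0.0, delta / 2.0):              # dithers for bits 0 and 1
        y_q = delta * np.round((y - d) / delta) + d
        dists.append(abs(y - y_q))
    return int(np.argmin(dists))

# with x_w produced by the earlier embedding sketch and the same v and delta,
# stdm_detect(x_w, v, delta) recovers the embedded bit.
```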
4. Experimental Results and Comparisons
In this section, we show and discuss the experimental results. To evaluate the effectiveness and robustness of the proposed scheme, we perform experiments using our own code in MATLAB (MathWorks, Natick, MA, USA) R2019a on a 64-bit Windows 10 system with 16 GB of memory and a 3.40 GHz Intel(R) Core(TM) i7-6700 CPU (Intel, Santa Clara, CA, USA).
4.1. Performance Metrics
In the experiments, two objective criteria, the Peak Signal to Noise Ratio (PSNR) and the Quaternion Structural Similarity Index (QSSIM), are used to measure fidelity. The Bit Error Rate (BER) is computed to evaluate the robustness of the algorithms.
- (1)
Peak Signal to Noise Ratio (PSNR)
PSNR provides an objective standard for measuring image distortion or noise level. In this experiment, PSNR is used to evaluate the quality of the watermarked image with respect to the original image, i.e., the invisibility of the embedded watermark, and is expressed in dB (decibels). The larger the PSNR value between the two images, the better the invisibility of the watermarking scheme. Considering the host color image $I$ of size $M \times N$ and its watermarked version $I'$, the PSNR is defined as
$$\mathrm{PSNR} = 10 \log_{10} \frac{3 \times M \times N \times 255^2}{\sum_{c=1}^{3}\sum_{x=1}^{M}\sum_{y=1}^{N}\big(I_c(x,y) - I'_c(x,y)\big)^2}.$$
(A code sketch of PSNR, together with BER, is given at the end of this subsection.)
- (2)
Quaternion Structural Similarity Index (QSSIM)
Kolaman et al. [31] developed a visual quality metric, named quaternion SSIM (QSSIM), that better evaluates the quality of color images. The value of QSSIM ranges over [0, 1], and the closer the QSSIM value is to 1, the better the visual quality of the image. The QSSIM is defined by Equation (49) and has the same composition as SSIM but with quaternion sub-parts: it is computed from the quaternion representations (QR) of the image $I$ and its watermarked version, their means, their variances and their covariance.
- (3)
Bit Error Rate (BER)
The Bit Error Rate is used to evaluate the quality of the extracted binary watermark image $w'$ compared with its original version $w$, both of $M_w \times N_w$ pixels. The BER between $w'$ and $w$ is given by
$$\mathrm{BER} = \frac{1}{M_w \times N_w} \sum_{x=1}^{M_w}\sum_{y=1}^{N_w} w'(x,y) \oplus w(x,y),$$
where $\oplus$ denotes the exclusive-or operation.
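As a compact illustration of the two measures defined above (a sketch assuming 8-bit color images and binary 0/1 watermark arrays; the function names are mine), PSNR and BER can be computed as follows.

```python
import numpy as np

def color_psnr(original, watermarked):
    """PSNR in dB between two uint8 color images of identical shape,
    pooling the squared error over all pixels and all three channels."""
    diff = original.astype(np.float64) - watermarked.astype(np.float64)
    mse = np.mean(diff ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(255.0 ** 2 / mse)

def bit_error_rate(w_extracted, w_original):
    """Fraction of watermark bits that differ between two binary arrays."""
    return float(np.mean(w_extracted.astype(bool) ^ w_original.astype(bool)))
```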
4.2. Imperceptibility
To verify the performance of the proposed color image watermarking algorithm, 109 color images available from the Computer Vision Group at the University of Granada (http://decsai.ugr.es/cvg/dbimagenes/, accessed on 21 September 2020) were considered. A binary watermark "SDNU" of length 4096 bits (64 × 64) is embedded into the original cover images, as shown in Figure 5. Eight standard images, 'Lena', 'Avion', 'Baboon', 'House', 'Athens', 'Sailboat', 'Butrfly' and 'Goldgate', were used as testing images and are shown in Figure 6.
For evaluating the invisibility of the embedded watermark, we embed the watermark of Figure 5 into the host images of Figure 6a–h, respectively. The proposed scheme was compared with popular watermarking schemes: QDFT [16], QSVD [32], the color image watermarking based on orientation diversity and color complexity (CIW-OCM) proposed by Wang et al. [10], the robust image watermarking via a perceptual structural regularity-based JND model (RIW-SJM) proposed by Wang et al. [11], and Su [33]. First of all, a good watermarking scheme must show satisfying invisibility. Figure 7 gives the visual quality scores of the watermarked images. The tested images in Figure 7 are first constrained to the same PSNR of 42 dB, which we ensure by adjusting the embedding strength factor. The QSSIM values are then compared; the higher the QSSIM value, the more completely the details and structure of the image are preserved. The average QSSIM values of the compared algorithms are 0.9850, 0.9886, 0.9794, 0.9814 and 0.9864, respectively, and the proposed scheme obtains 0.9810. Although the result of the proposed scheme is not the best among the compared schemes, its QSSIM value is very close to the others on average. With the same PSNR guaranteed, the QSSIM of our scheme is comparable to the other schemes. This is because, in order to achieve a balance between imperceptibility and robustness, our scheme calculates the perceptual redundancy of the image more accurately while still satisfying imperceptibility, which allows larger modifications of the image. Thus, the proposed algorithm obtains better robustness while satisfying imperceptibility, as the robustness tests in Section 4.3 below also demonstrate.
To demonstrate that the proposed image watermarking scheme produces high watermark quality and that the watermark can be extracted correctly without attack, the test images were watermarked with uniform fidelity, a fixed PSNR of 42 dB, and the bit error rate (BER) was computed for the objective performance evaluation. Figure 8 shows the cover images, watermarked images and extracted watermarks. It is noticeable that the proposed method provides good visual quality of the extracted watermark image.
4.3. Robustness
4.3.1. Evaluation of Different Unit Pure Quaternions
In order to prove that the perceptual unit pure quaternion of Section 3.1 produces better watermark quality, we compare the robustness results obtained with different types of unit pure quaternion taken from [13], [34] and [20]. It should be noted that the first of these is the most common unit pure quaternion used in the quaternion-based image processing literature. Table 2 shows the performance for the different choices of $\mu$. From the results, the perceptual unit quaternion has a lower BER under JPEG compression, which also reflects an advantage of the QDCT itself, being compatible with the JPEG compression standard. Although the perceptual unit quaternion is not the best performer under Gaussian noise and filtering, it still has a low BER and shows good robustness. Overall, the perceptual unit pure quaternion $\mu_p$ has better performance against common signal attacks, especially JPEG attacks.
4.3.2. Evaluation of Different JND Models within QDCT Watermarking Algorithm
This experiment compares the performance of different JND models used within the proposed QDCT watermarking algorithm. To verify the robustness of the proposed QuatJND-guided watermarking scheme, it is compared with schemes using different JND models, namely Watson's model [4], Kim's model [6] and Zhang's model [7].

In this experiment, the features of Watson's, Kim's and Zhang's models were recomputed in the quaternion DCT domain. For example, for Kim's model, the $W_r$ coefficients were used to calculate the base threshold, luminance adaptation and contrast masking in the quaternion domain. The test images are first watermarked and constrained to the same PSNR of 42 dB, and the average BER values are compared. As shown in Table 3, compared with the other JND models, the proposed model always has the lowest BER for the different noise intensities, which indicates that it performs much better than the others. As for JPEG compression, the four JND models behave differently within the watermarking algorithm: the average BERs of Watson's, Kim's, Zhang's and the QuatJND model are 0.0828, 0.1144, 0.0775 and 0.0331, respectively, when the JPEG compression quality is 30, and from Figure 9c the extracted watermark can be clearly identified at this quality. When Median filtering and Gaussian filtering are used to attack the watermarked image, the BER of the proposed model under a (3,3) median filter is 4.5% higher than that of Kim's model, but in Figure 10b the extracted watermark can still be correctly recognized. In summary, the proposed QuatJND model performs excellently in the quaternion domain.
4.3.3. Evaluation of Watermarking Algorithms in Different Domains
This experiment compares the performance of different watermarking algorithms in the DCT domain and the spatial domain. To verify the effectiveness of the quaternion DCT and the advantage of the quaternion representation, the proposed scheme is compared with CIW-OCM [10], RIW-SJM [11] and Su [33].
- (1)
Under common attacks
During image transmission, the watermarked image is easily and inevitably degraded by common attacks such as Gaussian noise, Salt and Pepper noise, JPEG compression and amplitude scaling. Table 4 lists the average robustness results for the eight test images using the different watermarking schemes under various attacks: Gaussian noise with zero mean and variances 0.0003, 0.0008 and 0.0012; Salt and Pepper noise with densities 0.004, 0.008 and 0.015; JPEG compression with quality factors 30, 50 and 80; and amplitude scaling with factors 0.3, 1.2 and 1.5.

First, it is obvious from Table 4 that our proposed scheme achieves the minimum bit error rate among the compared schemes after Gaussian noise and Salt and Pepper noise attacks. As shown in the robustness results, the proposed scheme has an average BER at least about 0.3% lower than CIW-OCM [10]. As the density of the Salt and Pepper noise increases, Su [33] shows a lower BER than ours at a density of 0.015. For traditional JPEG compression attacks, our proposed scheme obtains similar results when the JPEG compression quality is greater than 50, being 0.1–0.4% lower than CIW-OCM [10]; in general, the proposed model has the best average performance against JPEG compression attacks. Finally, when the watermarked image is distorted by an amplitude scaling attack, the performance of the proposed model is not the best, but the results are close to those of the other schemes on average, and from Figure 9d the extracted watermark can be clearly identified when the amplitude scaling factor is 1.5, which satisfies the robustness requirement of the watermarking scheme against amplitude scaling attacks.
- (2)
Under filtering attacks
Filtering attacks such as Median filtering and Gaussian filtering are often used to attack the watermarked image, and the visual perception of the extracted watermark can be destroyed by them, so the resistance of the watermarking model to filtering attacks needs to be considered. Table 5 and Figure 10a,b present the comparison results for filtering. For Median filtering with a (3,3) window, the BER of the proposed model is 1% lower than that of RIW-SJM [11]. For Gaussian filtering, the proposed model has the lowest BER among all compared models, which ensures that the extracted watermark image has higher recognizability.
- (3)
Under cropping attacks
In practice, the watermarked image may also be contaminated by other attacks such as cropping and geometric attacks. In this experiment, image rotation is considered as a geometric attack, which changes both the image pixel values and the image size. We first compare the robustness results after cropping attacks in Table 6 and Figure 11. The watermarked image is affected by central cropping (1/8 of the image), upper-left cropping (1/8 of the image), row cropping (1/8 of the image) and column cropping (1/8 of the image). From the results of Table 6, the proposed model obtains the lowest BER among the compared algorithms, which means that the proposed method provides good visual quality of the extracted watermark image after the different types of cropping attacks.
- (4)
Under rotation attacks
To verify that the proposed image watermarking scheme is robust to geometric attacks, we test the robustness of the proposed algorithm after image rotation. In this experiment, the watermarked image first undergoes a forward rotation and is then corrected by an inverse rotation; more precisely, the watermarked image is rotated clockwise by 30, 60, 90 and 120 degrees and then rotated counter-clockwise before extracting the watermark. The robustness results for image rotation are listed in Table 7 and shown in Figure 10c,d. The results show that our proposed method has the lowest BER among the compared methods. The BER obtained by our method does not exceed 0.2% for rotation angles of 30, 60, 90 and 120 degrees, which demonstrates that our method achieves significant robustness to image rotation.
4.3.4. Evaluation of Different Quaternion Watermarking Schemes
This experiment compares the performance of different quaternion watermarking algorithms. To verify the robustness of the proposed scheme in the quaternion DCT domain, it is compared with QDFT [16] and QSVD [32]. Table 8 shows the BER values of watermarked images attacked by Gaussian noise (GN), JPEG compression, Salt and Pepper noise (SPN), Median filtering (MF), Gaussian filtering (GF) and amplitude scaling (AS). As shown in the robustness results, for traditional JPEG compression attacks both QSVD [32] and QDFT [16] perform worse than the proposed scheme; the reason may be that the proposed scheme enhances the resistance to JPEG attacks by working in the QDCT domain. When the watermarked image is distorted by an amplitude scaling attack, the performance of the proposed scheme is better than that of the other schemes except QSVD [32]. In QSVD [32], the watermark is inserted by modifying two coefficients of the quaternion elements in the U matrix, and the amplitude scaling attack has a minimal effect on the relative relationship between these two coefficients; therefore, QSVD [32] shows superior performance against amplitude scaling attacks.
Table 9 compares the average BER values of our scheme and the other methods for different image attacks at a fixed image quality of QSSIM = 0.9820. Although QDFT [16] has better robustness against Gaussian noise and Salt and Pepper noise, it performs worse under JPEG compression. For JPEG compression, it can be seen from Table 9 that our method has a lower BER than the other watermarking algorithms when the JPEG quality factor is 30 or 50. In addition, the robustness of our method is clearly better than the others under combined image attacks in which JPEG compression is applied first, followed by Gaussian noise or Salt and Pepper noise. Above all, the watermarking framework based on the QuatJND model in the QDCT domain has better robustness than the other methods in most cases.
4.3.5. Evaluation of Combined Attacks
Table 3, Table 4, Table 5, Table 6, Table 7, Table 8 and Table 9 list the robustness performance after single image attacks. However, in the actual digital transmission process, the watermarked image may be damaged by multiple attacks simultaneously. Here, we further compare the robustness results after various combined attacks involving common image processing operations in Figure 12 and Figure 13. Figure 12 shows the BER after JPEG compression followed by Gaussian noise, Salt and Pepper noise, Gaussian filtering or Median filtering attacks. Figure 13 shows the BER after Gaussian noise followed by amplitude scaling, cropping or image rotation. From the results of Figure 12 and Figure 13, the human eye can still recognize the extracted watermark information after the different combined attacks. In summary, our method shows good robustness after combined image attacks, which means that it can achieve good image copyright protection in practical applications.
On the whole, although existing quaternion watermarking algorithms such as QSVD [32] and QDFT [16], DCT-domain watermarking algorithms such as CIW-OCM [10] and RIW-SJM [11], and Su [33], an improved watermarking algorithm based on Schur decomposition, show good invisibility of the watermarked images in the results of Figure 7, they exhibit poorer robustness under some attacks and cannot achieve a good trade-off between invisibility and robustness. As for CIW-OCM [10] and RIW-SJM [11], although these algorithms achieve a good trade-off by using JND models, they neglect the correlation between the three color components. The proposed model exploits the correlation between the three color channels and uses the QuatJND model to obtain the optimum quantization step, and the results show that our scheme has better robustness than the others.