1. Introduction
In the current digital era, protecting copyright is of utmost importance, as it has become easy to duplicate and distribute content without consent. It is essential to safeguard the rights of creators and owners of original works, and to ensure that they are fairly compensated for their efforts [1,2]. Digital image watermarking has evolved into a vital tool within the realm of multimedia security. Its primary function is the seamless embedding of watermarks into digital media by exploiting data redundancies and the limitations of the human visual system. Through the application of advanced techniques, these watermarks are rendered imperceptible to the naked eye [3,4,5]. When it is necessary to determine whether digital information has been maliciously tampered with, or to resolve copyright disputes, specialized algorithms can extract the watermark information concealed in the host multimedia. Digital watermarking technology has thus become a powerful tool for digital copyright protection, and it continues to advance with the ever-increasing demands of multimedia security.
Digital watermarking algorithms can be categorized by their working domain into spatial domain algorithms [6,7,8] and frequency domain algorithms [9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24]. Each approach has its merits and demerits: spatial domain algorithms embed watermarks by directly modifying pixel values of the host image, making them computationally simple and fast to execute, but their reliability is generally lower than that of frequency domain algorithms.
The spatial domain methods offer innovative approaches to digital watermarking, addressing the need for robust solutions in copyright protection and multimedia security. Some techniques seamlessly blend spatial and frequency domain methods, ensuring robustness even without access to the original image. Others focus on pixel-level watermarking in color images, leveraging the DC coefficient to enhance speed, robustness, and efficiency. Moreover, a Schur decomposition-based approach enables faster processing and imperceptible embedding of messages within image pixels, collectively providing faster, more secure, and robust solutions for digital property marking and copyright protection.
For instance, Yuan Z. [6] introduced a blind image watermarking method that incorporates both spatial and frequency domain approaches. The method embeds and extracts the watermark without requiring the original image, rendering it a blind watermarking method, and it demonstrated strong robustness under rigorous testing against various types of attacks. Similarly, a method for watermarking color images [7] hides information directly within the pixels, avoiding the complex transformations that slow down traditional techniques. It utilizes the DC coefficient, a value that reflects the image’s average color, and embeds the watermark information by slightly altering selected pixel values. This approach offers three advantages: speed, owing to simpler computations; robustness against tampering, because the watermark is linked to the DC coefficient; and efficiency, by combining the strengths of spatial and frequency domain watermarking. Tests on various image collections confirm its effectiveness in hiding watermarks while offering faster processing and stronger robustness than existing techniques.
Furthermore, Su et al. [8] proposed a method that utilizes Schur decomposition to protect the copyright of color images. It hides a secret message directly within the image pixels. Unlike older methods that require complex calculations, it is much faster because it operates on the pixels themselves, exploiting a hidden property of the image data to embed the message with changes too small to be seen in the final picture. Tests show the method excels in three ways: the message stays hidden even if the image is edited, it is faster than comparable techniques because it avoids complex transforms, and it can watermark images instantly, for example at capture time. This approach provides a faster and more secure way to mark digital property.
Various frequency domain watermarking techniques have emerged to tackle digital copyright challenges and enhance multimedia security. These methods prioritize imperceptibility, robustness, and payload capacity, and several show exceptional performance against specific noise types. For instance, the technique developed in [9] uses a unique modulation strategy to embed watermarks within multimedia content, ensuring high image quality and strong resistance to common attacks. The adaptive watermarking method in [10] employs Arnold chaotic maps and redundant transforms to scramble and embed watermarks into color images, balancing invisibility and robustness. Similarly, the technique described in [11] combines the fast Walsh–Hadamard transform (FWHT) and singular value decomposition (SVD) to efficiently embed watermarks, offering superior performance over traditional methods.
Other notable methods include the use of Fourier transforms [12] to conceal watermarks within color images with good imperceptibility and robustness. The method in [13] leverages the 2D discrete cosine transform (DCT) to embed watermarks by adjusting specific frequencies within image blocks, excelling in invisibility, robustness, and security. The blind watermarking algorithm in [14] integrates the discrete Fourier transform (DFT) within the spatial domain for rapid processing and robust watermarking. Additionally, the technique in [15] employs a two-level DCT to embed watermarks into both DC and AC coefficients, enhancing resilience against image processing and geometric attacks. The algorithm in [16] utilizes the Walsh–Hadamard transform (WHT) to embed watermarks in a way that minimizes visual distortion while maintaining high embedding capacity and robustness.
Further advancements include the use of Schur decomposition [17,18] for efficient watermark embedding and robust blind detection; these methods use eigenvalue quantization and affine transformation for enhanced security and robustness. The algorithm in [19] employs LU decomposition and the Arnold transform to embed watermarks with high imperceptibility and robustness. In [20], QR decomposition is used to embed color watermarks, minimizing visible distortion and improving performance against common attacks. The techniques in [21,22] utilize Schur decomposition and a combination of discrete wavelet transform (DWT) and SVD for robust and imperceptible watermarking in color images and medical images, respectively. The methods in [23,24] further optimize watermarking using DCT coefficients and QR decomposition, enhancing imperceptibility and security. The blind image watermarking technique in [25] uses the WHT to embed watermarks in color images, demonstrating superior performance in invisibility and robustness against attacks.
The quality of digital watermarking technology depends on several key factors, including its ability to remain invisible, its resistance to tampering, its real-time performance, and its capacity for handling large amounts of information. With color images being so prevalent, there is a need for a watermarking algorithm that can meet these requirements while effectively handling color data. This paper proposes a new blind watermarking technique that builds upon the strengths of the Walsh–Hadamard transform (WHT) in the frequency domain. Unlike the method presented in [25], this approach optimizes watermark embedding and extraction by strategically selecting specific locations within the WHT coefficients of non-overlapping image blocks. This strategy leverages the WHT’s energy concentration property, potentially leading to improved performance.
The proposed technique is a double-color-image watermarking scheme, designed for applications such as copyright protection thanks to its large data capacity. Additionally, the method is blind: it does not require the original image for extraction, so the watermark’s information remains protected. The subsequent sections of this manuscript are arranged in the following manner.
Section 2, Preliminaries, presents the rudimentary principles and pertinent knowledge employed in the proposed watermarking technique, encompassing the Arnold transform and WHT.
Section 3 expounds on the watermark insertion and retrieval procedure suggested in this manuscript.
Section 4 examines and assesses the efficacy of our approach utilizing extensive empirical data. Finally, some relevant conclusions are offered in Section 5.
3. Proposed Method
The proposed method is presented in this section; it is divided into two primary phases, watermark embedding and watermark extraction. The proposed method’s pipeline is shown in Figure 1 to provide a visual representation of the process flow. The first phase embeds the watermark into the digital content, while the second recovers the embedded watermark from the watermarked content.
First, the cover image is partitioned into non-overlapping 4 × 4 blocks, each of which subsequently undergoes the WHT. In contrast to Siyu Chen’s approach [25], which inserts two watermark bits in the first row of a 4 × 4 block, and Prabha’s approach [16], which inserts four watermark bits in the third-row and fourth-row coefficients of a 4 × 4 block, our algorithm inserts four bits in optimal locations to enhance the image’s imperceptibility and visual quality. To identify the best embedding locations, an in-depth investigation has been carried out, as follows.
Suppose a 4 × 4 image block “a” and its WHT “A” are given by Equations (11) and (12), respectively. In digital image watermarking, watermark information is embedded by strategically modifying specific WHT coefficients within “A”. This results in a modified WHT block denoted by “A*”. The watermarked image block “a*” is derived by applying the inverse WHT to the modified WHT block “A*”. The image’s visual quality, i.e., imperceptibility, is determined by how similar the resulting block “a*” is to the original image block “a”. We will now examine the extent of modification undergone by the watermarked image block “a*” in comparison to the original image block “a” when any WHT coefficient of “A” is modified.
We have derived the matrices of the watermarked image blocks for all sixteen possible cases in which a specific WHT coefficient “Aij” is modified (i = 1 to 4 and j = 1 to 4). We present three cases that describe modifying the coefficients A11, A22, and A43 separately by adding a variation value d, and provide the equations needed to calculate the modified WHT block and the watermarked image block for each case. If we modify A11 by adding d, the modified WHT block and watermarked image block are obtained using Equations (13) and (14), respectively. Similarly, if we modify A22 by adding d, we can use Equations (15) and (16), and if we modify A43 by adding d, Equations (17) and (18). In all cases, the value of d depends on the specific embedding bits used.
Upon careful analysis of the sixteen matrices representing the watermarked image blocks, a significant observation was made: whenever a particular WHT coefficient “Aij” was modified, there was a corresponding alteration in all the pixels in column “j” of the watermarked image block. The degree of variation between the pixel values of the watermarked block and those of the original block was directly related to the magnitude of the modification to the WHT coefficient “Aij”. This observation sheds light on how modifying a WHT coefficient affects the quality of watermarked images, offering a pathway to fine-tune embedding locations for enhanced outcomes.
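This column-locality can be checked numerically. The sketch below is a minimal demonstration, not the paper’s exact implementation: since Equations (11)–(18) are not reproduced here, a one-sided WHT (HR = H·R/4, with inverse R = H·HR) is assumed, because that form is consistent with the observation that modifying Aij changes only column j of the reconstructed block.

```python
import numpy as np

# 4x4 Hadamard matrix (Sylvester form); note H @ H = 4 * I
H = np.array([[1,  1,  1,  1],
              [1, -1,  1, -1],
              [1,  1, -1, -1],
              [1, -1, -1,  1]], dtype=float)

def wht(block):
    """Forward WHT of a 4x4 block (one-sided form, an assumption here)."""
    return H @ block / 4.0

def iwht(coeffs):
    """Inverse WHT; exact because H @ H = 4 * I."""
    return H @ coeffs

rng = np.random.default_rng(0)
a = rng.integers(0, 256, size=(4, 4)).astype(float)

A = wht(a)
A_star = A.copy()
A_star[3, 2] += 5.0          # modify the single coefficient A43 (i = 4, j = 3)
a_star = iwht(A_star)

diff = a_star - a            # nonzero only in column j = 3 (index 2)
changed_cols = np.unique(np.nonzero(np.abs(diff) > 1e-9)[1])
```

Under this model the perturbation propagates as d·H[:, i] into column j only, so every pixel of that column shifts by ±d while the other three columns are untouched.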
Siyu Chen’s approach [25] inserts one watermark bit into the first two elements of the first row (A11 and A12) and the second watermark bit into the remaining elements of the first row (A13 and A14). Assume that the coefficients A11, A12, A13, and A14 are modified to A11 + d1, A12 + d2, A13 + d3, and A14 + d4, respectively, after embedding, where d1, d2, d3, and d4 represent the variations that depend on the embedded bits. In this case, the modified WHT block “A*” and the watermarked image block “a*” are given by Equations (19) and (20), respectively.
Prabha’s approach [16] utilizes specific elements of each column of “A” to embed individual watermark bits. The last two elements of the first column (A31 and A41) are employed to embed the first watermark bit, while the last two elements of the second column (A32 and A42) are utilized for the second watermark bit. Similarly, the last two elements of the third column (A33 and A43) correspond to the third watermark bit, and the last two elements of the fourth column (A34 and A44) are used for the fourth watermark bit.
Assume that after the embedding process the coefficients “Aij” are modified to “Aij + dij”, where “dij” represents variations that depend on the embedded bits (i = 3 to 4 and j = 1 to 4). Consequently, the modified Walsh–Hadamard transform (WHT) block is denoted as “A*”, and the resulting watermarked image block is represented as “a*”; they are given by Equations (21) and (22), respectively.
Drawing on our prior analysis, it is evident that modifying a single WHT coefficient causes a corresponding change in all the pixels within the corresponding column of the watermarked image block. Siyu Chen’s approach [25] inserts the first watermark bit by changing the WHT coefficients A11 and A12 in the first and second columns of the WHT block, which changes all the pixels within these columns of the watermarked image block. Similarly, inserting the second watermark bit modifies the WHT coefficients A13 and A14 in the third and fourth columns, affecting all pixels in those columns.
Prabha’s approach [16] likewise alters all the pixels of the watermarked image block, because four watermark bits are inserted across all the columns of the third and fourth rows of the WHT block. Consequently, both Chen’s approach [25] and Prabha’s approach [16] share a drawback: they alter the values of all pixels in all columns of the watermarked image block, since watermark bits are embedded in every column of the WHT block. To address this limitation, an alternative approach is proposed, in which four watermark bits are inserted in the first and second columns of the WHT block, as follows.
The first two elements of the first row (A11 and A12) are employed to insert the first watermark bit, while the first two elements of the second row (A21 and A22) are utilized for the second watermark bit. Similarly, the first two elements of the third row (A31 and A32) correspond to the third watermark bit, and the first two elements of the fourth row (A41 and A42) are used for the fourth watermark bit.
This alternative approach overcomes the limitation of both Chen’s and Prabha’s approaches by strategically embedding the watermark bits in the first two columns of the WHT block. Since every embedded bit modifies only coefficients in the first two columns, only the pixels in those two columns are altered, while the pixels in the remaining two columns are left undisturbed. In this case, the modified WHT block “A*” and the watermarked image block “a*” are given by Equations (23) and (24), respectively.
3.1. Watermark Embedding
In [25], two watermark bits are embedded in the first row of a 4 × 4 block, and in [16], four watermark bits are inserted in the last two rows’ coefficients of a 4 × 4 block. Our algorithm inserts four watermark bits into the first- and second-column coefficients of a 4 × 4 block. As such, the embedding capacity of our method equals that of [16] and doubles that of [25].
The proposed method embeds a color digital watermark image into a color digital host image; the embedding process is shown in Figure 2. The host image, denoted as H, is a color image with a side length of M, while the watermark image, denoted as W, is a color image with a side length of N.
3.1.1. Step 1—Preprocessing the Color Watermark Image
The algorithm begins by preprocessing the color watermark image W. To strengthen the algorithm against potential threats to its security and robustness, a series of processing steps is applied. Initially, the image is separated into its channels Wi (i = 1, 2, 3), arranged in the order red, green, and blue. Subsequently, the Arnold scrambling transformation is applied to each Wi, making the watermark more robust against different attacks. In the final step, the decimal value of each pixel of the scrambled watermark is converted to an 8-bit binary number; the resulting 8-bit sequences of each channel are concatenated successively, forming an 8N²-bit watermark sequence SWi (i = 1, 2, 3).
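Step 1 can be sketched as follows. The specific cat-map matrix, the iteration count, and the MSB-first bit order are illustrative assumptions, since the text above does not fix them.

```python
import numpy as np

def arnold(channel, iterations=1):
    """Arnold cat-map scramble of a square N x N channel.
    Mapping assumed here: (x, y) -> ((x + y) mod N, (x + 2y) mod N)."""
    N = channel.shape[0]
    out = channel
    for _ in range(iterations):
        scr = np.empty_like(out)
        for x in range(N):
            for y in range(N):
                scr[(x + y) % N, (x + 2 * y) % N] = out[x, y]
        out = scr
    return out

def inverse_arnold(channel, iterations=1):
    """Inverse mapping: (x, y) -> ((2x - y) mod N, (y - x) mod N)."""
    N = channel.shape[0]
    out = channel
    for _ in range(iterations):
        rec = np.empty_like(out)
        for x in range(N):
            for y in range(N):
                rec[(2 * x - y) % N, (y - x) % N] = out[x, y]
        out = rec
    return out

# Scramble one 8x8 channel, then expand its pixels into a bit sequence:
# 8 * N^2 = 512 bits for N = 8, matching the 8N^2-bit sequence of Step 1.
w = np.arange(64, dtype=np.uint8).reshape(8, 8)
sw = arnold(w, iterations=3)
bits = np.unpackbits(sw.flatten())
```

The inverse map undoes the scramble exactly, which is what makes blind recovery of the watermark order possible at the extraction stage.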
3.1.2. Step 2—Preprocessing the Color Host Image
In the second step of the algorithm, the color cover image H is split into its three color channels Hi (i = 1, 2, 3): red, green, and blue. Each host channel Hi is then partitioned into non-overlapping 4 × 4 blocks.
3.1.3. Step 3—Applying the WHT to the Image Blocks
Given that R represents a discrete block of image data, apply the Walsh–Hadamard transform (WHT) to R using Equation (25) to obtain its corresponding frequency domain coefficient matrix HR.
The dimension of the image block is represented by N, and the corresponding Hadamard matrix, HN, whose entries are 1 and −1, is of order N × N. In this paper, we set the image block size to N = 4, leading to a Hadamard matrix of order 4 × 4.
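A Hadamard matrix of any power-of-two order can be generated with the standard Sylvester recursion; the short sketch below only assumes the usual ±1 construction.

```python
import numpy as np

def hadamard(n):
    """Sylvester construction of the n x n Hadamard matrix (n a power of 2)."""
    if n == 1:
        return np.array([[1.0]])
    h = hadamard(n // 2)
    return np.block([[h,  h],
                     [h, -h]])

# Order-4 matrix used for the 4x4 blocks in this scheme.
# Its rows are mutually orthogonal: H4 @ H4.T == N * I with N = 4.
H4 = hadamard(4)
```

Orthogonality is what makes the inverse WHT a simple re-application of the same matrix (up to the normalization chosen in Equation (25)).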
3.1.4. Step 4—Embedding the Watermark
During the watermark embedding process, we extract four watermark bits wj (j = 1 to 4) from the SWi. These four bits are then embedded into a transformed matrix according to the following cases.
Case 1. If wj = 1 and (HRj,1 − HRj,2) ≤ e, modify the values of HRj,1 and HRj,2 (j = 1 to 4) as follows:
Case 2. If wj = 0 and (HRj,2 − HRj,1) ≤ e, modify the values of HRj,1 and HRj,2 (j = 1 to 4) as follows:
Equations (26)–(29) dictate the specific location and manner in which the watermark is embedded. They use the average value of the first two elements in the jth row. The error parameter is denoted by e, T represents the quantization step size, and HRm,n refers to the value at position (m, n). In this paper, both the error parameter e and the step size T are set to 5.
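Since Equations (26)–(29) are not reproduced above, the following sketch reconstructs the embedding rule from the worked example that follows: the first two elements of row j are pushed to avg ± T/2 so that their order encodes the bit. Treat it as a reconstruction under those assumptions, not the verbatim equations.

```python
import numpy as np

def embed_bits(HR, bits, e=5.0, T=5.0):
    """Embed four bits into the first two columns of a 4x4 WHT block,
    reconstructed from the paper's worked example."""
    HRs = HR.astype(float).copy()
    for j, w in enumerate(bits):
        avg = (HRs[j, 0] + HRs[j, 1]) / 2.0
        if w == 1 and HRs[j, 0] - HRs[j, 1] <= e:
            HRs[j, 0] = avg + T / 2.0     # force HRj,1 > HRj,2 for bit 1
            HRs[j, 1] = avg - T / 2.0
        elif w == 0 and HRs[j, 1] - HRs[j, 0] <= e:
            HRs[j, 0] = avg - T / 2.0     # force HRj,1 < HRj,2 for bit 0
            HRs[j, 1] = avg + T / 2.0
    return HRs

# First two columns of HR from the worked example (remaining columns unused)
HR = np.array([[88.75, 88.25, 0, 0],
               [ 3.25,  1.75, 0, 0],
               [ 4.75,  2.25, 0, 0],
               [ 1.25,  6.75, 0, 0]])
marked = embed_bits(HR, [1, 0, 1, 1])
```

Recomputing avg ± T/2 for each row gives [91, 86], [0, 5], [6, 1], and [6.5, 1.5]; in every modified row the two elements end up exactly T apart, which is what the blind extractor later relies on.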
3.1.5. Step 5—Inverse WHT Transform
The image block “R*” containing the watermark is obtained using the inverse WHT of Equation (30):
where HR* refers to the matrix in the frequency domain that contains the watermark.
3.1.6. Step 6—Iterative Embedding and Watermarked Image Compilation
To obtain the watermarked image “H*”, repeat steps 2 to 5 until the entire binary watermark sequence has been embedded, yielding the watermarked channels “Hi*”. Once this is done, combine the three watermarked channels “Hi*” to obtain the final watermarked image.
Below, the proposed algorithm is explained through an illustrative example. Let us begin with a 4 × 4 subblock of the host image, denoted as R. Its 4 × 4 WHT coefficient matrix HR is given by:
Here, the first two elements of each row are given as follows. In the first row, the elements are HR11 = 88.75 and HR12 = 88.25, while the second row holds HR21 = 3.25 and HR22 = 1.75. Similarly, the third row has HR31 = 4.75 and HR32 = 2.25, and the fourth row contains HR41 = 1.25 and HR42 = 6.75. We proceed to compute the average values of these elements for each row. For the first row, the average is avg1 = 88.5, and for the second row, it is avg2 = 2.5. Likewise, the third row’s average is avg3 = 3.5, and the fourth row’s average is avg4 = 4.
The four watermark bits to be embedded, w1, w2, w3, and w4, are assumed to be 1, 0, 1, and 1, respectively. These bits are embedded according to Equations (26)–(29). Both the error parameter, denoted as e, and the quantization step size, represented by T, are set to 5. Let us begin with the first watermark bit, w1 = 1. To embed it, we start by checking whether (HR11 − HR12) is smaller than 5. Since (88.75 − 88.25) is indeed less than 5, and the average value is avg1 = 88.5, we modify HR11 and HR12: adding half of T (i.e., 2.5) to avg1 gives H*R11 = avg1 + T/2 = 88.5 + 2.5 = 91, and subtracting half of T from avg1 gives H*R12 = avg1 − T/2 = 88.5 − 2.5 = 86.
Now let us move to the second watermark bit, w2 = 0, and follow a similar process. We check whether (HR22 − HR21) is less than 5, which in this case (1.75 − 3.25) it is. With the average value avg2 = 2.5, we adjust HR21 and HR22: subtracting half of T (2.5) from avg2 gives H*R21 = avg2 − T/2 = 2.5 − 2.5 = 0, and adding half of T to avg2 gives H*R22 = avg2 + T/2 = 2.5 + 2.5 = 5.
Moving on to the third watermark bit, w3 = 1, we check (HR31 − HR32) and see that (4.75 − 2.25) is less than 5. With avg3 = 3.5, we adjust HR31 and HR32 accordingly: adding half of T (2.5) to avg3 gives H*R31 = avg3 + T/2 = 3.5 + 2.5 = 6, and subtracting half of T from avg3 gives H*R32 = avg3 − T/2 = 3.5 − 2.5 = 1.
Finally, we reach the fourth watermark bit, w4 = 1. Checking (HR41 − HR42), which is (1.25 − 6.75), we confirm it is less than 5. With avg4 = 4, we adjust HR41 and HR42: adding half of T (2.5) to avg4 gives H*R41 = avg4 + T/2 = 4 + 2.5 = 6.5, and subtracting half of T from avg4 gives H*R42 = avg4 − T/2 = 4 − 2.5 = 1.5.
Once the watermark is successfully inserted, the resulting modified WHT block is denoted as HR* and is expressed in Equation (33).
The matrix representation of R*, given in Equation (34), is obtained by applying the inverse WHT (Equation (30)) to HR*.
3.2. Watermark Extraction
The extraction of the watermark from the watermarked image is illustrated in Figure 3.
3.2.1. Step 1—Preprocessing of Watermarked Image
The preprocessing involves segmenting the watermarked image into three channels (red, green, and blue), denoted as Hi* (i = 1, 2, 3). Subsequently, each channel is further segmented into non-overlapping 4 × 4 subblocks.
3.2.2. Step 2—Apply WHT
The second step of the watermark extraction process obtains the frequency domain coefficients of the block that contains the watermark by applying the WHT to the watermarked image block R* using Equation (35). The resulting matrix represents the frequency domain coefficients containing the watermark; N and HN denote the image block’s size and the Hadamard matrix, respectively.
3.2.3. Step 3—Extracting the Watermark
In this step, the watermark bits wj* (j = 1 to 4) contained in the image block HR* are extracted using Equation (36), where wj* denotes the jth watermark bit extracted from the image block HR*, and H*Rj,1 and H*Rj,2 refer to the elements in the first and second columns of the jth row of HR*, with j ranging from 1 to 4.
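A minimal sketch of the extraction rule, reconstructed from the worked example (Equation (36) itself is not reproduced above): bit j is read as 1 exactly when H*Rj,1 exceeds H*Rj,2.

```python
import numpy as np

def extract_bits(HR_star):
    """Extract four bits from a watermarked 4x4 WHT block:
    bit j is 1 when the row's first element exceeds its second."""
    return [1 if HR_star[j, 0] > HR_star[j, 1] else 0 for j in range(4)]

# First two columns of the worked example's watermarked WHT block
HR_star = np.array([[91.0, 86.0, 0, 0],
                    [ 0.0,  5.0, 0, 0],
                    [ 7.0,  1.0, 0, 0],
                    [ 6.5,  1.5, 0, 0]])
print(extract_bits(HR_star))   # [1, 0, 1, 1]
```

Because the embedding step forces the two elements at least T apart, this comparison needs neither the original image nor any side information, which is what makes the scheme blind.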
3.2.4. Step 4—Iterative Extraction and Watermark Sequence Compilation
The fourth step in the watermark extraction process repeats steps 2 and 3 to extract SWi* for all color channels. Next, each sequence of eight binary digits is converted into its corresponding pixel value in decimal representation. In this way, we obtain the complete watermark sequences SW1*, SW2*, and SW3* for the red, green, and blue channels, respectively.
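The regrouping of extracted bits into 8-bit pixel values can be sketched with `numpy.packbits`; the MSB-first bit order is an assumption here, matching the order used in the Step 1 sketch.

```python
import numpy as np

# Regroup an extracted bit sequence into 8-bit pixel values.
bits = np.array([0, 1, 0, 1, 1, 0, 1, 0,    # 0b01011010 = 90
                 1, 1, 1, 1, 0, 0, 0, 0])   # 0b11110000 = 240
pixels = np.packbits(bits)                  # array([ 90, 240], dtype=uint8)
```

The resulting pixel array for each channel is then reshaped to N × N before the inverse Arnold transformation of Step 5.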
3.2.5. Step 5—Inverse Arnold Transformation
In this step, the watermark sequence Wi* for each channel is obtained by applying the inverse Arnold transformation to the scrambled watermark sequence for each color channel. The index i (i = 1, 2, 3) signifies that this operation is performed independently on each channel.
3.2.6. Step 6—Watermark Image Reconstruction
In the sixth and final step, the watermark sequences Wi* from the three channels are recombined to reconstruct the complete watermark W*.
To continue with the previous example, consider the watermarked image block R* of Equation (34), expressed as shown in Equation (37). The matrix HR*, the frequency domain coefficients containing the watermark, obtained with the WHT, can be calculated using Equation (38):
By comparing the first two elements within each row (H*Rj,1 and H*Rj,2) of HR* in Equation (38), we extract the watermark bits wj* (j = 1 to 4). In the first row, H*R11 (91) is greater than H*R12 (86), so w1* is extracted as 1. In the second row, H*R21 (0) is less than H*R22 (5), giving w2* = 0. In the third row, H*R31 (6) is greater than H*R32 (1), giving w3* = 1. Similarly, in the fourth row, H*R41 (6.5) is greater than H*R42 (1.5), giving w4* = 1.
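The embed-then-extract round trip can be exercised end to end. As before, the one-sided WHT form and the avg ± T/2 rule are assumptions reconstructed from the worked example, so this is a sketch of the scheme rather than the verbatim algorithm.

```python
import numpy as np

H = np.array([[1,  1,  1,  1],
              [1, -1,  1, -1],
              [1,  1, -1, -1],
              [1, -1, -1,  1]], dtype=float)

def embed_block(block, bits, T=5.0, e=5.0):
    """Embed 4 bits into one 4x4 block and return the watermarked block.
    Assumes the one-sided WHT HR = H @ R / 4 with inverse R = H @ HR."""
    HR = H @ block / 4.0
    for j, w in enumerate(bits):
        avg = (HR[j, 0] + HR[j, 1]) / 2.0
        if w == 1 and HR[j, 0] - HR[j, 1] <= e:
            HR[j, 0], HR[j, 1] = avg + T / 2.0, avg - T / 2.0
        elif w == 0 and HR[j, 1] - HR[j, 0] <= e:
            HR[j, 0], HR[j, 1] = avg - T / 2.0, avg + T / 2.0
    return H @ HR                     # inverse WHT

def extract_block(block_star):
    """Blind extraction: re-transform and compare the first two columns."""
    HR = H @ block_star / 4.0
    return [1 if HR[j, 0] > HR[j, 1] else 0 for j in range(4)]

rng = np.random.default_rng(1)
R = rng.integers(0, 256, size=(4, 4)).astype(float)
for bits in ([1, 0, 1, 1], [0, 0, 0, 0], [1, 1, 0, 1]):
    assert extract_block(embed_block(R, bits)) == bits
```

Note that when a row already satisfies the bit’s ordering with a margin larger than e, it is left untouched, and extraction still reads the correct bit from the existing ordering.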
4. Experimental Results
To evaluate the effectiveness of the proposed watermarking technique, simulations were performed on 15 host images, with the primary objectives of evaluating the invisibility of the watermark, determining the embedding capacity, and assessing the robustness of the algorithm. The algorithm underwent rigorous testing against a variety of attacks, including noise addition, filtering operations, JPEG compression, and geometric distortions. Its effectiveness and efficiency were thoroughly examined through comparison with the watermarking approaches of Siyu Chen [25] and Prabha [16]; this comparative analysis yielded valuable insights into the algorithm’s performance across various dimensions.
Evaluating the invisibility (or imperceptibility) of a digital image watermarking method is a crucial performance criterion, since the level of invisibility directly impacts the picture quality of the watermarked image. A key objective metric for assessing the transparency of a watermarking algorithm is the peak signal-to-noise ratio (PSNR). Within the watermarking domain, the PSNR is commonly used to quantify the similarity between the original host image and its watermarked counterpart. The PSNR value for all image channels is computed by first calculating the mean square error (MSE) of each channel using Equation (39) and then substituting it into Equation (40).
In Equations (39) and (40), j = 1, 2, 3 corresponds to the three channels (R, G, and B) of the color image. The numbers of rows and columns of the image are denoted by “m” and “n”, respectively. The pixel value of the original host image at (u, v) is represented by Hj(u, v), and Hj*(u, v) signifies the corresponding pixel value in the watermarked image for the jth channel. Equation (41) combines the individual PSNR values of the three color channels to derive the overall PSNR value for a color image.
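The MSE/PSNR computation can be sketched as follows. Per-channel MSE and PSNR follow the standard definitions; how Equation (41) combines the three channel PSNRs is not reproduced above, so simple averaging is assumed here.

```python
import numpy as np

def mse_channel(H, H_star):
    """Mean square error of one channel (Equation (39))."""
    return np.mean((H.astype(float) - H_star.astype(float)) ** 2)

def psnr_channel(H, H_star, peak=255.0):
    """Per-channel PSNR in dB (Equation (40))."""
    return 10.0 * np.log10(peak ** 2 / mse_channel(H, H_star))

def psnr_color(host, marked):
    """Overall PSNR of an m x n x 3 color image; averaging the three
    channel PSNRs is an assumption, since Equation (41) is not shown."""
    return np.mean([psnr_channel(host[..., j], marked[..., j])
                    for j in range(3)])
```

As a sanity check, a uniform pixel error of 1 gray level in every channel gives MSE = 1 and a PSNR of about 48.13 dB.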
In addition, assessing the imperceptibility of watermarking methods involves evaluating how the inclusion of a watermark impacts the visual fidelity and authenticity of the original image, commonly measured using the structural similarity index measurement (SSIM). The SSIM captures the structural details present within images independently of brightness and contrast, modeling distortion as a combination of brightness, contrast, and structure components. The mathematical expression for calculating the SSIM between the host and watermarked images is as follows:
where the parameters μ1 and μ2 denote the average values of the cover and watermarked images, respectively. Furthermore, σ1 and σ2 represent the variances of the cover and watermarked images, while σ12 denotes the covariance between them. The constants c1 and c2 are included to prevent division by zero. Furthermore, the assessment of watermarking algorithm performance encompasses the crucial aspect of robustness, for which the normalized cross-coefficient (NC) serves as a prominent benchmark. The NC quantifies how similar the original watermark image W is to the extracted watermark image W*, thereby offering valuable insight into the robustness of the algorithm. The calculation of the NC is defined by Equation (43).
In Equation (43), j = 1, 2, 3 corresponds to the three channels (R, G, and B) of the color image. The row and column sizes of the color watermark image are denoted by m and n, respectively. The pixel at (u, v) in the watermark image and its corresponding pixel in the extracted watermark image for the jth channel are denoted as Wj(u, v) and Wj*(u, v), respectively.
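A sketch of the NC computation in a standard normalized-cross-correlation form; since Equation (43) is not reproduced above, the exact normalization used in the paper is an assumption here.

```python
import numpy as np

def nc(W, W_star):
    """Normalized cross-coefficient between the original and extracted
    watermark images, accumulated over all channels (a standard form)."""
    W = W.astype(float)
    W_star = W_star.astype(float)
    num = np.sum(W * W_star)
    den = np.sqrt(np.sum(W ** 2)) * np.sqrt(np.sum(W_star ** 2))
    return num / den

# An unattacked, perfectly extracted watermark yields NC ~ 1.
W = np.arange(1, 49, dtype=float).reshape(4, 4, 3)
print(round(nc(W, W), 6))
```

In this form the NC equals 1 for a perfect extraction and decreases toward 0 as the extracted watermark diverges from the original.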
The normalized cross-coefficient (NC) value ranges from 0 to 1; a value of 1 indicates that the watermarking method is strongly robust. For consistency within this research, a quantization step size of T = 5 has been selected. The results of our watermarking algorithm, including NC, SSIM, and PSNR, are outlined in Table 1. These metrics are evaluated across the various color cover images and the color watermark image illustrated in Figure 4. Our assessment of the algorithm primarily emphasizes its imperceptibility and robustness.
4.1. Examination and Analysis of Imperceptibility
Ensuring the visual integrity of the watermarked image is a paramount objective, making the invisibility of the watermark a fundamental characteristic and a vital criterion for evaluating a watermarking method. The essence of invisibility lies in ensuring that the host image carrying the embedded watermark remains indistinguishable from the original host image within the limits of human visual perception. This aligns with the objective of measuring imperceptibility: evaluating how effectively the watermark is concealed in the cover image without noticeable visual alterations.
The imperceptibility of our proposed watermarking technique can be evaluated by objective measures such as the peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM). A high PSNR value or a small mean squared error (MSE) signifies strong imperceptibility, implying that the watermarked image is minimally distorted and closely resembles the original cover image. Likewise, a higher SSIM value approaching 1 reflects a heightened degree of resemblance between the watermark and the host images, thus indicating an enhanced level of imperceptibility.
The watermark imperceptibility of the proposed watermarking algorithm is given in
Table 1, showcasing the evaluation based on
PSNR and
SSIM between the watermarked and the host images. For color images, most cases show
PSNR values exceeding 34 dB and
SSIM values surpassing 0.96. Similarly, high-resolution images in
Figure 4m–o achieve even better results, with
PSNR exceeding 43 dB and
SSIM surpassing 0.99. This indicates that the difference between the watermarked and the original host images is virtually indistinguishable to the human eye, highlighting the excellent invisibility achieved by the algorithm. Additionally, a visual comparison of the watermarked images in Figure 5 against the original host images in Figure 4 reveals no apparent degradation, confirming the algorithm’s ability to preserve the visual quality of the host image. Furthermore, an NC value of 1 for all images signifies that each extracted watermark image is identical to the original watermark image.
4.2. Assessing Robustness
Robustness is a crucial requirement for digital watermarking algorithms, ensuring the reliable extraction of watermark information despite intentional or unintentional attacks. The normalized correlation (NC) value is used to evaluate robustness, measuring the resemblance between original and extracted watermark images. In order to assess the proposed algorithm’s ability to withstand a variety of attacks, the NC values are computed by subjecting the algorithm to a variety of attacks like filtering, noise addition, JPEG compression, and geometric distortions. Higher NC values indicate greater resistance and maintenance of watermark integrity. Robustness is a vital consideration when evaluating the performance and effectiveness of digital watermarking algorithms in protecting against unauthorized use or distribution of digital works.
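Several of the attacks named above can be simulated in a few lines; the noise density and window sizes below are illustrative choices, not the parameters used in the experiments:

```python
import numpy as np
from scipy.ndimage import median_filter, uniform_filter

def salt_pepper(img, density, rng):
    """Set a fraction `density` of pixels to 0 or 255 at random."""
    out = img.copy()
    mask = rng.random(img.shape) < density
    out[mask] = rng.choice(np.array([0, 255], dtype=img.dtype),
                           size=int(mask.sum()))
    return out

def average_attack(img, size=3):
    """Mean (average) filtering with a size x size window."""
    return uniform_filter(img.astype(np.float64), size=size)

def median_attack(img, size=3):
    """Median filtering with a size x size window."""
    return median_filter(img, size=size)
```

The watermark is extracted from each attacked image and compared against the original watermark via NC, so one attacked copy per attack type suffices to populate a robustness table.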
To evaluate the algorithm’s robustness, the color watermark image in Figure 4p was first embedded into the color cover images in Figure 4, and the watermarked images were then subjected to 12 different attacks. As an example, the resulting watermarked images and the extracted watermark images for the host image in Figure 4c are shown in Figure 6 and Figure 7, respectively. These figures provide compelling evidence of the algorithm’s remarkable ability to withstand common attacks, except for cases involving geometric distortions such as scaling plus rotation. Even when subjected to attacks, our method excels at extracting the color watermark image with minimal distortion.
4.3. Comparative Performance Analysis
To showcase the efficacy of our proposed algorithm, we conducted a comparison with Siyu Chen’s approach [
25] and Prabha’s approach [
16] concerning both robustness and imperceptibility. To ensure a fair evaluation, we applied Siyu Chen’s approach [
25] and Prabha’s approach [
16] to embed the color watermark image from
Figure 4p into the color host images shown in
Figure 4, using a quantization step size of
T = 5, similar to our proposed algorithm.
Table 2 provides a comprehensive comparison of the imperceptibility levels achieved by our proposed method and the existing methods, specifically Chen’s method [25] and Prabha’s method [16]. The results clearly demonstrate that while our approach may show slightly lower
PSNR for a few images compared to Chen’s algorithm, it consistently achieves higher
PSNR for most images, averaging an improvement of 0.6 dB. In contrast, when compared with Prabha’s method, our algorithm consistently outperforms across all images, showcasing an average improvement of 2.83 dB in
PSNR values.
A key factor contributing to these results is our method’s strategic embedding in the first two columns of the WHT block, which minimizes pixel alterations compared to [16,25], both of which affect more pixels across multiple WHT block columns. Importantly, the difference in
PSNR values is more pronounced between our proposed algorithm and Prabha’s approach than between our algorithm and Siyu Chen’s approach. This is due to our algorithm’s double-embedding capacity compared to Chen’s algorithm, while maintaining the same embedding capacity as Prabha’s method. Additionally, our method achieves a mean
SSIM improvement of approximately 0.01 and 0.03 compared to Siyu Chen’s approach [
25] and Prabha’s approach [
16], respectively.
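Since the exact embedding rule is not reproduced in this section, the following is a generic parity-quantization (QIM-style) sketch of how watermark bits might be embedded in 4 × 4 WHT coefficients with step T = 5; the quantization rule and the choice of coefficient are assumptions for illustration only, not the authors’ precise method:

```python
import numpy as np

# Natural-ordered 4x4 Hadamard matrix; symmetric, with H @ H = 4 * I.
H = np.array([[1,  1,  1,  1],
              [1, -1,  1, -1],
              [1,  1, -1, -1],
              [1, -1, -1,  1]], dtype=np.float64)

def wht4(block):
    """Forward 4x4 Walsh-Hadamard transform of an image block."""
    return H @ block @ H / 4.0

def iwht4(coeffs):
    """Inverse transform (same form, since H @ H = 4I)."""
    return H @ coeffs @ H / 4.0

def embed_bit(c, bit, T=5.0):
    """Snap coefficient c to the nearest multiple of T whose quotient
    parity equals the watermark bit (a generic QIM-style rule)."""
    q = int(np.round(c / T))
    if q % 2 != bit:
        q += 1 if c / T >= q else -1
    return q * T

def extract_bit(c, T=5.0):
    """Recover the bit from the parity of the quantized coefficient."""
    return int(np.round(c / T)) % 2
```

Under such a rule, each embedded bit perturbs the chosen coefficient by at most 1.5T, which is why a smaller number of modified coefficients per block translates directly into higher PSNR.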
For the high-resolution images of
Figure 4m–o, the
PSNR values of the proposed and existing algorithms are nearly identical, with the proposed algorithm showing slightly higher values. Our algorithm achieves this while having double the embedding capacity of Chen’s algorithm and maintaining the same embedding capacity as Prabha’s algorithm.
Table 3 presents a concise comparison of the robustness levels between our proposed algorithm, Siyu Chen’s approach [
25], and Prabha’s approach [
16], evaluated based on mean
NC values. This evaluation encompasses various simulated attacks: average filtering, brightening, JPEG compression, darkening, median filtering, rotation, salt-and-pepper noise, scaling, sharpening, scaling plus rotation, speckle noise, and white noise. The results in Table 3 make it clear that the proposed algorithm achieves higher average
NC values compared to Siyu Chen’s approach [
25] and Prabha’s approach [
16]. These mean NC values demonstrate the robustness of our algorithm across the full range of simulated scenarios. In summary, our proposed watermarking method not only exhibits superior imperceptibility but also surpasses Siyu Chen’s approach [
25] and Prabha’s approach [
16] in terms of robustness.
The embedding capacity of these watermarking methods is tabulated in
Table 4. In [
25], two watermark bits are embedded in the first row of a 4 × 4 block, and in [
16], four watermark bits are inserted in the last two rows’ coefficients of a 4 × 4 block. Our algorithm inserts four watermark bits in the first and second column coefficients of a 4 × 4 block. The embedding capacity of our method is the same as that of [
16] and double that of [
25]. The mean embedding and extraction times of both existing algorithms and the proposed algorithm are presented in
Table 5. From these data, it is evident that the proposed method exhibits significantly lower computational complexity compared to the other existing methods.