Article

A Novel Blind Double-Color Image Watermarking Algorithm Utilizing Walsh–Hadamard Transform with Symmetric Embedding Locations

by KVSV Trinadh Reddy * and S. Narayana Reddy
Department of Electronics and Communication Engineering, Sri Venkateswara University College of Engineering, SV University, Tirupathi 517502, Andhra Pradesh, India
* Author to whom correspondence should be addressed.
Symmetry 2024, 16(7), 877; https://doi.org/10.3390/sym16070877
Submission received: 30 April 2024 / Revised: 24 June 2024 / Accepted: 28 June 2024 / Published: 10 July 2024
(This article belongs to the Special Issue Symmetry in Image Encryption)

Abstract: This paper introduces an effective blind watermarking algorithm for double-color images utilizing the Walsh–Hadamard Transform (WHT) with symmetric embedding locations to enhance imperceptibility. The proposed algorithm leverages the energy accumulation capability and the significant correlations among WHT coefficients. First, the color host image is partitioned into its red (R), green (G), and blue (B) channels, each of which is further subdivided into 4 × 4 blocks. Through analysis, the algorithm identifies the WHT coefficients that are least visually sensitive to embedding a color watermark and optimizes the embedding locations accordingly, achieving better imperceptibility. Extensive simulation results verify the superior performance of the proposed algorithm compared to related approaches, not only in imperceptibility but also in embedding capacity and robustness.

1. Introduction

In the current digital era, protecting copyright is of utmost importance, as it has become easy to duplicate and distribute content without consent. It is essential to safeguard the rights of creators and owners of original works, and to ensure that they are fairly compensated for their efforts [1,2]. The digital image watermarking method has evolved into a vital tool within the realm of multimedia security. Its primary function is the seamless embedding of watermarks into digital media by exploiting data redundancies and human visual limitations. Through the application of advanced techniques, these watermarks are rendered imperceptible to the naked eye [3,4,5]. In cases where it is necessary to determine whether the digital information has been maliciously tampered with or to address copyright problems, specialized algorithms can be used to extract the watermark information that is concealed in the host multimedia. The advent of digital watermarking technology has provided a powerful tool for digital copyright protection, and it continues to advance with the ever-increasing demands of multimedia security.
Digital watermarking algorithms can be categorized by their working domain into spatial domain algorithms [6,7,8] and frequency domain algorithms [9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24]. Both approaches have their respective merits and demerits. Spatial domain algorithms embed watermarks by directly modifying pixel values of the host image, making them computationally simple and fast to execute; however, their reliability is comparatively lower than that of frequency domain algorithms.
The spatial domain methods offer innovative approaches to digital watermarking, addressing the need for robust solutions in copyright protection and multimedia security. Some techniques seamlessly blend spatial and frequency domain methods, ensuring robustness even without access to the original image. Others focus on pixel-level watermarking in color images, leveraging the DC coefficient to enhance speed, robustness, and efficiency. Moreover, a Schur decomposition-based approach enables faster processing and imperceptible embedding of messages within image pixels, collectively providing faster, more secure, and robust solutions for digital property marking and copyright protection.
For instance, Yuan Z. [6] introduced a blind image watermarking method that incorporates both spatial and frequency domain approaches. Their method embeds and extracts the watermark without requiring the original image, rendering it a blind watermarking method, and it demonstrated strong robustness under rigorous testing against various types of attacks. Similarly, a method for watermarking color images [7] hides information directly within the pixels, avoiding the complex transformations that slow down traditional techniques. It utilizes a special value, the DC coefficient, which reflects the image’s average color; by slightly altering some pixel colors, the watermark information is embedded. This approach offers three advantages: speed, due to simpler computation; robustness against tampering, because the watermark is linked to the DC coefficient; and efficiency, by combining the strengths of spatial and frequency domain watermarking. Tests on various image collections confirm the method’s effectiveness in hiding watermarks while offering faster processing and stronger robustness than existing techniques.
Furthermore, Su et al. [8] proposed a new method that utilizes Schur decomposition for protecting copyrights of color images. It works by hiding a secret message directly within the tiny colored squares, or pixels, of the image. Unlike older methods that require complex calculations, this one is much faster because it works directly on the pixels themselves. It cleverly uses a hidden property within the image data to embed the message, and these changes are so small that one cannot see them in the final picture. Tests show this method excels in three ways: the message stays hidden even if someone edits the image, it is faster than other techniques because it avoids complex math, and it can be used to watermark images instantly, like when one takes a photo. This new approach provides a faster and more secure way to mark digital property.
Various frequency domain watermarking techniques have emerged to tackle digital copyright challenges and enhance multimedia security. These methods prioritize imperceptibility, robustness, and payload capacity, showcasing exceptional performance against specific noise types. For instance, the technique developed in [9] uses a unique modulation strategy to embed watermarks within multimedia content, ensuring high image quality and strong resistance to common attacks. The adaptive watermarking method [10] employs Arnold chaotic maps and redundant transforms to scramble and embed watermarks into color images, balancing invisibility and robustness. Similarly, the technique described in [11] combines fast Walsh–Hadamard transform (FWHT) and singular value decomposition (SVD) to efficiently embed watermarks, offering superior performance over traditional methods.
Other notable methods include the use of Fourier transforms [12] to conceal watermarks within color images, providing good imperceptibility and robustness. A new method [13] leverages a 2D discrete cosine transform (DCT) to embed watermarks by adjusting specific frequencies within image blocks, excelling in invisibility, robustness, and security. The blind watermarking algorithm [14] integrates discrete Fourier transform (DFT) within the spatial domain for rapid processing and robust watermarking. Additionally, the technique used in [15] employs a two-level DCT to embed watermarks into both DC and AC coefficients, enhancing resilience against image processing and geometric attacks. The algorithm in [16] utilizes the Walsh–Hadamard transform (WHT) to embed watermarks in a way that minimizes visual distortion while maintaining high embedding capacity and robustness.
Further advancements include the use of Schur decomposition [17,18] for efficient watermark embedding and robust blind detection. These methods use eigenvalue quantization and affine transformation for enhanced security and robustness. The algorithm in [19] employs LU decomposition and the Arnold transform to embed watermarks with high imperceptibility and robustness. In [20], QR decomposition is used to embed color watermarks, minimizing visible distortion and improving performance against common attacks. The techniques in [21,22] utilize Schur decomposition and a combination of discrete wavelet transform (DWT) and SVD for robust and imperceptible watermarking in color images and medical images, respectively. The methods in [23,24] further optimize watermarking using DCT coefficients and QR decomposition, enhancing imperceptibility and security. The blind image watermarking technique in [25] uses WHT to embed watermarks in color images, demonstrating superior performance in invisibility and robustness against attacks.
The quality of digital watermarking technology depends on several key factors, including its ability to remain invisible, its resistance to tampering, its real-time performance, and its capacity for handling large amounts of information. With color images being so prevalent, there is a need for a watermarking algorithm that can meet these requirements while effectively handling color data. This paper proposes a new blind watermarking technique that builds upon the strengths of the Walsh–Hadamard transform (WHT) in the frequency domain. Unlike the method presented in [25], this approach optimizes watermark embedding and extraction by strategically selecting specific locations within the WHT coefficients of non-overlapping image blocks. This strategy leverages the WHT’s energy concentration property, potentially leading to improved performance.
The proposed watermarking technique belongs to the class of double-color image watermarking techniques, which are well suited to applications such as copyright protection owing to their large embedding capacity. Additionally, this watermarking method is blind: it does not require the original image for extraction, ensuring the watermark’s information remains protected. The remainder of this manuscript is arranged as follows. Section 2, Preliminaries, presents the basic principles and relevant background employed in the proposed watermarking technique, encompassing the Arnold transform and the WHT. Section 3 details the proposed watermark insertion and retrieval procedures. Section 4 examines and assesses the efficacy of our approach using extensive empirical data. Finally, conclusions are offered in Section 5.

2. Preliminaries

2.1. Arnold Transform

The Arnold transform is a potent technique to manipulate images that is commonly used in digital image decryption and encryption, as well as digital watermarking. Its periodicity property enables the transformation to be reversed after a specified number of iterations. The Arnold transform is often utilized in digital watermarking as a preprocessing step to enhance watermark security. It offers simplicity and efficiency and can be applied to images of varying sizes and formats, making it a versatile tool in image processing. However, the Arnold transform requires careful parameter selection to ensure both security and reversibility, and is susceptible to certain attacks, such as differential attacks. Nevertheless, the Arnold transform remains a valuable tool in image processing and digital watermarking techniques with potential for future development.
To prepare the watermark image for embedding, it undergoes a preprocessing step where it is scrambled using the Arnold transform. The transformation formula for an N × N matrix is represented in Equation (1), which maps the coordinates (i, j) of a pixel in the watermark image to the coordinates (i′, j′) of the corresponding pixel in the scrambled image.

$$\begin{bmatrix} i' \\ j' \end{bmatrix} = \begin{bmatrix} 1 & 1 \\ 1 & 2 \end{bmatrix} \begin{bmatrix} i \\ j \end{bmatrix} \bmod N; \quad i, j, i', j' = 0, 1, \ldots, N-1 \tag{1}$$
To restore the original image, the inverse Arnold scrambling can be applied as follows.
$$\begin{bmatrix} i \\ j \end{bmatrix} = \left( \begin{bmatrix} 2 & -1 \\ -1 & 1 \end{bmatrix} \begin{bmatrix} i' \\ j' \end{bmatrix} + \begin{bmatrix} N \\ N \end{bmatrix} \right) \bmod N \tag{2}$$
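The two mappings above can be sketched in code. The following is a minimal NumPy illustration (not part of the original paper); it applies Equation (1) pixel by pixel, undoes it with Equation (2), and checks the periodicity property for N = 4, where the transform repeats after three iterations.

```python
import numpy as np

def arnold_scramble(img):
    """One iteration of the Arnold transform, Eq. (1)."""
    n = img.shape[0]
    out = np.empty_like(img)
    for i in range(n):
        for j in range(n):
            # [i'; j'] = [[1, 1], [1, 2]] [i; j] mod N
            out[(i + j) % n, (i + 2 * j) % n] = img[i, j]
    return out

def arnold_unscramble(img):
    """One iteration of the inverse Arnold transform, Eq. (2)."""
    n = img.shape[0]
    out = np.empty_like(img)
    for i in range(n):
        for j in range(n):
            # [i; j] = [[2, -1], [-1, 1]] [i'; j'] mod N
            out[(2 * i - j) % n, (-i + j) % n] = img[i, j]
    return out

w = np.arange(16).reshape(4, 4)
assert np.array_equal(arnold_unscramble(arnold_scramble(w)), w)

# Periodicity: for N = 4 the Arnold map has period 3.
s = w
for _ in range(3):
    s = arnold_scramble(s)
assert np.array_equal(s, w)
```

The inverse map follows directly from Equation (2): applying it to the scrambled coordinates (i′, j′) = (i + j, i + 2j) returns (2i′ − j′, −i′ + j′) = (i, j).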

2.2. Walsh–Hadamard Transform

2.2.1. Hadamard Matrix

The Walsh–Hadamard Transform (WHT) is a mathematical operation that can convert a signal or a set of data into an alternative representation. This method shares similarities with the Fourier transform, which breaks down a signal into its constituent frequency components. However, the WHT employs a distinct kernel known as the Hadamard matrix, which is a square matrix created using the Hadamard function.
The Hadamard matrix solely comprises elements that are either +1 or −1. This quality makes it efficient in matrix transformations, as it can effectively aggregate the energy of different signal segments. By subjecting a signal or a dataset to the WHT, a new set of coefficients is obtained that expresses the signal in terms of its Hadamard components. A Hadamard matrix of order 2^p is constructed recursively from the matrix of order 2^(p−1): the larger matrix is built from four copies of the smaller one, with one copy negated. This iterative construction guarantees that the resulting matrix is orthogonal and exhibits the characteristics essential for the WHT. The Hadamard matrix of size 2 × 2 can be presented as:
$$H_2 = \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix} \tag{3}$$
Then, the Hadamard matrix with size 4 × 4 can be represented as:
$$H_4 = \begin{bmatrix} H_2 & H_2 \\ H_2 & -H_2 \end{bmatrix} = \begin{bmatrix} 1 & 1 & 1 & 1 \\ 1 & -1 & 1 & -1 \\ 1 & 1 & -1 & -1 \\ 1 & -1 & -1 & 1 \end{bmatrix} \tag{4}$$
Consequently, the general recursive representation of the Hadamard matrix used in the Walsh–Hadamard transform is given by:

$$H_{2^p} = \begin{bmatrix} H_{2^{p-1}} & H_{2^{p-1}} \\ H_{2^{p-1}} & -H_{2^{p-1}} \end{bmatrix}; \quad p = 1, 2, 3, \ldots \tag{5}$$
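The recursion in Equation (5) is straightforward to implement. Below is a short NumPy sketch (an illustration, not code from the paper) that builds the Hadamard matrix by repeated block composition and checks the orthogonality property H · Hᵀ = N · I.

```python
import numpy as np

def hadamard(order):
    """Build the Hadamard matrix of the given power-of-two order
    via the recursion of Eq. (5): H_2p = [[H, H], [H, -H]]."""
    h = np.array([[1]])
    while h.shape[0] < order:
        h = np.block([[h, h], [h, -h]])
    return h

H4 = hadamard(4)
# Matches Eq. (4) and is orthogonal: H4 @ H4.T == 4 * I.
assert np.array_equal(H4, np.array([[1, 1, 1, 1],
                                    [1, -1, 1, -1],
                                    [1, 1, -1, -1],
                                    [1, -1, -1, 1]]))
assert np.array_equal(H4 @ H4.T, 4 * np.eye(4, dtype=int))
```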

2.2.2. 2D-WHT

The 2D Walsh–Hadamard transform (2D-WHT) is a technique that enables the concentrating of energy in a two-dimensional matrix. This process involves utilizing a Hadamard matrix to multiply the 2D matrix in either the left or right direction. When the Hadamard matrix is left-multiplied with the 2D matrix, the energy present in the matrix becomes focused on the first row. Similarly, when a 2D matrix undergoes right-hand multiplication with a Hadamard matrix, energy becomes significantly concentrated in the first column. However, if the matrix is subjected to multiplication by Hadamard matrices on both the left and right sides, the energy becomes highly concentrated in the upper-left element of the matrix.
The approach of left-hand multiplication with the WHT is employed in this paper. Next, the watermark information is embedded in the optimized embedding locations. This involves manipulating the values of the WHT coefficients in the optimized embedding locations, which have less impact on the image’s visual quality. Given a two-dimensional image represented by the function f(x, y), its WHT is represented by F(U, V) and is defined as follows:
$$F(U, V) = \frac{1}{N} \times H_N \times f(x, y) \tag{6}$$
where HN is the Hadamard matrix of order N and the inverse WHT of F(U, V) is defined as follows:
$$f(x, y) = H_N \times F(U, V) \tag{7}$$
In the following example, we will illustrate how the 2-D WHT can be used for both forward and inverse transformations. Suppose a block of an image with dimensions 4 × 4 is given by:
$$f(x, y) = \begin{bmatrix} 58 & 69 & 55 & 58 \\ 59 & 52 & 51 & 60 \\ 66 & 61 & 53 & 54 \\ 62 & 61 & 62 & 65 \end{bmatrix} \tag{8}$$
The WHT of f(x, y) can be calculated as follows:
$$F(U, V) = \frac{1}{4} \times \begin{bmatrix} 1 & 1 & 1 & 1 \\ 1 & -1 & 1 & -1 \\ 1 & 1 & -1 & -1 \\ 1 & -1 & -1 & 1 \end{bmatrix} \times \begin{bmatrix} 58 & 69 & 55 & 58 \\ 59 & 52 & 51 & 60 \\ 66 & 61 & 53 & 54 \\ 62 & 61 & 62 & 65 \end{bmatrix} = \begin{bmatrix} 61.25 & 60.75 & 55.25 & 59.25 \\ 0.75 & 4.25 & -1.25 & -3.25 \\ -2.75 & -0.25 & -2.25 & -0.25 \\ -1.25 & 4.25 & 3.25 & 2.25 \end{bmatrix} \tag{9}$$
The inverse WHT of F(U, V) is computed as:
$$f(x, y) = \begin{bmatrix} 1 & 1 & 1 & 1 \\ 1 & -1 & 1 & -1 \\ 1 & 1 & -1 & -1 \\ 1 & -1 & -1 & 1 \end{bmatrix} \times \begin{bmatrix} 61.25 & 60.75 & 55.25 & 59.25 \\ 0.75 & 4.25 & -1.25 & -3.25 \\ -2.75 & -0.25 & -2.25 & -0.25 \\ -1.25 & 4.25 & 3.25 & 2.25 \end{bmatrix} = \begin{bmatrix} 58 & 69 & 55 & 58 \\ 59 & 52 & 51 & 60 \\ 66 & 61 & 53 & 54 \\ 62 & 61 & 62 & 65 \end{bmatrix} \tag{10}$$
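The forward and inverse transforms of this example can be verified numerically. The following NumPy sketch (illustrative, not from the paper) reproduces Equations (6) and (7) on the block above and confirms the round trip.

```python
import numpy as np

H4 = np.array([[1, 1, 1, 1],
               [1, -1, 1, -1],
               [1, 1, -1, -1],
               [1, -1, -1, 1]], dtype=float)

f = np.array([[58, 69, 55, 58],
              [59, 52, 51, 60],
              [66, 61, 53, 54],
              [62, 61, 62, 65]], dtype=float)

F = (H4 @ f) / 4      # forward 2D-WHT, Eq. (6)
f_back = H4 @ F       # inverse 2D-WHT, Eq. (7)

# Energy concentrates in the first row after left-multiplication.
assert np.isclose(F[0, 0], 61.25) and np.isclose(F[0, 1], 60.75)
assert np.allclose(f_back, f)
```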

3. Proposed Method

The proposed method is presented in this section and comprises two primary phases: watermark embedding and watermark extraction. The method’s pipeline is shown in Figure 1 to provide a visual representation of the process flow. The first phase embeds the watermark into the digital content, while the second recovers the embedded watermark from the watermarked content.
First, the cover digital image is partitioned into non-overlapping 4 × 4 blocks, each of which subsequently undergoes the WHT. In contrast to Siyu Chen’s approach [25], which inserts two watermark bits in the first row of a 4 × 4 block, and Prabha’s approach [16], which inserts four watermark bits in the third-row and fourth-row coefficients of a 4 × 4 block, our algorithm inserts four bits at optimal locations to enhance the image’s imperceptibility and visual quality. To identify the best embedding locations, an in-depth investigation has been carried out, as follows.
Suppose a 4 × 4 image block “a” and its WHT “A” are given by Equations (11) and (12), respectively. In digital image watermarking, watermark information is embedded by strategically modifying specific WHT coefficients within “A”. This results in a modified WHT block denoted by “A*”. The watermarked image block “a*” is derived by applying the inverse WHT to the modified WHT block “A*”. The image’s visual quality, i.e., imperceptibility, is determined by how similar the resulting block “a*” is to the original image block “a”. We now examine the extent of modification undergone by the watermarked image block “a*” relative to the original block “a” when any single WHT coefficient of “A” is modified.
$$a = \begin{bmatrix} a_{11} & a_{12} & a_{13} & a_{14} \\ a_{21} & a_{22} & a_{23} & a_{24} \\ a_{31} & a_{32} & a_{33} & a_{34} \\ a_{41} & a_{42} & a_{43} & a_{44} \end{bmatrix} \tag{11}$$
$$A = \begin{bmatrix} A_{11} & A_{12} & A_{13} & A_{14} \\ A_{21} & A_{22} & A_{23} & A_{24} \\ A_{31} & A_{32} & A_{33} & A_{34} \\ A_{41} & A_{42} & A_{43} & A_{44} \end{bmatrix} \tag{12}$$
We have derived the matrices of the watermarked image blocks for all sixteen possible cases where a specific WHT coefficient “Ai j” is modified (i = 1 to 4 and j = 1 to 4). We present three cases that describe the process of modifying the coefficients A11, A22, and A43 separately by adding a variation value d, and provide the equations needed to calculate the modified WHT block and watermarked image block for each case. If we modify A11 by adding d, the modified WHT block and watermarked image block are obtained using Equations (13) and (14), respectively. Similarly, if we modify A22 by adding d, we can use Equations (15) and (16) to calculate the modified WHT block and watermarked image block, respectively. If we modify A43 by adding d, the modified WHT block and watermarked image block can be obtained using Equations (17) and (18), respectively. In all cases, the value of d depends on the specific embedding bits used.
$$A^* = \begin{bmatrix} A_{11}+d & A_{12} & A_{13} & A_{14} \\ A_{21} & A_{22} & A_{23} & A_{24} \\ A_{31} & A_{32} & A_{33} & A_{34} \\ A_{41} & A_{42} & A_{43} & A_{44} \end{bmatrix} \tag{13}$$
$$a^* = \begin{bmatrix} a_{11}+d & a_{12} & a_{13} & a_{14} \\ a_{21}+d & a_{22} & a_{23} & a_{24} \\ a_{31}+d & a_{32} & a_{33} & a_{34} \\ a_{41}+d & a_{42} & a_{43} & a_{44} \end{bmatrix} \tag{14}$$
$$A^* = \begin{bmatrix} A_{11} & A_{12} & A_{13} & A_{14} \\ A_{21} & A_{22}+d & A_{23} & A_{24} \\ A_{31} & A_{32} & A_{33} & A_{34} \\ A_{41} & A_{42} & A_{43} & A_{44} \end{bmatrix} \tag{15}$$
$$a^* = \begin{bmatrix} a_{11} & a_{12}+d & a_{13} & a_{14} \\ a_{21} & a_{22}-d & a_{23} & a_{24} \\ a_{31} & a_{32}+d & a_{33} & a_{34} \\ a_{41} & a_{42}-d & a_{43} & a_{44} \end{bmatrix} \tag{16}$$
$$A^* = \begin{bmatrix} A_{11} & A_{12} & A_{13} & A_{14} \\ A_{21} & A_{22} & A_{23} & A_{24} \\ A_{31} & A_{32} & A_{33} & A_{34} \\ A_{41} & A_{42} & A_{43}+d & A_{44} \end{bmatrix} \tag{17}$$
$$a^* = \begin{bmatrix} a_{11} & a_{12} & a_{13}+d & a_{14} \\ a_{21} & a_{22} & a_{23}-d & a_{24} \\ a_{31} & a_{32} & a_{33}-d & a_{34} \\ a_{41} & a_{42} & a_{43}+d & a_{44} \end{bmatrix} \tag{18}$$
Upon careful analysis of the sixteen matrices representing the watermarked image blocks, a significant observation was made: whenever a particular WHT coefficient “Aij” is modified, every pixel in column “j” of the watermarked image block is altered. The degree of variation between the pixel values of the watermarked block and those of the original block is directly related to the magnitude of the variation d applied to the WHT coefficient “Aij”. This observation sheds light on how modifying a WHT coefficient affects the quality of the watermarked image, offering a pathway to fine-tune the embedding locations for enhanced outcomes.
Siyu Chen’s approach [25] inserts one watermark bit into the first two elements of the first row (A11 and A12) and a second watermark bit into the remaining elements of the first row (A13 and A14). Assume that the coefficients A11, A12, A13, and A14 are modified to A11 + d1, A12 + d2, A13 + d3, and A14 + d4, respectively, after embedding, where d1, d2, d3, and d4 represent variations that depend on the embedded bits. In this case, the modified WHT block “A*” and the watermarked image block “a*” are given by Equations (19) and (20), respectively.
$$A^* = \begin{bmatrix} A_{11}+d_1 & A_{12}+d_2 & A_{13}+d_3 & A_{14}+d_4 \\ A_{21} & A_{22} & A_{23} & A_{24} \\ A_{31} & A_{32} & A_{33} & A_{34} \\ A_{41} & A_{42} & A_{43} & A_{44} \end{bmatrix} \tag{19}$$
$$a^* = \begin{bmatrix} a_{11}+d_1 & a_{12}+d_2 & a_{13}+d_3 & a_{14}+d_4 \\ a_{21}+d_1 & a_{22}+d_2 & a_{23}+d_3 & a_{24}+d_4 \\ a_{31}+d_1 & a_{32}+d_2 & a_{33}+d_3 & a_{34}+d_4 \\ a_{41}+d_1 & a_{42}+d_2 & a_{43}+d_3 & a_{44}+d_4 \end{bmatrix} \tag{20}$$
Prabha’s approach [16] utilizes specific elements of each column of “A” to embed individual watermark bits. The last two elements of the first column (A31 and A41) are employed to embed the first watermark bit, while the last two elements of the second column (A32 and A42) are utilized for the second watermark bit. Similarly, the last two elements of the third column (A33 and A43) correspond to the third watermark bit, and the last two elements of the fourth column (A34 and A44) are used for the fourth watermark bit.
Assume after the embedding process that the coefficients “Ai j” are modified to “Aij + dij”, where “dij” represents variations that depend on the embedded bits (i = 3 to 4 and j = 1 to 4). Consequently, the modified Walsh–Hadamard transform (WHT) block is denoted as “A*”, and the resulting watermarked image block is represented as “a*”. The equations for “A*” and “a*” are given by Equations (21) and (22), respectively.
$$A^* = \begin{bmatrix} A_{11} & A_{12} & A_{13} & A_{14} \\ A_{21} & A_{22} & A_{23} & A_{24} \\ A_{31}+d_{31} & A_{32}+d_{32} & A_{33}+d_{33} & A_{34}+d_{34} \\ A_{41}+d_{41} & A_{42}+d_{42} & A_{43}+d_{43} & A_{44}+d_{44} \end{bmatrix} \tag{21}$$
$$a^* = \begin{bmatrix} a_{11}+d_{31}+d_{41} & a_{12}+d_{32}+d_{42} & a_{13}+d_{33}+d_{43} & a_{14}+d_{34}+d_{44} \\ a_{21}+d_{31}-d_{41} & a_{22}+d_{32}-d_{42} & a_{23}+d_{33}-d_{43} & a_{24}+d_{34}-d_{44} \\ a_{31}-d_{31}-d_{41} & a_{32}-d_{32}-d_{42} & a_{33}-d_{33}-d_{43} & a_{34}-d_{34}-d_{44} \\ a_{41}-d_{31}+d_{41} & a_{42}-d_{32}+d_{42} & a_{43}-d_{33}+d_{43} & a_{44}-d_{34}+d_{44} \end{bmatrix} \tag{22}$$
Drawing from our prior analysis, it becomes evident that modifying a single WHT coefficient in the WHT block causes a corresponding change in all the pixels within the corresponding column of the watermarked image block. Siyu Chen’s approach [25] inserts the first watermark bit by changing the WHT coefficients A11 and A12 in the first and second columns of the WHT block, which leads to changes in all the pixels within these columns of the watermarked image block. Similarly, inserting the second watermark bit involves modifying the WHT coefficients A13 and A14 in the third and fourth columns, affecting all pixels in these columns of the watermarked image block.
Prabha’s approach [16] similarly alters all the pixels of the watermarked image block. This occurs because four watermark bits are inserted in all the columns of the third and fourth rows of the WHT block. Consequently, both Chen’s approach [25] and Prabha’s approach [16] share a drawback, as they result in altering the values of all pixels in all columns of the watermarked image block due to watermark bits being embedded in all the columns of the WHT block. To address this limitation, an alternative approach is proposed. In this new approach, four watermark bits are inserted in the first and second columns of the WHT block, as follows.
The first two elements of the first row (A11 and A12) are employed to insert the first watermark bit, while the first two elements of the second row (A21 and A22) are utilized for the second watermark bit. Similarly, the first two elements of the third row (A31 and A32) correspond to the third watermark bit, and the first two elements of the fourth row (A41 and A42) are used for the fourth watermark bit.
This alternative approach overcomes the limitation of both Chen’s and Prabha’s approaches by strategically embedding the watermark bits in the first two columns of the WHT block. By doing so, it capitalizes on the fact that the first two columns have already undergone alterations due to the initial embedding of the first watermark bit in Chen’s approach and the first two watermark bits in Prabha’s approach. Consequently, embedding the remaining watermark bits in the first two elements of the subsequent rows ensures that only the values of the previously altered pixels in the first two columns are modified while leaving the unaltered pixels in the subsequent two columns undisturbed. In this case, the modified WHT block “A*” and the watermarked image block “a*” are given by Equations (23) and (24), respectively.
$$A^* = \begin{bmatrix} A_{11}+d_1 & A_{12}+d_2 & A_{13} & A_{14} \\ A_{21}+d_3 & A_{22}+d_4 & A_{23} & A_{24} \\ A_{31} & A_{32} & A_{33} & A_{34} \\ A_{41} & A_{42} & A_{43} & A_{44} \end{bmatrix} \tag{23}$$
$$a^* = \begin{bmatrix} a_{11}+d_1+d_3 & a_{12}+d_2+d_4 & a_{13} & a_{14} \\ a_{21}+d_1-d_3 & a_{22}+d_2-d_4 & a_{23} & a_{24} \\ a_{31}+d_1+d_3 & a_{32}+d_2+d_4 & a_{33} & a_{34} \\ a_{41}+d_1-d_3 & a_{42}+d_2-d_4 & a_{43} & a_{44} \end{bmatrix} \tag{24}$$
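The column-locality property derived above is easy to confirm numerically: adding a variation d to a single WHT coefficient A_ij changes only column j of the reconstructed block. The following NumPy sketch (illustrative; the random test block is an assumption, not data from the paper) checks this for all sixteen coefficient positions.

```python
import numpy as np

H4 = np.array([[1, 1, 1, 1],
               [1, -1, 1, -1],
               [1, 1, -1, -1],
               [1, -1, -1, 1]], dtype=float)

rng = np.random.default_rng(42)
a = rng.integers(0, 256, size=(4, 4)).astype(float)
A = (H4 @ a) / 4  # forward WHT, Eq. (6)

for i in range(4):
    for j in range(4):
        A_mod = A.copy()
        A_mod[i, j] += 5.0                 # perturb one coefficient
        a_mod = H4 @ A_mod                 # inverse WHT, Eq. (7)
        changed_cols = np.any(np.abs(a_mod - a) > 1e-9, axis=0)
        # Only column j of the spatial block is affected.
        assert changed_cols.tolist() == [c == j for c in range(4)]
```

This is exactly why embedding all four bits into the first two columns, as in Equations (23) and (24), leaves the pixels of columns three and four untouched.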

3.1. Watermark Embedding

In [25], two watermark bits are embedded in the first row of a 4 × 4 block, and in [16], four watermark bits are inserted in the last two rows’ coefficients of a 4 × 4 block. Our algorithm inserts four watermark bits in the first- and second-column coefficients of a 4 × 4 block. As such, the embedding capacity of our method is the same as that of [16] and double that of [25].
The method being proposed embeds a colored digital watermark image into a colored digital host image. The method of watermark embedding is shown in Figure 2. The host image, denoted as H, is a color picture with a side length of M, while the watermark picture, denoted as W, is also a color picture with a side length of N.

3.1.1. Step 1—Preprocessing the Color Watermark Image

This algorithm begins by preprocessing the color watermark picture. To strengthen the algorithm against potential threats to its security and robustness, a series of processing steps is applied to the color watermark picture W. Initially, the image is split into its three color channels Wi (i = 1, 2, 3), arranged in the order red, green, and blue. Subsequently, the Arnold scrambling transformation is applied to each Wi, rendering the watermark more robust against different attacks. In the final step, the decimal value of each pixel of the scrambled watermark picture is converted to an 8-bit binary number. These 8-bit binary sequences are concatenated successively for each channel, forming an 8N²-bit watermark sequence SWi (i = 1, 2, 3).
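The pixel-to-bit conversion in this step can be sketched as follows (an illustration with a hypothetical 2 × 2 channel, not code from the paper); NumPy's unpackbits/packbits perform the 8-bit decimal-to-binary conversion and its inverse.

```python
import numpy as np

# Hypothetical 2x2 single-channel (scrambled) watermark; values illustrative.
Wi = np.array([[200, 17], [5, 255]], dtype=np.uint8)

# Each decimal pixel value becomes an 8-bit binary string (MSB first);
# the bits of a channel concatenate into an 8*N^2-bit sequence SWi.
SWi = np.unpackbits(Wi.flatten())
assert SWi.size == 8 * Wi.size                        # 8N^2 bits
assert SWi[:8].tolist() == [1, 1, 0, 0, 1, 0, 0, 0]   # 200 = 11001000b

# The extractor reverses the conversion with packbits.
assert np.array_equal(np.packbits(SWi).reshape(Wi.shape), Wi)
```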

3.1.2. Step 2—Preprocessing the Color Host Image

In the second step of the algorithm, the color cover picture H is dissected into three distinct color channels Hi (i = 1, 2, 3): red, green, and blue. Each color host channel Hi is then partitioned into non-overlapping 4 × 4 blocks.

3.1.3. Step 3—Applying the WHT to the Image Blocks

Given that R represents a discrete block of picture data, apply the Walsh–Hadamard transform (WHT) to R using Equation (25) to obtain its corresponding frequency domain coefficient block HR.
$$HR = \frac{1}{N} \times H_N \times R \tag{25}$$
The dimension of the image block is represented by N, and the corresponding Hadamard matrix, HN, which contains 1 and −1, is of order N × N. In this paper, we set the size of the image block as N = 4, thus leading to a corresponding Hadamard matrix of order 4 × 4.

3.1.4. Step 4—Embedding the Watermark

During the watermark embedding process, we extract four watermark bits wj (j = 1 to 4) from the SWi. These four bits are then embedded into a transformed matrix according to the following cases.
Case 1. If wj = 1 and (HRj,1 − HRj,2) ≤ e, modify the values of HRj,1 and HRj,2 (j = 1 to 4) as follows:

$$HR^*_{j,1} = avg_j + \frac{T}{2} \quad (j = 1 \text{ to } 4) \tag{26}$$

$$HR^*_{j,2} = avg_j - \frac{T}{2} \quad (j = 1 \text{ to } 4) \tag{27}$$

Case 2. If wj = 0 and (HRj,2 − HRj,1) ≤ e, modify the values of HRj,1 and HRj,2 (j = 1 to 4) as follows:

$$HR^*_{j,1} = avg_j - \frac{T}{2} \quad (j = 1 \text{ to } 4) \tag{28}$$

$$HR^*_{j,2} = avg_j + \frac{T}{2} \quad (j = 1 \text{ to } 4) \tag{29}$$
Equations (26)–(29) dictate the specific location and manner in which the watermark is embedded. They use the average value avgj = (HRj,1 + HRj,2)/2 computed from the first two elements of the jth row. The error parameter is denoted by e, while T represents the quantization step size, and HRm,n refers to the value at position (m, n). In this paper, both the error parameter e and the step size T are assigned a value of 5.
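The embedding rule of Equations (26)–(29) can be sketched as below. This is an illustrative NumPy implementation, not the authors' code; for brevity it re-quantizes every row unconditionally, whereas the paper modifies coefficients only when the stated margin condition involving e holds.

```python
import numpy as np

def embed_bits(HR, bits, T=5.0):
    """Embed four watermark bits into a 4x4 WHT block per Eqs. (26)-(29):
    row j carries bit j in its first two coefficients."""
    HRs = HR.astype(float).copy()
    for j, w in enumerate(bits):
        avg = (HR[j, 0] + HR[j, 1]) / 2.0   # avg_j of the jth row
        s = 1.0 if w == 1 else -1.0
        HRs[j, 0] = avg + s * T / 2         # Eq. (26) / Eq. (28)
        HRs[j, 1] = avg - s * T / 2         # Eq. (27) / Eq. (29)
    return HRs
```

After embedding, the first two coefficients of each row differ by exactly ±T, so the extractor can later recover each bit with a simple comparison.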

3.1.5. Step 5—Inverse WHT Transform

The image block “R*” containing the watermark is obtained using the inverse WHT of Equation (30):
$$R^* = H_N \times HR^* \tag{30}$$
where HR* refers to the matrix in the frequency domain that contains the watermark.

3.1.6. Step 6—Iterative Embedding and Watermarked Image Compilation

To obtain the watermarked channels Hi*, repeat the steps mentioned above (steps 2 to 5) until the entire binary watermark data have been embedded. Once this is done, combine all the watermarked host channels Hi* to obtain the final watermarked image H*.
Below is an explanation of the proposed algorithm accompanied by an illustrative example. Let us begin with a subblock from the host image denoted as R, with dimensions of 4 × 4.
$$R = \begin{bmatrix} 98 & 99 & 95 & 88 \\ 89 & 82 & 91 & 80 \\ 86 & 81 & 83 & 63 \\ 82 & 91 & 62 & 65 \end{bmatrix} \tag{31}$$
Applying the WHT to R yields the 4 × 4 coefficient block HR:
$$HR = \begin{bmatrix} 88.75 & 88.25 & 82.75 & 74.00 \\ 3.25 & 1.75 & 6.25 & 1.50 \\ 4.75 & 2.25 & 10.25 & 10.00 \\ 1.25 & 6.75 & -4.25 & 2.50 \end{bmatrix} \tag{32}$$
Here, the first two elements of each row are as follows. In the first row, the elements are HR11 = 88.75 and HR12 = 88.25, while the second row holds HR21 = 3.25 and HR22 = 1.75. Similarly, the third row has HR31 = 4.75 and HR32 = 2.25, and the fourth row contains HR41 = 1.25 and HR42 = 6.75. We then compute the average of these two elements for each row. For the first row, the average is avg1 = 88.5, and for the second row, it is avg2 = 2.5. Likewise, the third row’s average is avg3 = 3.5, and the fourth row’s average is avg4 = 4.
The four watermark bits to be embedded, w1, w2, w3, and w4, are assumed to be 1, 0, 1, and 1, respectively. These watermark bits are embedded according to Equations (26)–(29). Both the error parameter e and the quantization step size T are set to 5. Let us begin with the first watermark bit, w1 = 1. To embed it, we first check whether (HR11 − HR12) ≤ e. Since (88.75 − 88.25) is indeed less than 5, and the average value is avg1 = 88.5, we modify HR11 and HR12: H*R11 = avg1 + T/2 = 88.5 + 2.5 = 91 and H*R12 = avg1 − T/2 = 88.5 − 2.5 = 86.
Now let us move to the second watermark bit, w2 = 0, following a similar process. We check whether (HR22 − HR21) ≤ e; in this case (1.75 − 3.25) is smaller than 5. With the average value avg2 = 2.5, we adjust HR21 and HR22: H*R21 = avg2 − T/2 = 2.5 − 2.5 = 0 and H*R22 = avg2 + T/2 = 2.5 + 2.5 = 5.
Moving on to the third watermark bit, w3 = 1, we check (HR31 − HR32) and see that (4.75 − 2.25) is less than 5. With avg3 = 3.5, we adjust HR31 and HR32 accordingly: H*R31 = avg3 + T/2 = 3.5 + 2.5 = 6 and H*R32 = avg3 − T/2 = 3.5 − 2.5 = 1.
Finally, we reach the fourth watermark bit, w4 = 1. Checking (HR41 − HR42) = (1.25 − 6.75), we confirm it is less than 5. With avg4 = 4, we adjust HR41 and HR42: H*R41 = avg4 + T/2 = 4 + 2.5 = 6.5 and H*R42 = avg4 − T/2 = 4 − 2.5 = 1.5.
Once the watermark is successfully inserted, the resulting altered WHT block is denoted as HR* and can be expressed using Equation (33).
H_R^* = \begin{bmatrix} 91.00 & 86.00 & 82.75 & 74.00 \\ 0.00 & 5.00 & 6.25 & 1.50 \\ 7.00 & 1.00 & 10.25 & 10.00 \\ 6.50 & 2.50 & -4.25 & 2.50 \end{bmatrix}
The matrix representation of R*, as given by Equation (34), can be obtained by applying the inverse WHT to HR*.
R^* = \begin{bmatrix} 104.5 & 94.5 & 95.0 & 88.0 \\ 91.5 & 79.5 & 91.0 & 80.0 \\ 77.5 & 87.5 & 83.0 & 63.0 \\ 90.5 & 82.5 & 62.0 & 65.0 \end{bmatrix}
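As an illustrative sketch (our own code, not the authors' implementation of Equations (23)–(26)), the symmetric quantization rule applied in the first two rows of this example can be written in Python; the function name embed_pair is our own:

```python
# Sketch of the symmetric quantization rule in the worked example: each
# watermark bit pushes the first two coefficients of a row to T/2 above
# and below their average (bit 1: first above; bit 0: first below).
def embed_pair(h1, h2, bit, T=5):
    avg = (h1 + h2) / 2
    if bit == 1:
        return avg + T / 2, avg - T / 2
    return avg - T / 2, avg + T / 2

# First two rows of the example: w1 = 1 and w2 = 0
print(embed_pair(88.75, 88.25, 1))  # (91.0, 86.0) -> H*R11, H*R12
print(embed_pair(3.25, 1.75, 0))    # (0.0, 5.0)   -> H*R21, H*R22
```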

3.2. Watermark Extraction

The extraction of the watermark from the watermarked image is illustrated in Figure 3.

3.2.1. Step 1—Preprocessing of Watermarked Image

The preprocessing involves the segmentation of the watermarked image into three (red, green, and blue) channels, which are denoted as Hi* where (i = 1, 2, 3). Subsequently, all channels undergo further segmentation into non-overlapping subblocks with dimensions of 4 × 4.

3.2.2. Step 2—Apply WHT

The second step of the watermark extraction process involves obtaining the frequency domain coefficients of the block that contains the watermark. To achieve this, we apply the WHT to the watermarked image block R* using Equation (35).
H_R^* = \frac{1}{N} \times H_N \times R^*
The resulting matrix H R * represents the frequency domain coefficients with the watermark. Here, N and HN represent the image block’s size and Hadamard matrix, respectively.
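Equation (35) as printed applies H_N along one dimension of the block; under that reading, and assuming the natural-ordered (Sylvester) Hadamard matrix, the numbers of the worked example are reproduced exactly. A brief NumPy sketch (our own code):

```python
import numpy as np

def hadamard(n):
    """Sylvester-construction Hadamard matrix H_n (n must be a power of 2)."""
    h = np.array([[1]])
    while h.shape[0] < n:
        h = np.kron(np.array([[1, 1], [1, -1]]), h)
    return h

def wht(block):
    """Forward WHT as in Equation (35): H_R = (1/N) * H_N @ block."""
    n = block.shape[0]
    return hadamard(n) @ block / n

R_star = np.array([[104.5, 94.5, 95.0, 88.0],
                   [ 91.5, 79.5, 91.0, 80.0],
                   [ 77.5, 87.5, 83.0, 63.0],
                   [ 90.5, 82.5, 62.0, 65.0]])
H_star = wht(R_star)

# The transform reproduces Equation (38), including the -4.25 entry:
assert np.allclose(H_star[3], [6.5, 2.5, -4.25, 2.5])
```

Since H_N H_N = N·I, the inverse transform is simply R* = H_N @ H_R*, which recovers the block of Equation (37).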

3.2.3. Step 3—Extracting the Watermark

In this step, the watermarks wj* (j = 1 to 4) contained in the image block HR* are extracted using Equation (36):
w_j^* = \begin{cases} 0, & \text{if } H_{Rj,1}^* \leq H_{Rj,2}^* \\ 1, & \text{else} \end{cases}
where wj* denotes the jth watermark bit extracted from the image block HR*, while H*Rj,1 and H*Rj,2 refer to the elements in the 1st and 2nd columns of the jth row of HR*, and j ranges from 1 to 4.

3.2.4. Step 4—Iterative Extraction and Watermark Sequence Compilation

The fourth step in the watermark extraction process involves repeating steps 2 and 3 to extract Swi* for all color channels. Next, each sequence of eight binary digits is converted into its decimal pixel value. In this way, we obtain the complete watermark sequences Sw1*, Sw2*, and Sw3* for the red, green, and blue channels, respectively.

3.2.5. Step 5—Inverse Arnold Transformation

In this step, the watermark sequence Wi* for each channel is obtained by applying the inverse Arnold transformation to the scrambled watermark sequence for each color channel. The index i (i = 1, 2, 3) signifies that this operation is performed independently on each channel.
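As a sketch of this step (our own code; the specific cat map (x, y) → ((x + y) mod N, (x + 2y) mod N) and the iteration count are assumptions, since the scrambling key is defined earlier in the paper), the inverse Arnold transformation undoes the forward scramble exactly:

```python
import numpy as np

def arnold(img, iterations=1):
    """Forward Arnold cat map scramble of an N x N array (assumed map)."""
    n = img.shape[0]
    out = img.copy()
    for _ in range(iterations):
        nxt = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                nxt[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        out = nxt
    return out

def inverse_arnold(img, iterations=1):
    """Inverse Arnold map: (x, y) -> ((2x - y) mod N, (y - x) mod N)."""
    n = img.shape[0]
    out = img.copy()
    for _ in range(iterations):
        nxt = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                nxt[(2 * x - y) % n, (y - x) % n] = out[x, y]
        out = nxt
    return out

# Round trip: unscrambling with the same iteration count recovers the channel.
w = np.arange(64).reshape(8, 8)
assert np.array_equal(inverse_arnold(arnold(w, 3), 3), w)
```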

3.2.6. Step 6—Watermark Image Reconstruction

In the sixth and final step, the watermark sequences Wi* from the three channels are recombined to reconstruct the complete watermark W*.
To continue with the previous example, let us consider the watermarked image block R* from Equation (34), shown again in Equation (37):
R^* = \begin{bmatrix} 104.5 & 94.5 & 95.0 & 88.0 \\ 91.5 & 79.5 & 91.0 & 80.0 \\ 77.5 & 87.5 & 83.0 & 63.0 \\ 90.5 & 82.5 & 62.0 & 65.0 \end{bmatrix}
The matrix HR*, containing the frequency domain coefficients with the watermark, is obtained by applying the WHT of Equation (35), yielding Equation (38):
H_R^* = \begin{bmatrix} 91.00 & 86.00 & 82.75 & 74.00 \\ 0.00 & 5.00 & 6.25 & 1.50 \\ 7.00 & 1.00 & 10.25 & 10.00 \\ 6.50 & 2.50 & -4.25 & 2.50 \end{bmatrix}
By comparing the first two elements within each row (H*Rj,1 and H*Rj,2) of HR* using Equation (36), we extract the watermarks wj* (j = 1 to 4). In the first row, H*R11 (91) is greater than H*R12 (86), so w1* = 1. In the second row, H*R21 (0) is less than H*R22 (5), so w2* = 0. In the third row, H*R31 (7) exceeds H*R32 (1), giving w3* = 1. Similarly, in the fourth row, H*R41 (6.5) is greater than H*R42 (2.5), so w4* = 1.
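The comparison rule applied to this example reduces to a one-liner in Python (our own illustrative code):

```python
# Extract one watermark bit per row by comparing the first two WHT
# coefficients: bit = 1 if the first exceeds the second, else 0.
H_star = [[91.00, 86.00, 82.75, 74.00],
          [ 0.00,  5.00,  6.25,  1.50],
          [ 7.00,  1.00, 10.25, 10.00],
          [ 6.50,  2.50, -4.25,  2.50]]

bits = [0 if row[0] <= row[1] else 1 for row in H_star]
print(bits)  # [1, 0, 1, 1]
```

This recovers exactly the four bits w1–w4 embedded in the earlier example.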

4. Experimental Results

To evaluate the effectiveness of the proposed watermarking technique, simulations were performed on 15 host images. The simulations were conducted with the primary objectives of evaluating the invisibility of the watermark, determining the embedding capacity, and assessing the robustness of the algorithm. The algorithm underwent rigorous testing against a variety of attacks, such as noise addition, filtering operations, JPEG compression, and geometric distortions. Its effectiveness and efficiency were thoroughly examined by comparing it with the watermarking approaches presented by Siyu Chen [25] and Prabha [16]. This comparative analysis yielded valuable insights into the algorithm's performance across various dimensions.
Evaluating the invisibility (or imperceptibility) of a digital image watermarking method is a crucial performance criterion. The level of invisibility directly impacts the picture quality of the watermarked image. A significant objective metric that is used to assess the transparency of a watermarking algorithm is the peak signal-to-noise ratio (PSNR). Within the domain of watermarking, the PSNR finds common application in quantifying the similarity between the original host image and its watermarked counterpart. The PSNR value for all image channels can be computed by first calculating the mean square error (MSE) value of each channel using Equation (39), and then substituting it into Equation (40).
MSE_j = \frac{1}{m \times n} \sum_{u=0}^{m-1} \sum_{v=0}^{n-1} \left[ H_j(u,v) - H_j^*(u,v) \right]^2
PSNR_j = 10 \log_{10} \left( \frac{255^2}{MSE_j} \right) \ \mathrm{dB}
In Equations (39) and (40), j = 1, 2, 3 correspond to the three channels (R, G, and B) of the color image. The dimensions of the image, pertaining to its rows and columns, are denoted by “m” and “n”, respectively. The pixel value of the original host image at (u, v) is represented by Hj (u, v), and Hj* (u, v) signifies the corresponding pixel value in the watermarked image for the jth channel. Equation (41) sums up the individual PSNR values of the three color channels to derive the overall PSNR value for a color image.
PSNR = \sum_{j=1}^{3} PSNR_j
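A minimal Python sketch of Equations (39)–(41) (our own code, with synthetic data standing in for the host and watermarked images):

```python
import numpy as np

def psnr_color(host, marked):
    """Equations (39)-(41): per-channel MSE -> PSNR, summed over R, G, B."""
    total = 0.0
    for j in range(3):
        diff = host[..., j].astype(float) - marked[..., j].astype(float)
        mse = np.mean(diff ** 2)
        total += 10 * np.log10(255.0 ** 2 / mse)
    return total

# Toy check: a uniform +1 change in every channel gives MSE = 1 per channel,
# so each PSNR_j = 20*log10(255) ~ 48.13 dB and the sum is ~144.39 dB.
host = np.zeros((4, 4, 3))
marked = host + 1
print(round(psnr_color(host, marked), 2))  # 144.39
```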
In addition, the assessment of the imperceptibility of watermarking methods involves evaluating how the inclusion of a watermark impacts the visual fidelity of the original image, commonly measured using the structural similarity index measurement (SSIM). The SSIM captures the structural details present within images, independently of brightness and contrast, while modeling distortions as a combination of brightness, contrast, and structure. The mathematical expression for calculating the SSIM between the host and watermarked images is as follows:
SSIM = \frac{(2\mu_1\mu_2 + c_1)(2\sigma_j + c_2)}{(\mu_1^2 + \mu_2^2 + c_1)(\sigma_1^2 + \sigma_2^2 + c_2)}
where μ1 and μ2 are the mean values of the cover and watermarked images, respectively; σ1² and σ2² are their variances; and σj denotes the covariance between the cover and watermarked images. The constants c1 and c2 are included to prevent division by zero. Furthermore, the assessment of watermarking algorithm performance encompasses the crucial aspect of robustness, for which the normalized cross-coefficient (NC) serves as a prominent benchmark. The NC quantifies how similar the original watermark image W is to the extracted watermark image W*, thereby offering valuable insight into the robustness of the algorithm. The NC is defined by Equation (43).
NC = \frac{\sum_{j=1}^{3}\sum_{u=0}^{m-1}\sum_{v=0}^{n-1} w_j(u,v) \times w_j^*(u,v)}{\sqrt{\sum_{j=1}^{3}\sum_{u=0}^{m-1}\sum_{v=0}^{n-1} [w_j(u,v)]^2}\ \sqrt{\sum_{j=1}^{3}\sum_{u=0}^{m-1}\sum_{v=0}^{n-1} [w_j^*(u,v)]^2}}
In Equation (43), j = 1, 2, 3 correspond to the three channels (R, G, and B) of the color image. The color watermark image has row and column sizes denoted by m and n, respectively. The pixel at (u, v) in the watermark image, and its corresponding pixel in the extracted watermark image for the jth channel are denoted as Wj (u, v) and Wj* (u, v) respectively.
The normalized cross-coefficient (NC) value ranges from 0 to 1, with a value of 1 indicating strong robustness of the watermarking method. Throughout this research, a quantization step size of T = 5 has been used. The results of our watermarking algorithm, including NC, SSIM, and PSNR, are outlined in Table 1. These metrics are evaluated across the various color cover images and the color watermark image illustrated in Figure 4. Our assessment of the algorithm primarily emphasizes its imperceptibility and robustness.
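Equation (43) maps directly onto a few lines of NumPy (a sketch with variable names of our own choosing):

```python
import numpy as np

def nc(w, w_star):
    """Normalized cross-coefficient of Equation (43), pooled over channels."""
    num = np.sum(w * w_star)
    den = np.sqrt(np.sum(w ** 2)) * np.sqrt(np.sum(w_star ** 2))
    return num / den

# An extracted watermark identical to the original gives the maximal NC of 1.
w = np.random.default_rng(1).random((64, 128, 3))
print(round(nc(w, w), 6))  # 1.0
```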

4.1. Examination and Analysis of Imperceptibility

Ensuring the visual integrity of the watermarked image is the paramount objective, making the invisibility of the watermark a fundamental characteristic and a vital criterion for evaluating the performance of a watermarking method. In essence, invisibility requires that the host image carrying the embedded watermark remain indistinguishable from the original host image within the limits of human visual perception. This aligns with the objective of measuring the imperceptibility of the watermark: evaluating how effectively the watermark image is concealed in the cover image without noticeable visual alterations.
The imperceptibility of our proposed watermarking technique can be evaluated by objective measures such as the peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM). A high PSNR value or a small mean squared error (MSE) signifies strong imperceptibility, implying that the watermarked image is minimally distorted and closely resembles the original cover image. Likewise, a higher SSIM value approaching 1 reflects a heightened degree of resemblance between the watermark and the host images, thus indicating an enhanced level of imperceptibility.
The watermark imperceptibility of the proposed watermarking algorithm is given in Table 1, showcasing the evaluation based on PSNR and SSIM between the watermarked and the host images. For color images, most cases show PSNR values exceeding 34 dB and SSIM values surpassing 0.96. Similarly, the high-resolution images in Figure 4m–o achieve even better results, with PSNR exceeding 43 dB and SSIM surpassing 0.99. This indicates that the difference between the watermarked and the original host images is virtually indistinguishable to the human eye, highlighting the excellent invisibility achieved by the algorithm. Additionally, a visual comparison of the watermarked images in Figure 5 against the original host images in Figure 4 reveals no apparent degradation, confirming the algorithm's ability to preserve the visual quality of the host image. Furthermore, an NC value of 1 for all images signifies that each extracted watermark image is identical to the original watermark image.

4.2. Assessing Robustness

Robustness is a crucial requirement for digital watermarking algorithms, ensuring reliable extraction of the watermark information despite intentional or unintentional attacks. The normalized correlation (NC) value is used to evaluate robustness, measuring the resemblance between the original and extracted watermark images. To assess the proposed algorithm's resilience, NC values are computed after subjecting the watermarked images to attacks such as filtering, noise addition, JPEG compression, and geometric distortions. Higher NC values indicate greater resistance and better preservation of watermark integrity. Robustness is thus a vital consideration when evaluating the performance and effectiveness of digital watermarking algorithms in protecting against unauthorized use or distribution of digital works.
To evaluate the algorithm's robustness, the color watermark image in Figure 4p was first embedded into the color cover images in Figure 4 before subjecting them to 12 different attacks. As an example, the resulting watermarked images and their extracted watermark images for the host image in Figure 4c are shown in Figure 6 and Figure 7, respectively. These figures provide compelling evidence of the algorithm's ability to withstand common attacks, except for cases involving geometric distortions such as scaling plus rotation. Even when subjected to attacks, our method extracts the color watermark image with minimal distortion.

4.3. Comparative Performance Analysis

To showcase the efficacy of our proposed algorithm, we conducted a comparison with Siyu Chen's approach [25] and Prabha's approach [16] concerning both robustness and imperceptibility. To ensure a fair evaluation, we applied both approaches to embed the color watermark image from Figure 4p into the color host images shown in Figure 4, using a quantization step size of T = 5, as in our proposed algorithm. Table 2 provides a comprehensive comparison of the imperceptibility levels achieved by our proposed method, Chen's method [25], and Prabha's method [16]. The results demonstrate that while our approach shows slightly lower PSNR for a few images compared to Chen's algorithm, it achieves higher PSNR for most images, with an average improvement of 0.6 dB. When compared with Prabha's method, our algorithm consistently outperforms across all images, with an average improvement of 2.83 dB in PSNR.
A key factor contributing to these results is our method's strategy of embedding in the first two columns of the WHT block, which minimizes pixel alterations compared to [16,25], where more pixels across multiple WHT block columns are affected. Importantly, the difference in PSNR values is more pronounced between our proposed algorithm and Prabha's approach than between our algorithm and Siyu Chen's approach. This is because our algorithm has double the embedding capacity of Chen's algorithm while maintaining the same embedding capacity as Prabha's method. Additionally, our method achieves a mean SSIM improvement of approximately 0.01 and 0.03 over Siyu Chen's approach [25] and Prabha's approach [16], respectively.
For the high-resolution images of Figure 4m–o, the PSNR values of the proposed and existing algorithms are almost the same, with the proposed algorithm showing slightly higher values compared to existing algorithms. Our algorithm achieves this while having double the embedding capacity of Chen’s algorithm and maintaining the same embedding capacity as Prabha’s algorithm.
Table 3 presents a concise comparison of the robustness levels of our proposed algorithm, Siyu Chen's approach [25], and Prabha's approach [16], evaluated based on mean NC values. This evaluation encompasses various simulated attacks, such as average filtering, brightening, JPEG compression, darkening, median filtering, rotation, salt-and-pepper noise, scaling, sharpening, scaling plus rotation, speckle noise, and white noise. The results in Table 3 show that the proposed algorithm achieves higher average NC values than Siyu Chen's approach [25] and Prabha's approach [16] for most attacks. Analyzing these mean NC values demonstrates the efficiency of our algorithm concerning robustness across a range of simulated scenarios. In summary, our proposed watermarking method not only exhibits superior imperceptibility but also surpasses Siyu Chen's approach [25] and Prabha's approach [16] in terms of robustness.
The embedding capacity of these watermarking methods is tabulated in Table 4. In [25], two watermark bits are embedded in the first row of a 4 × 4 block, and in [16], four watermark bits are inserted in the coefficients of the last two rows of a 4 × 4 block. Our algorithm inserts four watermark bits in the first- and second-column coefficients of a 4 × 4 block. The embedding capacity of our method is therefore the same as that of [16] and double that of [25]. The mean embedding and extraction times of the existing algorithms and the proposed algorithm are presented in Table 5. From these data, the proposed method has lower computational complexity than Prabha's method [16], which offers the same embedding capacity; Chen's method [25] is faster overall but embeds only half as many watermark bits.
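The bits-per-pixel figures in Table 4 follow from simple arithmetic, sketched below in Python (the helper name capacity_bpp is ours):

```python
# Embedding capacity in bits per pixel: total watermark bits / host pixels.
def capacity_bpp(wm_h, wm_w, host_h, host_w, channels=3, bits_per_value=8):
    return (wm_h * wm_w * channels * bits_per_value) / (host_h * host_w * channels)

print(capacity_bpp(64, 64, 512, 512))     # Chen [25], Figure 4a-l: 0.125
print(capacity_bpp(64, 128, 512, 512))    # Prabha [16] and proposed: 0.25
print(capacity_bpp(64, 128, 2048, 2048))  # high-resolution hosts: 0.015625
```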

5. Conclusions

The proposed blind watermarking algorithm utilizes the Walsh–Hadamard transform and optimized embedding locations, providing an effective solution for enhancing imperceptibility in double-color images. By partitioning the color host image into red, green, and blue channels and processing 4 × 4 blocks, the algorithm identifies the least visually sensitive WHT coefficients for watermark embedding, and this optimization greatly improves imperceptibility. The experimental results validate the algorithm's effectiveness concerning both robustness and imperceptibility: it achieves encouraging imperceptibility outcomes and exhibits prominent robustness against a diverse array of attacks, encompassing median filtering, JPEG compression, and noise. The superior performance of our proposed algorithm, demonstrated through thorough analysis and evaluation, positions it as a promising solution for safeguarding digital works against unauthorized use or distribution.

Author Contributions

Conceptualization, K.T.R.; methodology, K.T.R.; software, K.T.R.; validation, K.T.R. and S.N.R.; formal analysis, K.T.R. and S.N.R.; investigation, K.T.R.; resources, K.T.R.; data curation, K.T.R. and S.N.R.; writing—original draft preparation, K.T.R.; writing—review and editing, K.T.R. and S.N.R.; visualization, K.T.R.; supervision, S.N.R.; project administration, K.T.R.; funding acquisition, K.T.R. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

1. Akhtarkavan, E.; Majidi, B.; Mandegari, A. Secure Medical Image Communication Using Fragile Data Hiding Based on Discrete Wavelet Transform and A5 Lattice Vector Quantization. IEEE Access 2023, 11, 9701–9715.
2. Xiao, D.; Zhao, A.; Li, F. Robust Watermarking Scheme for Encrypted Images Based on Scrambling and Kronecker Compressed Sensing. IEEE Signal Process. Lett. 2022, 29, 484–488.
3. Tang, M.; Song, W.; Chen, X.; Hu, J. An image information hiding using adaptation and radix. Optik Int. J. Light Electron. 2015, 126, 4136–4141.
4. Chang, C.-S.; Shen, J.-J. Features Classification Forest: A Novel Development that is Adaptable to Robust Blind Watermarking Techniques. IEEE Trans. Image Process. 2017, 26, 3921–3935.
5. Su, Q.; Wang, H.; Liu, D.; Yuan, Z.; Zhang, X. A combined domain watermarking algorithm of color image. Multimed. Tools Appl. 2020, 79, 30023–30043.
6. Yuan, Z.; Su, Q.; Liu, D.; Zhang, X. A blind image watermarking scheme combining spatial domain and frequency domain. Vis. Comput. 2020, 37, 1867–1881.
7. Su, Q.; Liu, D.; Yuan, Z.; Wang, G.; Zhang, X.; Chen, B.; Yao, T. New Rapid and Robust Color Image Watermarking Technique in Spatial Domain. IEEE Access 2019, 7, 30398–30409.
8. Su, Q.; Yuan, Z.; Liu, D. An approximate Schur decomposition-based spatial domain color image watermarking algorithm. IEEE Access 2019, 7, 4358–4370.
9. Hsu, L.Y.; Hu, H.T. Robust blind image watermarking using criss-cross inter-block prediction in the DCT domain. J. Vis. Commun. Image Represent. 2017, 46, 33–47.
10. Sharma, S.; Sharma, H.; Sharma, J.B. An adaptive color image watermarking using RDWT-SVD and artificial bee colony based quality metric strength factor optimization. Appl. Soft Comput. 2019, 84, 105696.
11. Meenakshi, K.; Rao, C.S.; Prasad, K.S. A robust watermarking scheme based on Walsh–Hadamard transform and SVD using zig-zag scanning. In Proceedings of the 2014 International Conference on Information Technology, Bhubaneswar, India, 22–24 December 2014; pp. 167–172.
12. Fares, K.; Amine, K.; Salah, E. A robust blind color image watermarking based on Fourier transform domain. Optik 2020, 208, 164562.
13. Yuan, Z.; Liu, D.; Zhang, X.; Su, Q. New image blind watermarking method based on two-dimensional discrete cosine transform. Optik 2020, 204, 164152.
14. Zhang, X.; Su, Q.; Yuan, Z.; Liu, D. An efficient blind color image watermarking algorithm in spatial domain combining DFT. Optik 2020, 219, 165272.
15. Su, Q.; Wang, G.; Jia, S.; Zhang, X.; Liu, Q.; Liu, X. Embedding color image watermark in color image based on two-level DCT. Signal Image Video Process. 2015, 9, 991–1007.
16. Prabha, K.; Sam, I.S. An effective robust and imperceptible blind color image watermarking using WHT. J. King Saud Univ. Comput. Inform. Sci. 2020, 10, 1319–1578.
17. Liu, D.; Su, Q.; Yuan, Z.; Zhang, X. A color watermarking scheme in frequency domain based on quaternary coding. Vis. Comput. 2020, 37, 2355–2368.
18. Liu, D.; Yuan, Z.; Su, Q. A blind color image watermarking scheme with variable steps based on Schur decomposition. Multimed. Tools Appl. 2020, 79, 7491–7513.
19. Su, Q.; Wang, G.; Zhang, X.; Lv, G.; Chen, B. A new algorithm of blind color image watermarking based on LU decomposition. Multidimens. Syst. Signal Process. 2018, 29, 1055–1074.
20. Su, Q.; Niu, Y.; Zou, H.; Zhao, Y.; Yao, T. A blind double color image watermarking algorithm based on QR decomposition. Multimed. Tools Appl. 2014, 72, 987–1009.
21. Su, Q.; Niu, Y.; Liu, X.; Zhu, Y. Embedding color watermarks in color images based on Schur decomposition. Opt. Commun. 2012, 285, 1792–1802.
22. Zermi, N.N.; Amine, K.; Redouane, K.; Fares, K.; Salah, E. A DWT-SVD based robust digital watermarking for medical image security. Forensic Sci. Int. 2021, 320, 110691.
23. Moosazadeh, M.; Ekbatanifard, G. A new DCT-based robust image watermarking method using teaching-learning-based optimization. J. Inf. Secur. Appl. 2019, 47, 28–38.
24. Su, Q.; Wang, G.; Zhang, X.; Lv, G.; Chen, B. An improved color image watermarking algorithm based on QR decomposition. Multimed. Tools Appl. 2017, 76, 707–729.
25. Chen, S.; Su, Q.; Wang, H.; Wang, G. A high-efficiency blind watermarking algorithm for double color image using Walsh Hadamard transform. Vis. Comput. 2021, 38, 2189–2205.
Figure 1. Flowchart illustrating the step-by-step process of the proposed algorithm. The process includes embedding processing to embed a watermark image on the host image to obtain the watermarked image and extraction process for extracting the watermark image.
Figure 2. Flowchart illustrating the watermark embedding process. This process outlines the specific steps involved in embedding a watermark image onto the host image to generate the watermarked image.
Figure 3. Flowchart illustrating the proposed watermark extraction process, outlining the steps involved in extracting the watermark from the watermarked image.
Figure 4. The color host images and a color watermark image used for the experiments. (ao) The host images with dimensions of ((a)–(l)—512 × 512 pixels, (m)–(o) 2048 × 2048 pixels), and 24-bit color depth, and (p) a color watermark image.
Figure 5. Watermarked images and extracted watermark: (ao) the watermarked images of the original color cover images in Figure 4a–o, respectively, and (p) the extracted color watermark image.
Figure 6. The resulting watermarked images for the host image in Figure 4c when subjected to different attacks: (a) average filtering; (b) brightening; (c) JPEG compression; (d) darkening; (e) median filtering; (f) rotation; (g) salt-and-pepper noise; (h) scaling; (i) sharpening; (j) scaling plus rotation; (k) speckle noise; and (l) white noise.
Figure 7. The extracted watermark images from the watermarked images in Figure 6 subjected to various attacks: (a) average filtering; (b) brightening; (c) JPEG compression; (d) darkening; (e) median filtering; (f) rotation; (g) salt-and-pepper noise; (h) scaling; (i) sharpening; (j) scaling plus rotation; (k) speckle noise; and (l) white noise.
Table 1. Insights into the efficacy of the implemented approach: evaluation of watermark imperceptibility based on PSNR and SSIM between watermarked and host images. An NC value of 1 indicates that each extracted watermark image is identical to the original.
Cover Image in Figure 4 | PSNR (dB) | SSIM   | NC
(a)                     | 35.04     | 0.9093 | 1
(b)                     | 36.45     | 0.9815 | 1
(c)                     | 35.82     | 0.9924 | 1
(d)                     | 34.65     | 0.9760 | 1
(e)                     | 35.78     | 0.9707 | 1
(f)                     | 34.11     | 0.9743 | 1
(g)                     | 35.33     | 0.9624 | 1
(h)                     | 35.67     | 0.9926 | 1
(i)                     | 33.84     | 0.9755 | 1
(j)                     | 34.10     | 0.9612 | 1
(k)                     | 31.06     | 0.9508 | 1
(l)                     | 33.08     | 0.9629 | 1
(m)                     | 43.47     | 0.9984 | 1
(n)                     | 48.54     | 0.9960 | 1
(o)                     | 48.35     | 0.9955 | 1
Table 2. Comparison of imperceptibility between the proposed method and existing methods measured by peak signal-to-noise ratio (PSNR) in decibels (dB) and structural similarity index (SSIM).
Cover Image in Figure 4 | Siyu Chen's Approach [25] | Prabha's Approach [16] | Proposed Method
                        | PSNR (dB) / SSIM          | PSNR (dB) / SSIM       | PSNR (dB) / SSIM
(a)                     | 33.27 / 0.8320            | 31.32 / 0.7394         | 35.04 / 0.9093
(b)                     | 38.31 / 0.9885            | 33.06 / 0.9688         | 36.45 / 0.9815
(c)                     | 34.55 / 0.9913            | 33.06 / 0.9873         | 35.82 / 0.9924
(d)                     | 32.41 / 0.9752            | 29.92 / 0.9544         | 34.65 / 0.9760
(e)                     | 35.71 / 0.9673            | 34.10 / 0.9546         | 35.78 / 0.9707
(f)                     | 31.67 / 0.9757            | 27.20 / 0.9326         | 34.11 / 0.9743
(g)                     | 32.72 / 0.9632            | 31.41 / 0.9391         | 35.33 / 0.9624
(h)                     | 33.73 / 0.9912            | 32.40 / 0.9864         | 35.67 / 0.9926
(i)                     | 32.49 / 0.9768            | 29.12 / 0.9494         | 33.84 / 0.9755
(j)                     | 34.12 / 0.9559            | 32.83 / 0.9337         | 34.10 / 0.9612
(k)                     | 30.88 / 0.9436            | 28.55 / 0.9122         | 31.06 / 0.9508
(l)                     | 32.74 / 0.9549            | 32.53 / 0.9413         | 33.08 / 0.9629
(m)                     | 43.55 / 0.9981            | 43.00 / 0.9974         | 43.47 / 0.9984
(n)                     | 48.46 / 0.9891            | 48.32 / 0.9830         | 48.54 / 0.9960
(o)                     | 47.95 / 0.9913            | 47.99 / 0.9857         | 48.35 / 0.9955
Table 3. Comparative analysis of robustness between our proposed algorithm and existing methods based on mean NC values for various attacks.
Type of Attack                       | Siyu Chen's Approach [25] | Prabha's Approach [16] | Proposed Method
Average filtering [3, 3]             | 0.9131                    | 0.9004                 | 0.9401
Brighten (add 30 to each pixel)      | 0.9956                    | 0.9948                 | 0.9952
JPEG compression (50)                | 0.8927                    | 0.8911                 | 0.8973
Darken (subtract 60 from each pixel) | 0.9553                    | 0.9482                 | 0.9516
Median filtering [5, 5]              | 0.8209                    | 0.8393                 | 0.8613
Rotation 45°                         | 0.8792                    | 0.8419                 | 0.8837
Salt-and-pepper noise (V = 0.01)     | 0.9585                    | 0.9496                 | 0.9551
Scaling (1.5)                        | 0.9147                    | 0.9133                 | 0.9284
Sharpening (0.2)                     | 0.9329                    | 0.9144                 | 0.9306
Scaling (1.5) + rotation 45°         | 0.8113                    | 0.8093                 | 0.8009
Speckle noise (V = 0.01)             | 0.9564                    | 0.9517                 | 0.9601
White noise (V = 0.005)              | 0.9529                    | 0.9473                 | 0.9584
Table 4. Comparative analysis of watermark embedding capacity, showing the amount of data (in bits) that can be embedded using the proposed algorithm compared to existing algorithms.
Methods                                 | Watermark Length (Bits) | Host Image (Pixels) | Bits/Pixel
Siyu Chen's approach [25] (Figure 4a–l) | 64 × 64 × 3 × 8         | 512 × 512 × 3       | 0.125
Prabha's approach [16] (Figure 4a–l)    | 64 × 128 × 3 × 8        | 512 × 512 × 3       | 0.250
Proposed method (Figure 4a–l)           | 64 × 128 × 3 × 8        | 512 × 512 × 3       | 0.250
Siyu Chen's approach [25] (Figure 4m–o) | 64 × 64 × 3 × 8         | 2048 × 2048 × 3     | 0.0078
Prabha's approach [16] (Figure 4m–o)    | 64 × 128 × 3 × 8        | 2048 × 2048 × 3     | 0.0156
Proposed method (Figure 4m–o)           | 64 × 128 × 3 × 8        | 2048 × 2048 × 3     | 0.0156
Table 5. Comparative analysis of computational complexity between the proposed method and existing algorithms, focusing on mean embedding time (time taken to embed a watermark), mean extraction time (time taken to extract a watermark), and mean overall computational complexity (combined embedding time and extraction time), measured in milliseconds.
Methods                   | Mean Embedding Time (ms) | Mean Extraction Time (ms) | Mean Overall Computational Complexity (ms)
Siyu Chen's approach [25] | 459.8                    | 153.6                     | 613.4
Prabha's approach [16]    | 754.1                    | 198.5                     | 952.6
Proposed method           | 748.5                    | 193.6                     | 942.1
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
