Article

A Robust and Secure Watermarking Approach Based on Hermite Transform and SVD-DCT

by Sandra L. Gomez-Coronel 1,†, Ernesto Moya-Albor 2,*,†, Jorge Brieva 2,*,† and Andrés Romero-Arellano 2,†

1 Instituto Politécnico Nacional, UPIITA. Av. IPN No. 2580, Col. La Laguna Ticoman, CDMX 07340, Mexico
2 Facultad de Ingeniería, Universidad Panamericana, Augusto Rodin 498, Ciudad de México 03920, Mexico
* Authors to whom correspondence should be addressed.
† These authors contributed equally to this work.
Appl. Sci. 2023, 13(14), 8430; https://doi.org/10.3390/app13148430
Submission received: 1 June 2023 / Revised: 14 July 2023 / Accepted: 15 July 2023 / Published: 21 July 2023

Abstract:
Currently, algorithms to embed watermarks into digital images are increasing exponentially, for example in image copyright protection. However, when a watermarking algorithm is applied, the preservation of the image’s quality is of utmost importance, for example in medical images, where improper embedding of the watermark could change the patient’s diagnosis. On the other hand, in digital images distributed over the Internet, the owner of the images must also be protected. In this work, an imperceptible, robust, secure, and hybrid watermarking algorithm is presented for copyright protection. It is based on the Hermite Transform (HT) and the Discrete Cosine Transform (DCT) as a spatial–frequency representation of a grayscale image. Besides, it uses a block-based strategy and a perceptibility analysis of the best embedding regions inspired by the Human Vision System (HVS), giving imperceptibility of the watermark, and a Singular-Value Decomposition (SVD) approach that improves robustness against attacks. In addition, the proposed method can embed two watermarks: a digital binary image (LOGO) and information about the owner and the technical data of the original image in text format (MetaData). To secure both watermarks, the proposed method uses the Jigsaw Transform (JST) and the Elementary Cellular Automaton (ECA) to encrypt the LOGO image, and a random sequence generator and the XOR operation to encrypt the MetaData. The proposed method was tested using a public dataset of 49 grayscale images to assess the effectiveness of the watermark embedding and extraction procedures. Furthermore, the proposed watermarking algorithm was evaluated under several processing and geometric algorithms to demonstrate its robustness against the majority of attacks, intentional or unintentional, and a comparison was made with several state-of-the-art techniques.
The proposed method obtained average values of PSNR = 40.2051 dB, NCC = 0.9987, SSIM = 0.9999, and MSSIM = 0.9994 for the watermarked image. In the case of the extraction of the LOGO, the proposal gave MSE = 0, PSNR ≫ 60 dB, NCC = 1, SSIM = 1, and MSSIM = 1, whereas, for the extracted MetaData, it gave BER = 0% and B_error = 0. Finally, the proposed encryption method presented a large key space (K = 1.2689 × 10^89) for the LOGO image.

1. Introduction

Currently, digital watermarking has become a way to embed information into an image and protect it from unauthorized access and manipulation. Depending on the digital content, such as video, image, audio, and text, and the application, algorithms can be developed for authentication, material security, trademark protection, and the tracking of digital content. The objective is to insert a watermark (digital image or text) into the digital content. There are important requirements to take into account when a watermarking algorithm is designed: imperceptibility, robustness, security, capacity, and computational cost. It is difficult for a single algorithm to satisfy all requirements, mainly because increasing robustness, the ability to withstand image distortions, may compromise the imperceptibility of the watermark. Because of that, different techniques have been developed in order to improve robustness without compromising the original content. The state-of-the-art suggests that these algorithms can be designed in the spatial, transform, or hybrid domain. In the spatial domain, the algorithms alter the pixel intensities directly to embed the watermark. The advantage of this is low computational complexity; however, the image suffers visible alterations, and the algorithm lacks robustness against geometric transformations. In the transform domain, the watermark is embedded within specific coefficients to ensure enhanced resilience. Furthermore, transforms can be combined to obtain hybrid-domain watermarking. These kinds of methods increase the performance of the watermarking technique.
Therefore, in an image watermarking method, the watermark must be robust and either imperceptible or perceptible, depending on the application. In this paper, we propose a hybrid, robust, and imperceptible watermarking approach using the Hermite Transform (HT), Singular-Value Decomposition (SVD), the Human Vision System (HVS), and the Discrete Cosine Transform (DCT) to protect digital images. As watermarks, we used a digital LOGO image and image MetaData (with information about the original image or the owner), so we inserted two different digital contents into a digital image. The Hermite transform is based on the Gaussian function derivatives and incorporates human visual system properties, and it allows a perfect reconstruction of the image. For additional security, the LOGO is encrypted using the Jigsaw Transform (JST) before insertion, and the indexes to decrypt the LOGO are secured using the Elementary Cellular Automaton (ECA), increasing the security of the proposal. In addition, the image MetaData are secured using a random sequence generator and the XOR operation. Finally, the Hamming error-correcting code was applied to the image MetaData to reduce channel distortion.
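The MetaData protection step described above can be sketched in a few lines; this is a minimal illustration assuming a seeded pseudo-random byte generator (with the seed acting as the secret key), not the exact generator of the proposal, and the function names are our own:

```python
import numpy as np

def xor_encrypt(text, seed):
    """Encrypt MetaData text with a seeded pseudo-random byte
    sequence combined via the XOR operation (symmetric cipher)."""
    rng = np.random.default_rng(seed)
    data = np.frombuffer(text.encode("utf-8"), dtype=np.uint8)
    key = rng.integers(0, 256, size=data.size, dtype=np.uint8)
    return np.bitwise_xor(data, key)

def xor_decrypt(cipher, seed):
    """Decryption regenerates the same key stream from the seed
    and XORs again, since (x ^ k) ^ k = x."""
    rng = np.random.default_rng(seed)
    key = rng.integers(0, 256, size=cipher.size, dtype=np.uint8)
    return bytes(np.bitwise_xor(cipher, key)).decode("utf-8")
```

Because XOR is its own inverse, the same key stream both hides and reveals the MetaData, which is why keeping the seed secret is essential.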
The rest of the paper is divided as follows: Section 2 presents the related work describing the image watermarking methods using the DCT and SVD techniques and other space–frequency decomposition methods similar to the HT. We describe all the elements used to design the watermark algorithm, such as the public dataset, JST, SVD, HT, DCT, HVS, and Elementary Cellular Automata (ECA), in Section 3. Section 4 details the proposed watermarking algorithm for the insertion/extraction of the watermarks. In Section 5, we report the experiments and results obtained in the insertion and extraction stages of the watermarks, including the computational complexity of the algorithm. In addition, we report the robustness analysis of the proposed method against the most-common processing and geometric attacks using the public image datasets, and we compare the algorithm with other state-of-the-art methods. Section 6 includes an analysis of the results achieved in this study, along with a comparison to other related works. Finally, Section 7 presents the conclusions and future work.

2. Related Work

There are many methods for watermarking images presented in the literature, and depending on the application, the requirements of the methodologies vary. The algorithms designed for watermarking have advantages and disadvantages. The most-representative work is in the transform domain. For example, in Mokashi et al. [1], a strategy for watermarking images was introduced, which combines the Discrete Wavelet Transform (DWT), Discrete Cosine Transform, and Singular-Value Decomposition. The watermarks utilized in this approach are the users’ biometrics and their signature. During the first embedding process, the biometrics act as the host image, while the signature serves as the watermark. In a second embedding process, the resulting watermark of the first process is embedded into the primary host image. In both embedding processes, the host image undergoes decomposition using the DWT, and the watermark is inserted inside the low-frequency coefficient by means of SVD.
In [2], Dharmika et al. preserved medical records by incorporating them into Magnetic Resonance Imaging (MRI) patient scans. The authors used the Advanced Encryption Standard (AES) to secure the medical records and SVD to compress the MRI scan reports. Then, the DCT was applied to embed the encrypted medical health record over the compressed MRI scan.
Sharma et al. [3] presented a combined approach for watermarking images using a resilient watermark (frequency domain) through the DWT and DCT and a fragile watermark (spatial domain). In the robust watermark, the Fisher–Yates shuffle method was used to scramble the watermark, and the LH and HL sub-bands were used to embed the watermark. On the other hand, a bitwise approach was used for the fragile watermark, including a halftoning operation in conjunction with the XOR and concatenation operations. In addition, a fragile watermarking method was used to perceive and locate the manipulated regions through the XOR operator in the extraction stage.
Nguyen [4] proposed a fragile-watermarking-based approach using the DWT, DCT, and SVD techniques. The watermark was inserted into the low-frequency coefficient of the DWT using the Quantization Index Modulation (QIM) technique, and the feature coefficients were adjusted using the Gram–Schmidt procedure. Besides, a tamper detection process under different attacks was incorporated.
In [5], Li et al. introduced an encryption/watermarking algorithm using the Fractional Fourier Transform (FRFT) in a hybrid domain. The Redistributed Invariant Wavelet Transform (RIDWT) and Discrete Cosine Transform (DCT) were applied to the enlarged host image. The resulting low-frequency and high-frequency components underwent SVD, and the watermark image was subjected to double-encryption using the Arnold Transform (AT). To achieve adaptive embedding, multi-parameter Particle Swarm Optimization (PSO) was utilized.
Alam et al. [6] reported a frequency-domain-based approach using the DWT and DCT and applying a two-level singular-value decomposition and a three-dimensional discrete hyper-chaotic map. The HH sub-band of the DWT was used to incorporate the watermark, which contains some image parameters, and it was encrypted through the Rivest–Shamir–Adleman (RSA), AT, and SHA-1 techniques.
Sharma and Chandrasekaran [7] investigated the robustness of popular image watermarking schemes using combinations of the DCT, DWT, and SVD, as well as their hybrid variations. These approaches were evaluated against traditional image-processing attacks and an adversarial attack utilizing a Deep Convolutional Neural Network (CNN) and an Autoencoder (CAE) technique.
In [8], Garg and Kishore analyzed various watermarking techniques to test robustness, imperceptibility, security, capacity, transparency, computational cost, and the false positive rate. The methods studied were classified into multiple categorizations of watermarking: perceptibility (visible and invisible watermark), accessibility (private and public), document type (text, audio, image, video), application (copyright protection, image authentication, fingerprinting, copy and device control, fraud and temper detection), domain-based (spatial domain, transform/frequency domain), type of schema (blind and non-blind), and cover image. The techniques analyzed were tested against several attacks: image-processing, geometric, cryptographic, and protocol attacks, using the more-representative evaluation measures, for example the PSNR, NCC, BER, and SSIM.
Zheng and Zhang [9] proposed a DWT-, DCT-, and SVD-based watermarking method to address common watermarking and rotation attacks. The scrambled watermark was inserted into the LL sub-band. In addition, the authors signed the U and V matrices to avoid the false positive problem.
In [10], Kang et al. reported a hybrid watermarking method of grayscale images based on DWT, DCT, and SVD for later embedding the watermark into the LH and HL sub-bands. Multi-dimensional PSO and an intertwining logistic map were used as the optimization algorithms and encryption models for watermarking robustness enhancement.
Taha et al. [11] evaluated two watermarking methods, a DWT based and an approach using the Lifting Wavelet Transform (LWT) under the same watermark and embedding it into the middle-frequency band. The results showed that, in terms of objective image quality, the LWT method outperformed the DWT method, whereas the DWT watermarking technique exhibited superior resilience against various attacks compared to the LWT approach.
Thanki and Kothari [12] proposed a watermarking technique using human speech signals as the watermark. For this, the watermark’s hybrid coefficients were derived using the DCT and subsequently subjected to SVD. Then, these coefficients were inserted into the coefficients of the host image, which were generated by a DWT followed by a Fast Discrete Curvelet Transform (FDCuT).
In [13], Kumar et al. presented a DWT-, DCT-, and SVD-based watermarking method. In addition, security was accomplished through a Set Partitioning in a Hierarchical Tree (SPIHT) and by the AT.
Zheng et al. [14] proposed a zero-watermarking approach applied to color images using the DWT, DCT, and SVD, taking advantage of the multi-level decomposition of the DWT, the concentration of the energy of the DCT, and the robustness of the SVD. Due to three color channels being used to embed the watermark, it was extracted by a voting strategy.
In [15], Yadav and Goel presented a composed watermarking proposal that involved DWT and DCT analysis and an SVD approach to insert binary watermarks. The approach was image-adaptive, identifying blocks with high entropy to determine where the watermark should be embedded.
Takore et al. [16] reported a watermarking hybrid approach for digital images using LWT and DCT analysis and an SVD technique. Their proposal applied the Canny filter to identify regions with a higher number of edges, which were used to create two sub-images. These sub-images served as the reference points for both the embedding and extracting stages. Moreover, during the marking stage, the method used Multiple Scaling Factors (MSFs) to adjust various ranges of the singular-value coefficients. Kang et al. [17] reported a watermarking schema in digital images through a composed method applying DCT and DWT analysis and an SVD approach. In addition, the method used a logistic chaotic map.
Sridhar [18] proposed a scheme that protected the information with an adjustable balance between image quality and watermark resilience against image-processing and geometric attacks. The method was based on the DWT, DCT, and SVD techniques and provided an adaptive PSNR for the imperceptibility of the watermarks.
Madhavi et al. [19] investigated different digital watermarking schemes, comparing the protection and sensible limit. Moreover, the authors introduced a combined watermarking technique that leveraged the advantages of multiple spatial–frequency decomposition approaches such as the DWT and DCT, robust insertion analysis such as SVD, and security such as the AT.
Gupta et al. [20] used a cryptographic technique called Elliptic Curve Cryptography (ECC) in a semi-blind strategy of digital image watermarking. The proposed watermarking method was implemented within the DWT and SVD domain. Furthermore, the parameters of the entropy based on the HVS were calculated on a blockwise basis to determine the most-appropriate spatial locations.
Rosales et al. [21] presented a spectral domain watermarking technique that utilized QR codes and QIM in the YCbCr color domain, and the luminance channel underwent processing through SVD, the DWT, and the DCT to insert a binary watermark using QIM.
In [22], El-Shafai et al. presented two hybrid watermarking schemes for securing 3D video transmission. The first one was based on the SVD in the DWT domain, and the second scheme was based on the three-level discrete stationary wavelet transform in the DCT domain. In addition, El-Shafai et al. [23] proposed a fusion technique utilizing wavelets to combine two depth watermark frames into a unified one. The resulting fused watermark was subsequently secured using a chaotic Baker map before being embedded in the color frames of 3D-High-Efficiency Video Coding (HEVC).
Xu et al. [24] introduced a robust and imperceptible watermarking technique for RGB images in the combined DWT-DCT-SVD domain. Initially, the luminance component undergoes decomposition using DWT and DCT. The feature matrix is generated by extracting the low and middle frequencies of the DCT from each region, which is subsequently subjected to SVD for watermark embedding.
In [25], Ravi Kumar et al. reported an image watermarking algorithm using hybrid transforms. In this approach, using SVD analysis, the decomposition of the image watermark was embedded in the decomposition of the cover image using the Normalized Block Processing (NBP) to obtain the invariant features. Then, the integer wavelet transform was applied, followed by the DCT and SVD.
In [26], Magdy et al. provided an overview of the watermarking techniques used in medical image security. The authors described the elements needed to design a watermarking algorithm. Furthermore, they presented a brief explanation of cryptography, steganography, and watermarking. Regarding watermarking, they took as examples different algorithms such as that in [27], where Kahlessenane et al. presented a watermarking algorithm to ensure the copyright protection of medical images. They used patient information as the watermark and used the DWT. The results showed high PSNR values (147 dB), demonstrating the imperceptibility of the watermark and the robustness of the method against attacks. However, they did not present any results about the extraction process.
In [28], Dixit et al. described a watermarking algorithm tested on thirty different images and using two watermarks: one of them to authenticate (fragile), and the other one focused on robustness (information watermark). To insert the authentication watermark, they used the DCT, and for the information watermark, the process included the DWT and SVD. The results showed robustness against Salt and Pepper (SP) noise, rotation, translation, and cropping (even though the PSNR of the recovered watermark was low). The same authors proposed another watermark algorithm in [29]. This algorithm was non-blind and used the LWT on the cover image to decompose it into four coefficient matrices; with this transform, the image had better reconstruction. Furthermore, the authors employed the DCT and SVD. The authors reported better robustness and mentioned that they reduced the time complexity of traditional watermarking techniques. The results showed high PSNR values (about 200 dB) without attacks. They applied different attacks, compared their technique with other techniques, and demonstrated that it had better robustness; however, they did not include the extracted watermarks. Therefore, to evaluate and compare different techniques, some papers focused on describing different watermarking algorithms. For example, Gupta et al. [30] explained that, to achieve the security of digital data, it is necessary to improve watermarking techniques and to provide better robustness. The authors clarified that several algorithms utilize SVD to enhance the quality of the embedded image, aiming to increase its resilience against various signal-processing attacks. The authors presented different metrics that can be used to evaluate different techniques, as well as the transformations that researchers commonly use.
In [31], Mahbuba Begum et al. presented a combined blind digital image watermarking method using the DCT and DWT as spatial–frequency decompositions and SVD analysis to ensure all requirements that, according to the authors, a watermarking algorithm must satisfy, for example imperceptibility, safety, resilience, and payload capacity. As a watermark, they used a digital image and encrypted it with the Arnold map. They presented results using only one image and only one watermark.
D. Rajani et al. [32] proposed a new technique called the Porcellio Scaber Algorithm (PSA). They explained that, with this algorithm, the visual perception of the extracted watermark was good while, at the same time, maintaining robustness. Their proposal was a blind watermarking scheme and used a redundant version of the DWT (RDWT), the DCT, and SVD. In addition, they embedded a LOGO into the host image. They reported a high PSNR value of 73.7205 dB in the watermarked image (Lena).
Other hybrid algorithms were developed by Wu, J.Y. et al. [33,34]. On the one hand, in [33], they presented a scheme using SVD (to improve robustness), the DWT, and the DCT. Their proposal included a process to encrypt the watermark by an SVD ghost imaging system. As a watermark, they used a digital image with a size of 32 × 32 . The authors did not indicate the parameters of the attacks that they employed to evaluate their method. On the other hand, in [34], a watermarking method using a decomposition by the DWT of four levels in conjunction with an SVD analysis was presented. They proposed four levels of the DWT to significantly enhance the imperceptibility and the robustness of the method. The evaluation of the algorithm showed good results using the PSNR, NCC, and SSIM. As a watermark, they used a digital image with a size of 32 × 32 .
Seif Eddine Naffou et al. described in [35] a hybrid SVD-DWT approach. They explained that the Human Visual System (HVS) is less sensitive to high-frequency coefficients, so they chose them to insert the watermark; to avoid poor results when extracting the watermark, they incorporated SVD.
As we can see, different watermarking algorithms for digital images have been developed for copyright protection, and the majority are focused on the principal problem, which is robustness. In this paper, a watermarking method including imperceptibility, robustness, watermark capacity, and computational cost for copyright protection is presented.

3. Materials and Methods

3.1. Description of the Dataset

To evaluate the watermarking proposal, we selected 49 grayscale images of 512 × 512 px (Figure 1) from public datasets: the USC-SIPI Image Database [36], the Waterloo Fractal Coding and Analysis Group [37], Fabien Petitcolas [38], the Computer Vision Group of University of Granada [39], and Gonzalez and Woods [40]. The collection of 49 grayscale images utilized in this study can be accessed publicly through our website: https://sites.google.com/up.edu.mx/invico-en/resources/image-dataset-watermarking.
As a watermark, we used a digital image (LOGO) of 100 × 100 px, as is shown in Figure 2a. In addition, we used the image MetaData of the Barbara image in plaintext, as we show in Figure 2b.

3.2. Jigsaw Transform and Cellular Automata

Image encryption is of great importance currently to ensure the protection of sensitive information by preventing unauthorized access to it. Cryptography techniques used for image encryption are required to have features such as hiding the visual information of the image, having a large key space to resist brute force attacks, and having high key sensitivity to prevent differential attacks [41]. A recent example of an algorithm that meets these requirements is that proposed by Sambas et al. [42], where they developed a three-dimensional chaotic system with line equilibrium, which was used along with Josephus traversal to implement an image encryption scheme based on pixel and bit scrambling and pixel value diffusion, resulting in an encryption method secure against brute force attacks and differential attacks. In addition, the authors implemented the proposed chaotic system into an electronic circuit. In contrast, in the present work, we used the Jigsaw transform and a cellular automaton to encrypt the watermarked image, improving the key space of the Jigsaw-transform-alone implementation.

3.2.1. Jigsaw Transform

The Jigsaw transform is a popular scrambling technique to hide visual information in digital images. Its name is reminiscent of an image cut into pieces of different sizes that must be joined correctly to form the picture again. It is considered a nonlinear operator, which rearranges sections of an image following a random ordering [43]. The direct JST breaks a grayscale image into M non-overlapping blocks of s_1 × s_2 pixels, each one of which is moved to a location following a random order. In the same way as the direct JST (J_M{·}), the inverse Jigsaw transform (J_M^{-1}{·}) uses the initial order of the sections to recover the original image. The JST preserves the energy of a grayscale image I(x, y) and is, therefore, considered a unitary transform (Equation (1)):

$$I(x, y) = J_M^{-1}\{\, J_M\{ I(x, y) \}\,\}. \qquad (1)$$
Figure 3 shows a grayscale image of 512 × 512 px and the corresponding results for the JST, giving M non-overlapping blocks of 64 × 64 px for M = 64 , 32 × 32 px for M = 256 , and 8 × 8 px for M = 4096 .
The set of all possible combinations of the security keys used in the encryption of a digital image is called the key space. Thus, for the JST, the key space is determined by the number of blocks M, i.e., K_J = M!.
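The direct and inverse JST described above can be sketched as follows; this is a minimal illustration assuming square images whose side is divisible by the block grid, with a seeded permutation standing in for the random ordering (function names are our own):

```python
import numpy as np

def jst_forward(img, n, seed=0):
    """Direct JST sketch: split img into n*n non-overlapping blocks and
    scramble them with a seeded random permutation (the secret key)."""
    bh, bw = img.shape[0] // n, img.shape[1] // n
    # Extract blocks in row-major order.
    tiles = [img[i*bh:(i+1)*bh, j*bw:(j+1)*bw].copy()
             for i in range(n) for j in range(n)]
    perm = np.random.default_rng(seed).permutation(n * n)
    out = np.empty_like(img)
    for dst in range(n * n):
        r, c = divmod(dst, n)
        out[r*bh:(r+1)*bh, c*bw:(c+1)*bw] = tiles[perm[dst]]
    return out, perm

def jst_inverse(scr, perm, n):
    """Inverse JST: send each block back to its original position."""
    bh, bw = scr.shape[0] // n, scr.shape[1] // n
    out = np.empty_like(scr)
    for dst in range(n * n):
        r, c = divmod(dst, n)
        sr, sc = divmod(int(perm[dst]), n)
        out[sr*bh:(sr+1)*bh, sc*bw:(sc+1)*bw] = scr[r*bh:(r+1)*bh, c*bw:(c+1)*bw]
    return out
```

Since the blocks are only rearranged, the pixel values (and hence the image energy) are preserved, matching the unitary property of Equation (1).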

3.2.2. Elementary Cellular Automata

Elementary Cellular Automata consist of a grid of cells of width X and height Y. Each cell has two possible states: ON and OFF. Initially, all cells start turned OFF, except for the first row, which has an arbitrary configuration. The grid evolves through a series of iterations. In iteration i, the (i+1)-th row is modified. The new value of a cell is determined by the neighborhood of the cell in the same column in the row above (the parent row). The neighborhood of a cell consists of three cells: the cell itself and its two horizontal neighbors (handling the edge cases with modular arithmetic). Therefore, there are 2^3 = 8 possible neighborhoods and, consequently, 2^8 = 256 possible rules for the next iteration of the automaton. In this paper, we focused on the rule known as Rule 105 according to the Wolfram code described in [44] to classify the rules of the ECA, depicted in Figure 4.
For the purposes of our work, we considered a variant of the ECA, where cells can have values ranging from 0 to an integer k, and instead of using Rule 105 to determine if a cell should be ON or OFF, we used it to determine whether a cell should increase its value by an odd number n (changing its parity), based on the parity of the values of the corresponding neighborhood, considering even cells as OFF states and odd cells as ON states. If the new value is greater than k, we use modular arithmetic to return it to the desired range of values. We also considered a grid that is finite in the vertical direction; if we perform a number of iterations greater than the number of rows and run out of rows to modify, we continue with the top row, considering the bottom row as its parent row. This allows us to start the first iteration in the first row and to easily apply elementary cellular automata to gray images. In Figure 5, we show this variation of Rule 105 on a gray image. The process is reversible by applying the algorithm to the rows in reverse order and subtracting n instead of adding it. The exact size of the key space of our ECA is unknown; its upper bound must be Y × k^{X×Y}, since that is the total number of configurations the matrix could have, considering all possible values of the matrix and all possible rows that can be modified in a given iteration of the automaton. Evolving the ECA beyond that number of iterations would lead to a repeated state. Since it is likely that a repeated state will occur before that number of iterations, the key space of our ECA is K_CA ≤ Y × k^{X×Y}.
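The parity variant of Rule 105 described above can be sketched as follows; the iteration schedule, the added odd number n, and the value range 0..k follow the description, while the helper names and default values are our own illustrative choices:

```python
import numpy as np

RULE = 105  # Wolfram code: bit i of RULE is the output for neighborhood value i

def _step(grid, row, n_add, k, sign):
    """Apply (sign=+1) or undo (sign=-1) one iteration of the Rule-105 parity
    variant on `row`, using the row above (modulo the height) as parent."""
    Y, X = grid.shape
    parity = grid[(row - 1) % Y] % 2          # even = OFF, odd = ON
    for x in range(X):
        left, center, right = parity[(x - 1) % X], parity[x], parity[(x + 1) % X]
        idx = (left << 2) | (center << 1) | right
        if (RULE >> idx) & 1:                 # rule fires: shift cell by odd n
            grid[row, x] = (grid[row, x] + sign * n_add) % (k + 1)

def eca_encrypt(grid, iters, n_add=3, k=17):
    g = grid.copy()
    for t in range(iters):                    # iteration t modifies row t (mod Y)
        _step(g, t % g.shape[0], n_add, k, +1)
    return g

def eca_decrypt(grid, iters, n_add=3, k=17):
    g = grid.copy()
    for t in reversed(range(iters)):          # undo in exact reverse order
        _step(g, t % g.shape[0], n_add, k, -1)
    return g
```

Undoing the iterations in exact reverse order works because, at the moment an iteration is undone, its parent row is in the same state it had when that iteration was applied.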

3.2.3. ECA Applied on Jigsaw Transform

For our work, we used the Jigsaw transform in combination with the elementary cellular automaton described in Section 3.2.2. Given that the JST has a limited key space, the ECA previously described was used to help achieve a broader key space. We used the JST with M = 5 × 5 subsections, storing the index of each subsection in a 5 × 5 matrix to be able to reverse the algorithm. We can encrypt this matrix using our ECA with an arbitrary number of iterations, considering values in the range from 1 to 25. We chose a value of k = 17 for our work. The key space of the ECA applied on the Jigsaw transform would be K = (M!)(Y)(k^{X×Y}) in general; substituting the variables we chose for our work, we obtained a key space of K ≤ (25!)(5)(25^{5×5}) = 1.2689 × 10^{89}. We used this algorithm to encrypt our watermark image, as described later in Section 4; given that we used 5 × 5 subsections, we can achieve full encryption of the watermark image by using the JST along with 5 iterations of the ECA to modify all the rows of the JST index matrix. Since the main theme of this work was watermarking, a thorough analysis of the JST with the ECA for image encryption is beyond the scope of this article but can be studied in future work.

3.3. SVD Analysis

Singular-value decomposition, used in linear algebra, performs an expansion of a rectangular matrix A ∈ R^{M×N}, where M and N are the dimensions of A, in a coordinate system where the covariance matrix is diagonal. Equation (2) shows the SVD theorem:
$$
A = U S V^T:\qquad
\underbrace{\begin{bmatrix} a_{11} & \cdots & a_{1N} \\ \vdots & & \vdots \\ a_{M1} & \cdots & a_{MN} \end{bmatrix}}_{A_{M \times N}}
=
\underbrace{\begin{bmatrix} u_{11} & \cdots & u_{1M} \\ \vdots & & \vdots \\ u_{M1} & \cdots & u_{MM} \end{bmatrix}}_{U_{M \times M}}
\underbrace{\begin{bmatrix} \sigma_1 & & & \\ & \ddots & & \\ & & \sigma_r & \\ & & & 0 \end{bmatrix}}_{S_{M \times N}}
\underbrace{\begin{bmatrix} v_{11} & \cdots & v_{N1} \\ \vdots & & \vdots \\ v_{1N} & \cdots & v_{NN} \end{bmatrix}}_{V^T_{N \times N}}, \qquad (2)
$$
where U ∈ R^{M×M} and V ∈ R^{N×N} are orthogonal matrices defined by:

$$U^T U = I_{M \times M}, \qquad V^T V = I_{N \times N}.$$
S ∈ R^{M×N} is a diagonal matrix whose entries σ_i, with i = 1, …, min(M, N), are the singular values, which satisfy σ_1 ≥ σ_2 ≥ ⋯ ≥ σ_r > σ_{r+1} = ⋯ = σ_{min(M,N)} = 0, where r ≤ min(M, N) is the rank of the matrix A.
Thus, SVD computes the eigenvectors of A A^T, which form the columns of U, and the eigenvectors of A^T A, which form the columns of V; the singular values in S are obtained as the square roots of the (common) nonzero eigenvalues of A A^T and A^T A.
SVD has been successfully used for a variety of applications. In particular, in the signal- and image-processing areas, SVD has been applied to image compression and image completion [45], dimensionality reduction, facial recognition, background video removal, image noise reduction, cryptography, and digital watermarking [46].
In the following, we describe some properties of SVD:
  • A few singular values contain the majority of the signal’s energy, which has been exploited in compression applications.
  • The decomposition/reconstruction could be applied to both square and non-square images.
  • When a slight interference, e.g., noise, alters the values of the image, the singular values remain relatively stable.
  • Singular values of an image represent its intrinsic algebra.
SVD generates the matrices U, S, and V sorted by how much they contribute to the matrix A. Thus, an approximation of the input image is obtained when only a number r of singular values is used, and as r approaches min(M, N), the quality of the reconstructed image increases. From Equation (2), an image approximation is obtained by taking r columns of U and V and the upper-left r × r square of S.
Figure 6 shows the image approximation using SVD over a grayscale image of 256 × 256 px for r = 8, 32, 64, 128, 256, and Table 1 reports the correlation coefficient (R) between the original image and the reconstructed image, where using only 25% of the singular values (r = 64 of 256) a correlation value of 0.994 is achieved. In addition, Figure 7 shows a zoomed region of the original image, containing both homogeneous and textured regions.
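The rank-r approximation of Equation (2) and the correlation measurement reported in Table 1 can be reproduced with a few lines of linear algebra; this is a generic sketch, not the authors' exact evaluation code:

```python
import numpy as np

def svd_rank_r(A, r):
    """Rank-r approximation of A: keep the r largest singular values
    and the corresponding columns of U and V (Eq. (2))."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]

def correlation(A, B):
    """Correlation coefficient R between two images (flattened)."""
    return np.corrcoef(A.ravel(), B.ravel())[0, 1]
```

By the Eckart–Young theorem, this truncation is the best rank-r approximation in the least-squares sense, so the reconstruction error can only decrease as r grows.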

3.4. Hamming Code

Linear block codes, defined in coding theory, are a kind of error-correcting code in which any linear combination of codewords is also a codeword. Hamming codes are efficient error-correcting binary linear block codes used to detect and correct errors when data are stored or transmitted.
For a (7, 4) Hamming code, the encoding operation is performed by the 4 × 7 generator matrix shown in Equation (3) [47]:

G = \begin{pmatrix} 1 & 1 & 0 & 1 & 0 & 0 & 0 \\ 0 & 1 & 1 & 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 1 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 & 1 & 0 & 1 \end{pmatrix}.
A (7, 4) Hamming code generates seven output bits for every four input bits. The codeword for each input message is obtained through a linear combination of the rows of G with modulo-2 arithmetic, where the code length corresponds to the length of a row of the matrix G. Thus, c = mG is the codeword for the input message m = (m_1 m_2 m_3 m_4) [47].
For the decoding process, the (7, 4) Hamming code has an associated 3 × 7 parity-check matrix H, with vH^T = 0 if and only if v is a codeword. Thus, for the matrix G of Equation (3), the corresponding parity-check matrix H is given by Equation (4):

H = \begin{pmatrix} h_1 & h_2 & h_3 & h_4 & h_5 & h_6 & h_7 \end{pmatrix} = \begin{pmatrix} 1 & 0 & 1 & 1 & 1 & 0 & 0 \\ 0 & 1 & 0 & 1 & 1 & 1 & 0 \\ 0 & 0 & 1 & 0 & 1 & 1 & 1 \end{pmatrix},
where h k is the column vector k, and it can be verified that G H T = 0 .
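The encoding and parity-check relations above can be verified directly; the sketch below uses the matrices of Equations (3) and (4):

```python
import numpy as np

# Generator matrix G (4 x 7) of Equation (3): each row is a cyclic shift of 1101.
G = np.array([[1, 1, 0, 1, 0, 0, 0],
              [0, 1, 1, 0, 1, 0, 0],
              [0, 0, 1, 1, 0, 1, 0],
              [0, 0, 0, 1, 1, 0, 1]])

# Parity-check matrix H (3 x 7) of Equation (4).
H = np.array([[1, 0, 1, 1, 1, 0, 0],
              [0, 1, 0, 1, 1, 1, 0],
              [0, 0, 1, 0, 1, 1, 1]])

def hamming_encode(m):
    """Encode a 4-bit message m into a 7-bit codeword c = mG (mod 2)."""
    return (np.array(m) @ G) % 2

# Every valid codeword satisfies c H^T = 0 (mod 2).
c = hamming_encode([1, 0, 1, 1])
syndrome = (c @ H.T) % 2
```

A nonzero syndrome flags (and, for single-bit errors, locates) a corrupted codeword, which is how the MetaData channel distortion is reduced in Section 4.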

3.5. Discrete Cosine Transform

The DCT serves as the foundation for numerous image-compression techniques and lossy digital image-compression systems: the Joint Photographic Experts Group (JPEG) standard for still images and the Moving Picture Experts Group (MPEG) standard for video [48]. It is a mathematical tool to perform frequency analysis without complex numbers and to approximate a typical signal using few coefficients (low-, middle-, and high-frequency components), i.e., it packs the most information into the fewest coefficients and concentrates the energy in the low-frequency regions [49,50]. One of its most-usual applications is lossy compression in signal and image processing because of its strong energy-compaction property. Besides, as mentioned in [51], this transform exhibits entropy retention, decorrelation, and energy retention–energy concentration, the last of which is of great significance to digital image encryption. Therefore, the most-important DCT advantages, such as a high compression ratio and low error rates [52], are exploited in different applications, such as digital image encryption, where energy concentration is a key element. When the DCT is applied to an image (matrix), the result is a coefficient matrix containing the DC coefficient and the AC coefficients, with the energy concentrated in the DC element. As an example, the Lena image and its DCT are shown in Figure 8.
Another application is in steganography systems and watermark systems, which embed the information of a signal in the transform domain. These systems are more robust if they operate in the transform domain (Discrete Fourier Transform (DFT), Discrete Cosine Transform (DCT), Discrete Wavelet Transform (DWT), Contourlet Transform (CT), etc.). Specifically, in the DCT domain, the algorithms are more robust against common processing operations (JPEG and MPEG compression) compared with spatial domain techniques, and also, the DCT offers the possibility of directly realizing the embedding operator in the compressed domain (i.e., inside a JPEG or MPEG encoder) to minimize the computation time [53].
The 2D DCT of a grayscale image I(x, y) is as follows (Equation (5)):

I(u,v) = w(u,v) \sum_{x=1}^{X} \sum_{y=1}^{Y} I(x,y) \cos\left[\frac{(2x-1)(u-1)\pi}{2X}\right] \cos\left[\frac{(2y-1)(v-1)\pi}{2Y}\right],

where X represents the number of columns and Y the number of rows of the image I(x, y), (x, y) its spatial coordinates, (u, v) the corresponding frequency coordinates, and

w(u,v) = \begin{cases} \sqrt{\frac{1}{XY}} & \text{when } u = 1, v = 1 \\ \sqrt{\frac{2}{XY}} & \text{when exactly one of } u, v \text{ equals } 1 \\ \sqrt{\frac{4}{XY}} & \text{otherwise}, \end{cases}

where w(u, v) is a weight factor, y, v ∈ [1, …, Y], and x, u ∈ [1, …, X] [54].
The 2D Inverse Discrete Cosine Transform (IDCT) is given as follows (Equation (6)):

I(x,y) = \sum_{u=1}^{X} \sum_{v=1}^{Y} w(u,v) I(u,v) \cos\left[\frac{(2x-1)(u-1)\pi}{2X}\right] \cos\left[\frac{(2y-1)(v-1)\pi}{2Y}\right].
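A direct NumPy implementation of Equations (5) and (6) can be sketched as follows (an illustration with the orthonormal weights w(u, v) defined above; `dct2`/`idct2` are our names, not a library API), confirming that the IDCT recovers the image exactly:

```python
import numpy as np

def _cosine_bases(X, Y):
    """Cosine factors of Equations (5)-(6) and the weights w(u, v)."""
    x = np.arange(1, X + 1); u = np.arange(1, X + 1)
    Cx = np.cos(np.outer(2 * x - 1, u - 1) * np.pi / (2 * X))   # X x X
    y = np.arange(1, Y + 1); v = np.arange(1, Y + 1)
    Cy = np.cos(np.outer(2 * y - 1, v - 1) * np.pi / (2 * Y))   # Y x Y
    wu = np.full(X, np.sqrt(2.0 / X)); wu[0] = np.sqrt(1.0 / X)
    wv = np.full(Y, np.sqrt(2.0 / Y)); wv[0] = np.sqrt(1.0 / Y)
    return Cx, Cy, np.outer(wu, wv)

def dct2(I):
    """2D DCT of Equation (5)."""
    Cx, Cy, W = _cosine_bases(*I.shape)
    return W * (Cx.T @ I @ Cy)

def idct2(C):
    """2D inverse DCT of Equation (6)."""
    Cx, Cy, W = _cosine_bases(*C.shape)
    return Cx @ (W * C) @ Cy.T
```

For a constant image, all the energy lands in the DC coefficient, which illustrates the energy-compaction property discussed above.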

3.6. Hermite Transform

The Cartesian Hermite transform is a signal-decomposition technique. To analyze the visual information, a Gaussian window function is used: v^2(x,y) = \frac{1}{\sigma\sqrt{\pi}} \exp\left(-\frac{x^2+y^2}{2\sigma^2}\right).
The information within the window is expanded into a family of polynomials G_{o,p−o}(x, y). These polynomials are orthogonal with respect to the Gaussian window function and are defined in terms of the Hermite polynomials as Equation (7):

G_{o,p-o}(x,y) = \frac{1}{\sqrt{2^p \, o! \, (p-o)!}} H_o\left(\frac{x}{\sigma}\right) H_{p-o}\left(\frac{y}{\sigma}\right),

where o and (p − o) indicate the analysis order in the spatial directions x and y, respectively, with p = 0, …, ∞ and o = 0, …, p, H_p are the generalized Hermite polynomials, and σ² represents the variance of the Gaussian window.
Equation (8) defines the Hermite polynomials:

H_p\left(\frac{x}{\sigma}\right) = (-1)^p \exp\left(\frac{x^2}{\sigma^2}\right) \frac{d^p}{dx^p} \exp\left(-\frac{x^2}{\sigma^2}\right).
The HT is obtained by convolving the image I(x, y) with the Hermite analysis filters D_{o,p−o}(x, y), followed by a subsampling by a factor T over the lattice S, as follows in Equation (9):

I_{o,p-o}(x_0,y_0) = \int\!\!\int I(x,y) \, D_{o,p-o}(x_0-x, y_0-y) \, dx \, dy,

where I_{o,p−o}(x, y) are the Cartesian Hermite coefficients,

D_{o,p-o}(x,y) = G_{o,p-o}(x,y) \, v^2(x,y),

and (x₀, y₀) is the spatial position in the sampling lattice S.
The Hermite filters are obtained by Equation (10):

D_k(x) = \frac{(-1)^k}{\sqrt{2^k k!}} \frac{1}{\sigma\sqrt{\pi}} H_k\left(\frac{x}{\sigma}\right) \exp\left(-\frac{x^2}{\sigma^2}\right),

with k = 0, 1, 2, …
On the other hand, the original image can be reconstructed through Equation (11):

I(x,y) = \sum_{p=0}^{\infty} \sum_{o=0}^{p} \sum_{(x_0,y_0) \in S} I_{o,p-o}(x_0,y_0) \cdot P_{o,p-o}(x-x_0, y-y_0),

where P_{o,p−o} are the Hermite synthesis filters of Equation (12) and V(x, y) is the weight function of Equation (13):

P_{o,p-o}(x,y) = \frac{D_{o,p-o}(x,y)}{V(x,y)},

V(x,y) = \sum_{(x_0,y_0) \in S} v^2(x-x_0, y-y_0) \neq 0.
For the discrete implementation, the binomial window function is used to approximate the Gaussian window function (Equation (14)):

\omega^2(x) = \frac{1}{2^N} C_N^x, \quad x = 0, 1, \ldots, N,

where N represents the order of the binomial window and C_N^x is the binomial coefficient (Equation (15)):

C_N^x = \frac{N!}{(N-x)! \, x!}, \quad x = 0, 1, \ldots, N.
Thus, the Krawtchouk polynomials, defined in Equation (16), are the orthogonal polynomials G_{o,p−o}(x, y) associated with the binomial window:

K_n[x] = \frac{1}{\sqrt{C_N^n}} \sum_{k=0}^{n} (-1)^{n-k} \, C_{N-x}^{n-k} \, C_x^k.
In the discrete implementation, the signal reconstruction from the expansion coefficients is perfect because the window function has finite support (N) and the expansion with the Krawtchouk polynomials is also finite. To implement the Hermite transform, it is necessary to select the spread of the Gaussian window (σ), the order N of the binomial window, and the subsampling factor that defines the sampling lattice S. The resulting Hermite coefficients are arranged as a set of (N + 1) × (N + 1) equally sized sub-bands: one coarse sub-band I_{0,0}, representing a Gaussian-weighted image average, and detail sub-bands I_{n,m} corresponding to higher-order Hermite coefficients, as we can see in Figure 9.
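The binomial window and Krawtchouk polynomials of Equations (14)–(16) can be checked numerically. The sketch below (an illustration, not the full HT implementation; the function names are ours) verifies that the normalized K_n are orthonormal under the binomial weight, which is what guarantees perfect reconstruction in the discrete case:

```python
from math import comb, sqrt

def binomial_window_sq(x, N):
    """omega^2(x) = C_N^x / 2^N  (Equation (14))."""
    return comb(N, x) / 2 ** N

def krawtchouk(n, x, N):
    """K_n[x] of Equation (16), normalized by sqrt(C_N^n)."""
    s = sum((-1) ** (n - k) * comb(N - x, n - k) * comb(x, k)
            for k in range(n + 1))
    return s / sqrt(comb(N, n))

# Gram matrix: sum_x omega^2(x) K_n[x] K_m[x] should equal delta_{nm}.
N = 4
gram = [[sum(binomial_window_sq(x, N) * krawtchouk(n, x, N) * krawtchouk(m, x, N)
             for x in range(N + 1))
         for m in range(N + 1)]
        for n in range(N + 1)]
```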

3.7. Human Vision System

For several years, characteristics of the HVS have been applied to address various image-processing challenges. For example, in [55], the authors proposed a watermarking approach exploiting the fact that certain mechanisms of the HVS are less sensitive to redundant image information. Thus, the entropy was used to determine the regions with more redundant image information and to select the visually significant embedding regions.
On the other hand, entropy is a metric widely used to measure the spatial correlation of a local region of the image, for example a pixel neighborhood. For an N-state neighborhood, it is defined as shown in Equation (17) [55]:

E = -\sum_{j=1}^{N} p_j \log_2(p_j + \epsilon),

where p_j is the probability of appearance of the j-th pixel value in the neighborhood, N is the number of elements within the neighborhood, and ϵ ≪ 1 is a small constant to avoid log₂(0).
In addition, image edges contain relevant information about the image characteristics. Thus, the edge entropy of an image block is taken into consideration to identify the specific areas of the image where the watermark will be inserted. It is calculated by means of Equation (18) [55]:

E_{edge} = \sum_{j=1}^{N} p_j \, 2^{(1-p_j)},

where 1 − p_j represents the uncertainty of the j-th pixel value in the block.
In [55], the combination of the entropy and edge entropy was used to determine the suitable insertion locations, as shown in Equation (19):

HVS = \sum_{j=1}^{N} \left[ -p_j \log_2(p_j + \epsilon) + p_j \, 2^{(1-p_j)} \right].
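As an illustration, the HVS-based block selection can be sketched as follows (a sketch assuming the base-2 exponential reading of Equations (17)–(19) and probabilities taken from the block histogram; `hvs_score`, `rank_blocks`, and the 4 × 4 block size are our choices for the example):

```python
import numpy as np

def hvs_score(block, eps=1e-12):
    """Combined entropy / edge-entropy score of Equation (19) for one block."""
    _, counts = np.unique(block, return_counts=True)
    p = counts / counts.sum()            # probability of each gray level in the block
    return np.sum(-p * np.log2(p + eps) + p * 2.0 ** (1.0 - p))

def rank_blocks(image, b=4):
    """Split the image into b x b blocks and return the block positions sorted
    by ascending HVS score (lowest score = best embedding region)."""
    h, w = image.shape
    scores = []
    for i in range(0, h - h % b, b):
        for j in range(0, w - w % b, b):
            scores.append(((i, j), hvs_score(image[i:i + b, j:j + b])))
    scores.sort(key=lambda t: t[1])
    return [pos for pos, _ in scores]

rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(16, 16)).astype(np.uint8)
order = rank_blocks(img)                 # 16 blocks of 4 x 4, best first
```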

4. Proposed HT, DCT, and SVD and Block-Based Watermarking Method

The present work reports a blockwise image watermarking method to insert two watermarks, a digital image (LOGO) and information about the owner or the host image (MetaData). The proposed method is based on the HT and DCT as a spatial–frequency representation of the cover image with the HVS characteristics to add imperceptibility to the watermark. In addition, an SVD strategy adds robustness against attacks.

4.1. Watermarking Insertion Process

Figure 10 shows a schema of the proposed watermarking insertion process.
The reason for using the DCT is that, according to Dowling et al. [56], block-based watermarking is more effective with smaller block sizes, and the DCT concentrates the energy. After decomposing the original image into spatial–frequency bands using the HT, the selected sub-bands are partitioned into 4 × 4 blocks, and each block is transformed into its DCT representation. On the one hand, an SVD analysis provides high robustness against attacks. On the other hand, the entropy values of the cover image are used to choose suitable regions for embedding the watermarks, giving an adaptive approach that identifies blocks with high entropy. Thus, the HVS values (Equation (19)) of the blocks are sorted in ascending order, where the lowest values correspond to the best embedding regions. Figure 11a shows a grayscale image of 512 × 512 px, and Figure 11b represents, through a color bar with descending and normalized values, those regions where a watermark (in this case of 100 × 100 px) could be inserted, where high values (light color) correspond to the most-suitable regions and low values (dark color) to the worst regions.
In addition, to increase the performance of the image MetaData against attacks and reduce channel distortion, a ( 7 , 4 ) Hamming error-correcting code was applied to the MetaData. Finally, the LOGO image was encrypted using the Jigsaw transform in combination with elementary cellular automata to increase the security of the proposal.
The steps of embedding both an image watermark (LOGO) ( W 1 ) and the image MetaData ( W 2 ) are explained next regarding the block diagram of Figure 10:
  • Image watermark (LOGO) W 1 :
    Input the binary image watermark LOGO of size k 1 × k 2 .
    Apply the Jigsaw transform to the LOGO with key 1 = M as the first secret key, obtaining the watermark matrix W A of size k 1 × k 2 , where M corresponds to the number of non-overlapping subsections of s 1 × s 2 px.
    Convert W_A to binary, obtaining the sequence W₁ ∈ R^{1×L_bin}.
    The Jigsaw transform generates the indexes i d x J S , which represent the original locations of each subsection of s 1 × s 2 px.
    The ECA algorithm encrypts idx_JS through key₂, obtaining the encrypted indexes idx_JS_Enc, where key₂ = {N_ite, k} is the second secret key, with N_ite representing the number of iterations and k an odd number.
  • Image MetaData W 2 :
    Enter the image MetaData as a character string.
    Convert each alphanumeric character of the image MetaData into a binary string, obtaining the vector V.
    Calculate the ( 7 , 4 ) Hamming code over V, obtaining the vector V H of length P.
    Generate a pseudo-random binary string k r of length P with a uniform distribution; key 3 = k r will be the third secret key.
    Perform the bitwise operation to obtain the watermark W 2 :
    W 2 = ( V H ) XOR ( key 3 ) .
    Since the image MetaData size is small compared to the LOGO image size, the dimensions of W₂ are adjusted by adding binary zeros so that they correspond to the dimensions of W₁ ∈ R^{1×L_bin}.
  • Input host image to watermark:
    Input the host image. For an RGB image, convert it to grayscale, obtaining a matrix I ( x , y ) of size m 1 × n 1 .
    Perform the Hermite transform decomposition of I(x, y) to obtain nine coefficients. Each one is a matrix I_{o,p−o}(x, y) of size m₂ × n₂, where m₂ = m₁/T and n₂ = n₁/T, with T a sub-sampling factor, e.g., T = 2.
    Select the low-spatial-frequency Hermite coefficient M = I 0 , 0 to embed the watermarks W 1 and W 2 .
    Divide M into blocks of size b × b, obtaining the multidimensional array B composed of L = (m₂/b) × (n₂/b) blocks.
    Apply the HVS analysis to each block of matrix B through Equation (19), obtaining  HVS B .
    Sort each value of HVS B in ascending order, storing the position idx H V S of each ordered block, where the lowest values of HVS B correspond to the best embedding regions.
    Apply the DCT to each block of B, obtaining C = DCT { B } , where C R b × b × L and L is the number of blocks of size b × b .
  • SVD-based insertion process:
    Embed the watermarks W 1 and W 2 using the SVD-based Algorithm 1, with blocks of b × b = 4 × 4 , obtaining the multidimensional output array C .
    To fully embed the LOGO and the image MetaData within the host image, the relation L_bin ≤ L must be satisfied.
  • Making the watermarking image:
    Apply the inverse DCT to each block of C , obtaining M = DCT 1 { C } .
    Substitute the low-spatial-frequency Hermite coefficient of I o , p o ( x , y ) : I 0 , 0 ( x , y ) = M .
    Perform the inverse Hermite transform of I o , p o ( x , y ) to obtain I ( x , y ) marked .
Algorithm 1: SVD-based insertion algorithm.
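Algorithm 1 is presented as a figure in the original article, so it is not reproduced here. As an illustration only, the sketch below shows a generic additive SVD embedding rule of the kind used in such schemes (an assumption for the example, not the paper's exact algorithm): the largest singular value of a 4 × 4 DCT block is shifted by α·bit, and the original σ₁ is kept as side information for extraction:

```python
import numpy as np

ALPHA = 1e-5   # scaling factor; the value fixed in the sensitivity analysis of Section 5.2

def embed_bit(block, bit, alpha=ALPHA):
    """Generic additive SVD embedding: shift the largest singular value by alpha * bit."""
    U, s, Vt = np.linalg.svd(block)
    s_marked = s.copy()
    s_marked[0] += alpha * bit
    # Return the marked block and the original sigma_1 as side information.
    return U @ np.diag(s_marked) @ Vt, s[0]

def extract_bit(marked_block, s0_ref, alpha=ALPHA):
    """Recover the bit from the shift of the largest singular value."""
    s = np.linalg.svd(marked_block, compute_uv=False)
    return int(round((s[0] - s0_ref) / alpha))

rng = np.random.default_rng(2)
block = rng.random((4, 4))          # stands in for one 4 x 4 DCT block
marked, s0 = embed_bit(block, 1)
bit = extract_bit(marked, s0)
```

The role of α here mirrors the imperceptibility/robustness trade-off analyzed later: a tiny α barely perturbs the block, while a large α makes the bit survive stronger distortions.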

4.2. Watermarking Extraction Process

Since the watermarking approach is symmetric, the extraction stage is similar to the process shown in Figure 10, with the only change being that the inverse operations are applied. Thus, the steps for extracting both the LOGO and the plaintext MetaData from the watermarked image are as follows:
  • SVD-based extraction process:
    Perform the Hermite transform decomposition to I ( x , y ) marked .
    Select the low-spatial-frequency Hermite coefficient M = I 0 , 0 .
    Divide M into blocks of size 4 × 4 , obtaining the multidimensional array B composed of L blocks.
    Apply the DCT to each block of B, obtaining C = DCT { B } .
    Extract the matrices W 1 and W 2 using the SVD-based Algorithm 2.
  • LOGO image extraction:
    Convert the array W₁ ∈ R^{1×L_bin} into a matrix W_A ∈ R^{k₁×k₂}.
    Decrypt idx_JS_Enc through the inverse ECA using key₂, obtaining the idx_JS indexes.
    Apply the inverse JST to W A using i d x J S indexes to obtain the LOGO image.
  • Image MetaData extraction:
    Remove the extra zeros of W₂ ∈ R^{1×L_bin} to obtain the array W₂ ∈ R^{1×P}.
    Perform the bitwise operation between W 2 and key 3 to obtain V H :
    V H = ( W 2 ) XOR ( key 3 ) .
    Decode V H using the ( 7 , 4 ) Hamming code to obtain the binary array V.
    Convert the binary array V to an alphanumeric array, obtaining the image MetaData as a character string.
Algorithm 2: SVD-based extraction algorithm.
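The MetaData encryption/decryption steps (keystream XOR with key₃) can be sketched as follows. This is a minimal round-trip illustration: the (7, 4) Hamming stage of the pipeline is omitted for brevity, the seed-based keystream stands in for the secret key₃, and the helper names are ours:

```python
import numpy as np

def text_to_bits(text):
    """Convert an ASCII string to a 0/1 array (8 bits per character)."""
    return np.unpackbits(np.frombuffer(text.encode("ascii"), dtype=np.uint8))

def bits_to_text(bits):
    """Inverse of text_to_bits (bit count must be a multiple of 8)."""
    return np.packbits(bits).tobytes().decode("ascii")

# Hypothetical MetaData string for the example.
metadata = "Owner: A. Author; 512x512; 2023"
v = text_to_bits(metadata)

# key_3: pseudo-random binary keystream of the same length as the message bits.
rng = np.random.default_rng(42)          # the seed plays the role of the secret key
key3 = rng.integers(0, 2, size=v.size, dtype=np.uint8)

w2 = v ^ key3                            # encryption: W2 = V XOR key_3
recovered = bits_to_text(w2 ^ key3)      # decryption with the same key restores V
```

Because XOR is its own inverse, applying the same keystream twice restores the plaintext exactly, which is why the extraction stage simply repeats the insertion operations in reverse.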

5. Experiments and Results

The proposed watermarking method was run on a laptop computer with an Intel Core i7 @ 1.6 GHz, 16 GB of RAM, and without a GPU card. The watermarking algorithm had a time consumption of 2.971 s for the insertion stage and 2.640 s for the extraction process. The method was implemented in a non-optimized and non-parallelized script in MATLAB using images of 512 × 512 px. The parameters used were s 1 = s 2 = 100 for the JST, N = 2 , maximum order decomposition D = 4 and T = 1 for the HT, N i t e = 275 and k = 17 for the ECA, b × b = 4 × 4 for the HVS analysis, and α = 0.00001 for the SVD-based insertion.
We evaluated our algorithm in different experiments: the insertion and extraction processes, robustness against attacks, and a comparison with other methods.
All the experiments were carried out using the 49 publicly available grayscale images of 512 × 512 px (see Section 3.1).

5.1. Performance Measures

To assess the performance of our watermarking algorithm, we employed the following metrics [53]: Mean-Squared Error (MSE), Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index (SSIM), Mean Structural Similarity Index (MSSIM), Normalized Cross-Correlation (NCC), Bit Error (B_error), and Bit Error Rate (BER). In the cases of the MSE, PSNR, SSIM, MSSIM, and NCC, the images were X × Y px, where x and y represent the spatial coordinates:
  • The MSE refers to a statistical metric used to measure the image’s quality. It evaluates the squared difference between a pixel in the original image I 1 and the watermarked image I 2 . After calculating this result for all pixels in the image, it returns the average result, as is shown in Equation (23):
    MSE = \frac{1}{XY} \sum_{x=1}^{X} \sum_{y=1}^{Y} \left[ I_1(x,y) - I_2(x,y) \right]^2.
    If the MSE = 0, this indicates that there is no error between the original image and the watermarked image.
  • The PSNR is defined in Equation (24):
    PSNR = 10 \log_{10} \left( \frac{255^2}{MSE} \right).
    On the one hand, a higher PSNR value indicates higher image quality, approaching infinity for identical images. On the other hand, low PSNR values indicate large differences between the images [57].
  • The Mean Structural Similarity Index (MSSIM) is determined by Equation (25):
    MSSIM(I, \hat{I}) = \frac{1}{M} \sum_{j=1}^{M} SSIM(I_j, \hat{I}_j),
    where I and \hat{I} correspond to the original and the distorted image, respectively, I_j and \hat{I}_j represent their j-th local window, and M stands for the number of local windows of the image. In the case that I_j and \hat{I}_j have no negative values, the SSIM is calculated as shown in Equation (26):
    SSIM(I, \hat{I}) = \frac{(2\mu_I \mu_{\hat{I}} + C_1)(2\sigma_{I\hat{I}} + C_2)}{(\mu_I^2 + \mu_{\hat{I}}^2 + C_1)(\sigma_I^2 + \sigma_{\hat{I}}^2 + C_2)},
    where the averages of I and I ^ are given by μ I and μ I ^ , respectively, their standard deviations are given by σ I and σ I ^ , the covariance between both images is represented by σ I I ^ , and the constants C 1 and C 2 are used to prevent instability if the denominator happens to have a value close to zero [58].
    The SSIM is a metric that quantifies the similarity between two images and is believed to correlate with the quality perception of the human visual system [57]. The SSIM ranges over the interval [0, 1]: values close to zero indicate uncorrelated images, and values closer to 1 represent equal images.
  • The Normalized Cross-Correlation coefficient (NCC) measures the amount of similarity between two images ( I 1 and I 2 ) given their gray level intensities, as illustrated by Equation (27):
    NCC = \frac{\sum_{x=1}^{X} \sum_{y=1}^{Y} \left( I_1(x,y) - \bar{I}_1 \right) \left( I_2(x,y) - \bar{I}_2 \right)}{\left[ \sum_{x=1}^{X} \sum_{y=1}^{Y} \left( I_1(x,y) - \bar{I}_1 \right)^2 \sum_{x=1}^{X} \sum_{y=1}^{Y} \left( I_2(x,y) - \bar{I}_2 \right)^2 \right]^{1/2}},
    where I ¯ * represents the average value of I * .
  • The Bit error (B_error) denotes the number of wrong bits extracted from the binary string W̃(x) with respect to the total of N bits embedded in the binary string W(x) [53].
  • The Bit Error Rate (BER) is similar to the bit error, but it measures the ratio between the number of wrong bits extracted from the binary string W̃(x) and the total of N bits embedded in the binary string W(x) [53] (see Equation (28)):
    BER = \frac{\sum_{x} \left| W(x) - \tilde{W}(x) \right|}{N}.
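The simpler metrics above can be computed directly; the sketch below is a minimal implementation of Equations (23), (24), and (28) (SSIM/MSSIM are omitted for brevity; function names are ours):

```python
import numpy as np

def mse(i1, i2):
    """Equation (23): mean-squared error between two images."""
    return np.mean((i1.astype(float) - i2.astype(float)) ** 2)

def psnr(i1, i2):
    """Equation (24): PSNR in dB for 8-bit images (infinite when MSE = 0)."""
    m = mse(i1, i2)
    return np.inf if m == 0 else 10 * np.log10(255.0 ** 2 / m)

def ber(w, w_ext):
    """Equation (28): ratio of wrong extracted bits."""
    w, w_ext = np.asarray(w), np.asarray(w_ext)
    return np.sum(w != w_ext) / w.size

# One pixel off by 10 in an 8 x 8 image: MSE = 100/64 = 1.5625.
img = np.full((8, 8), 100, dtype=np.uint8)
noisy = img.copy(); noisy[0, 0] += 10
```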

5.2. Sensitivity Analysis of the Scaling Factor

A critical parameter in the proposed SVD-based watermarking method (see Algorithm 1) is the scaling factor α. This parameter governs the imperceptibility, on the one hand, and the robustness, on the other, of the proposed watermarking method: a low value ensures imperceptibility but minimizes robustness, while a high value gives strong robustness but degrades the imperceptibility of the watermarks. Thus, to fix a suitable value of α, we performed a sensitivity analysis of the scaling factor by varying α from 10⁻⁶ to 10⁻³ with steps of 10⁻⁶, obtaining a set of one-hundred different values. Then, we embedded the watermarks into the Lena image and calculated the performance metrics: for the watermarked image, the NCC and SSIM values; for the extracted LOGO image, the NCC values; and for the recovered image MetaData, the BER values. We obtained an NCC = 1 for the LOGO watermark and a BER = 0 for the extracted image MetaData for all one-hundred values of α, which showed that α did not affect the extraction process in this range. However, the NCC and SSIM metrics for the watermarked image showed a behavior dependent on α. In Figure 12a, we show a plot of the NCC (solid red line) and SSIM (dashed blue line) values of the watermarked image as a function of α, where both metrics decrease as α increases. Figure 12b shows an enlargement of the upper-left rectangular region (black dotted line) of Figure 12a, showing that, for values greater than α_th = 10⁻⁵ (vertical black dotted line), both the NCC and SSIM values decrease considerably. Accordingly, we fixed the scaling factor to α = 10⁻⁵ to ensure, on the one hand, the quality of the marked image and, on the other hand, the correct extraction of both watermarks.

5.3. Watermarking Insertion and Extraction Performance Analysis

We tested our algorithm on the 49 grayscale images shown in Section 3.1 to insert and extract the LOGO watermark (Figure 2a) and the image MetaData (Figure 2b).
In Table 2, we present the metrics’ averages by applying the algorithm to the 49 grayscale images. In addition, in Table 3, we show only the results using six representative images. However, the results were similar for the other images. As representative images, we selected the following images commonly used to test image-processing algorithms, and at the same time, they represent the variability of both low and high spatial frequencies: Baboon, Barbara, Boat, Cameraman, Lena, and Peppers. The experimental results using these representative images are shown in Figure 13, where each pair shows the original image on the left and the watermarked image on the right.
Table 2 and Table 3 show, through the values of the metrics (PSNR, MSSIM), that the watermark is visually imperceptible, so neither the LOGO nor the image MetaData produced perceptible changes. Regarding the extracted watermark and MetaData, all metrics demonstrated that we can recover them perfectly. Among the six representative images, the best results were for the Peppers image: after inserting the watermark, the MSE was only 5.8020, and the rest of the metrics confirmed that there were no visual changes. The Cameraman image presented the highest MSE value and the lowest PSNR value, but even these values indicate that the watermark is not perceptible.

5.4. Watermarking Robustness against Attacks

To evaluate the robustness against attacks of the proposed watermarking method, we applied the most-common processing and geometric attacks to the 49 grayscale watermarked images of Section 3.1.
Processing operations: Gaussian filtering and median filtering; Gaussian noise and salt-and-pepper (SP) noise. Gaussian filter—window of N × N, varying N from 2 to 11; median filter—window of N × N, with N varying between 2 and 9; Gaussian noise—with σ² ∈ [0, 0.5] and increments of 0.05; SP noise—with a noise density varying between 0 and 1 and increments of 0.1. Additionally, we applied image compression and image scaling. JPEG compression—varying the quality factor between 0% and 100% with steps of 10%; scaling—with a scale factor varying between 0.25 and 4 and steps of 0.25. Furthermore, we applied histogram equalization and contrast enhancement. Equalization—varying the discrete equalization levels 2^n, with n from 2 to 8; contrast enhancement—varying f from 0.01 to 0.45 with increments of 0.05, saturating the bottom f% and the top (1 − f)% of all pixel values using the histogram information.
Geometric attacks: rotation—varying the rotation angle from 1° to 180° with steps of 5°; translation—varying the displacement from 10 to 100 px, with increments of 10. Finally, we cropped the watermarked image, substituting the original pixels with black pixels, with the cropped percentage p ranging from 0 to 95%. Table 4 shows the metrics' averages over the 49 grayscale watermarked images for all attacks with parameter variation, including the metrics of the recovered LOGO and MetaData. As we can see, only with rotation and cropping was it difficult to recover the LOGO and the MetaData: for both cases, the number of modified bits in the MetaData was significant, and the LOGO images had visual modifications, reflected in degraded PSNR and SSIM values. With Gaussian noise, SP noise, scaling, and contrast enhancement, BER = 0, PSNR ≥ 60, NCC = 1, SSIM = 1, MSSIM = 1, and MSE = 0, indicating that the recovered watermark was equal to the inserted watermark. For the rest of the attacks (Gaussian filter, median filter, translation, JPEG compression, and histogram equalization), the values of the PSNR, NCC, SSIM, and MSSIM indicated a high similarity between the original watermark and the extracted watermark; the LOGO image could present visual changes, but it was still recognizable, and the MetaData changed in some characters. It is important to note that this table includes both B_error (the number of modified bits) and the BER (the bit-error ratio) to identify whether a LOGO had modified bits. For example, with the Gaussian filter, the rounded BER of 0 alone would suggest that the watermark had no modified bits, but the B_error count makes it clear that it had a few modifications.
To analyze the results of the robustness against attacks in detail, Table 5 shows the metrics' averages for all attacks with parameter variation, reporting the metrics for the LOGO and MetaData extracted after applying the algorithm to the Lena image only. The results for each metric demonstrate that our proposal is robust and imperceptible. In Figure 14, Figure 15 and Figure 16, we present the original Lena image, the recovered watermark, and the recovered MetaData after applying some attacks. In Figure 14, we can observe that, after applying the median filter, SP noise, and JPEG compression, it was possible to recover the watermark and MetaData. The same result was obtained when applying histogram equalization and translation (Figure 15). Finally, in Figure 16, we can see that, when the watermarked image is rotated or cropped, in some cases it was not possible to recover the watermark and the MetaData without modifications, but in the majority of cases, we obtained good results.

5.5. Computational Complexity

Since the watermarking insertion/extraction processes are composed of several stages, we give the complexity of the stages that involve the host image in the insertion process; we did not include the pre-processing of the LOGO watermark and the image MetaData because of their small dimensions compared to the host image. Thus, the computational complexity for a grayscale image of M × N px is given as follows:
  • Hermite transform (for both the decomposition and reconstruction stages): O(2 × N_OC × (N + 1)² × M × N), where N_OC is the number of coefficients and N represents the order of the binomial window.
  • HVS stage: O(b² × (M/b) × (N/b)), where b × b = b² is the size of each block.
  • DCT and inverse DCT: O(2 × (M/b)(N/b) × (b² log₂ b)).
  • SVD, which is applied several times in the insertion process: O((2 × 4) × (2 × M/b) × (2 × N/b) × max(2 × M/b, 2 × N/b)), which simplifies to O((2 × 4) × (2 × M/b) × (2 × N/b) × (2 × M/b)) considering that M ≥ N.
  • SVD reconstruction: O(4 × 2 × (2 × M/b × 2 × N/b × max(2 × M/b, 2 × N/b))) = O(4 × 2 × (2 × M/b × 2 × N/b × 2 × M/b)), for M ≥ N.
Finally, fixing N_OC = 9, N = 2, and b = 4, the simplified computational complexity is given by O(M × N + M × N + M × N + (M² × N) + (M² × N)), and resolving it, we obtain the following total computational complexity: O(M × N × (M + 1)).

5.6. Comparison with other Methods

In order to evaluate our proposed scheme, we compared it to other similar approaches. To make a valid comparison, it is important to have elements in common, such as the database, the watermark type, the metrics used to evaluate each algorithm, and the attacks applied. After reviewing the state-of-the-art, we decided to compare our algorithm to algorithms that use typical images for this application (Baboon, Barbara, Boat, Cameraman, Lena, and Peppers) and that at least used the PSNR, SSIM, and 2D correlation as metrics. Implementing the selected algorithms was difficult because, in some cases, the authors did not describe their proposals in detail, so we took their published results. In Table 6, we present the results of our proposal compared with [1,5,6,9,13,17,29,31,32,33,34,35]. In all cases, the original image used was Lena. The metrics shown are for the watermarked image.
As we can see from Table 6, all the proposals, including ours, reported good results for all metrics, where even a small difference distinguishes the methods. Concerning the PSNR, even though [29] reported the highest PSNR value, success against attacks was not guaranteed. Furthermore, PSNR values of 60 dB or more are commonly taken to indicate that the watermarked and original images are visually identical; in this situation, our algorithm and the algorithms of [6,29,31,32] reported PSNR values ≫ 60 dB. Regarding the SSIM and NCC, the best results were obtained by our proposal and [6,29,35], so we can assume that the watermarked image did not have perceptual modifications. Finally, with our algorithm, there was no error (MSE = 0) between the original and the marked image; only two techniques [32,35] reported MSE values, and they were different from zero. Therefore, our method did not introduce errors in the insertion and extraction processes. After reviewing each result of [6,29] and our proposal, the three methods had the same SSIM and NCC values, in addition to PSNR values ≫ 60 dB, so there was no measurable difference between them, and we can conclude that the three are good watermarking techniques. However, there is a difference in the watermark itself: in [29], the authors used an image (256 × 256); in [6], a LOGO (64 × 64); with our proposal, it is possible to insert both an image (100 × 100) and information about the owner or the technical data of the original image in text format. Therefore, we can conclude that our proposal is competitive with other state-of-the-art works, giving similar performance evaluations, but with the advantage of a higher loading capacity.
For more statistical significance, we included a comparison with [1,5,9,13,17,33,34,35] using other images (Barbara, Baboon, Peppers, and Pirate). In Table 7, we present the results obtained by each method after inserting the watermark. Note that not all the methods used all the indicated images and metrics, but we included them because it is important to compare our technique and demonstrate its effectiveness.
Once again, we demonstrated that our proposal is competitive compared to other techniques using different images (Table 7), with optimal results for the metrics used.
To determine the effectiveness (robustness) of the proposed method, it was necessary to have a valid comparison, so we chose the proposals [1,13,17,29,31,32,34,35], which reported their results with the same attacks and the same parameters. In addition, we took into account the metrics employed by each one. Table 8 presents the results obtained after applying the Gaussian filter to the watermarked image. Table 9 shows the metrics’ values after applying the median filter. The comparison of the JPEG compression is presented in Table 10. Finally, Table 11 shows the results after applying the scale attack, and Table 12 the results for the rotation attack.
From Table 8, Table 9 and Table 10, it is clear that our algorithm obtained the best results for all metrics, demonstrating that the recovered watermarks (LOGO and MetaData) suffered no alterations after the Gaussian filter, median filter, and JPEG compression. Regarding the scaling attack (Table 11), the watermark extracted with our algorithm is identical to the original. Finally, for the rotation attack, the proposal [31] obtained the highest NCC values (very close to 1); our proposal did not overcome this attack.
Regarding Gaussian noise, only the algorithm from [34] was tested with the same noise-density parameter as ours; Table 13 shows the results. Table 14 presents the results for SP noise compared with [13,34].
The results in Table 13 and Table 14 confirm that, after applying Gaussian or SP noise to the watermarked image, our algorithm fully extracted both watermarks, with SSIM and NCC values equal to 1 and a BER of 0.
Finally, we decided to include a comparison with other techniques using different images. The comparison of Gaussian noise with [33] is presented in Table 15 using the Barbara image. Table 16 shows the metrics’ values after applying SP noise compared with [34], using the Pirate image, and Table 17 presents the comparison of JPEG compression, using the Baboon image, with [17,34].
As Table 15, Table 16 and Table 17 show, our method extracts the watermark without errors on images other than Lena.

6. Discussion

The experiments and results demonstrated that the image watermarking method based on SVD, the HVS, the HT, and the DCT is a robust and secure technique with the capacity to insert two different watermarks: the image LOGO and the image MetaData, a plaintext image containing information about the cover image or its owner. This is an advantage over the majority of state-of-the-art proposals, which use only one watermark.
The algorithm was evaluated by applying different attacks (processing and geometric operations) to 49 digital images, with both watermarks inserted at the same time. Several metrics demonstrated that the watermarked images suffered no visual alterations and that, in the majority of cases, the extracted watermark was recovered perfectly.
Our proposal achieves imperceptibility and robustness through a hybrid approach combining the Hermite Transform (HT), Singular-Value Decomposition (SVD), a Human Vision System (HVS) analysis, and the Discrete Cosine Transform (DCT); for additional security, we encrypted the watermarks. On the one hand, we encrypted the watermark (LOGO) by combining the Jigsaw Transform and the ECA. On the other hand, we applied a Hamming error-correcting code to the MetaData to reduce channel distortion.
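The error-correcting step applied to the MetaData can be illustrated with a minimal Hamming(7,4) sketch (our own illustrative implementation, not the exact code used in the paper), which encodes 4 data bits into 7 and corrects any single flipped bit per block:

```python
def hamming74_encode(d):
    """Encode 4 data bits as the 7-bit codeword [p1, p2, d1, p3, d2, d3, d4]."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(c):
    """Decode a 7-bit codeword, correcting at most one flipped bit."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]       # parity check over positions 1,3,5,7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]       # parity check over positions 2,3,6,7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]       # parity check over positions 4,5,6,7
    syndrome = s1 + 2 * s2 + 4 * s3      # 1-based position of the erroneous bit
    if syndrome:
        c[syndrome - 1] ^= 1             # correct the single-bit error
    return [c[2], c[4], c[5], c[6]]

data = [1, 0, 1, 1]
codeword = hamming74_encode(data)
codeword[4] ^= 1                          # the channel (an attack) flips one bit
print(hamming74_decode(codeword))         # → [1, 0, 1, 1]
```

This is why a few bit errors introduced by mild attacks need not corrupt the recovered MetaData text.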
The insertion process (Figure 10) shows all the elements we considered. First, we applied the HT to the original image, since this transform helps guarantee imperceptibility. We selected the low-frequency coefficient and divided it into blocks of size 4 × 4. Then, to determine the best regions (those with more redundant information) in which to insert the watermark, we used a combination of entropy and edge entropy (the HVS analysis). Figure 11b shows an example highlighting the most-suitable regions. This HVS analysis was applied to each block, and then we applied the DCT, which demonstrated greater effectiveness on smaller block sizes. To insert the watermark, we used the SVD because, as explained, the singular values of a digital image are largely unaffected by common attacks; we inserted the watermark into the S coefficients. Finally, we applied the IDCT, joined the blocks, and computed the inverse HT. An important element is the scaling factor α, since it defines the trade-off between the imperceptibility and robustness of the watermarking method. The insertion and extraction processes are similar: the proposed method is symmetric, and the extraction stage applies the inverse operations of the insertion.
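The per-block DCT–SVD embedding step can be sketched as follows (a simplified numpy illustration: the HT decomposition, HVS block selection, and encryption stages are omitted, and the block values and α = 0.1 are arbitrary choices, not the paper's settings):

```python
import numpy as np

def dct_matrix(n: int) -> np.ndarray:
    """Orthonormal DCT-II matrix, so that c @ c.T equals the identity."""
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * m + 1) * k / (2 * n))
    c[0, :] /= np.sqrt(2.0)
    return c

def embed_block(block: np.ndarray, w_bit: int, alpha: float = 0.1) -> np.ndarray:
    """Embed one watermark bit into the singular values of a block's DCT."""
    c = dct_matrix(block.shape[0])
    d = c @ block @ c.T                  # 2D DCT of the 4x4 block
    u, s, vt = np.linalg.svd(d)
    s[0] += alpha * w_bit                # additive embedding into the S coefficients
    d_marked = u @ np.diag(s) @ vt
    return c.T @ d_marked @ c            # inverse DCT back to the spatial block

block = np.array([[52, 55, 61, 66],
                  [70, 61, 64, 73],
                  [63, 59, 55, 90],
                  [67, 61, 68, 104]], dtype=np.float64)
marked = embed_block(block, w_bit=1, alpha=0.1)
# The spatial change is at most alpha, since orthonormal transforms
# preserve the Frobenius norm of the perturbation.
print(np.max(np.abs(marked - block)))
```

Small α keeps the block visually unchanged, while larger α makes the embedded bit survive stronger attacks, which is the trade-off discussed above.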
To verify the effectiveness of this method, we applied the insertion and extraction processes to 49 different digital images, evaluated its robustness against attacks, and compared it with other methods. To assess the quality of our algorithm, we used the metrics typically employed in this kind of application (MSE, PSNR, SSIM, MSSIM, NCC, and BER). For the original image, the watermarked image, the original watermark (LOGO), and the extracted watermark, we used the metrics that indicate whether an image has suffered visual alterations or whether two images are equal; for the MetaData, we calculated the BER to measure how many bits were modified in the recovered watermark. The metrics' values in Table 2 demonstrate that the watermarked images had no visual modifications and that the extracted watermarks (LOGO and MetaData) were identical to the originals. Furthermore, we presented the results for six representative images (Table 3). In all cases, the extracted LOGO and MetaData were equal to the originals, and the watermarked image had no visual modifications, although the worst MSE was obtained with the Cameraman image (MSE = 9.9479). Therefore, this algorithm guarantees imperceptibility and perfect extraction.
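The BER used for the MetaData is simply the fraction of flipped bits; a minimal sketch (with illustrative bit strings, not the actual MetaData payload):

```python
def ber(sent, received):
    """Bit Error Rate: fraction of bits that differ between equal-length bit lists."""
    assert len(sent) == len(received)
    errors = sum(s != r for s, r in zip(sent, received))
    return errors / len(sent)

meta_bits     = [1, 0, 1, 1, 0, 0, 1, 0]
recovered_ok  = list(meta_bits)            # perfect extraction
recovered_bad = [1, 0, 0, 1, 0, 0, 1, 1]   # two flipped bits

print(ber(meta_bits, recovered_ok))   # 0.0: lossless recovery of the MetaData
print(ber(meta_bits, recovered_bad))  # 0.25
```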
To evaluate the robustness of the algorithm, we tested it against the attacks most common in watermarking applications; in total, we applied 11 attacks (common processing and geometric operations). From Table 4, we can see that, for four attacks (Gaussian noise, SP noise, scaling, and contrast enhancement), the watermark could be recovered perfectly without errors, while for the remaining attacks, the metrics' values indicated that the extracted watermark could differ slightly from the original. This difference did not prevent identification of the LOGO; however, the modified bits in the MetaData did change its meaning. Even so, if one of the two watermarks remains clear, the method can be validated. The worst cases for recovering the watermark were the cropping and rotation attacks.
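Two of the noise attacks in Table 4 can be reproduced with a short sketch (a synthetic test image and hypothetical parameter values, not the exact settings of our experiments):

```python
import numpy as np

rng = np.random.default_rng(1)

def gaussian_noise(img: np.ndarray, sigma: float) -> np.ndarray:
    """Additive Gaussian noise attack, clipped back to the 8-bit range."""
    noisy = img.astype(np.float64) + rng.normal(0.0, sigma, img.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)

def salt_pepper(img: np.ndarray, density: float) -> np.ndarray:
    """SP noise: roughly a fraction `density` of pixels forced to 0 or 255."""
    out = img.copy()
    mask = rng.random(img.shape)
    out[mask < density / 2] = 0            # pepper
    out[mask > 1 - density / 2] = 255      # salt
    return out

img = np.full((64, 64), 128, dtype=np.uint8)   # flat synthetic test image
attacked_g = gaussian_noise(img, sigma=10.0)
attacked_sp = salt_pepper(img, density=0.1)
```

Running the extraction stage on such attacked images is how the metric values reported in Table 4 were obtained.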
Finally, in comparison with other similar algorithms, our proposal presented equal or higher values for all metrics (Table 6). It is important to note that comparing with other proposals was difficult because, in some cases, we used stronger attacks; therefore, we presented the outcomes of the algorithms that employed parameters identical to ours (Table 8, Table 9, Table 10, Table 11, Table 12, Table 13 and Table 14), where it is clear that our method had better robustness and watermark capacity. Another difficulty was comparing with methods evaluated on different images, since this depended on the results published in each work; from the state-of-the-art review, we therefore selected those works that reported tests on images other than Lena.

7. Conclusions and Future Work

We presented a robust and invisible hybrid watermarking algorithm for digital images in the transform domain. We proposed a combination of the HT, DCT, and SVD to increase the robustness and imperceptibility of the watermarked images. With our proposal, it is possible to use as watermarks both digital image information (LOGO) and information about the owner (MetaData) and to insert them at the same time. The algorithms reported in the state-of-the-art use only a digital image or only owner information as the watermark, and their robustness benefits from the watermark carrying less information. We therefore integrated different mathematical tools to insert two different watermarks without compromising imperceptibility or robustness. In addition, we included an encryption process for greater security, whose thorough performance analysis is left for future work.
The tests and results demonstrated that our technique is robust to the majority of the attacks applied. In some cases, the attack parameters we used were stronger than those employed by other proposals. Table 4 lists each attack and its parameters, together with the value of each metric obtained after applying the algorithm. The results showed that, with Gaussian and SP noise, the scaling attack, and contrast enhancement, our proposal performed excellently: in all cases, the extracted watermark had no errors, and the watermarked image presented no visual modifications. The worst results were obtained with the rotation and cropping attacks, for which it was not possible to extract the watermark in some cases.
In terms of the comparison with other proposals, as explained in Section 5.6, our results were better in all cases (Table 6). It is important to note that, although some papers [6,29,32] reported PSNR values above 60 dB, this factor alone does not ensure robustness. In our case, all metrics showed that our method is robust, secure, and highly imperceptible, making it suitable for effective copyright protection.
As future work, we believe it is necessary to improve the algorithm against the rotation and cropping attacks, since these were the only attacks it did not overcome. In addition, we will carry out a thorough analysis of the JST combined with the ECA for the encryption of the watermark image, and we will explore a combined watermarking/encryption approach to insert information into a host image and encrypt it in the frequency domain.

Author Contributions

Conceptualization, S.L.G.-C., E.M.-A., J.B. and A.R.-A.; methodology, S.L.G.-C., E.M.-A., J.B. and A.R.-A.; software, S.L.G.-C., E.M.-A. and A.R.-A.; validation, S.L.G.-C., E.M.-A., J.B. and A.R.-A.; formal analysis, S.L.G.-C., E.M.-A. and J.B.; investigation, S.L.G.-C., E.M.-A., J.B. and A.R.-A.; resources, S.L.G.-C., E.M.-A. and J.B.; data curation, S.L.G.-C. and E.M.-A.; writing—original draft preparation, S.L.G.-C., E.M.-A., J.B. and A.R.-A.; writing—review and editing, S.L.G.-C., E.M.-A. and J.B.; visualization, S.L.G.-C. and E.M.-A.; supervision, S.L.G.-C., E.M.-A. and J.B.; project administration, S.L.G.-C. and E.M.-A. All authors have read and agreed to the published version of the manuscript.

Funding

The APC was funded by Instituto Politécnico Nacional and by Universidad Panamericana through the Institutional Program “Fondo Open Access” of the Vicerrectoría General de Investigación.

Data Availability Statement

The complete dataset is publicly available on our website: https://sites.google.com/up.edu.mx/invico-en/resources/image-dataset-watermarking.

Acknowledgments

Sandra L. Gomez-Coronel thanks the financial support from Instituto Politecnico Nacional IPN (COFFA, EDI, and SIP). Ernesto Moya-Albor, Jorge Brieva, and Andrés Romero-Arellano thank the School of Engineering of the Universidad Panamericana for all the support in this work.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AES: Advanced Encryption Standard
AT: Arnold Transform
B_error: Bit error
BER: Bit Error Rate
CAE: Convolutional Autoencoder
CNN: Convolutional Neural Network
DCT: Discrete Cosine Transform
DWT: Discrete Wavelet Transform
ECA: Elementary Cellular Automaton
ECC: Elliptic Curve Cryptography
FDCuT: Fast Discrete Curvelet Transform
FRFT: Fractional Fourier Transform
HEVC: High-Efficiency Video Coding
HT: Hermite Transform
HVS: Human Vision System
IDCT: Inverse Discrete Cosine Transform
JST: Jigsaw Transform
LWT: Lifting Wavelet Transform
MRI: Magnetic Resonance Imaging
MSE: Mean-Squared Error
MSF: Multiple Scaling Factors
MSSIM: Mean Structural Similarity Index
NBP: Normalized Block Processing
NCC: Normalized Cross-Correlation
SP: Salt and Pepper
PSNR: Peak Signal-to-Noise Ratio
PSO: Particle Swarm Optimization
QIM: Quantization Index Modulation
RIDWT: Redistributed Invariant Wavelet Transform
RSA: Rivest–Shamir–Adleman
SPIHT: Set Partitioning in Hierarchical Trees
SSIM: Structural Similarity Index
SVD: Singular-Value Decomposition

References

  1. Mokashi, B.; Bhat, V.; Pujari, J.; Roopashree, S.; Mahesh, T.; Alex, D. Efficient Hybrid Blind Watermarking in DWT-DCT-SVD with Dual Biometric Features for Images. Contrast Media Mol. Imaging 2022, 2022, 2918126. [Google Scholar] [CrossRef] [PubMed]
  2. Dharmika, B.; Rupa, C.; Haritha, D.; Vineetha, Y. Privacy Preservation of Medical Health Records using Symmetric Block Cipher and Frequency Domain Watermarking Techniques. In Proceedings of the 2022 International Conference on Inventive Computation Technologies (ICICT), Lalitpur, Nepal, 20–22 July 2022; pp. 96–103. [Google Scholar] [CrossRef]
  3. Sharma, S.; Zou, J.; Fang, G. A Novel Multipurpose Watermarking Scheme Capable of Protecting and Authenticating Images with Tamper Detection and Localisation Abilities. IEEE Access 2022, 10, 85677–85700. [Google Scholar] [CrossRef]
  4. Nguyen, T.S. Fragile watermarking for image authentication based on DWT-SVD-DCT techniques. Multimed. Tools Appl. 2021, 80, 25107–25119. [Google Scholar] [CrossRef]
  5. Li, Y.M.; Wei, D.; Zhang, L. Double-encrypted watermarking algorithm based on cosine transform and fractional Fourier transform in invariant wavelet domain. Inf. Sci. 2021, 551, 205–227. [Google Scholar] [CrossRef]
  6. Alam, S.; Ahmad, T.; Doja, M. A Novel Hybrid Watermarking scheme with Image authentication based on frequency domain, 2- Level SVD using chaotic map. EAI Endorsed Trans. Energy Web 2021, 8, e7. [Google Scholar] [CrossRef]
  7. Sharma, S.; Chandrasekaran, V. A robust hybrid digital watermarking technique against a powerful CNN-based adversarial attack. Multimed. Tools Appl. 2020, 79, 32769–32790. [Google Scholar] [CrossRef]
  8. Garg, P.; Kishore, R. Performance comparison of various watermarking techniques. Multimed. Tools Appl. 2020, 79, 25921–25967. [Google Scholar] [CrossRef]
  9. Zheng, P.; Zhang, Y. A robust image watermarking scheme in hybrid transform domains resisting to rotation attacks. Multimed. Tools Appl. 2020, 79, 18343–18365. [Google Scholar] [CrossRef]
  10. Kang, X.; Chen, Y.; Zhao, F.; Lin, G. Multi-dimensional particle swarm optimization for robust blind image watermarking using intertwining logistic map and hybrid domain. Soft Comput. 2020, 24, 10561–10584. [Google Scholar] [CrossRef]
  11. Taha, D.; Taha, T.; Dabagh, N. A comparison between the performance of DWT and LWT in image watermarking. Bull. Electr. Eng. Inform. 2020, 9, 1005–1014. [Google Scholar] [CrossRef]
  12. Thanki, R.; Kothari, A. Hybrid domain watermarking technique for copyright protection of images using speech watermarks. J. Ambient. Intell. Humaniz. Comput. 2020, 11, 1835–1857. [Google Scholar] [CrossRef]
  13. Kumar, C.; Singh, A.; Kumar, P. Improved wavelet-based image watermarking through SPIHT. Multimed. Tools Appl. 2020, 79, 11069–11082. [Google Scholar] [CrossRef]
  14. Zheng, Q.; Liu, N.; Cao, B.; Wang, F.; Yang, Y. Zero-Watermarking Algorithm in Transform Domain Based on RGB Channel and Voting Strategy. J. Inf. Process. Syst. 2020, 16, 1391–1406. [Google Scholar] [CrossRef]
  15. Yadav, N.; Goel, N. An effective image-Adaptive hybrid watermarking scheme with transform coefficients. Int. J. Image Graph. 2020, 20, 2050002. [Google Scholar] [CrossRef]
  16. Takore, T.; Rajesh Kumar, P.; Lavanya Devi, G. A new robust and imperceptible image watermarking scheme based on hybrid transform and PSO. Int. J. Intell. Syst. Appl. 2018, 10, 50–63. [Google Scholar] [CrossRef]
  17. Kang, X.B.; Zhao, F.; Lin, G.F.; Chen, Y.J. A novel hybrid of DCT and SVD in DWT domain for robust and invisible blind image watermarking with optimal embedding strength. Multimed. Tools Appl. 2018, 77, 13197–13224. [Google Scholar] [CrossRef]
  18. Sridhar, P. A robust digital image watermarking in hybrid frequency domain. Int. J. Eng. Technol. (UAE) 2018, 7, 243–248. [Google Scholar] [CrossRef]
  19. Madhavi, K.; Rajesh, G.; Sowmya Priya, K. A secure and robust digital image watermarking techniques. Int. J. Innov. Technol. Explor. Eng. 2019, 8, 2758–2761. [Google Scholar] [CrossRef]
  20. Gupta, R.; Mishra, A.; Jain, S. A semi-blind HVS based image watermarking scheme using elliptic curve cryptography. Multimed. Tools Appl. 2018, 77, 19235–19260. [Google Scholar] [CrossRef]
  21. Rosales-Roldan, L.; Chao, J.; Nakano-Miyatake, M.; Perez-Meana, H. Color image ownership protection based on spectral domain watermarking using QR codes and QIM. Multimed. Tools Appl. 2018, 77, 16031–16052. [Google Scholar] [CrossRef]
  22. El-Shafai, W.; El-Rabaie, S.; El-Halawany, M.; Abd El-Samie, F. Efficient hybrid watermarking schemes for robust and secure 3D-MVC communication. Int. J. Commun. Syst. 2018, 31. [Google Scholar] [CrossRef]
  23. El-Shafai, W.; El-Rabaie, E.S.; El-Halawany, M.; El-Samie, F. Efficient multi-level security for robust 3D color-plus-depth HEVC. Multimed. Tools Appl. 2018, 77, 30911–30937. [Google Scholar] [CrossRef]
  24. Xu, H.; Kang, X.; Wang, Y.; Wang, Y. Exploring robust and blind watermarking approach of colour images in DWT-DCT-SVD domain for copyright protection. Int. J. Electron. Secur. Digit. Forensics 2018, 10, 79–96. [Google Scholar] [CrossRef]
  25. Ravi Kumar, C.; Surya Prakasa Rao, R.; Rajesh Kumar, P. GA based lossless and robust image watermarking using NBP-IWT-DCT-SVD transforms. J. Adv. Res. Dyn. Control. Syst. 2018, 10, 335–342. [Google Scholar]
  26. Magdy, M.; Hosny, K.M.; Ghali, N.I.; Ghoniemy, S. Security of medical images for telemedicine: A systematic review. Multimed. Tools Appl. 2022, 81, 25101–25145. [Google Scholar] [CrossRef] [PubMed]
  27. Kahlessenane, F.; Khaldi, A.; Kafi, R.; Euschi, S. A DWT based watermarking approach for medical image protection. J. Ambient. Intell. Humaniz. Comput. 2021, 12, 2931–2938. [Google Scholar] [CrossRef]
  28. Dixit, R.; Nandal, A.; Dhaka, A.; Kuriakose, Y.; Agarwal, V. A DCT Fractional Bit Replacement Based Dual Watermarking Algorithm for Image Authentication. Recent Adv. Comput. Sci. Commun. 2021, 14, 2899–2919. [Google Scholar] [CrossRef]
  29. Dixit, R.; Nandal, A.; Dhaka, A.; Agarwal, V.; Kuriakose, Y. LWT-DCT based Image Watermarking Scheme using Normalized SVD. Recent Adv. Comput. Sci. Commun. 2021, 14, 2976–2991. [Google Scholar] [CrossRef]
  30. Gupta, S.; Saluja, K.; Solanki, V.; Kaur, K.; Singla, P.; Shahid, M. Efficient methods for digital image watermarking and information embedding. Meas. Sens. 2022, 24, 100520. [Google Scholar] [CrossRef]
  31. Begum, M.; Ferdush, J.; Uddin, M. A Hybrid robust watermarking system based on discrete cosine transform, discrete wavelet transform, and singular value decomposition. J. King Saud Univ. Comput. Inf. Sci. 2022, 34, 5856–5867. [Google Scholar] [CrossRef]
  32. Rajani, D.; Kumar, P. An Optimized Hybrid Algorithm for Blind Watermarking Scheme Using Singular Value Decomposition in RDWT-DCT Domain. J. Appl. Secur. Res. 2022, 17, 103–122. [Google Scholar] [CrossRef]
  33. Wu, J.Y.; Huang, W.L.; Wen, R.H.; Gong, L.H. Hybrid watermarking scheme based on singular value decomposition ghost imaging. Opt. Appl. 2020, 50, 633–647. [Google Scholar] [CrossRef]
  34. Wu, J.Y.; Huang, W.L.; Xia-Hou, W.M.; Zou, W.P.; Gong, L.H. Imperceptible digital watermarking scheme combining 4-level discrete wavelet transform with singular value decomposition. Multimed. Tools Appl. 2020, 79, 22727–22747. [Google Scholar] [CrossRef]
  35. Naffouti, S.E.; Kricha, A.; Sakly, A. A sophisticated and provably grayscale image watermarking system using DWT-SVD domain. In The Visual Computer; Springer: Berlin/Heidelberg, Germany, 2022; pp. 1–21. [Google Scholar]
  36. University of Southern California. The USC-SIPI Image Database. Available online: http://sipi.usc.edu/database (accessed on 27 January 2023).
  37. University of Waterloo. The Waterloo Fractal Coding and Analysis Group. 2007. Available online: https://links.uwaterloo.ca/Repository/ (accessed on 23 January 2023).
  38. Fabien Petitcolas. The Information Hiding Homepage. 2023. Available online: https://www.petitcolas.net/watermarking/stirmark/ (accessed on 23 January 2023).
  39. Computer Vision Group, University of Granada. Dataset of Standard 512X512 Grayscale Test Images. 2002. Available online: https://ccia.ugr.es/cvg/CG/base.htm (accessed on 23 January 2023).
  40. Gonzalez, R.C.; Wood, R.E. Image Databases: “Standard” Test Images. Available online: https://www.imageprocessingplace.com/root_files_V3/image_databases.htm (accessed on 23 January 2023).
  41. Moya-Albor, E.; Romero-Arellano, A.; Brieva, J.; Gomez-Coronel, S.L. Color Image Encryption Algorithm Based on a Chaotic Model Using the Modular Discrete Derivative and Langton’s Ant. Mathematics 2023, 11, 2396. [Google Scholar] [CrossRef]
  42. Sambas, A.; Vaidyanathan, S.; Zhang, X.; Koyuncu, I.; Bonny, T.; Tuna, M.; Alcin, M.; Zhang, S.; Sulaiman, I.M.; Awwal, A.M.; et al. A Novel 3D Chaotic System with Line Equilibrium: Multistability, Integral Sliding Mode Control, Electronic Circuit, FPGA Implementation and Its Image Encryption. IEEE Access 2022, 10, 68057–68074. [Google Scholar] [CrossRef]
  43. Hennelly, B.M.; Sheridan, J.T. Image encryption techniques based on the fractional Fourier transform. In Proceedings of the Optical Information Systems; Javidi, B., Psaltis, D., Eds.; International Society for Optics and Photonics, SPIE: Bellingham, WA, USA, 2003; Volume 5202, pp. 76–87. [Google Scholar] [CrossRef]
  44. Wolfram, S. Statistical mechanics of cellular automata. Rev. Mod. Phys. 1983, 55, 601–644. [Google Scholar] [CrossRef]
  45. Chen, Z. Singular Value Decomposition and Its Applications in Image Processing. In Proceedings of the 2018 International Conference on Mathematics and Statistics, Porto, Portugal, 15–17 July 2018; ICoMS 2018. Association for Computing Machinery: New York, NY, USA, 2018; pp. 16–22. [Google Scholar] [CrossRef]
  46. Chang, C.C.; Hu, Y.S.; Lin, C.C. A digital watermarking scheme based on singular value decomposition. In Proceedings of the Combinatorics, Algorithms, Probabilistic and Experimental Methodologies: First International Symposium, ESCAPE 2007, Hangzhou, China, 7–9 April 2007; pp. 82–93. [Google Scholar] [CrossRef]
  47. Moon, T.K. Error Correction Coding: Mathematical Methods and Algorithms; John Wiley & Sons, Inc.: Hoboken, NJ, USA, 2005. [Google Scholar]
  48. Blinn, J. What’s that deal with the DCT? IEEE Comput. Graph. Appl. 1993, 13, 78–83. [Google Scholar] [CrossRef]
  49. Deshlahra, A.; Shirnewar, G.; Sahoo, A. A comparative study of DCT, DWT & hybrid (DCT-DWT) transform. In Proceedings of the International Conference on Emerging Trends in Computer and Image Processing (ICETCIP), IRD India, Bangalore, India, 24 February 2013; Available online: http://dspace.nitrkl.ac.in/dspace/handle/2080/1879 (accessed on 5 May 2023).
  50. Khayam, S.A. The discrete cosine transform (DCT): Theory and application. Mich. State Univ. 2003, 114, 31. [Google Scholar]
  51. Wen, H.; Ma, L.; Liu, L.; Huang, Y.; Chen, Z.; Li, R.; Liu, Z.; Lin, W.; Wu, J.; Li, Y.; et al. High-quality restoration image encryption using DCT frequency-domain compression coding and chaos. Sci. Rep. 2022, 12, 16523. [Google Scholar] [CrossRef]
  52. Begum, M.; Uddin, M.S. Digital Image Watermarking Techniques: A Review. Information 2020, 11, 110. [Google Scholar] [CrossRef]
  53. Katzenbeisser, S.; Petitcolas, F.A. Information Hiding Techniques for Steganography and Digital Watermarking; Artech House, Inc.: Norwood, MA, USA, 2000. [Google Scholar]
  54. Luhach, A.K.; Jat, D.S.; Hawari, K.B.G.; Gao, X.Z.; Lingras, P. Advanced Informatics for Computing Research: Third International Conference, ICAICR 2019, Shimla, India, 15–16 June 2019, Revised Selected Papers, Part I; Springer: Berlin/Heidelberg, Germany, 2019; Volume 1075. [Google Scholar]
  55. Ernawan, F.; Liew, S.C.; Mustaffa, Z.; Moorthy, K. A blind multiple watermarks based on human visual characteristics. Int. J. Electr. Comput. Eng. 2018, 8, 2578–2587. [Google Scholar] [CrossRef]
  56. Dowling, J.; Planitz, B.M.; Maeder, A.J.; Du, J.; Pham, B.; Boyd, C.; Chen, S.; Bradley, A.P.; Crozier, S. A comparison of DCT and DWT block based watermarking on medical image quality. In Proceedings of the Digital Watermarking: 6th International Workshop, IWDW 2007, Guangzhou, China, 3–5 December 2007; Springer: Berlin/Heidelberg, Germany, 2008; pp. 454–466. [Google Scholar]
  57. Horé, A.; Ziou, D. Image Quality Metrics: PSNR vs. SSIM. In Proceedings of the 2010 20th International Conference on Pattern Recognition, Istanbul, Turkey, 23–26 August 2010; pp. 2366–2369. [Google Scholar] [CrossRef]
  58. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Complete image dataset; 49 grayscale images of 512 × 512 px.
Figure 2. (a) Watermark image LOGO. (b) Image MetaData of the Barbara image in plaintext.
Figure 3. Examples of the JST using a grayscale image of 512 × 512 px. (a) Barbara image. (b) JST result using blocks of 64 × 64 ( M = 64 ). (c) JST result using blocks of 32 × 32 ( M = 256 ). (d) JST result using blocks of 8 × 8 ( M = 4096 ).
Figure 4. Elementary cellular automaton with Rule 105. (a) The new state of a cell is based on each possible neighborhood of the cell in the same column in its parent row. (b) ECA is applied on a 50 × 51 grid using 34 iterations, with the first row starting with all cells in the OFF state, except for the cell at the center.
Figure 5. Variation of the ECA with Rule 105. (a) Increment on the value of a cell based on each possible neighborhood of the cell in the same column in its parent row. (b) Our modified ECA was applied on a 128 × 128 gray image using 80 iterations, with k = 157 .
Figure 6. Image approximation using a different number of singular values (r) of SVD.
Figure 7. Zoom of the image approximation using a different number of singular values (r) of SVD.
Figure 8. Discrete cosine transform. (a) Original Lena image; (b) DCT of Lena image.
Figure 9. Hermite transform coefficients ( L_{o,p−o}(x, y) with N = 4 ) of the Lena image and the spatial order representation, a 5 × 5 grid of (o, p−o) pairs arranged row by row: (0,0), (1,0), (2,0), (3,0), (4,0); (0,1), (1,1), (2,1), (3,1), (4,1); (0,2), (1,2), (2,2), (3,2), (4,2); (0,3), (1,3), (2,3), (3,3), (4,3); (0,4), (1,4), (2,4), (3,4), (4,4).
Figure 10. Watermarking insertion process.
Figure 11. Embedding regions for inserting a watermark using the HVS values. (a) Original grayscale image. (b) The suitable regions for inserting a watermark in descending order: best locations (light color) and worst ones (dark color).
Figure 12. Sensitivity analysis of the scaling factor. (a) NCC (solid red line) and SSIM (dashed blue line) values of the watermarked image versus α . (b) Enlargement of the left-upper rectangle region (black dotted line) of (a) showing the defined limit value of α .
Figure 13. Results of original images and their watermarked images without attack. (a,c,e,g,i,k) correspond to the original images Baboon, Barbara, Boat, Cameraman, Lena, and Peppers, respectively. (b,d,f,h,j,l) correspond to the watermarked images.
Figure 14. Median filter (9): (a) Filtered Lena image. (b) Recovered watermark (LOGO). (c) Recovered MetaData. SP noise (0.5): (d) Noisy Lena image. (e) Recovered watermark (LOGO). (f) Recovered MetaData. JPEG compression (0): (g) Compressed Lena image. (h) Recovered watermark (LOGO). (i) Recovered MetaData.
Figure 15. Histogram equalization (128): (a) Equalized Lena image. (b) Recovered watermark (LOGO). (c) Recovered MetaData. Histogram equalization (4): (d) Equalized Lena image. (e) Recovered watermark (LOGO). (f) Recovered MetaData. Translation (100): (g) Translated Lena image. (h) Recovered watermark (LOGO). (i) Recovered MetaData.
Figure 16. Rotation (90): (a) Rotated Lena image. (b) Recovered watermark (LOGO). (c) Recovered MetaData. Rotation (45): (d) Rotated Lena image. (e) Recovered watermark (LOGO). (f) Recovered MetaData. Cropping (30): (g) Cropped Lena image. (h) Recovered watermark (LOGO). (i) Recovered MetaData. Cropping (45): (j) Cropped Lena image. (k) Recovered watermark (LOGO). (l) Recovered MetaData.
Table 1. Examples showing the relation between the number of singular values (r) used to reconstruct the image and the correlation coefficient value (R) obtained.

| r | R |
|---|---|
| 8 | 0.903 |
| 32 | 0.981 |
| 64 | 0.994 |
| 128 | 0.999 |
| 256 | 1 |
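The relation in Table 1 can be reproduced with a truncated (rank-r) SVD. The sketch below is illustrative, not the paper's code: the synthetic test image, its size, and the resulting R values are assumptions, but the mechanism — keeping only the first r singular triplets and correlating the reconstruction with the original — matches what the table reports.

```python
import numpy as np

def rank_r_approximation(image, r):
    """Reconstruct an image from its first r singular values (truncated SVD)."""
    U, S, Vt = np.linalg.svd(image.astype(float), full_matrices=False)
    return U[:, :r] @ np.diag(S[:r]) @ Vt[:r, :]

def correlation_coefficient(a, b):
    """Pearson correlation coefficient R between two images."""
    return float(np.corrcoef(a.ravel().astype(float), b.ravel().astype(float))[0, 1])

# Synthetic 256 x 256 test image: a smooth gradient plus mild noise.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 256)
img = np.outer(x, x) * 255.0 + rng.normal(0.0, 5.0, (256, 256))

# R grows toward 1 as more singular values are kept, as in Table 1.
for r in (8, 32, 64, 128, 256):
    print(r, correlation_coefficient(img, rank_r_approximation(img, r)))
```

With all 256 singular values, the reconstruction is exact up to floating-point error, so R reaches 1, consistent with the last row of the table.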
Table 2. Metrics' averages using 49 grayscale images. The first five metric columns evaluate the watermarked images (insertion); the remaining six evaluate the extracted LOGO and MetaData.

| Number of Images | MSE | PSNR (dB) | NCC | SSIM | MSSIM | MSE | PSNR (dB) | NCC | SSIM | MSSIM | BER |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 49 | 7.1710 | 40.2051 | 0.9987 | 0.9999 | 0.9994 | 0 | ≫60 | 1 | 1 | 1 | 0 |
Table 3. Metrics' averages using only six representative images. The first five metric columns evaluate the watermarked images (insertion); the remaining six evaluate the extracted LOGO (MSE, PSNR, NCC, SSIM, MSSIM) and MetaData (BER).

| Image | MSE | PSNR (dB) | NCC | SSIM | MSSIM | MSE | PSNR (dB) | NCC | SSIM | MSSIM | BER |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Baboon | 6.1477 | 40.2436 | 0.9982 | 0.9999 | 0.9999 | 0 | ≫60 | 1 | 1 | 1 | 0 |
| Barbara | 7.7169 | 39.2563 | 0.9987 | 0.9999 | 0.9998 | 0 | ≫60 | 1 | 1 | 1 | 0 |
| Boat | 8.2219 | 38.9810 | 0.9981 | 0.9999 | 0.9998 | 0 | ≫60 | 1 | 1 | 1 | 0 |
| Cameraman | 9.9479 | 38.1534 | 0.9987 | 0.9998 | 0.9985 | 0 | ≫60 | 1 | 1 | 1 | 0 |
| Lena | 7.9506 | 39.1267 | 0.9982 | 0.9999 | 0.9994 | 0 | ≫60 | 1 | 1 | 1 | 0 |
| Peppers | 5.8028 | 40.4943 | 0.9990 | 0.9999 | 0.9998 | 0 | ≫60 | 1 | 1 | 1 | 0 |
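The metrics reported in Tables 2 and 3 follow standard definitions. As a minimal sketch (the paper does not list its exact formulas, so the NCC normalization below is an assumption; SSIM/MSSIM are omitted since they need a full implementation):

```python
import numpy as np

def mse(original, distorted):
    """Mean Squared Error between two same-sized images."""
    return float(np.mean((original.astype(float) - distorted.astype(float)) ** 2))

def psnr(original, distorted, peak=255.0):
    """PSNR in dB; infinite when the images are identical (MSE = 0)."""
    e = mse(original, distorted)
    return float("inf") if e == 0 else 10.0 * np.log10(peak ** 2 / e)

def ncc(original, distorted):
    """Normalized cross-correlation (one common normalization, assumed here)."""
    a = original.astype(float).ravel()
    b = distorted.astype(float).ravel()
    return float(np.sum(a * b) / np.sqrt(np.sum(a ** 2) * np.sum(b ** 2)))

def ber(bits_sent, bits_received):
    """Bit Error Rate between two equal-length binary sequences (e.g., MetaData)."""
    s = np.asarray(bits_sent)
    r = np.asarray(bits_received)
    return float(np.mean(s != r))
```

For a perfectly extracted LOGO, MSE is 0 and PSNR is unbounded, which is why the extraction columns report 0 and ≫60 dB, respectively.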
Table 4. Metrics' averages using 49 grayscale images for all attacks with parameter variation, showing the metrics of the LOGO (MSE, PSNR, NCC, SSIM, MSSIM) and MetaData (B_error, BER) extracted.

| Attack/(Parameter) | Value Range (Step) | MSE | PSNR (dB) | NCC | SSIM | MSSIM | B_error | BER |
|---|---|---|---|---|---|---|---|---|
| Gaussian Filter/(N × N) | 2 to 11 (1) | 0.0398 | 59.8776 | 1 | 1 | 1 | 0.0061 | 0 |
| Median Filter/(N × N) | 2 to 9 (1) | 1.8910 | 58.8276 | 0.9999 | 0.9991 | 0.9985 | 0.5536 | 0.0001 |
| Gaussian Noise/(σ²) | 0.05 to 0.5 (0.05) | 0 | ≫60 | 1 | 1 | 1 | 0 | 0 |
| Salt and Pepper Noise/(density) | 0.1 to 1 (0.1) | 0 | ≫60 | 1 | 1 | 1 | 0 | 0 |
| Scaling/(scale factor) | 0.25 to 4 (0.25) | 0 | ≫60 | 1 | 1 | 1 | 0 | 0 |
| Translation/(pixels) | 10 to 100 (10) | 157.8250 | 54.3528 | 0.9934 | 0.9664 | 0.9589 | 16.2102 | 0.0040 |
| Rotation/(angle, °) | 1 to 180 (15) | 4760.5956 | 20.6053 | 0.8262 | 0.5455 | 0.5376 | 185.4662 | 0.0462 |
| Cropping/(p%) | 0 to 95 (5) | 23,540.7884 | 8.4022 | 0.4894 | 0.2696 | 0.2636 | 916.1224 | 0.2280 |
| Contrast Enhancement/(f) | 0.01 to 0.45 (0.04) | 0 | ≫60 | 1 | 1 | 1 | 0 | 0 |
| JPEG Compression/(quality factor) | 100 to 0 (10) | 0.4584 | 59.6741 | 1.0000 | 0.9999 | 0.9998 | 19.3859 | 0.0048 |
| Histogram Equalization/(discrete levels) | 2^n, with n from 2 to 8 | 615.0076 | 43.5160 | 0.9765 | 0.9019 | 0.8826 | 36.6210 | 0.0091 |
Table 5. Metrics obtained from all attacks, with parameter variation, after applying the watermarking algorithm to the Lena image, showing the metrics of the LOGO (MSE, PSNR, NCC, SSIM, MSSIM) and MetaData (B_error, BER) extracted.

| Attack/(Parameter) | Value | MSE | PSNR (dB) | NCC | SSIM | MSSIM | B_error | BER |
|---|---|---|---|---|---|---|---|---|
| Gaussian Filter/(N × N) | 2 to 11 (1) | 0 | ≫60 | 1 | 1 | 1 | 0 | 0 |
| Median Filter/(N × N) | 2 to 9 (1) | 0 | ≫60 | 1 | 1 | 1 | 0 | 0 |
| Gaussian Noise/(σ²) | 0.05 to 0.5 (0.05) | 0 | ≫60 | 1 | 1 | 1 | 0 | 0 |
| Salt and Pepper Noise/(density) | 0.1 to 1 (0.1) | 0 | ≫60 | 1 | 1 | 1 | 0 | 0 |
| Scaling/(scale factor) | 0.25 to 4 (0.25) | 0 | ≫60 | 1 | 1 | 1 | 0 | 0 |
| Translation/(pixels) | 10 to 100 (10) | 0 | ≫60 | 1 | 1 | 1 | 0 | 0 |
| Rotation/(angle, °) | 1 | 0 | ≫60 | 1 | 1 | 1 | 0 | 0 |
| | 15 | 4564.7550 | 11.5366 | 0.8201 | 0.4416 | 0.4326 | 147 | 0.0366 |
| | 30 | 7601.4225 | 9.3219 | 0.7321 | 0.3890 | 0.3843 | 261 | 0.0650 |
| | 45 | 8908.4250 | 8.6328 | 0.6992 | 0.3739 | 0.3699 | 327 | 0.0814 |
| | 60 | 7724.9700 | 9.2518 | 0.7289 | 0.3909 | 0.3863 | 299 | 0.0744 |
| | 75 | 4493.2275 | 11.6052 | 0.8224 | 0.4436 | 0.4342 | 172 | 0.0428 |
| | 90 | 0 | ≫60 | 1 | 1 | 1 | 0 | 0 |
| | 105 | 4564.7550 | 11.5366 | 0.8201 | 0.4416 | 0.4326 | 147 | 0.0366 |
| | 120 | 7601.4225 | 9.3219 | 0.7321 | 0.3890 | 0.3843 | 261 | 0.0650 |
| | 135 | 8908.4250 | 8.6328 | 0.6992 | 0.3739 | 0.3699 | 327 | 0.0814 |
| | 150 | 7724.9700 | 9.2518 | 0.7289 | 0.3909 | 0.3863 | 299 | 0.0744 |
| | 165 | 4493.2275 | 11.6052 | 0.8224 | 0.4436 | 0.4342 | 172 | 0.0428 |
| | 180 | 0 | ≫60 | 1 | 1 | 1 | 0 | 0 |
| Cropping/(p%) | 5 | 1703.6550 | 15.8170 | 0.9239 | 0.5595 | 0.5079 | 28 | 0.0070 |
| | 10 | 3290.2650 | 12.9585 | 0.8633 | 0.4735 | 0.4589 | 62 | 0.0154 |
| | 15 | 5806.7325 | 10.4915 | 0.7819 | 0.4225 | 0.4165 | 109 | 0.0271 |
| | 20 | 8973.4500 | 8.6012 | 0.6977 | 0.3772 | 0.3734 | 226 | 0.0562 |
| | 25 | 11,724.0075 | 7.4400 | 0.6361 | 0.3456 | 0.3425 | 342 | 0.0851 |
| | 30 | 14,741.1675 | 6.4455 | 0.5775 | 0.3133 | 0.3107 | 475 | 0.1182 |
| | 35 | 18,011.9250 | 5.5752 | 0.5216 | 0.2777 | 0.2754 | 585 | 0.1456 |
| | 40 | 20,840.5125 | 4.9417 | 0.4781 | 0.2475 | 0.2453 | 701 | 0.1745 |
| | 45 | 23,974.7175 | 4.3333 | 0.4338 | 0.2166 | 0.2146 | 829 | 0.2063 |
| | 50 | 27,167.4450 | 3.7903 | 0.3919 | 0.1863 | 0.1844 | 967 | 0.2407 |
| | 55 | 29,989.5300 | 3.3611 | 0.3568 | 0.1589 | 0.1571 | 1082 | 0.2693 |
| | 60 | 33,370.8300 | 2.8971 | 0.3163 | 0.1278 | 0.1260 | 1239 | 0.3084 |
| | 65 | 35,991.3375 | 2.5688 | 0.2855 | 0.1042 | 0.1025 | 1348 | 0.3355 |
| | 70 | 37,766.5200 | 2.3597 | 0.2647 | 0.0892 | 0.0875 | 1420 | 0.3534 |
| | 75 | 40,159.4400 | 2.0929 | 0.2363 | 0.0707 | 0.0691 | 1518 | 0.3778 |
| | 80 | 42,727.9275 | 1.8237 | 0.2048 | 0.0473 | 0.0458 | 1637 | 0.4074 |
| | 85 | 44,685.1800 | 1.6292 | 0.1793 | 0.0341 | 0.0327 | 1716 | 0.4271 |
| | 90 | 47,214.6525 | 1.3900 | 0.1431 | 0.0188 | 0.0175 | 1811 | 0.4507 |
| | 95 | 49,327.9650 | 1.1999 | 0.1070 | 0.0097 | 0.0086 | 1891 | 0.4706 |
| Contrast Enhancement/(f) | 0.01 to 0.45 (0.04) | 0 | ≫60 | 1 | 1 | 1 | 0 | 0 |
| JPEG Compression/(quality factor) | 100 down to 10 (−10) | 0 | ≫60 | 1 | 1 | 1 | 0 | 0 |
| | 0 | 0 | ≫60 | 1 | 1 | 1 | 189 | 0.0470 |
| Histogram Equalization/(discrete levels) | 4 | 7822.5075 | 9.1973 | 0.7263 | 0.3872 | 0.3824 | 181 | 0.0450 |
| | 8 | 481.1850 | 21.3077 | 0.9772 | 0.7883 | 0.6868 | 15 | 0.0037 |
| | 16 | 19.5075 | 35.2288 | 0.9991 | 0.9975 | 0.9971 | 2 | 0.0005 |
| | 32 | 6.5025 | 40 | 0.9997 | 0.9994 | 0.9994 | 0 | 0 |
| | 64 | 0 | ≫60 | 1 | 1 | 1 | 0 | 0 |
| | 128 | 0 | ≫60 | 1 | 1 | 1 | 0 | 0 |
| | 256 | 0 | ≫60 | 1 | 1 | 1 | 0 | 0 |
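The attack battery behind Tables 4 and 5 can be approximated with off-the-shelf tools. This sketch uses SciPy (an assumption — the paper does not state its implementation), a synthetic image, and one common cropping convention (zeroing a top-left block covering p% of the pixels); window size, angle, and noise density are illustrative parameter choices.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(1)
img = rng.integers(0, 256, (64, 64)).astype(float)

# Median filter attack with an N x N window (here N = 3).
median_attacked = ndimage.median_filter(img, size=3)

# Rotation attack by a given angle in degrees; reshape=False keeps the frame size.
rotated = ndimage.rotate(img, angle=45, reshape=False, order=1)

def salt_and_pepper(image, density, rng):
    """Replace a 'density' fraction of pixels with 0 or 255 at random."""
    out = image.copy()
    mask = rng.random(image.shape) < density
    out[mask] = rng.choice([0.0, 255.0], size=int(mask.sum()))
    return out

sp_attacked = salt_and_pepper(img, 0.1, rng)

def crop_percent(image, p):
    """Zero out a top-left square block covering roughly p% of the pixels."""
    out = image.copy()
    rows = int(image.shape[0] * np.sqrt(p / 100.0))
    cols = int(image.shape[1] * np.sqrt(p / 100.0))
    out[:rows, :cols] = 0
    return out

cropped = crop_percent(img, 30)
```

Each attacked image would then be fed to the extraction stage and scored with the MSE/PSNR/NCC/BER metrics, which is how the per-attack rows of Table 5 are populated.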
Table 6. Comparison between different types of watermarking systems [1,5,6,9,13,17,29,31,32,33,34,35] with our algorithm using the Lena image. The best result for each parameter is indicated by the values highlighted in bold in each column.

| Method | Type of Watermark | PSNR (dB) | SSIM | NCC | MSE |
|---|---|---|---|---|---|
| Proposal | Logo (100 × 100) and MetaData | 60 | 1 | 1 | 0 |
| Mahbuba Begum et al. [31] | Grayscale Image | 57.6300 | 0.9984 | - | - |
| Rahul Dixit et al. [29] | Grayscale Image (256 × 256) | 291.4853 | 1 | 1 | - |
| D. Rajani et al. [32] | Logo | 73.7205 | 0.9884 | 0.9999 | 1.0827 × 10⁻⁵ |
| Bhargavi Mokashi et al. [1] | Signature (Biometric Image 256 × 256) | 39.4843 | 0.9964 | - | - |
| Yuanmin Li et al. [5] | Grayscale Image (64 × 64) | 41.7050 | - | 1 | - |
| Shahzad Alam et al. [6] | Copyright Logo (64 × 64) | 74.4037 | 1 | 1 | - |
| Jun-Yun Wu et al. [33] | Grayscale Image (32 × 32) | - | 0.9979 | 0.9956 | - |
| Peijia Zheng et al. [9] | Grayscale Image (64 × 64) | 53.9470 | - | 1 | - |
| Chandan Kumar et al. [13] | Grayscale Image (256 × 256) | 30 | 0.9938 | 0.9965 | - |
| Xiao-bing Kang et al. [17] | Binary Logo (32 × 32) | 40 | 0.9720 | - | - |
| Seif Eddine Naffouti et al. [35] | Logo (512 × 512) | 48.1308 | 1 | 1 | 1 |
| Jun-Yun Wu et al. [34] | Image (32 × 32) | 46.3805 | 0.9979 | 0.9956 | - |
Table 7. Comparison between different types of watermarking systems [1,5,9,13,17,33,34,35] with our algorithm using the Baboon, Barbara, Peppers, and Pirate images. The best results for each proposal are indicated by the values highlighted in bold in each column. Each cell lists PSNR (dB) / SSIM / NCC.

| Method | Baboon | Barbara | Peppers | Pirate |
|---|---|---|---|---|
| Proposal | 60 / 1 / 1 | 60 / 1 / 1 | 60 / 1 / 1 | 60 / 1 / 1 |
| Bhargavi Mokashi et al. [1] | 39.4837 / 0.9967 / - | - / - / - | 39.4820 / 0.9943 / - | 39.4827 / 0.9958 / - |
| Yuanmin Li et al. [5] | 42.2263 / 1 / - | - / - / - | - / - / - | 41.6275 / 1 / - |
| Jun-Yun Wu et al. [33] | 45.4523 / - / - | 46.6571 / - / - | - / - / - | 46.5118 / - / - |
| Xiao-bing Kang et al. [17] | 37.1400 / 0.9795 / - | - / - / - | 42.2500 / 0.9747 / - | 39.3800 / 0.9572 / - |
| Seif Eddine Naffouti et al. [35] | 48.1314 / 1 / 1 | - / - / - | 48.1309 / 1 / 1 | - / - / - |
| Jun-Yun Wu et al. [34] | 45.4523 / - / - | 46.6571 / - / - | - / - / - | 46.5118 / - / - |
Table 8. Comparison between different types of watermarking systems [1,29] with our algorithm, after applying the Gaussian filter using the Lena image. The best result for each parameter is indicated by the values highlighted in bold in each column. Each cell lists BER / PSNR (dB) / SSIM / NCC.

| Window Size | Proposal | Rahul Dixit et al. [29] | Bhargavi Mokashi et al. [1] |
|---|---|---|---|
| (2 × 2) | 0 / 60 / 1 / 1 | - / 31.9187 / 0.9273 / 0.9945 | - / - / - / 0.9380 |
| (3 × 3) | 0 / 60 / 1 / 1 | - / 27.3228 / 0.8053 / 0.9838 | - / - / - / 0.9370 |
| (4 × 4) | 0 / 60 / 1 / 1 | - / - / - / - | - / - / - / 0.9370 |
| (5 × 5) | 0 / 60 / 1 / 1 | - / 22.4322 / 0.5474 / 0.9509 | - / - / - / - |
| (7 × 7) | 0 / 60 / 1 / 1 | - / 19.8526 / 0.3580 / 0.9141 | - / - / - / - |
| (10 × 10) | 0 / 60 / 1 / 1 | - / 17.6294 / 0.1777 / 0.8641 | - / - / - / - |
Table 9. Comparison between different types of watermarking systems [17,29,31,32] with our algorithm, after applying the median filter using the Lena image. The best result for each parameter is indicated by the values highlighted in bold in each column. Each cell lists BER / PSNR (dB) / SSIM / NCC.

| Window Size | Proposal | Mahbuba Begum et al. [31] | Rahul Dixit et al. [29] | D. Rajani et al. [32] | Xiao-bing Kang et al. [17] |
|---|---|---|---|---|---|
| (2 × 2) | 0 / 60 / 1 / 1 | - / - / - / 1 | - / 31.9187 / 0.9272 / 0.9945 | - / - / - / - | - / - / - / - |
| (3 × 3) | 0 / 60 / 1 / 1 | - / - / - / 1 | - / 27.1895 / 0.8002 / 0.9833 | - / 68.7692 / - / 0.9944 | 0.0049 / - / - / 0.9967 |
| (4 × 4) | 0 / 60 / 1 / 1 | - / - / - / 1 | - / - / - / - | - / - / - / - | - / - / - / - |
| (5 × 5) | 0 / 60 / 1 / 1 | - / - / - / 1 | - / 22.3173 / 0.5397 / 0.9497 | - / 63.0684 / - / 0.9784 | 0.0752 / - / - / 0.9486 |
| (7 × 7) | 0 / 60 / 1 / 1 | - / - / - / 1 | - / 19.7469 / 0.3495 / 0.9121 | - / 60.5588 / - / 0.9610 | - / - / - / - |
| (9 × 9) | 0 / 60 / 1 / 1 | - / - / - / 1 | - / - / - / - | - / - / - / - | - / - / - / - |
Table 10. Comparison between different types of watermarking systems [13,17,29,34] with our algorithm, after applying JPEG compression using the Lena image. The best result for each parameter is indicated by the values highlighted in bold in each column. Each cell lists BER / PSNR (dB) / SSIM / NCC.

| Quality Factor | Proposal | Rahul Dixit et al. [29] | Chandan Kumar et al. [13] | Xiao-bing Kang et al. [17] | Jun-Yun Wu et al. [34] |
|---|---|---|---|---|---|
| 100 | 0 / 60 / 1 / 1 | - / - / - / - | - / - / 0.9893 / 0.9992 | - / - / - / - | - / - / - / - |
| 80 | 0 / 60 / 1 / 1 | - / - / - / - | - / - / 0.9893 / 0.9990 | - / - / - / - | - / - / - / - |
| 70 | 0 / 60 / 1 / 1 | - / 38.8676 / 0.9919 / 0.9986 | - / - / - / - | 0.0205 / - / - / 0.9859 | - / - / - / - |
| 60 | 0 / 60 / 1 / 1 | - / - / - / - | - / - / 0.9877 / 0.9988 | - / - / - / - | - / - / 0.9411 / 0.8396 |
| 50 | 0 / 60 / 1 / 1 | - / 38.0334 / 0.9905 / 0.9984 | - / - / - / - | 0.0791 / - / - / 0.9449 | - / - / - / - |
| 30 | 0 / 60 / 1 / 1 | - / 36.9610 / 0.9881 / 0.9981 | - / - / 0.9862 / 0.9987 | 0.1582 / - / - / 0.8867 | - / - / 0.8054 / 0.6651 |
| 10 | 0 / 60 / 1 / 1 | - / 35.8237 / 0.9842 / 0.9977 | - / - / 0.9794 / 0.9969 | 0.2266 / - / - / 0.8382 | - / - / - / - |
Table 11. Comparison between different types of watermarking systems [13,17,35] with our algorithm, after applying the scale attack using the Lena image. The best result for each parameter is indicated by the values highlighted in bold in each column. Each cell lists BER / PSNR (dB) / SSIM / NCC.

| Scale Factor | Proposal | Chandan Kumar et al. [13] | Xiao-bing Kang et al. [17] | Seif Naffouti et al. [35] |
|---|---|---|---|---|
| 0.5 | 0 / 60 / 1 / 1 | - / - / 0.7075 / 0.5563 | 0.0020 / - / - / 0.9987 | - / - / - / 0.9993 |
| 1.5 | 0 / 60 / 1 / 1 | - / - / 0.8125 / 0.5227 | - / - / - / - | - / - / - / - |
Table 12. Comparison between different types of watermarking systems [29,31] with our algorithm, after applying the rotation attack using the Lena image. The best result for each parameter is indicated by the values highlighted in bold in each column. Each cell lists BER / PSNR (dB) / SSIM / NCC.

| Angle | Proposal | Mahbuba Begum et al. [31] | Rahul Dixit et al. [29] |
|---|---|---|---|
| 30° | 0.0567 / 9.3219 / 0.3890 / 0.7321 | - / - / - / 0.9988 | - / 8.6942 / 0.3322 / 0.8279 |
| 60° | 0.0582 / 9.2518 / 0.3909 / 0.7289 | - / - / - / 0.9988 | - / 8.5953 / 0.3187 / 0.8244 |
| 90° | 0 / 60 / 1 / 1 | - / - / - / 1 | - / 291.7367 / 1 / 1 |
| 120° | 0.0612 / 9.3219 / 0.3890 / 0.7320 | - / - / - / 0.9988 | - / - / - / - |
Table 13. Comparison between watermarking systems [34] with our algorithm, after applying Gaussian noise using the Lena image. The best result for each parameter is indicated by the values highlighted in bold in each column. Each cell lists BER / PSNR (dB) / SSIM / NCC.

| σ² | Proposal | Jun-Yun Wu et al. [34] |
|---|---|---|
| 0.1 | 0 / 60 / 1 / 1 | - / - / 0.9067 / 0.9241 |
| 0.3 | 0 / 60 / 1 / 1 | - / - / 0.8915 / 0.9279 |
| 0.5 | 0 / 60 / 1 / 1 | - / - / 0.8808 / 0.9188 |
Table 14. Comparison between different types of watermarking systems [13,34] with our algorithm, after applying SP noise using the Lena image. The best result for each parameter is indicated by the values highlighted in bold in each column. Each cell lists BER / PSNR (dB) / SSIM / NCC.

| Noise Density | Proposal | Chandan Kumar et al. [13] | Jun-Yun Wu et al. [34] |
|---|---|---|---|
| 0.1 | 0 / 60 / 1 / 1 | - / - / 0.677389 / 0.7005 | - / - / 0.9302 / 0.9512 |
| 0.3 | 0 / 60 / 1 / 1 | - / - / - / - | - / - / 0.8816 / 0.9287 |
| 0.5 | 0 / 60 / 1 / 1 | - / - / 0.190452 / 0.5810 | - / - / 0.8793 / 0.9141 |
Table 15. Comparison between watermarking systems [33] with our algorithm, after applying SP noise using the Barbara image. The best result for each parameter is indicated by the values highlighted in bold in each column. Each cell lists SSIM / NCC.

| Noise Density | Proposal | Jun-Yun Wu et al. [33] |
|---|---|---|
| 0.1 | 1 / 1 | 0.9172 / 0.9664 |
| 0.3 | 1 / 1 | 0.9056 / 0.9483 |
| 0.5 | 1 / 1 | 0.8936 / 0.9395 |
Table 16. Comparison between watermarking systems [34] with our algorithm, after applying SP noise using the Pirate image. The best result for each parameter is indicated by the values highlighted in bold in each column. Each cell lists SSIM / NCC.

| Noise Density | Proposal | Jun-Yun Wu et al. [34] |
|---|---|---|
| 0.1 | 1 / 1 | 0.9282 / 0.9577 |
| 0.3 | 1 / 1 | 0.8904 / 0.9238 |
| 0.5 | 1 / 1 | 0.8706 / 0.9065 |
Table 17. Comparison between different types of watermarking systems [17,34] with our algorithm, after applying JPEG compression using the Baboon image. The best result for each parameter is indicated by the values highlighted in bold in each column. Each cell lists BER / SSIM / NCC.

| Quality Factor | Proposal | Xiao-bing Kang et al. [17] | Jun-Yun Wu et al. [34] |
|---|---|---|---|
| 90 | 0 / 1 / 1 | - / - / - | - / 0.9973 / 0.9987 |
| 70 | 0 / 1 / 1 | 0.0010 / - / 0.9993 | - / - / - |
| 60 | 0 / 1 / 1 | - / - / - | - / 0.9843 / 0.9053 |
| 50 | 0 / 1 / 1 | 0.0029 / - / 0.9980 | - / - / - |
| 30 | 0 / 1 / 1 | 0.0078 / - / 0.9947 | - / 0.9365 / 0.7868 |
| 10 | 0 / 1 / 1 | 0.1143 / - / 0.9195 | - / - / - |
Share and Cite

MDPI and ACS Style

Gomez-Coronel, S.L.; Moya-Albor, E.; Brieva, J.; Romero-Arellano, A. A Robust and Secure Watermarking Approach Based on Hermite Transform and SVD-DCT. Appl. Sci. 2023, 13, 8430. https://doi.org/10.3390/app13148430
