ZWNet: A Deep-Learning-Powered Zero-Watermarking Scheme with High Robustness and Discriminability for Images
Abstract
1. Introduction
2. Related Work
2.1. Zero-Watermarking
2.2. ConvNeXt
2.3. LK-PAN
3. Proposed Scheme
3.1. Main Idea
3.2. Backbone Component
3.3. Neck Component
3.4. Head Component
3.5. Training
3.6. Application Usage of ZWNet
4. Experimental Results and Analysis
4.1. ZWNet Training
4.2. Robustness
4.3. Discriminability
4.4. Comparisons with Existing Methods
4.4.1. Robustness
4.4.2. Discriminability
4.4.3. Efficiency
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Costa, G.; Degano, P.; Galletta, L.; Soderi, S. Formally verifying security protocols built on watermarking and jamming. Comput. Secur. 2023, 128, 103133. [Google Scholar] [CrossRef]
- Razaq, A.; Alhamzi, G.; Abbas, S.; Ahmad, M.; Razzaque, A. Secure communication through reliable S-box design: A proposed approach using coset graphs and matrix operations. Heliyon 2023, 9, e15902. [Google Scholar] [CrossRef] [PubMed]
- Tao, H.; Chongmin, L.; Zain, J.M.; Abdalla, A.N. Robust Image Watermarking Theories and Techniques: A Review. J. Appl. Res. Technol. 2014, 12, 122–138. [Google Scholar] [CrossRef]
- Liu, X.; Wang, Y.; Sun, Z.; Wang, L.; Zhao, R.; Zhu, Y.; Zou, B.; Zhao, Y.; Fang, H. Robust and discriminative zero-watermark scheme based on invariant features and similarity-based retrieval to protect large-scale DIBR 3D videos. Inf. Sci. 2021, 542, 263–285. [Google Scholar] [CrossRef]
- Xia, Z.; Wang, X.; Han, B.; Li, Q.; Wang, X.; Wang, C.; Zhao, T. Color image triple zero-watermarking using decimal-order polar harmonic transforms and chaotic system. Signal Process. 2021, 180, 107864. [Google Scholar] [CrossRef]
- LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef] [PubMed]
- Simonyan, K.; Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition. arXiv 2015, arXiv:1409.1556. [Google Scholar]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar] [CrossRef]
- Dong, S.; Wang, P.; Abbas, K. A survey on deep learning and its applications. Comput. Sci. Rev. 2021, 40, 100379. [Google Scholar] [CrossRef]
- Gao, S.H.; Cheng, M.M.; Zhao, K.; Zhang, X.Y.; Yang, M.H.; Torr, P. Res2Net: A New Multi-Scale Backbone Architecture. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 43, 652–662. [Google Scholar] [CrossRef]
- Ma, J.; Jiang, X.; Fan, A.; Jiang, J.; Yan, J. Image Matching from Handcrafted to Deep Features: A Survey. Int. J. Comput. Vis. 2021, 129, 23–79. [Google Scholar] [CrossRef]
- Garcia-Garcia, B.; Bouwmans, T.; Silva, A.J.R. Background subtraction in real applications: Challenges, current models and future directions. Comput. Sci. Rev. 2020, 35, 100204. [Google Scholar] [CrossRef]
- Taghanaki, S.A.; Abhishek, K.; Cohen, J.P.; Cohen-Adad, J.; Hamarneh, G. Deep semantic segmentation of natural and medical images: A review. Artif. Intell. Rev. 2021, 54, 137–178. [Google Scholar] [CrossRef]
- Cheng, G.; Xie, X.; Han, J.; Guo, L.; Xia, G.S. Remote Sensing Image Scene Classification Meets Deep Learning: Challenges, Methods, Benchmarks, and Opportunities. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 13, 3735–3756. [Google Scholar] [CrossRef]
- Wen, Q.; Sun, T.; Wang, A. Concept and Application of Zero-Watermark. Acta Electron. Sin. 2003, 31, 214–216. [Google Scholar]
- Jiang, F.; Gao, T.; Li, D. A robust zero-watermarking algorithm for color image based on tensor mode expansion. Multimedia Tools Appl. 2020, 79, 7599–7614. [Google Scholar] [CrossRef]
- Dong, F.; Li, J.; Bhatti, U.A.; Liu, J.; Chen, Y.W.; Li, D. Robust Zero Watermarking Algorithm for Medical Images Based on Improved NasNet-Mobile and DCT. Electronics 2023, 12, 3444. [Google Scholar] [CrossRef]
- Kang, X.-B.; Lin, G.-F.; Chen, Y.-J.; Zhao, F.; Zhang, E.-H.; Jing, C.-N. Robust and secure zero-watermarking algorithm for color images based on majority voting pattern and hyper-chaotic encryption. Multimedia Tools Appl. 2020, 79, 1169–1202. [Google Scholar] [CrossRef]
- Chu, R.; Zhang, S.; Mou, J.; Gao, X. A zero-watermarking for color image based on LWT-SVD and chaotic system. Multimedia Tools Appl. 2023, 82, 34565–34588. [Google Scholar] [CrossRef]
- Yang, H.-Y.; Qi, S.-R.; Niu, P.-P.; Wang, X.-Y. Color image zero-watermarking based on fast quaternion generic polar complex exponential transform. Signal Process. Image Commun. 2020, 82, 115747. [Google Scholar] [CrossRef]
- Leng, X.; Xiao, J.; Wang, Y. A Robust Image Zero-Watermarking Algorithm Based on DWT and PCA; Springer Berlin Heidelberg: Berlin, Heidelberg, 2012. [Google Scholar]
- Singh, A.; Dutta, M.K. A robust zero-watermarking scheme for tele-ophthalmological applications. J. King Saud Univ.—Comput. Inf. Sci. 2020, 32, 895–908. [Google Scholar] [CrossRef]
- Zhong, X.; Huang, P.-C.; Mastorakis, S.; Shih, F.Y. An Automated and Robust Image Watermarking Scheme Based on Deep Neural Networks. IEEE Trans. Multimedia 2021, 23, 1951–1961. [Google Scholar] [CrossRef]
- Mahapatra, D.; Amrit, P.; Singh, O.P.; Singh, A.K.; Agrawal, A.K. Autoencoder-convolutional neural network-based embedding and extraction model for image watermarking. J. Electron. Imaging 2022, 32, 021604. [Google Scholar] [CrossRef]
- Dhaya, D. Light Weight CNN based robust image watermarking scheme for security. J. Inf. Technol. Digit. World 2021, 3, 118–132. [Google Scholar] [CrossRef]
- Nawaz, S.A.; Li, J.; Shoukat, M.U.; Bhatti, U.A.; Raza, M.A. Hybrid medical image zero watermarking via discrete wavelet transform-ResNet101 and discrete cosine transform. Comput. Electr. Eng. 2023, 112, 108985. [Google Scholar] [CrossRef]
- Fierro-Radilla, A.; Nakano-Miyatake, M.; Cedillo-Hernandez, M.; Cleofas-Sanchez, L.; Perez-Meana, H. A Robust Image Zero-watermarking using Convolutional Neural Networks. In Proceedings of the 2019 7th International Workshop on Biometrics and Forensics (IWBF), Cancun, Mexico, 2–3 May 2019; pp. 1–5. [Google Scholar]
- Han, B.; Du, J.; Jia, Y.; Zhu, H. Zero-Watermarking Algorithm for Medical Image Based on VGG19 Deep Convolution Neural Network. J. Healthc. Eng. 2021, 2021, 5551520. [Google Scholar] [CrossRef] [PubMed]
- Gong, C.; Liu, J.; Gong, M.; Li, J.; Bhatti, U.A.; Ma, J. Robust medical zero-watermarking algorithm based on Residual-DenseNet. IET Biom. 2022, 11, 547–556. [Google Scholar] [CrossRef]
- Liu, G.; Xiang, R.; Liu, J.; Pan, R.; Zhang, Z. An invisible and robust watermarking scheme using convolutional neural networks. Expert Syst. Appl. 2022, 210, 118529. [Google Scholar] [CrossRef]
- Li, H.; Xiong, P.; An, J.; Wang, L. Pyramid Attention Network for Semantic Segmentation. arXiv 2018, arXiv:1805.10180. [Google Scholar]
- Liu, Z.; Mao, H.; Wu, C.Y.; Feichtenhofer, C.; Darrell, T.; Xie, S. A ConvNet for the 2020s. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 11976–11986. [Google Scholar]
- Liu, Z.; Lin, Y.; Cao, Y.; Hu, H.; Wei, Y.; Zhang, Z.; Lin, S.; Guo, B. Swin Transformer: Hierarchical Vision Transformer Using Shifted Windows. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 11–17 October 2021; pp. 10012–10022. [Google Scholar]
- Liu, S.; Qi, L.; Qin, H.; Shi, J.; Jia, J. Path Aggregation Network for Instance Segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 8759–8768. [Google Scholar]
- Lin, T.Y.; Goyal, P.; Girshick, R.; He, K.; Dollár, P. Feature Pyramid Networks for Object Detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 2117–2125. [Google Scholar]
- Li, C.; Liu, W.; Guo, R.; Yin, X.; Jiang, K.; Du, Y.; Du, Y.; Zhu, L.; Lai, B.; Hu, X.; et al. PP-OCRv3: More Attempts for the Improvement of Ultra Lightweight OCR System. arXiv 2022, arXiv:2206.03001. [Google Scholar]
Attack Method | Description |
---|---|
Noise | Includes white noise and salt-and-pepper noise |
Filter | Includes average, median, and Gaussian filters |
Rotation | Rotates the image around its center by different angles |
Crop | Crops out parts of the image of different sizes |
Mirror | Includes horizontal and vertical mirroring |
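For illustration, the attacks listed above can be reproduced with standard image-processing routines. The sketch below uses OpenCV and NumPy and is not the paper's implementation; the file name and the choice of blacking out a region to simulate cropping are assumptions.

```python
# Illustrative attack simulation (OpenCV/NumPy); not the paper's code.
import cv2
import numpy as np

def add_salt_pepper(img, intensity=0.01):
    """Flip a fraction `intensity` of pixels to black or white."""
    noisy = img.copy()
    mask = np.random.rand(*img.shape[:2])
    noisy[mask < intensity / 2] = 0
    noisy[mask > 1 - intensity / 2] = 255
    return noisy

def rotate(img, angle_deg):
    """Rotate around the image center, keeping the original canvas size."""
    h, w = img.shape[:2]
    m = cv2.getRotationMatrix2D((w / 2, h / 2), angle_deg, 1.0)
    return cv2.warpAffine(img, m, (w, h))

def crop_region(img, area_fraction=0.25):
    """Black out an upper-left square covering `area_fraction` of the image."""
    out = img.copy()
    h, w = img.shape[:2]
    s = area_fraction ** 0.5
    out[: int(h * s), : int(w * s)] = 0
    return out

img = cv2.imread("lena.png")  # hypothetical test image path
attacked = [
    add_salt_pepper(img, 0.05),
    rotate(img, 30),
    crop_region(img, 1 / 4),
    cv2.blur(img, (3, 3)),               # average filter
    cv2.GaussianBlur(img, (11, 11), 0),  # Gaussian filter
    cv2.medianBlur(img, 5),              # median filter
    cv2.flip(img, 1),                    # horizontal mirror
]
```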
Attack Description | Lena | Mandril | Tree | Girl |
---|---|---|---|---|
Rotation (15°) | 0.9688 | 0.9609 | 0.9765 | 0.9688 |
Rotation (30°) | 0.9258 | 0.9297 | 0.9258 | 0.9375 |
Rotation (45°) | 0.8555 | 0.9063 | 0.8320 | 0.8203 |
Pepper and salt noise (intensity = 0.01) | 0.9922 | 0.9961 | 0.9727 | 0.9805 |
Pepper and salt noise (intensity = 0.05) | 0.9531 | 0.9609 | 0.9531 | 0.9375 |
Pepper and salt noise (intensity = 0.1) | 0.9219 | 0.9336 | 0.9609 | 0.8750 |
Gaussian noise (mean = 0, variance = 0.005) | 1 | 0.9843 | 0.8828 | 0.9570 |
Random crop (1/8) | 0.9063 | 1 | 1 | 0.9336 |
Random crop (1/6) | 1 | 0.8945 | 0.8164 | 0.8984 |
Random crop (1/4) | 0.8633 | 0.8984 | 0.8203 | 0.8710 |
Crop upper-left corner (1/4) | 0.9414 | 0.8750 | 0.9570 | 0.8086 |
Crop lower-left corner (1/4) | 0.9922 | 0.9258 | 0.9531 | 0.8867 |
Crop upper-right corner (1/4) | 0.8945 | 0.9531 | 0.9375 | 0.8789 |
Crop lower-right corner (1/4) | 0.9883 | 0.9609 | 0.9414 | 0.8086 |
Crop upper-left corner (1/8) | 0.9961 | 0.9219 | 0.9805 | 0.9297 |
Crop lower-left corner (1/8) | 1 | 0.9766 | 1 | 0.9922 |
Crop upper-right corner (1/8) | 0.9882 | 0.9922 | 0.9883 | 0.9453 |
Crop lower-right corner (1/8) | 1 | 0.9727 | 1 | 0.9453 |
Blur (3 × 3) | 1 | 0.9883 | 0.9922 | 0.9882 |
Blur (5 × 5) | 1 | 0.9883 | 0.9922 | 0.9922 |
Blur (9 × 9) | 1 | 0.9766 | 0.9922 | 0.9883 |
Blur (11 × 11) | 0.9961 | 0.9805 | 0.9609 | 0.9883 |
Gaussian Blur (3 × 3) | 1 | 0.9883 | 0.9922 | 0.9883 |
Gaussian Blur (5 × 5) | 0.9961 | 0.9727 | 0.9609 | 0.9844 |
Gaussian Blur (9 × 9) | 0.9844 | 0.9766 | 0.9492 | 0.9766 |
Gaussian Blur (11 × 11) | 0.9922 | 0.9844 | 0.9609 | 0.9766 |
Median Blur (3 × 3) | 0.9922 | 0.9688 | 0.9063 | 0.9688 |
Median Blur (5 × 5) | 0.9688 | 0.9648 | 0.9258 | 0.9688 |
Median Blur (9 × 9) | 0.9883 | 0.9805 | 0.9570 | 0.9805 |
Median Blur (11 × 11) | 0.9766 | 0.9648 | 0.8984 | 0.9688 |
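The values above are similarity scores between the reference watermark and the one recovered from each attacked image, with 1 meaning identical. As an observation rather than a statement from the paper, scores such as 0.9961 and 0.9922 match 1 − d/256 for a 256-bit watermark differing in d bits; the sketch below computes that bit-level similarity under this 256-bit assumption.

```python
import numpy as np

def bit_similarity(w_ref: np.ndarray, w_rec: np.ndarray) -> float:
    """Fraction of matching bits between two binary watermarks."""
    assert w_ref.shape == w_rec.shape
    differing = np.count_nonzero(w_ref != w_rec)
    return 1.0 - differing / w_ref.size

# Toy example with hypothetical 256-bit binary watermarks.
rng = np.random.default_rng(0)
w_ref = rng.integers(0, 2, size=256)
w_rec = w_ref.copy()
w_rec[:2] ^= 1                           # flip 2 of 256 bits
print(bit_similarity(w_ref, w_rec))      # 0.9921875, i.e., ~0.9922
```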
Test Image | Lena | Mandril | Tree | Girl |
---|---|---|---|---|
Lena | 0 | 90 | 88 | 113 |
Mandril | 90 | 0 | 92 | 97 |
Tree | 88 | 92 | 0 | 91 |
Girl | 113 | 97 | 91 | 0 |
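The matrix above measures discriminability: the Hamming distance between zero-watermarks generated from different images should be large (for uncorrelated 256-bit watermarks the expected distance is 128). A short sketch of producing such a pairwise matrix, again assuming 256-bit binary watermarks and using random vectors as stand-ins for real ZWNet outputs:

```python
import numpy as np

def hamming(a: np.ndarray, b: np.ndarray) -> int:
    """Number of differing bits between two binary watermarks."""
    return int(np.count_nonzero(a != b))

# Random 256-bit stand-ins for the zero-watermarks of the four test images.
rng = np.random.default_rng(1)
names = ["Lena", "Mandril", "Tree", "Girl"]
zw = {n: rng.integers(0, 2, size=256) for n in names}

# Symmetric pairwise distance matrix, analogous to the table above.
for a in names:
    print(a, [hamming(zw[a], zw[b]) for b in names])
```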
Attack Description | ZWNet (Proposed) | Yang’s Method | Liu’s Method | Nawaz’s Method |
---|---|---|---|---|
Rotation (15°) | 0.9688 | 0.9943 | 0.9466 | 0.9247 |
Rotation (30°) | 0.9258 | 1 | 0.9466 | 0.9058 |
Pepper and salt noise (intensity = 0.01) | 0.9922 | 0.9375 | 0.9766 | 0.8935 |
Gaussian noise (mean = 0, variance = 0.005) | 1 | 0.9531 | 1 | 0.9340 |
Random crop (1/6) | 1 | 0.9023 | 0.9522 | 0.8706 |
Blur (3 × 3) | 1 | 0.9414 | 0.9766 | 0.9172 |
Blur (11 × 11) | 0.9961 | 0.9414 | 0.9302 | 0.8388 |
Gaussian blur (3 × 3) | 1 | 0.9063 | 1 | 0.8902 |
Gaussian blur (11 × 11) | 0.9922 | 0.9375 | 0.9766 | 0.8253 |
Median blur (3 × 3) | 0.9922 | 0.9297 | 0.9961 | 0.9049 |
Median blur (11 × 11) | 0.9766 | 0.9648 | 0.9102 | 0.8138 |
Hamming Distance of Zero-Watermarks | ZWNet (Proposed) | Yang’s Method | Liu’s Method | Nawaz’s Method |
---|---|---|---|---|
Lena and Mandril | 90 | 28 | 59 | 43 |
Lena and Tree | 88 | 29 | 26 | 35 |
Lena and Girl | 113 | 40 | 90 | 93 |
Mandril and Tree | 92 | 21 | 61 | 66 |
Mandril and Girl | 97 | 20 | 57 | 21 |
Tree and Girl | 91 | 37 | 62 | 55 |
Methods | ZWNet (Proposed) | Yang’s Method | Liu’s Method | Nawaz’s Method |
---|---|---|---|---|
Average time cost | 96 ms | 2100 ms | 2440 ms | 1384 ms |
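An average per-image figure like this can be measured with a simple wall-clock loop. In the sketch below, generate_zero_watermark is a hypothetical placeholder for whichever scheme is being timed, not an API defined in the paper.

```python
import time

def average_time_ms(generate_zero_watermark, images, repeats=10):
    """Mean wall-clock time in milliseconds to generate one zero-watermark."""
    start = time.perf_counter()
    for _ in range(repeats):
        for img in images:
            generate_zero_watermark(img)
    elapsed = time.perf_counter() - start
    return 1000.0 * elapsed / (repeats * len(images))
```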
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).