Enhancing the Security of Deep Learning Steganography via Adversarial Examples
Abstract
1. Introduction
2. Related Work
2.1. GAN Based Steganography
2.2. Adversarial Examples
3. Steganography Scheme Based on GAN and Adversarial Examples
3.1. Model Training
3.2. Security Improvement
4. Experiments
4.1. Dataset
4.2. Implementation Details
4.3. Model Training Experiments
4.4. Security Enhancement Experiments
5. Conclusions
Author Contributions
Funding
Conflicts of Interest
References
- Filler, T.; Judas, J.; Fridrich, J. Minimizing embedding impact in steganography using trellis-coded quantization. In Media Forensics and Security II; International Society for Optics and Photonics: Bellingham, WA, USA, 2010; Volume 7541. [Google Scholar]
- Pevny, T.; Filler, T.; Bas, P. Using high-dimensional image models to perform highly undetectable steganography. In Proceedings of the International Workshop on Information Hiding, Calgary, AB, Canada, 28–30 June 2010; Springer: Berlin/Heidelberg, Germany, 2010; pp. 161–177. [Google Scholar]
- Holub, V.; Fridrich, J. Designing steganographic distortion using directional filters. In Proceedings of the 2012 IEEE International Workshop on Information Forensics and Security (WIFS), Tenerife, Spain, 2–5 December 2012; pp. 234–239. [Google Scholar]
- Holub, V.; Fridrich, J.; Denemark, T. Universal distortion function for steganography in an arbitrary domain. EURASIP J. Inf. Secur. 2014. [Google Scholar] [CrossRef] [Green Version]
- Li, B.; Wang, M.; Huang, J.; Li, X. A new cost function for spatial image steganography. In Proceedings of the 2014 IEEE International Conference on Image Processing (ICIP), Paris, France, 27–30 October 2014. [Google Scholar]
- Hayes, J.; Danezis, G. Generating steganographic images via adversarial training. In Proceedings of the Annual Conference on Neural Information Processing Systems 2017, Long Beach, CA, USA, 4–9 December 2017. [Google Scholar]
- Baluja, S. Hiding images in plain sight: Deep steganography. In Proceedings of the Annual Conference on Neural Information Processing Systems 2017, Long Beach, CA, USA, 4–9 December 2017. [Google Scholar]
- Zhu, J.; Kaplan, R.; Johnson, J.; Li, F. Hidden: Hiding data with deep networks. In Proceedings of the 15th European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018. [Google Scholar]
- Li, S.; Ye, D.; Jiang, S.; Liu, C.; Niu, X.; Luo, X. Anti-steganalysis for image on convolutional neural networks. Multimed. Tools Appl. 2018, 79, 4315–4331. [Google Scholar] [CrossRef]
- Tang, W.; Li, B.; Tan, S.; Barni, M.; Huang, J. CNN-based adversarial embedding for image steganography. IEEE Trans. Inf. Forensics Secur. 2019, 14, 2074–2087. [Google Scholar] [CrossRef] [Green Version]
- Kang, Y.; Liu, F.; Yang, C.; Luo, X.; Zhang, T. Color Image Steganalysis Based on Residuals of Channel Differences. Comput. Mater. Contin. 2019, 59, 315–329. [Google Scholar] [CrossRef] [Green Version]
- Shi, L.; Wang, Z.; Qian, Z.; Huang, N.; Puteaux, P.; Zhang, X. Distortion Function for Emoji Image Steganography. Comput. Mater. Contin. 2019, 59, 943–953. [Google Scholar] [CrossRef]
- Fridrich, J.; Kodovský, J. Rich models for steganalysis of digital images. IEEE Trans. Inf. Forensics Secur. 2012, 7, 868–882. [Google Scholar]
- Qian, Y.; Dong, J.; Wang, W.; Tan, T. Learning and transferring representations for image steganalysis using convolutional neural network. In Proceedings of the 2016 IEEE International Conference on Image Processing (ICIP), Phoenix, AZ, USA, 25–28 September 2016. [Google Scholar]
- Ye, J.; Ni, J.; Yi, Y. Deep learning hierarchical representations for image steganalysis. IEEE Trans. Inf. Forensics Secur. 2017, 12, 2545–2557. [Google Scholar]
- Ye, D.; Jiang, S.; Li, S.; Liu, C. Faster and transferable deep learning steganalysis on GPU. J. Real-Time Image Process. 2019, 16, 623–633. [Google Scholar]
- Zhang, Y.; Zhang, W.; Chen, K.; Liu, J.; Liu, Y.; Yu, N. Adversarial examples against deep neural network based steganalysis. In Proceedings of the 6th ACM Workshop on Information Hiding and Multimedia Security, Innsbruck, Austria, 20–22 June 2018. [Google Scholar]
- Yang, C.; Wang, J.; Lin, C.; Chen, H.; Wang, W. Locating Steganalysis of LSB Matching Based on Spatial and Wavelet Filter Fusion. Comput. Mater. Contin. 2019, 60, 633–644. [Google Scholar] [CrossRef] [Green Version]
- Schembri, F.; Sapuppo, F.; Bucolo, M. Experimental classification of nonlinear dynamics in microfluidic bubbles’ flow. Nonlinear Dyn. 2012, 67, 2807–2819. [Google Scholar] [CrossRef]
- Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Bengio, Y. Generative adversarial nets. In Proceedings of the Annual Conference on Neural Information Processing Systems 2014, Montreal, QC, Canada, 8–13 December 2014; pp. 2672–2680. [Google Scholar]
- Johnson, J.; Alahi, A.; Li, F. Perceptual losses for real-time style transfer and super-resolution. In Proceedings of the European Conference on Computer Vision, Amsterdam, The Netherlands, 11–14 October 2016; pp. 694–711. [Google Scholar]
- Sønderby, C.K.; Caballero, J.; Theis, L.; Shi, W.; Huszar, F. Amortised MAP inference for image super-resolution. arXiv 2016, arXiv:1610.04490. [Google Scholar]
- Yang, C.; Lu, X.; Lin, Z.; Shechtman, E.; Wang, O.; Li, H. High-resolution image inpainting using multi-scale neural patch synthesis. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 6721–6729. [Google Scholar]
- Krizhevsky, A.; Sutskever, I.; Hinton, G.E. Imagenet classification with deep convolutional neural networks. In Proceedings of the 26th Annual Conference on Neural Information Processing Systems, Lake Tahoe, NV, USA, 3–6 December 2012; pp. 1097–1105. [Google Scholar]
- Szegedy, C.; Zaremba, W.; Sutskever, I.; Bruna, J.; Erhan, D.; Goodfellow, I.; Fergus, R. Intriguing properties of neural networks. arXiv 2013, arXiv:1312.6199. [Google Scholar]
- Goodfellow, I.J.; Shlens, J.; Szegedy, C. Explaining and harnessing adversarial examples. arXiv 2014, arXiv:1412.6572. [Google Scholar]
- Su, J.; Vargas, D.V.; Sakurai, K. One pixel attack for fooling deep neural networks. IEEE Trans. Evol. Comput. 2019, 23, 828–841. [Google Scholar]
- Huang, G.B.; Mattar, M.; Berg, T.; Learned-Miller, E. Labeled faces in the wild: A database for studying face recognition in unconstrained environments. In Proceedings of the Workshop on Faces in ‘Real-Life’ Images: Detection, Alignment, and Recognition, Marseille, France, 12–18 October 2008. [Google Scholar]
- Bas, P.; Filler, T.; Pevný, T. Break Our Steganographic System: The Ins and Outs of Organizing BOSS. In Proceedings of the International Workshop on Information Hiding 2011, Prague, Czech Republic, 18–20 May 2011; Springer: Berlin/Heidelberg, Germany, 2011. [Google Scholar]
- Paszke, A.; Gross, S.; Chintala, S.; Chanan, G.; Yang, E.; DeVito, Z.; Lerer, A. Automatic differentiation in pytorch. In Proceedings of the Neural Information Processing Systems 2017, Long Beach, CA, USA, 4–9 December 2017. [Google Scholar]
| Dataset | Cover Loss | Secret Loss | Stego PSNR | R-Secret PSNR |
|---|---|---|---|---|
| LFW | 0.0049 | 0.0061 | 25.7222 | 26.3746 |
| Bossbase | 0.0175 | 0.0140 | 19.8354 | 21.7831 |
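The Stego PSNR and R-Secret PSNR columns above are peak signal-to-noise ratios in dB (stego vs. cover, and recovered secret vs. original secret). A minimal sketch of the standard PSNR computation, using a toy cover/stego pair rather than the paper's actual model outputs:

```python
import numpy as np

def psnr(reference: np.ndarray, distorted: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between two equal-shape images."""
    mse = np.mean((reference.astype(np.float64) - distorted.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

# Toy example: a random "cover" and a slightly perturbed "stego" image.
rng = np.random.default_rng(0)
cover = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
noise = rng.integers(-3, 4, size=cover.shape)
stego = np.clip(cover.astype(np.int16) + noise, 0, 255).astype(np.uint8)
print(psnr(cover, stego))
```

Higher values mean the stego (or recovered secret) image is closer to its reference; values above roughly 30 dB are commonly treated as visually imperceptible distortion.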
| Algorithms | SRM (LFW) | CNN (LFW) | SRM (Bossbase) | CNN (Bossbase) |
|---|---|---|---|---|
| WOW | 0.2587 | 0.1328 | 0.2887 | 0.1654 |
| S-UNIWARD | 0.2805 | 0.1571 | 0.2704 | 0.1849 |
| GANste | 0.1910 | 0.1269 | 0.1039 | 0.0819 |
| FGSM-GANste ε = 0.001 | 0.1394 | 0.2147 | 0.1387 | 0.1916 |
| FGSM-GANste ε = 0.003 | 0.1704 | 0.4678 | 0.1208 | 0.5576 |
| FGSM-GANste ε = 0.005 | 0.1773 | 0.7294 | 0.1135 | 0.8440 |
| FGSM-GANste ε = 0.008 | 0.1638 | 0.9423 | 0.1039 | 0.9808 |
| Onepixelattack-GANste p = 1 | 0.2202 | 0.5323 | 0.1231 | 0.2265 |
| Onepixelattack-GANste p = 3 | 0.1666 | 0.3125 | 0.1190 | 0.3235 |
| Onepixelattack-GANste p = 5 | 0.2168 | 0.3333 | 0.0843 | 0.1247 |
| Onepixelattack-GANste p = 5 | 0.2667 | 0.2143 | 0.1724 | 0.1615 |
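The FGSM rows above vary the perturbation magnitude ε used by the fast gradient sign method from Goodfellow et al.: each pixel is shifted by ε in the direction that increases the steganalyzer's loss, which pushes a stego image toward the "cover" decision. A minimal sketch of the idea on a hypothetical linear (logistic) steganalyzer, with the gradient computed analytically rather than via a trained CNN:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps):
    """One FGSM step against a logistic detector p = sigmoid(w.x + b).

    Moves x by eps per component in the sign of the loss gradient,
    increasing the cross-entropy loss for the true label y.
    """
    p = sigmoid(np.dot(w, x) + b)   # detector's probability of "stego"
    grad_x = (p - y) * w            # d(cross-entropy)/dx for the logistic model
    return np.clip(x + eps * np.sign(grad_x), 0.0, 1.0)

# Toy detector weights and a "stego" sample (true label y = 1) correlated
# with the weights, so the detector initially flags it.
rng = np.random.default_rng(1)
w = rng.normal(size=16)
b = 0.0
x = np.clip(rng.normal(0.5, 0.1, size=16) + 0.05 * np.sign(w), 0.0, 1.0)
before = sigmoid(np.dot(w, x) + b)
x_adv = fgsm_perturb(x, 1.0, w, b, eps=0.05)
after = sigmoid(np.dot(w, x_adv) + b)
# after < before: the detector's confidence that x_adv is stego has dropped.
```

As the table suggests, larger ε fools the CNN steganalyzer more reliably (detection accuracy collapses by ε = 0.008), but the perturbation also degrades image quality and, since FGSM targets the CNN's gradients, transfers only partially to the hand-crafted SRM detector.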
| Setting | Cover Loss | Secret Loss | Stego PSNR | R-Secret PSNR |
|---|---|---|---|---|
| FGSM-LFW ε = 0.003 | 0.0049 | 0.0079 | 25.8321 | 22.4592 |
| FGSM-Bossbase ε = 0.001 | 0.0028 | 0.0039 | 27.9957 | 26.4751 |
| Onepixelattack-LFW | 0.0167 | 0.0148 | 19.8567 | 20.0550 |
| Onepixelattack-Bossbase | 0.0205 | 0.0091 | 20.2960 | 22.8812 |
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Share and Cite
Shang, Y.; Jiang, S.; Ye, D.; Huang, J. Enhancing the Security of Deep Learning Steganography via Adversarial Examples. Mathematics 2020, 8, 1446. https://doi.org/10.3390/math8091446