A High-Transferability Adversarial Sample Generation Method Incorporating Frequency Domain Transformations
Abstract
1. Introduction
- This study finds that the frequency-domain patterns of images are relatively consistent, so specific regions of an image can be modified more conveniently by altering its frequency domain. Such modifications are difficult to achieve in the spatial domain, and they help generate adversarial samples with higher transferability.
- This paper proposes a novel frequency-domain transformation method and finds that suppressing high-frequency information in the input image, while enhancing the frequency-domain information of specific regions, improves the transferability of the generated adversarial samples.
- This paper conducts extensive experiments demonstrating the superiority of the proposed frequency domain enhancement (FDE) method, which exhibits excellent transferability against both standard models and defense models. Furthermore, combining FDE with existing methods further enhances the transferability of the generated adversarial samples.
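The frequency-domain idea in the contributions above can be illustrated with a minimal sketch. This is not the authors' Equations (9) and (10); it is a hypothetical example assuming a 2D FFT, a low-pass mask that suppresses high-frequency components, and a random amplification of one retained spectral sub-region. The function name `fde_transform` and the parameters `keep_ratio` and `boost` are illustrative, not from the paper:

```python
import numpy as np

def fde_transform(img, keep_ratio=0.5, boost=1.5, rng=None):
    """Sketch of a frequency-domain enhancement: suppress high
    frequencies with a low-pass mask, then amplify a randomly chosen
    sub-region of the retained spectrum (hypothetical example)."""
    rng = np.random.default_rng() if rng is None else rng
    h, w = img.shape
    # Shift the spectrum so low frequencies sit at the center.
    spec = np.fft.fftshift(np.fft.fft2(img))
    # Low-pass mask: keep only a centered square of the spectrum.
    mask = np.zeros((h, w))
    kh, kw = int(h * keep_ratio), int(w * keep_ratio)
    y0, x0 = (h - kh) // 2, (w - kw) // 2
    mask[y0:y0 + kh, x0:x0 + kw] = 1.0
    # Randomly enhance one sub-region of the retained spectrum.
    ry = int(rng.integers(y0, y0 + kh // 2))
    rx = int(rng.integers(x0, x0 + kw // 2))
    mask[ry:ry + kh // 4, rx:rx + kw // 4] *= boost
    # Back to the spatial domain; keep pixels in the valid range.
    out = np.fft.ifft2(np.fft.ifftshift(spec * mask)).real
    return np.clip(out, 0.0, 1.0)
```

A transformed image keeps its coarse structure (low frequencies) while high-frequency detail is suppressed, which is the kind of targeted, region-wise spectral modification the contributions describe.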
2. Related Work
3. Methodology
3.1. Preliminary
3.2. Frequency Domain Enhancement
3.3. Attack Algorithms
Algorithm 1. FDE-FGSM

Input: A classifier f(·) with parameters θ; a loss function J; a clean image x with true label y; the maximum perturbation magnitude ε; the number of spectral transformations N; the number of random enhancements Q; the standard deviation of the noise σ; and the number of iterations T. Generated pixel values are kept within the range [0, 1] throughout the adversarial sample generation process.

Output: The adversarial example x^adv.

1: x_0^adv = x
2: for t = 0 → T − 1 do
3:   for i = 1 → N do
4:     Get the transformation output using Equation (9)
5:     for k = 1 → Q do
6:       Get the transformation output F using Equation (10)
7:       Calculate the gradient
8:     end for
9:   end for
10:  Average the accumulated gradients
11:  Update x_{t+1}^adv by stepping in the sign direction of the averaged gradient
12:  Clip x_{t+1}^adv to the ε-ball around x and to [0, 1]
13: end for
14: return x^adv
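The loop structure of Algorithm 1 can be sketched in code. This is a hedged approximation, not the authors' implementation: `grad_fn` stands in for backpropagation of the loss J through the classifier, `transform` stands in for the spectral transformations of Equations (9) and (10), and the per-iteration step size `alpha = eps / T` is an assumption:

```python
import numpy as np

def fde_fgsm(x, grad_fn, eps=16 / 255, T=10, N=5, Q=3,
             transform=lambda z, rng: z + rng.normal(0, 0.05, z.shape),
             rng=None):
    """Sketch of the FDE-FGSM loop (Algorithm 1, assumptions noted
    above). grad_fn(x) returns dJ/dx; transform is a placeholder for
    the paper's spectral transformations."""
    rng = np.random.default_rng() if rng is None else rng
    alpha = eps / T                       # assumed step size
    x_adv = x.copy()
    for _ in range(T):
        g = np.zeros_like(x)
        for _ in range(N):                # N spectral transformations
            for _ in range(Q):            # Q random enhancements
                g += grad_fn(transform(x_adv, rng))
        g /= (N * Q)                      # average the gradients
        x_adv = x_adv + alpha * np.sign(g)
        x_adv = np.clip(x_adv, x - eps, x + eps)  # stay in ε-ball
        x_adv = np.clip(x_adv, 0.0, 1.0)          # valid pixel range
    return x_adv
```

Averaging gradients over N × Q transformed copies before each sign step is what distinguishes this loop from plain iterative FGSM.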
4. Experiments
4.1. Experiment Setup
4.2. Attack Models
4.3. Ablation Study
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
| Model | Attack | Inc_v3 | Inc_v4 | Inc_res_v2 | Res_50 | Res_101 | Res_152 |  |  |  |
|---|---|---|---|---|---|---|---|---|---|---|
| Inc_v3 | MI-FGSM | 100.00 | 51.50 | 46.50 | 48.30 | 42.90 | 41.40 | 22.80 | 21.30 | 11.00 |
| Inc_v3 | DI-FGSM | 100.00 | 56.70 | 47.60 | 46.70 | 42.30 | 40.90 | 18.50 | 19.80 | 9.00 |
| Inc_v3 | -FGSM | 99.70 | 63.70 | 58.80 | 57.50 | 52.60 | 48.60 | 31.20 | 33.00 | 17.10 |
| Inc_v3 | FDE-FGSM (ours) | 99.70 | 75.60 | 73.10 | 69.70 | 62.80 | 61.50 | 42.60 | 43.30 | 24.00 |
| Inc_v4 | MI-FGSM | 60.90 | 99.90 | 45.50 | 46.30 | 42.70 | 43.10 | 19.80 | 18.40 | 10.70 |
| Inc_v4 | DI-FGSM | 63.20 | 99.80 | 46.20 | 41.90 | 38.40 | 38.30 | 15.50 | 16.50 | 8.70 |
| Inc_v4 | -FGSM | 70.70 | 99.70 | 55.50 | 55.40 | 49.90 | 48.60 | 30.90 | 31.80 | 17.60 |
| Inc_v4 | FDE-FGSM (ours) | 78.40 | 99.30 | 64.50 | 63.70 | 57.70 | 57.60 | 38.90 | 38.50 | 24.90 |
| Inc_res_v2 | MI-FGSM | 61.00 | 52.70 | 99.20 | 50.90 | 44.60 | 44.30 | 22.00 | 22.10 | 13.10 |
| Inc_res_v2 | DI-FGSM | 64.40 | 60.60 | 99.60 | 48.10 | 46.30 | 45.00 | 17.80 | 18.10 | 11.80 |
| Inc_res_v2 | -FGSM | 76.40 | 68.00 | 98.30 | 60.50 | 58.30 | 56.20 | 37.60 | 33.90 | 28.40 |
| Inc_res_v2 | FDE-FGSM (ours) | 85.60 | 78.10 | 98.20 | 73.70 | 69.50 | 66.80 | 51.50 | 46.90 | 39.90 |
| Res_152 | MI-FGSM | 55.80 | 51.20 | 46.50 | 84.70 | 85.50 | 99.40 | 26.60 | 26.10 | 15.20 |
| Res_152 | DI-FGSM | 63.10 | 60.30 | 57.70 | 89.80 | 91.90 | 99.80 | 26.30 | 24.10 | 15.20 |
| Res_152 | -FGSM | 66.50 | 62.30 | 56.80 | 92.80 | 93.10 | 99.80 | 37.90 | 35.10 | 25.30 |
| Res_152 | FDE-FGSM (ours) | 73.20 | 67.40 | 65.80 | 95.00 | 95.40 | 99.10 | 43.50 | 41.40 | 29.30 |
| Model | Attack | Inc_v3 | Inc_v4 | Inc_res_v2 | Res_50 | Res_101 | Res_152 |  |  |  |
|---|---|---|---|---|---|---|---|---|---|---|
| Inc_v3 | SIM | 100.00 | 76.30 | 74.90 | 72.60 | 68.60 | 69.00 | 39.90 | 38.00 | 23.80 |
| Inc_v3 | DI-MI | 100.00 | 78.80 | 73.50 | 71.20 | 67.60 | 68.30 | 40.80 | 38.40 | 21.70 |
| Inc_v3 | VMI | 100.00 | 73.90 | 68.50 | 64.90 | 59.90 | 61.00 | 38.80 | 38.70 | 23.30 |
| Inc_v3 | -MI | 99.60 | 87.90 | 86.10 | 83.70 | 81.40 | 80.90 | 55.10 | 56.50 | 35.20 |
| Inc_v3 | -DI-TI-MI | 99.10 | 92.00 | 91.20 | 87.80 | 86.80 | 87.40 | 81.70 | 80.40 | 69.60 |
| Inc_v3 | FDE-MI (ours) | 99.90 | 93.60 | 93.30 | 90.50 | 89.30 | 89.30 | 69.20 | 70.20 | 45.00 |
| Inc_v3 | FDE-DI-TI-MI (ours) | 99.70 | 95.20 | 93.80 | 91.50 | 90.70 | 90.50 | 89.20 | 87.90 | 78.80 |
| Inc_v4 | SIM | 87.60 | 100.00 | 77.60 | 76.40 | 73.70 | 73.30 | 47.60 | 42.60 | 28.80 |
| Inc_v4 | DI-MI | 83.10 | 99.90 | 75.30 | 68.90 | 64.80 | 65.50 | 35.90 | 33.70 | 19.70 |
| Inc_v4 | VMI | 77.40 | 99.90 | 69.00 | 63.90 | 61.70 | 62.20 | 39.00 | 38.60 | 24.20 |
| Inc_v4 | -MI | 91.20 | 99.40 | 86.30 | 83.50 | 82.60 | 81.80 | 57.40 | 56.40 | 36.50 |
| Inc_v4 | -DI-TI-MI | 92.70 | 98.10 | 89.20 | 85.80 | 85.20 | 86.40 | 79.20 | 78.30 | 69.80 |
| Inc_v4 | FDE-MI (ours) | 94.20 | 99.30 | 90.20 | 88.10 | 86.60 | 85.80 | 68.20 | 65.20 | 46.00 |
| Inc_v4 | FDE-DI-TI-MI (ours) | 95.60 | 99.10 | 92.60 | 91.30 | 88.60 | 88.50 | 85.90 | 84.40 | 77.10 |
| Inc_res_v2 | SIM | 86.20 | 83.70 | 99.90 | 79.30 | 77.60 | 76.30 | 55.70 | 48.70 | 39.70 |
| Inc_res_v2 | DI-MI | 81.90 | 79.70 | 99.50 | 73.20 | 72.10 | 69.80 | 43.10 | 39.30 | 30.60 |
| Inc_res_v2 | VMI | 78.70 | 74.50 | 98.80 | 66.80 | 65.60 | 63.00 | 45.80 | 41.70 | 34.30 |
| Inc_res_v2 | -MI | 90.40 | 88.90 | 98.00 | 86.30 | 84.30 | 84.10 | 68.90 | 63.40 | 55.70 |
| Inc_res_v2 | -DI-TI-MI | 90.40 | 89.10 | 97.30 | 85.70 | 84.50 | 84.40 | 80.00 | 76.50 | 76.30 |
| Inc_res_v2 | FDE-MI (ours) | 94.10 | 92.20 | 98.80 | 89.80 | 89.80 | 88.80 | 78.10 | 73.80 | 66.40 |
| Inc_res_v2 | FDE-DI-TI-MI (ours) | 94.40 | 93.40 | 97.90 | 92.00 | 91.40 | 91.20 | 91.10 | 88.70 | 86.40 |
| Res_152 | SIM | 76.40 | 73.30 | 71.70 | 95.10 | 95.50 | 99.80 | 47.00 | 43.50 | 29.90 |
| Res_152 | DI-MI | 85.10 | 83.90 | 80.10 | 95.30 | 96.00 | 99.90 | 51.70 | 48.20 | 34.60 |
| Res_152 | VMI | 72.90 | 67.10 | 65.80 | 92.30 | 92.60 | 99.50 | 46.10 | 41.90 | 30.60 |
| Res_152 | -MI | 87.80 | 86.90 | 85.50 | 97.50 | 97.40 | 99.70 | 62.90 | 59.70 | 46.40 |
| Res_152 | -DI-TI-MI | 93.60 | 93.20 | 92.20 | 98.10 | 97.90 | 99.80 | 85.70 | 84.30 | 79.70 |
| Res_152 | FDE-MI (ours) | 91.40 | 89.30 | 90.40 | 97.70 | 97.90 | 99.50 | 71.30 | 68.80 | 55.00 |
| Res_152 | FDE-DI-TI-MI (ours) | 94.70 | 93.30 | 93.80 | 98.20 | 98.20 | 99.30 | 89.80 | 87.40 | 83.80 |
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Yan, S.; Deng, Z.; Dong, J.; Li, X. A High-Transferability Adversarial Sample Generation Method Incorporating Frequency Domain Transformations. Electronics 2024, 13, 4480. https://doi.org/10.3390/electronics13224480