Improve Adversarial Robustness of AI Models in Remote Sensing via Data-Augmentation and Explainable-AI Methods
Abstract
1. Introduction
- We proposed an adversarial robustness technique that combines robust, interpretable features with data augmentation to defend AI models against adversarial attacks in remote sensing applications.
- We validated our approach using EuroSAT and AID datasets, demonstrating its effectiveness across diverse and complex remote sensing scenarios.
- We applied SaliencyMix [23] augmentation to improve both adversarial robustness and clean-data accuracy; it outperformed traditional data-augmentation techniques.
- We evaluated the transferability of the learned robustness to FGSM and BIM attacks and observed consistency similar to that achieved under the PGD attack.
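SaliencyMix [23] cuts the most salient patch from a source training image, pastes it onto a target image, and mixes the two labels in proportion to the patch area. A minimal NumPy sketch is below; the gradient-magnitude saliency proxy and the `patch_frac` parameter are illustrative assumptions standing in for the learned saliency detector used in the original method:

```python
import numpy as np

def saliencymix(src, tgt, y_src, y_tgt, patch_frac=0.3):
    """SaliencyMix-style augmentation (simplified sketch):
    cut the most salient square patch from `src`, paste it onto `tgt`
    at the same location, and mix labels by patch area."""
    h, w = src.shape[:2]
    ph, pw = int(h * patch_frac), int(w * patch_frac)
    # Crude saliency proxy: gradient magnitude of the grayscale image
    # (the paper uses a learned saliency detector instead).
    gray = src.mean(axis=-1) if src.ndim == 3 else src
    gy, gx = np.gradient(gray)
    sal = np.abs(gx) + np.abs(gy)
    # Center the patch on the most salient pixel, clipped to stay in-bounds.
    cy, cx = np.unravel_index(np.argmax(sal), sal.shape)
    y0 = np.clip(cy - ph // 2, 0, h - ph)
    x0 = np.clip(cx - pw // 2, 0, w - pw)
    mixed = tgt.copy()
    mixed[y0:y0 + ph, x0:x0 + pw] = src[y0:y0 + ph, x0:x0 + pw]
    lam = (ph * pw) / (h * w)            # fraction of pixels from the source
    y_mix = lam * y_src + (1 - lam) * y_tgt
    return mixed, y_mix
```

The mixed label keeps the model's targets consistent with the pixel content, which is what lets the augmentation act as a regularizer rather than label noise.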
2. Methods
2.1. Threat Model: Adversarial Example Attack
2.2. Explainable AI Methods
2.3. Interpretation Discrepancy
3. Comparison Method for Adversarial Robustness
3.1. Adversarial Training
3.2. Robustness Using Interpretability
3.2.1. Int and Int − Adv
3.2.2. Int2 and Int2 − Adv
3.3. Traditional Data Augmentation
4. Proposed Adversarial Robustness Method
5. Experimental Setup
5.1. Datasets
5.2. Convolutional Neural Network (CNN) Architecture
5.3. Hyper-Parameters
5.4. Evaluation Metric: Adversarial Test Accuracy (ATA)
6. Results
6.1. Base Method
6.2. Traditional Data-Augmentation
6.3. SaliencyMix Based Data-Augmentation
6.4. Robustness Transferability
7. Discussion
8. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
Abbreviations
Full-Form | Abbreviation |
---|---|
Artificial Intelligence | AI |
Class Activation Map | CAM |
Fast Gradient Sign Method | FGSM |
Basic Iterative Method | BIM |
Projected Gradient Descent | PGD |
Aerial Image Dataset | AID |
Adversarial Test Accuracy | ATA |
References
- Navalgund, R.R.; Jayaraman, V.; Roy, P. Remote sensing applications: An overview. Curr. Sci. 2007, 93, 1747–1766. [Google Scholar]
- Van Westen, C. Remote sensing for natural disaster management. Int. Arch. Photogramm. Remote Sens. 2000, 33, 1609–1617. [Google Scholar]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
- Özyurt, F.; Avcı, E.; Sert, E. UC-Merced Image Classification with CNN Feature Reduction Using Wavelet Entropy Optimized with Genetic Algorithm. Trait. Signal 2020, 37, 347–353. [Google Scholar] [CrossRef]
- Helber, P.; Bischke, B.; Dengel, A.; Borth, D. EuroSAT: A Novel Dataset and Deep Learning Benchmark for Land Use and Land Cover Classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2019, 12, 2217–2226. [Google Scholar] [CrossRef]
- Chan-Hon-Tong, A.; Lenczner, G.; Plyer, A. Demotivate adversarial defense in remote sensing. In Proceedings of the 2021 IEEE International Geoscience and Remote Sensing Symposium IGARSS, Brussels, Belgium, 11–16 July 2021; pp. 3448–3451. [Google Scholar]
- Chen, L.; Zhu, G.; Li, Q.; Li, H. Adversarial example in remote sensing image recognition. arXiv 2019, arXiv:1910.13222. [Google Scholar]
- Xu, Y.; Du, B.; Zhang, L. Assessing the threat of adversarial examples on deep neural networks for remote sensing scene classification: Attacks and defenses. IEEE Trans. Geosci. Remote Sens. 2020, 59, 1604–1617. [Google Scholar] [CrossRef]
- Goodfellow, I.; Shlens, J.; Szegedy, C. Explaining and Harnessing Adversarial Examples. In Proceedings of the International Conference on Learning Representations, San Diego, CA, USA, 7–9 May 2015. [Google Scholar]
- Kurakin, A.; Goodfellow, I.; Bengio, S. Adversarial examples in the physical world. arXiv 2017, arXiv:1607.02533. [Google Scholar]
- Carlini, N.; Wagner, D. Towards evaluating the robustness of neural networks. In Proceedings of the 2017 IEEE Symposium on Security and Privacy, San Jose, CA, USA, 22–26 May 2017; pp. 39–57. [Google Scholar]
- Madry, A.; Makelov, A.; Schmidt, L.; Tsipras, D.; Vladu, A. Towards Deep Learning Models Resistant to Adversarial Attacks. arXiv 2017, arXiv:1706.06083. [Google Scholar]
- Cheng, G.; Sun, X.; Li, K.; Guo, L.; Han, J. Perturbation-seeking generative adversarial networks: A defense framework for remote sensing image scene classification. IEEE Trans. Geosci. Remote Sens. 2021, 60, 1–11. [Google Scholar] [CrossRef]
- Szegedy, C.; Zaremba, W.; Sutskever, I.; Bruna, J.; Erhan, D.; Goodfellow, I.; Fergus, R. Intriguing properties of neural networks. arXiv 2013, arXiv:1312.6199. [Google Scholar]
- Zhang, Y.; Zhang, Y.; Qi, J.; Bin, K.; Wen, H.; Tong, X.; Zhong, P. Adversarial patch attack on multi-scale object detection for uav remote sensing images. Remote Sens. 2022, 14, 5298. [Google Scholar] [CrossRef]
- Selvaraju, R.R.; Cogswell, M.; Das, A.; Vedantam, R.; Parikh, D.; Batra, D. Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization. Int. J. Comput. Vis. 2019, 128, 336–359. [Google Scholar] [CrossRef]
- Wang, H.; Wang, Z.; Du, M.; Yang, F.; Zhang, Z.; Ding, S.; Mardziel, P.; Hu, X. Score-CAM: Score-Weighted Visual Explanations for Convolutional Neural Networks. arXiv 2020, arXiv:1910.01279. [Google Scholar]
- Dombrowski, A.K.; Alber, M.; Anders, C.; Ackermann, M.; Müller, K.R.; Kessel, P. Explanations can be manipulated and geometry is to blame. arXiv 2019, arXiv:1906.07983. [Google Scholar]
- Chen, J.; Wu, X.; Rastogi, V.; Liang, Y.; Jha, S. Robust attribution regularization. arXiv 2019, arXiv:1905.09957. [Google Scholar]
- Boopathy, A.; Liu, S.; Zhang, G.; Liu, C.; Chen, P.Y.; Chang, S.; Daniel, L. Proper network interpretability helps adversarial robustness in classification. arXiv 2020, arXiv:2006.14748. [Google Scholar]
- Xia, G.S.; Hu, J.; Hu, F.; Shi, B.; Bai, X.; Zhong, Y.; Zhang, L.; Lu, X. AID: A benchmark data set for performance evaluation of aerial scene classification. IEEE Trans. Geosci. Remote Sens. 2017, 55, 3965–3981. [Google Scholar] [CrossRef]
- Zhou, B.; Khosla, A.; Lapedriza, A.; Oliva, A.; Torralba, A. Learning Deep Features for Discriminative Localization. arXiv 2016, arXiv:1512.04150. [Google Scholar]
- Uddin, A.; Monira, M.; Shin, W.; Chung, T.; Bae, S.H. Saliencymix: A saliency guided data augmentation strategy for better regularization. arXiv 2020, arXiv:2006.01791. [Google Scholar]
- Shorten, C.; Khoshgoftaar, T.M. A survey on image data augmentation for deep learning. J. Big Data 2019, 6, 60. [Google Scholar] [CrossRef]
- Nair, V.; Hinton, G.E. Rectified linear units improve restricted boltzmann machines. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), Haifa, Israel, 21–24 June 2010; pp. 807–814. [Google Scholar]
- Kingma, D.P.; Ba, J. Adam: A method for stochastic optimization. arXiv 2014, arXiv:1412.6980. [Google Scholar]
- Brown, T.B.; Mané, D.; Roy, A.; Abadi, M.; Gilmer, J. Adversarial Patch. arXiv 2018, arXiv:1712.09665. [Google Scholar]
Augmentation | Training Method | | | | | | | |
---|---|---|---|---|---|---|---|---|
Base | Normal | 88% | 3% | 0% | 0% | 0% | 0% | 0% |
Adv | 80.5% | 62.5% | 30.5% | 9.9% | 3.5% | 2% | 1.5% | |
Int | 52.5% | 47% | 41% | 28% | 19% | 18.5% | 14% | |
Int-Adv | 45.5% | 41% | 36% | 29% | 24.5% | 22.5% | 21.5% | |
Int2 | 51.5% | 44.5% | 37.5% | 30.5% | 23.5% | 19.9% | 17.5% | |
Int2-Adv | 48.4% | 41.5% | 35% | 30% | 26% | 24.5% | 21.9% | |
Trad Aug | Normal | 91% | 3% | 0.2% | 0.05% | 0% | 0% | 0% |
Adv | 75% | 59.8% | 36% | 15.1% | 6.7% | 4.8% | 3.7% |
Int | 57% | 51.6% | 43.6% | 34.8% | 27% | 24% | 22% | |
Int-Adv | 52% | 47.7% | 42.4% | 36.9% | 31.2% | 28.7% | 26.3% |
Int2 | 60% | 49.7% | 43% | 36% | 29% | 26% | 23% |
Int2-Adv | 53% | 48% | 42.9% | 37% | 32% | 29% | 26% | |
SaliencyMix | Normal | 91% | 0.67% | 0.05% | 0.04% | 0.02% | 0.02% | 0.02% |
Adv | 76% | 59% | 35% | 14% | 7% | 6% | 4% |
Int | 80% | 55% | 26% | 9.6% | 4% | 3% | 2% | |
Int-Adv | 53% | 47.9% | 41.2% | 34.1% | 28.7% | 26.5% | 24.5% | |
Int2 | 80% | 56% | 42.2% | 34.3% | 27.6% | 24.5% | 22.1% | |
Int2-Adv | 52% | 47% | 41% | 35% | 29% | 26% | 24% |
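The tables report adversarial test accuracy (ATA): classification accuracy measured on adversarially perturbed test images at increasing attack strength. A minimal sketch of FGSM-based ATA follows, using a toy logistic model whose input gradient has a closed form; the model and its gradient are illustrative assumptions, not the paper's CNN:

```python
import numpy as np

def fgsm(x, grad, eps):
    """FGSM (Goodfellow et al.): one signed-gradient step, clipped to [0, 1]."""
    return np.clip(x + eps * np.sign(grad), 0.0, 1.0)

def adversarial_test_accuracy(w, b, X, y, eps):
    """ATA: accuracy on FGSM-perturbed inputs, for a toy logistic model
    p = sigmoid(Xw + b). The input gradient of the cross-entropy loss is
    (p - y) * w, so no autograd framework is needed here."""
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    grad = (p - y)[:, None] * w[None, :]      # dL/dx for each example
    X_adv = fgsm(X, grad, eps)
    p_adv = 1.0 / (1.0 + np.exp(-(X_adv @ w + b)))
    return float(np.mean((p_adv > 0.5) == (y == 1)))
```

At eps = 0 this reduces to ordinary clean test accuracy (the "Normal" columns above), and it decreases as eps grows, which is the trend visible across each table row.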
Augmentation | Training Method | | | | | | | |
---|---|---|---|---|---|---|---|---|
Base | Normal | 71.5% | 3.8% | 0.7% | 0.2% | 0% | 0% | 0% |
Adv | 61% | 28.4% | 9.9% | 3.2% | 1.7% | 1.4% | 1.9% | |
Int | 46.2% | 34.3% | 21.9% | 13.8% | 7.9% | 5.9% | 4.5% |
Int-Adv | 43.1% | 32.7% | 23.6% | 16.2% | 8.9% | 6.6% | 5.8% | |
Int2 | 44.5% | 34.2% | 22.9% | 15.4% | 7.8% | 6.4% | 4.7% | |
Int2-Adv | 40.7% | 31.4% | 24.2% | 17.2% | 10.6% | 8.3% | 6.7% | |
Trad Aug | Normal | 73.8% | 4.1% | 1.3% | 0.2% | 0% | 0% | 0% |
Adv | 59.9% | 28.8% | 10.1% | 3% | 1.9% | 1.5% | 2% |
Int | 46.9% | 33.6% | 22% | 13.9% | 6.9% | 6.2% | 4.5% | |
Int-Adv | 43.7% | 1.8% | 22.9% | 14.9% | 9.2% | 6.9% | 6% | |
Int2 | 45.7% | 34.1% | 22.6% | 13.4% | 7.9% | 6.2% | 5% | |
Int2-Adv | 42.7% | 32.7% | 24.9% | 17% | 10.7% | 9.2% | 6.9% | |
SaliencyMix | Normal | 75.3% | 7.6% | 2.3% | 0.7% | 0.1% | 0.1% | 0.1% |
Adv | 60% | 29.1% | 18.3% | 2.9% | 2.4% | 1.1% | 2.8% | |
Int | 47.6% | 35.1% | 22.4% | 14.2% | 6.2% | 4.7% | 4.8% |
Int-Adv | 44% | 31.9% | 23.9% | 15.6% | 9.3% | 7.7% | 6.4% | |
Int2 | 46.8% | 34.7% | 24.1% | 14.8% | 8.3% | 6.8% | 5.7% | |
Int2-Adv | 42.9% | 34.2% | 26.5% | 17.6% | 11% | 9.8% | 7% |
Attack | Training Method | | | | | | | |
---|---|---|---|---|---|---|---|---|
FGSM | Normal | 91% | 9.8% | 3.3% | 2.7% | 2.8% | 3.4% | 3.8% |
Adv | 76% | 60.2% | 43.1% | 28.4% | 18.4% | 15.1% | 12.3% | |
Int | 80% | 56.9% | 34.8% | 21.8% | 13.3% | 11.1% | 9.5% | |
Int-Adv | 53% | 48.1% | 43.1% | 38.7% | 34.3% | 32.5% | 31% | |
Int2 | 80% | 57.2% | 34.5% | 20.9% | 12% | 9.7% | 7.9% | |
Int2-Adv | 52% | 58.8% | 44.8% | 33% | 24.6% | 21.2% | 18.3% | |
BIM | Normal | 91% | 0.7% | 0.02% | 0% | 0% | 0% | 0% |
Adv | 76% | 59.1% | 34.5% | 14.1% | 7.2% | 5.6% | 4.5% | |
Int | 80% | 54.7% | 26.3% | 9.6% | 3.9% | 2.9% | 2.1% | |
Int-Adv | 53% | 47.9% | 41.2% | 34.1% | 28.7% | 26.5% | 24.5% | |
Int2 | 80% | 55.6% | 25.6% | 8.9% | 3.8% | 2.8% | 2.1% | |
Int2-Adv | 52% | 58% | 40% | 22% | 9.2% | 6.9% | 5.5% |
Attack | Training Method | | | | | | | |
---|---|---|---|---|---|---|---|---|
FGSM | Normal | 75.3% | 5.6% | 1.7% | 1.4% | 1.2% | 0.9% | 0.9% |
Adv | 61% | 31.3% | 14.3% | 6.5% | 3.6% | 2.8% | 2.4% | |
Int | 47.6% | 34.9% | 23.9% | 16.5% | 12.6% | 10.8% | 8.9% | |
Int-Adv | 44% | 33.1% | 25.2% | 19.3% | 13.9% | 11.9% | 10.5% | |
Int2 | 46.8% | 34.3% | 24.5% | 17.3% | 11.9% | 10.7% | 8.7% | |
Int2-Adv | 43% | 31.7% | 25% | 19.2% | 14.5% | 12.4% | 11.1% | |
BIM | Normal | 75.3% | 3.8% | 0.7% | 1.4% | 0.2% | 0% | 0% |
Adv | 61% | 28.4% | 9.9% | 3.2% | 1.7% | 1.4% | 1% | |
Int | 47.6% | 34.3% | 21.9% | 13.8% | 7.9% | 5.9% | 4.5% | |
Int-Adv | 44% | 32.7% | 23.6% | 16.2% | 9.8% | 7.6% | 5.8% | |
Int2 | 46.8% | 34.2% | 22.3% | 15.4% | 8.7% | 6.4% | 4.7% | |
Int2-Adv | 43% | 31.4% | 24.2% | 17.2% | 10.6% | 8.3% | 6.7% |
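BIM, used in the transferability experiments above, iterates small FGSM steps and projects the accumulated perturbation back into the L-infinity ball of radius eps; PGD differs mainly by starting from a random point inside that ball, which is consistent with the similar ATA trends observed across the three attacks. A sketch, assuming a caller-supplied `grad_fn` that returns the loss gradient with respect to the input:

```python
import numpy as np

def bim(x, grad_fn, eps, alpha, steps):
    """Basic Iterative Method (Kurakin et al.): repeated signed-gradient
    steps of size alpha, projected after each step into the eps-ball
    around the original input and into the valid pixel range."""
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(grad_fn(x_adv))
        x_adv = np.clip(x_adv, x - eps, x + eps)   # project into the eps-ball
        x_adv = np.clip(x_adv, 0.0, 1.0)           # keep a valid image
    return x_adv
```

With `alpha * steps > eps`, the projection step is what keeps the perturbation bounded, so BIM at a given eps is directly comparable to single-step FGSM at the same eps.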
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content. |
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Tasneem, S.; Islam, K.A. Improve Adversarial Robustness of AI Models in Remote Sensing via Data-Augmentation and Explainable-AI Methods. Remote Sens. 2024, 16, 3210. https://doi.org/10.3390/rs16173210