Finding Explanations in AI Fusion of Electro-Optical/Passive Radio-Frequency Data
Abstract
1. Introduction
2. Literature Review
2.1. Canonical Correlation Analysis
2.2. EO/RF Sensor Fusion
2.3. Explainable AI
3. Design and Methodology
3.1. The ESCAPE Dataset
3.2. Data Preprocessing
3.3. LSTM-CCA
3.4. Explainable AI
4. Experimental Results
4.1. Explainable AI
4.2. Inferences from ExplainX AI
5. Discussion
5.1. Fusion Comparison
5.2. Explainable AI
5.3. Comparison of Weights
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
**ESCAPE dataset scenarios.**

| Scenario | Description |
|---|---|
| Scenario 1 | Two vehicles: one behind the treeline, one in sight; “switches” in the garage |
| Scenario 2 (2C) | Three vehicles: one behind the treeline, one that looks different and simply moves out of sight, one in sight; “switches” in the garage |
| Scenario 3 (2D) | Five vehicles: four come out of the garage and shuffle their order while another comes out from the back of the garage |
**Input weights, Scenario 1 (two vehicles).**

| Vehicle | Input | Weight |
|---|---|---|
| Vehicle 1 | EO Input | 0.212249 |
| | EO CCA Input | 0.1844515 |
| | RF Input | 0.1246046 |
| | RF CCA Input | 0.2122194 |
| Vehicle 2 | EO Input | 0.1269909 |
| | EO CCA Input | 0.09609033 |
| | RF Input | 0.08085641 |
| | RF CCA Input | 0.08058309 |
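These per-branch weights can be read as the relative contribution of each input (raw EO, CCA-projected EO, raw RF, CCA-projected RF) to the fused decision for a vehicle. The snippet below is a minimal sketch of weighted late fusion, not the paper's LSTM-CCA pipeline; the per-branch confidence scores are hypothetical, and only the weights are taken from the Vehicle 1 row above.

```python
# Hypothetical per-branch confidence scores for one detected vehicle.
branch_scores = {
    "EO Input": 0.91,
    "EO CCA Input": 0.85,
    "RF Input": 0.62,
    "RF CCA Input": 0.88,
}

# Importance weights for Vehicle 1, Scenario 1 (from the table above).
weights = {
    "EO Input": 0.212249,
    "EO CCA Input": 0.1844515,
    "RF Input": 0.1246046,
    "RF CCA Input": 0.2122194,
}

# Weighted late fusion: normalize the weights to sum to one, then
# combine the per-branch scores into a single fused score.
total = sum(weights.values())
fused = sum((weights[k] / total) * branch_scores[k] for k in branch_scores)
print(f"Fused score: {fused:.4f}")
```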
**Input weights, Scenario 2 (three vehicles).**

| Vehicle | Input | Weight |
|---|---|---|
| Vehicle 1 | EO Input | 0.1313921 |
| | EO CCA Input | 0.1134093 |
| | RF Input | 0.09182034 |
| | RF CCA Input | 0.100984 |
| Vehicle 2 | EO Input | 0.1841983 |
| | EO CCA Input | 0.07001276 |
| | RF Input | 0.06620991 |
| | RF CCA Input | 0.1185514 |
| Vehicle 3 | EO Input | 0.1304092 |
| | EO CCA Input | 0.1116768 |
| | RF Input | 0.0984916 |
| | RF CCA Input | 0.1072683 |
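The “EO CCA Input” and “RF CCA Input” branches are canonical projections of the two modalities' features. As a hedged illustration of how such projections can be produced (using scikit-learn's linear CCA on synthetic stand-in features; the feature dimensions and component count are placeholders, not the paper's configuration), consider:

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(seed=0)

# Synthetic stand-ins: 500 time steps of 64-dim EO features and 32-dim RF features.
eo_features = rng.normal(size=(500, 64))
rf_features = rng.normal(size=(500, 32))

# Fit CCA to find maximally correlated projections of the two views.
cca = CCA(n_components=8)
eo_cca, rf_cca = cca.fit_transform(eo_features, rf_features)

# eo_cca and rf_cca play the role of the "EO CCA Input" and
# "RF CCA Input" branches: cross-modality-correlated representations.
print(eo_cca.shape, rf_cca.shape)  # (500, 8) (500, 8)
```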
**Input weights, Scenario 3 (five vehicles).**

| Vehicle | Input | Weight |
|---|---|---|
| Vehicle 1 | EO Input | 0.09108965 |
| | EO CCA Input | 0.07937257 |
| | RF Input | 0.05895651 |
| | RF CCA Input | 0.0654112 |
| Vehicle 2 | EO Input | 0.10578873 |
| | EO CCA Input | 0.09031994 |
| | RF Input | 0.06189732 |
| | RF CCA Input | 0.1057738 |
| Vehicle 3 | EO Input | 0.1332544 |
| | EO CCA Input | 0.1116768 |
| | RF Input | 0.0984916 |
| | RF CCA Input | 0.09032434 |
| Vehicle 4 | EO Input | 0.1841983 |
| | EO CCA Input | 0.0825 |
| | RF Input | 0.08589286 |
| | RF CCA Input | 0.09032434 |
| Vehicle 5 | EO Input | 0.1343745 |
| | EO CCA Input | 0.04249655 |
| | RF Input | 0.007944699 |
| | RF CCA Input | 0.134089 |
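One way to read these tables (see Section 5.3, Comparison of Weights) is to aggregate the EO-side versus RF-side weights per vehicle; Vehicle 5 here, for instance, has a near-zero raw-RF weight but a large RF-CCA weight. A minimal sketch of that aggregation, assuming the table values are directly comparable importance scores:

```python
# Per-branch weights for Scenario 3, Vehicle 5 (from the table above).
v5 = {
    "EO Input": 0.1343745,
    "EO CCA Input": 0.04249655,
    "RF Input": 0.007944699,
    "RF CCA Input": 0.134089,
}

# Aggregate the EO-side and RF-side contributions.
eo_side = v5["EO Input"] + v5["EO CCA Input"]   # 0.17687105
rf_side = v5["RF Input"] + v5["RF CCA Input"]   # 0.14203370
print(f"EO share: {eo_side / (eo_side + rf_side):.2%}")  # ~55% EO-driven
```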