On the Soundness of XAI in Prognostics and Health Management (PHM)
Abstract
1. Introduction
1.1. Explainable Artificial Intelligence
1.2. XAI and Predictive Maintenance
1.3. Aim and Structure of the Paper
2. Materials and Methods
2.1. XAI Methods
2.1.1. Local Interpretable Model-Agnostic Explanations
2.1.2. SHapley Additive exPlanations
2.1.3. Layer-Wise Relevance Propagation
2.1.4. Image-Specific Class Saliency
2.1.5. Gradient-Weighted Class Activation Mapping
2.2. Perturbation and Neighborhood
- Zero: The values in the selected segment are set to zero.
- One: The values in the selected segment are set to one.
- Mean: The values in the selected segment are replaced with the mean of that segment.
- Uniform noise: The values in the selected segment are replaced with random noise drawn from a uniform distribution between the minimum and maximum values of the feature.
- Normal noise: The values in the selected segment are replaced with random noise drawn from a normal distribution with the mean and standard deviation of the feature.
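Assuming the perturbation acts on a contiguous segment of a univariate signal, the five strategies above can be sketched as follows (the function name and signature are illustrative, not the authors' implementation):

```python
import numpy as np

def perturb_segment(x, start, end, strategy, rng=None):
    """Return a copy of the 1-D signal `x` with x[start:end] perturbed."""
    rng = rng or np.random.default_rng()
    x = x.copy()
    if strategy == "zero":
        # Zero: the segment is set to zero.
        x[start:end] = 0.0
    elif strategy == "one":
        # One: the segment is set to one.
        x[start:end] = 1.0
    elif strategy == "mean":
        # Mean: the segment is replaced with its own mean.
        x[start:end] = x[start:end].mean()
    elif strategy == "uniform":
        # Uniform noise bounded by the feature's min and max.
        x[start:end] = rng.uniform(x.min(), x.max(), size=end - start)
    elif strategy == "normal":
        # Normal noise with the feature's mean and standard deviation.
        x[start:end] = rng.normal(x.mean(), x.std(), size=end - start)
    else:
        raise ValueError(f"unknown strategy: {strategy}")
    return x
```

For multivariate signals, the same idea applies per feature channel; only the slicing changes.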
2.3. Validation of XAI Method Explanations
- Identity: Identical objects should receive identical explanations. This proxy estimates the level of intrinsic non-determinism in the explanation method.
- Separability: Non-identical objects cannot have identical explanations. If a feature is not actually needed for the prediction, then two samples that differ only in that feature will have the same prediction, and the explanation method could return the same explanation even though the samples are different. For simplicity, this proxy assumes that every feature has a minimum level of importance, positive or negative, in the predictions.
- Stability: Similar objects must have similar explanations. This builds on the idea that an explanation method should return only similar explanations for slightly different objects; the Spearman rank correlation between explanations is used to quantify it.
- Selectivity: The elimination of relevant variables must negatively affect the prediction [9,35]. To compute selectivity, the features are ranked from most to least relevant; they are then removed one by one (by setting each to zero, for example), the residual errors are recorded, and the area under the resulting error curve (AUC) is computed.
- Coherence: The difference between the prediction error on the original signal and the prediction error on a new signal from which the non-important features have been removed.
- Completeness: The percentage of the explanation error relative to its respective prediction error.
- Congruence: The standard deviation of the coherence. This metric captures the variability of the coherence.
- Acumen: A new proxy proposed in this paper, based on the idea that a feature deemed important by the XAI method should become one of the least important after it is perturbed. It aims to detect whether the XAI method depends on the position of the feature (in our case, the time dimension) and is computed by comparing the ranking position of each important feature before and after perturbing it.
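To make the proxies concrete, here is a minimal sketch of how identity, stability, and selectivity could be computed for a generic `explain`/`predict` pair. The function names and the normalised error-curve summary are illustrative assumptions, not the paper's exact implementation:

```python
import numpy as np
from scipy.stats import spearmanr

def identity_proxy(explain, x, n_runs=5):
    # Identity: re-explaining the same input should give the same result;
    # the score is the fraction of repeated runs matching the first one.
    ref = explain(x)
    return float(np.mean([np.allclose(explain(x), ref) for _ in range(n_runs)]))

def stability_proxy(explain, x, x_similar):
    # Stability: similar inputs should receive similar explanations,
    # measured with the Spearman rank correlation.
    rho, _ = spearmanr(explain(x), explain(x_similar))
    return float(rho)

def selectivity_score(predict, explain, x, y_true):
    # Selectivity: remove features from most to least relevant (here by
    # zeroing them) and summarise the growing residual-error curve with
    # a normalised area under the curve.
    order = np.argsort(explain(x))[::-1]  # most relevant first
    xp, errors = x.astype(float).copy(), []
    for i in order:
        xp[i] = 0.0
        errors.append(abs(predict(xp) - y_true))
    errors = np.asarray(errors)
    return float(np.mean(errors / (errors.max() + 1e-12)))
```

On a toy linear model with `predict(x) = w @ x` and `explain(x) = w * x`, identity is 1.0 (the explainer is deterministic) and the residual-error curve rises as the most relevant features are zeroed first.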
3. Experiments and Results
3.1. Problem Description
3.2. Datasets
3.2.1. PRONOSTIA Dataset
3.2.2. Fast-Charging Batteries
3.2.3. N-CMAPSS Dataset
3.3. The Black-Box Models
3.4. Experiments
Algorithm 1. Algorithm to compute each proxy on the test set.
3.5. Results
4. Discussion
5. Future Work
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Abbreviations
Abbreviation | Meaning
---|---
AUC | Area under the curve
CMAPSS | Commercial Modular Aero-Propulsion System Simulation
DCNN | Deep Convolutional Neural Networks
DL | Deep learning
EM | Explicable Methods
Grad-CAM | Gradient-weighted Class Activation Mapping
LRP | Layer-wise Relevance Propagation
LIME | Local Interpretable Model-agnostic Explanations
LSTM | Long Short-Term Memory
ML | Machine learning
PHM | Prognostics and health management
RMSE | Root Mean Square Error
RUL | Remaining useful life
SHAP | SHapley Additive exPlanations
XAI | Explainable Artificial Intelligence
References
- Pomerleau, D.A. Neural networks for intelligent vehicles. In Proceedings of the IEEE Conference on Intelligent Vehicles, Tokyo, Japan, 14–16 July 1993; pp. 19–24.
- Goodman, B.; Flaxman, S. European Union regulations on algorithmic decision-making and a “right to explanation”. AI Mag. 2017, 38, 50–57.
- Gilpin, L.H.; Bau, D.; Yuan, B.Z.; Bajwa, A.; Specter, M.; Kagal, L. Explaining explanations: An overview of interpretability of machine learning. In Proceedings of the 2018 IEEE 5th International Conference on Data Science and Advanced Analytics (DSAA), Turin, Italy, 1–3 October 2018; pp. 80–89.
- Arrieta, A.B.; Díaz-Rodríguez, N.; Del Ser, J.; Bennetot, A.; Tabik, S.; Barbado, A.; García, S.; Gil-López, S.; Molina, D.; Benjamins, R.; et al. Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Inf. Fusion 2020, 58, 82–115.
- Ribeiro, M.T.; Singh, S.; Guestrin, C. “Why should I trust you?” Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, 13–17 August 2016; pp. 1135–1144.
- Lundberg, S.M.; Lee, S.-I. A unified approach to interpreting model predictions. In Advances in Neural Information Processing Systems; MIT Press: Cambridge, MA, USA, 2017; Volume 30, pp. 4768–4777.
- Selvaraju, R.R.; Cogswell, M.; Das, A.; Vedantam, R.; Parikh, D.; Batra, D. Grad-CAM: Visual explanations from deep networks via gradient-based localization. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 618–626.
- Simonyan, K.; Vedaldi, A.; Zisserman, A. Deep inside convolutional networks: Visualising image classification models and saliency maps. In Proceedings of the 2nd International Conference on Learning Representations (ICLR), Banff, AB, Canada, 14–16 April 2014.
- Bach, S.; Binder, A.; Montavon, G.; Klauschen, F.; Müller, K.-R.; Samek, W. On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. PLoS ONE 2015, 10, e0130140.
- Letzgus, S.; Wagner, P.; Lederer, J.; Samek, W.; Müller, K.-R.; Montavon, G. Toward Explainable AI for Regression Models. IEEE Signal Process. Mag. 2022, 39, 40–58.
- Schlegel, U.; Arnout, H.; El-Assady, M.; Oelke, D.; Keim, D.A. Towards a rigorous evaluation of XAI methods on time series. In Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW), Seoul, Republic of Korea, 27–28 October 2019; pp. 4197–4201.
- Siddiqui, S.A.; Mercier, D.; Munir, M.; Dengel, A.; Ahmed, S. TSViz: Demystification of deep learning models for time-series analysis. IEEE Access 2019, 7, 67027–67040.
- Ahmed, I.; Kumara, I.; Reshadat, V.; Kayes, A.S.M.; van den Heuvel, W.J.; Tamburri, D.A. Travel Time Prediction and Explanation with Spatio-Temporal Features: A Comparative Study. Electronics 2021, 11, 106.
- Vijayan, M.; Sridhar, S.S.; Vijayalakshmi, D. A Deep Learning Regression Model for Photonic Crystal Fiber Sensor with XAI Feature Selection and Analysis. IEEE Trans. NanoBiosci. 2022.
- Mamalakis, A.; Barnes, E.A.; Ebert-Uphoff, I. Carefully choose the baseline: Lessons learned from applying XAI attribution methods for regression tasks in geoscience. Artif. Intell. Earth Syst. 2023, 2, e220058.
- Cohen, J.; Huan, X.; Ni, J. Shapley-based Explainable AI for Clustering Applications in Fault Diagnosis and Prognosis. arXiv 2023, arXiv:2303.14581.
- Brusa, E.; Cibrario, L.; Delprete, C.; Di Maggio, L.G. Explainable AI for Machine Fault Diagnosis: Understanding Features’ Contribution in Machine Learning Models for Industrial Condition Monitoring. Appl. Sci. 2023, 13, 2038.
- Kratzert, F.; Herrnegger, M.; Klotz, D.; Hochreiter, S.; Klambauer, G. NeuralHydrology—Interpreting LSTMs in hydrology. In Explainable AI: Interpreting, Explaining and Visualizing Deep Learning; Springer Nature: Berlin/Heidelberg, Germany, 2019; pp. 347–362.
- Zhang, L.; Chang, X.; Liu, J.; Luo, M.; Li, Z.; Yao, L.; Hauptmann, A. TN-ZSTAD: Transferable Network for Zero-Shot Temporal Activity Detection. IEEE Trans. Pattern Anal. Mach. Intell. 2023, 45, 3848–3861.
- Radford, A.; Kim, J.W.; Hallacy, C.; Ramesh, A.; Goh, G.; Agarwal, S.; Sutskever, I. Learning transferable visual models from natural language supervision. In Proceedings of the International Conference on Machine Learning, Virtual, 18–24 July 2021; pp. 8748–8763.
- Carvalho, D.V.; Pereira, E.d.M.; Cardoso, J.S. Machine learning interpretability: A survey on methods and metrics. Electronics 2019, 8, 832.
- Vollert, S.; Atzmueller, M.; Theissler, A. Interpretable Machine Learning: A brief survey from the predictive maintenance perspective. In Proceedings of the 2021 26th IEEE International Conference on Emerging Technologies and Factory Automation (ETFA), Vasteras, Sweden, 7–10 September 2021; pp. 1–8.
- Samek, W.; Wiegand, T.; Müller, K.-R. Explainable artificial intelligence: Understanding, visualizing and interpreting deep learning models. arXiv 2017, arXiv:1708.08296.
- Honegger, M. Shedding Light on Black Box Machine Learning Algorithms: Development of an Axiomatic Framework to Assess the Quality of Methods that Explain Individual Predictions. arXiv 2018, arXiv:1808.05054.
- Doshi-Velez, F.; Kim, B. Towards a rigorous science of interpretable machine learning. arXiv 2017, arXiv:1702.08608.
- Silva, W.; Fernandes, K.; Cardoso, M.J.; Cardoso, J.S. Towards complementary explanations using deep neural networks. In Understanding and Interpreting Machine Learning in Medical Image Computing Applications, Proceedings of MICCAI 2018, Granada, Spain, 16–20 September 2018; pp. 133–140.
- Hong, C.W.; Lee, C.; Lee, K.; Ko, M.-S.; Hur, K. Explainable Artificial Intelligence for the Remaining Useful Life Prognosis of the Turbofan Engines. In Proceedings of the 2020 3rd IEEE International Conference on Knowledge Innovation and Invention (ICKII), Kaohsiung, Taiwan, 21–23 August 2021; pp. 144–147.
- Szelazek, M.; Bobek, S.; Gonzalez-Pardo, A.; Nalepa, G.J. Towards the Modeling of the Hot Rolling Industrial Process. Preliminary Results. In Proceedings of the 21st International Conference on Intelligent Data Engineering and Automated Learning—IDEAL, Guimaraes, Portugal, 4–6 November 2020; Springer: Berlin/Heidelberg, Germany, 2020; Volume 12489, pp. 385–396.
- Serradilla, O.; Zugasti, E.; Cernuda, C.; Aranburu, A.; de Okariz, J.R.; Zurutuza, U. Interpreting Remaining Useful Life estimations combining Explainable Artificial Intelligence and domain knowledge in industrial machinery. In Proceedings of the 2020 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), Glasgow, UK, 15 January 2020; pp. 1–8.
- Ferraro, A.; Galli, A.; Moscato, V.; Sperlì, G. Evaluating eXplainable artificial intelligence tools for hard disk drive predictive maintenance. Artif. Intell. Rev. 2022, 1–36.
- Shapley, L.S. A Value for N-Person Games. In Contributions to the Theory of Games (AM-28); Princeton University Press: Princeton, NJ, USA, 1953; Volume II, pp. 307–318.
- Zhou, B.; Khosla, A.; Oliva, L.A.A.; Torralba, A. Learning Deep Features for Discriminative Localization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 2921–2929.
- Truong, C.; Oudre, L.; Vayatis, N. Selective review of offline change point detection methods. Signal Process. 2020, 167, 107299.
- Rokade, P.; Alluri BKSP, K.R. Building Quantifiable System for XAI Models. Available online: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4038039 (accessed on 9 August 2021).
- Samek, W.; Binder, A.; Montavon, G.; Lapuschkin, S.; Müller, K.R. Evaluating the visualization of what a deep neural network has learned. IEEE Trans. Neural Netw. Learn. Syst. 2016, 28, 2660–2673.
- Saxena, A.; Goebel, K.; Simon, D.; Eklund, N. Damage propagation modeling for aircraft engine run-to-failure simulation. In Proceedings of the 2008 International Conference on Prognostics and Health Management, Denver, CO, USA, 6–9 October 2008; pp. 1–9.
- Nectoux, P.; Gouriveau, R.; Medjaher, K.; Ramasso, E.; Chebel-Morello, B.; Zerhouni, N.; Varnier, C. PRONOSTIA: An experimental platform for bearings accelerated degradation tests. In Proceedings of the IEEE International Conference on Prognostics and Health Management, PHM’12, Montreal, QC, Canada, 5–7 July 2012; pp. 1–8.
- Severson, K.A.; Attia, P.M.; Jin, N.; Perkins, N.; Jiang, B.; Yang, Z.; Braatz, R.D. Data-driven prediction of battery cycle life before capacity degradation. Nat. Energy 2019, 4, 383–391.
- Arias, C.M.; Kulkarni, C.; Goebel, K.; Fink, O. Aircraft engine run-to-failure dataset under real flight conditions for prognostics and diagnostics. Data 2021, 6, 5.
- Solís-Martín, D.; Galán-Páez, J.; Borrego-Díaz, J. A Stacked Deep Convolutional Neural Network to Predict the Remaining Useful Life of a Turbofan Engine. In Proceedings of the Annual Conference of the PHM Society, Virtual, 29 November–2 December 2021; Volume 13.
Symbol | Set | Description | Units
---|---|---|---
alt | W | Altitude | ft
Mach | W | Flight Mach number | -
TRA | W | Throttle-resolver angle | %
T2 | W | Total temperature at fan inlet | °R
Wf | | Fuel flow | pps
Nf | | Physical fan speed | rpm
Nc | | Physical core speed | rpm
T24 | | Total temperature at LPC outlet | °R
T30 | | Total temperature at HPC outlet | °R
T48 | | Total temperature at HPT outlet | °R
T50 | | Total temperature at LPT outlet | °R
P15 | | Total pressure in bypass-duct | psia
P2 | | Total pressure at fan inlet | psia
P21 | | Total pressure at fan outlet | psia
P24 | | Total pressure at LPC outlet | psia
Ps30 | | Static pressure at HPC outlet | psia
P40 | | Total pressure at burner outlet | psia
P50 | | Total pressure at LPT outlet | psia
Fc | A | Flight class | -
 | A | Health state | -
Parameter | PRONOSTIA | N-CMAPSS | Fast Charge
---|---|---|---
 | 256 | 161 | 512
 | 32 | 116 | 32
 | 2 | 4 | 3
 | 4 | 4 | 3
 | 118 | 256 | 168
 | 100 | 100 | 24
 | (1, 10) | (3, 3) | (1, 10)
 | ReLU | tanh | ReLU
 | 4 | 2 | 4
 | ReLU | Leaky ReLU | tanh
 | ReLU | ReLU | ReLU
Net params | | |
RMSE | 0.24 | 10.46 | 84.78
MAE | 0.17 | 7.689 | 51.98
NASA score | 0.015 | 2.13 | -
CV score | - | 6.30 | -
std() | - | 0.37 | -
Method | Perm | I | Sep | Sta | Sel | Coh | Comp | Cong | Acu | Total |
---|---|---|---|---|---|---|---|---|---|---|
SHAP | mean | 0.0 | 1.000 | −0.009 | 0.533 | 0.054 | 0.989 | 0.105 | 0.495 | 0.382 |
SHAP | n.noise | 0.0 | 1.000 | 0.020 | 0.532 | 0.053 | 0.991 | 0.103 | 0.485 | 0.386 |
SHAP | u.noise | 0.0 | 1.000 | 0.078 | 0.521 | 0.032 | 0.994 | 0.084 | 0.487 | 0.387 |
SHAP | zero | 0.0 | 1.000 | 0.011 | 0.529 | 0.012 | 0.997 | 0.045 | 0.519 | 0.371 |
SHAP | one | 1.0 | 1.000 | −0.015 | 0.528 | 0.093 | 0.997 | 0.128 | 0.402 | 0.533 |
GradCAM | - | 1.0 | 0.976 | 0.368 | 0.531 | 0.141 | 0.980 | 0.134 | 0.206 | 0.542
LRP | - | 0.0 | 1.000 | 0.042 | 0.536 | 0.158 | 0.970 | 0.133 | 0.502 | 0.406
Saliency | - | 1.0 | 1.000 | −0.122 | 0.544 | 0.155 | 0.975 | 0.131 | 0.180 | 0.526
Lime | mean | 0.0 | 1.000 | 0.038 | 0.538 | 0.040 | 0.986 | 0.081 | 0.553 | 0.383 |
Lime | n.noise | 0.0 | 1.000 | 0.034 | 0.538 | 0.042 | 0.985 | 0.083 | 0.537 | 0.383 |
Lime | u.noise | 0.0 | 1.000 | −0.043 | 0.530 | 0.032 | 0.985 | 0.071 | 0.525 | 0.368 |
Lime | zero | 1.0 | 1.000 | 0.108 | 0.529 | 0.008 | 1.004 | 0.036 | 0.500 | 0.526 |
Lime | one | 1.0 | 1.000 | 0.016 | 0.540 | 0.052 | 0.992 | 0.101 | 0.435 | 0.529 |
Method | Perm | I | Sep | Sta | Sel | Coh | Comp | Cong | Acu | Total |
---|---|---|---|---|---|---|---|---|---|---|
SHAP | mean | 0.000 | 1.000 | 0.119 | 0.584 | 0.097 | 0.903 | 0.121 | 0.418 | 0.405 |
SHAP | n.noise | 0.000 | 1.000 | 0.120 | 0.588 | 0.100 | 0.900 | 0.122 | 0.383 | 0.402 |
SHAP | u.noise | 0.000 | 1.000 | 0.180 | 0.616 | 0.107 | 0.893 | 0.121 | 0.303 | 0.403 |
SHAP | zero | 1.000 | 1.000 | 0.153 | 0.597 | 0.093 | 0.908 | 0.130 | 0.536 | 0.552 |
SHAP | one | 1.000 | 1.000 | 0.176 | 0.526 | 0.077 | 0.923 | 0.113 | 0.269 | 0.510 |
LRP | - | 0.441 | 1.000 | 0.007 | 0.659 | 0.073 | 0.936 | 0.076 | 0.492 | 0.460
GradCAM | - | 1.000 | 1.000 | 0.259 | 0.664 | 0.063 | 1.050 | 0.080 | 0.317 | 0.554
Saliency | - | 1.000 | 0.990 | 0.163 | 0.517 | 0.170 | 0.831 | 0.157 | 0.452 | 0.535
Lime | mean | 0.023 | 1.000 | 0.456 | 0.595 | 0.087 | 0.913 | 0.116 | 0.501 | 0.461 |
Lime | n.noise | 0.023 | 1.000 | 0.447 | 0.598 | 0.087 | 0.914 | 0.115 | 0.529 | 0.464 |
Lime | u.noise | 0.027 | 0.999 | 0.333 | 0.632 | 0.102 | 0.899 | 0.114 | 0.227 | 0.417 |
Lime | zero | 1.000 | 1.000 | 0.512 | 0.608 | 0.221 | 0.781 | 0.156 | 0.296 | 0.572 |
Lime | one | 1.000 | 1.000 | 0.601 | 0.537 | 0.065 | 0.936 | 0.061 | 0.138 | 0.542 |
Method | Perm | I | Sep | Sta | Sel | Coh | Comp | Cong | Acu | Total |
---|---|---|---|---|---|---|---|---|---|---|
SHAP | mean | 0.000 | 1.000 | 0.033 | 0.582 | 0.120 | 0.973 | 0.152 | 0.505 | 0.421 |
SHAP | n.noise | 0.000 | 1.000 | 0.037 | 0.581 | 0.116 | 0.968 | 0.150 | 0.501 | 0.419 |
SHAP | u.noise | 0.000 | 1.000 | 0.027 | 0.581 | 0.125 | 0.961 | 0.162 | 0.503 | 0.420 |
SHAP | zero | 1.000 | 1.000 | 0.226 | 0.800 | 0.152 | 1.010 | 0.149 | 0.761 | 0.637 |
SHAP | one | 1.000 | 1.000 | 0.200 | 0.692 | 0.173 | 0.969 | 0.169 | 0.349 | 0.569 |
GradCAM | - | 1.000 | 1.000 | 0.653 | 0.702 | 0.196 | 0.948 | 0.170 | 0.435 | 0.638
LRP | - | 1.000 | 1.000 | −0.037 | 0.599 | 0.180 | 0.967 | 0.165 | 0.495 | 0.546
Saliency | - | 1.000 | 0.999 | 0.055 | 0.434 | 0.174 | 0.972 | 0.163 | 0.516 | 0.539
Lime | mean | 0.004 | 1.000 | 0.130 | 0.569 | 0.173 | 0.962 | 0.161 | 0.685 | 0.461 |
Lime | n.noise | 0.008 | 1.000 | 0.131 | 0.572 | 0.173 | 0.960 | 0.162 | 0.677 | 0.460 |
Lime | u.noise | 0.012 | 1.000 | 0.109 | 0.560 | 0.166 | 0.960 | 0.162 | 0.577 | 0.443 |
Lime | zero | 1.000 | 1.000 | 0.554 | 0.835 | 0.160 | 1.017 | 0.146 | 0.753 | 0.683 |
Lime | one | 1.000 | 1.000 | 0.349 | 0.728 | 0.184 | 0.969 | 0.166 | 0.069 | 0.558 |
Dataset | Sta | Sel | Coh | Comp | Cong | Acu | Sep
---|---|---|---|---|---|---|---
PRONOSTIA | I | I | - | - | D | - | -
Fast-charging batteries | I | I | I | - | I | D | I
N-CMAPSS | I | I | I | D | D | D | I
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Solís-Martín, D.; Galán-Páez, J.; Borrego-Díaz, J. On the Soundness of XAI in Prognostics and Health Management (PHM). Information 2023, 14, 256. https://doi.org/10.3390/info14050256