Black Box Adversarial Reprogramming for Time Series Feature Classification in Ball Bearings’ Remaining Useful Life Classification
Abstract
1. Introduction
2. Transfer Learning
2.1. Definitions
- Domain
- Task
- Transfer Learning
2.2. Problem Categorization
2.3. Solution Categorization
2.3.1. Instance-Based
2.3.2. Feature-Based
2.3.3. Parameter-Based
2.3.4. Relational-Based
3. Predictive Maintenance
Transfer Learning in Predictive Maintenance
4. Experimental Design
4.1. Datasets
4.2. Models and Transfer Learning Approaches
4.3. Analysis
5. Baseline Approach
5.1. Preprocessing
- Total Useful Lifetime
- Pro Rata Useful Lifetime
5.2. Feature Engineering
5.3. ML Model
5.4. Transfer Learning
6. Black Box Adversarial Reprogramming
6.1. Preprocessing
6.2. ML Model
6.3. Adversarial Reprogramming Algorithm
6.3.1. Functional Structure
- Translation Input
- Prediction of the Black Box Model
- Translation Output
Listing 1. Three functions comprise the functional structure of the BAR algorithm.
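The three functions of Listing 1 can be sketched as follows. This is a minimal illustration, not the authors' implementation: the additive form of the reprogramming parameters W, the tanh bound, and the function names are assumptions, loosely following the BAR formulation of Tsai et al.

```python
import numpy as np

def translate_input(x, W):
    # Embed a target-domain feature vector into the source model's input
    # space via the trainable reprogramming parameters W (additive form
    # assumed here; tanh keeps the translated input bounded).
    return np.tanh(x + W)

def predict_black_box(model, x_src):
    # Query the black-box source model: only its output probabilities
    # are observed, never its gradients or internals.
    return model(x_src)

def translate_output(p_src, label_map):
    # Many-to-one label mapping: each target class aggregates the
    # probability mass of a group of source classes.
    return np.array([p_src[list(idx)].sum() for idx in label_map])
```

Composing the three yields the reprogrammed classifier: `translate_output(predict_black_box(model, translate_input(x, W)), label_map)`.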
6.3.2. Training Process
- Initialization
- Sample Dataset
- One-Sided Averaged Gradient Estimator
- Loss Function
- Update W
- Select best W
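The training steps above can be sketched as a zeroth-order loop: the one-sided averaged gradient estimator queries the black-box loss at W and at q randomly perturbed copies of W. Hyperparameter names mirror those in Section 6.3.3; the smoothing parameter `mu`, the toy default values, and the quadratic test loss are illustrative, and any penalty term is assumed to be folded into `loss_fn`.

```python
import numpy as np

rng = np.random.default_rng(0)

def one_sided_gradient(loss_fn, W, q=7, mu=0.01):
    # Zeroth-order estimate: average q one-sided finite differences
    # along random unit directions, using loss_fn only as a black box.
    f0 = loss_fn(W)
    grad = np.zeros_like(W)
    for _ in range(q):
        u = rng.standard_normal(W.shape)
        u /= np.linalg.norm(u)
        grad += ((loss_fn(W + mu * u) - f0) / mu) * u
    return grad / q

def train(loss_fn, W0, lr=0.5, epochs=200, q=7, mu=0.01):
    # Gradient-free update of the reprogramming parameters W; the best
    # W seen so far is kept, mirroring the 'Select best W' step above.
    W, best_W, best_loss = W0.copy(), W0.copy(), loss_fn(W0)
    for _ in range(epochs):
        W = W - lr * one_sided_gradient(loss_fn, W, q=q, mu=mu)
        cur = loss_fn(W)
        if cur < best_loss:
            best_loss, best_W = cur, W.copy()
    return best_W, best_loss
```

On a smooth loss the estimator recovers the true gradient direction in expectation (up to a dimension-dependent scale), which is why plain gradient descent on the estimate converges.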
6.3.3. Hyperparameters
7. Results
7.1. Baseline
7.1.1. Performance Source Domain
7.1.2. Performance Target Domain
7.2. Black Box Adversarial Reprogramming
7.2.1. Performance Black Box Model
7.2.2. Hyperparameters
- Initial coarse grid search: each configuration is evaluated three times.
- Subsequent finer grid search: each configuration is evaluated seven times.
- Final fine grid search: each configuration is evaluated eleven times.
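This coarse-to-fine protocol can be sketched as a repeated grid search in which every configuration is scored as the mean over several independent runs, since the zeroth-order training is stochastic. The toy objective and grid below are illustrative, not the paper's actual search spaces.

```python
import itertools
import random
import statistics

def grid_search(objective, grid, repeats):
    # Score each configuration as the average of `repeats` independent
    # evaluations and keep the configuration with the lowest mean score.
    best_cfg, best_score = None, float("inf")
    for values in itertools.product(*grid.values()):
        cfg = dict(zip(grid.keys(), values))
        score = statistics.mean(objective(**cfg) for _ in range(repeats))
        if score < best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score

# Hypothetical noisy objective standing in for one training run.
def noisy_objective(lr):
    return (lr - 0.6) ** 2 + random.gauss(0.0, 1e-3)

# Stage 1 of 3 (3, 7, then 11 repetitions); each finer grid would be
# centred on the previous stage's best configuration.
coarse_best, _ = grid_search(noisy_objective, {"lr": [0.2, 0.4, 0.6, 0.8]}, repeats=3)
```

Averaging over more repetitions in the finer stages trades compute for a more reliable ranking of closely spaced configurations.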
7.2.3. Performance Black Box Adversarial Reprogramming
8. Discussion
8.1. Interpretation
8.2. Hyperparameter Influence
- Learning Rate
- Weighting Penalty Term
- Number Random Vectors
- Size of Vectors
9. Limitations and Future Research Directions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Maslej, N.; Fattorini, L.; Brynjolfsson, E.; Etchemendy, J.; Ligett, K.; Lyons, T.; Manyika, J.; Ngo, H.; Niebles, J.C.; Sellitto, M.; et al. The AI Index 2023 Annual Report. arXiv 2023, arXiv:2310.03715. [Google Scholar]
- Petangoda, J.; Deisenroth, M.P.; Monk, N.A. Learning to Transfer: A Foliated Theory. arXiv 2021, arXiv:2107.10763. [Google Scholar]
- Elsayed, G.F.; Goodfellow, I.J.; Sohl-Dickstein, J.N. Adversarial Reprogramming of Neural Networks. arXiv 2018, arXiv:1806.11146. [Google Scholar]
- Tsai, Y.Y.; Chen, P.Y.; Ho, T.Y. Transfer Learning without Knowing: Reprogramming Black-box Machine Learning Models with Scarce Data and Limited Resources. arXiv 2020, arXiv:2007.08714. [Google Scholar]
- Nectoux, P.; Gouriveau, R.; Medjaher, K.; Ramasso, E.; Chebel-Morello, B.; Zerhouni, N.; Varnier, C. PRONOSTIA: An experimental platform for bearings accelerated degradation tests. In Proceedings of the IEEE International Conference on Prognostics and Health Management (PHM’12), Beijing, China, 23–25 May 2012; IEEE Catalog Number: CPF12PHM-CDR. pp. 1–8. [Google Scholar]
- Bengio, Y.; Goodfellow, I.; Courville, A. Deep Learning; MIT Press: Cambridge, MA, USA, 2017; Volume 1. [Google Scholar]
- Pan, S.J.; Yang, Q. A Survey on Transfer Learning. IEEE Trans. Knowl. Data Eng. 2010, 22, 1345–1359. [Google Scholar] [CrossRef]
- Zhuang, F.; Qi, Z.; Duan, K.; Xi, D.; Zhu, Y.; Zhu, H.; Xiong, H.; He, Q. A Comprehensive Survey on Transfer Learning. Proc. IEEE 2019, 109, 43–76. [Google Scholar] [CrossRef]
- Azari, M.S.; Flammini, F.; Santini, S.; Caporuscio, M. A Systematic Literature Review on Transfer Learning for Predictive Maintenance in Industry 4.0. IEEE Access 2023, 11, 12887–12910. [Google Scholar] [CrossRef]
- Dai, W.; Yang, Q.; Xue, G.R.; Yu, Y. Boosting for transfer learning. In Proceedings of the 24th International Conference on Machine Learning, Corvallis, OR, USA, 20–24 June 2007; pp. 193–200. [Google Scholar]
- Yao, Y.; Doretto, G. Boosting for transfer learning with multiple sources. In Proceedings of the 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Francisco, CA, USA, 13–18 June 2010; pp. 1855–1862. [Google Scholar]
- Huang, J.; Gretton, A.; Borgwardt, K.; Schölkopf, B.; Smola, A. Correcting sample selection bias by unlabeled data. In Proceedings of the Advances in Neural Information Processing Systems 19 (NIPS 2006), Vancouver, BC, Canada, 4–7 December 2006; Volume 19. [Google Scholar]
- Jiang, J.; Zhai, C. Instance weighting for domain adaptation in NLP. In Proceedings of the Annual Meeting of the Association for Computational Linguistics, Prague, Czech Republic, 23–30 June 2007; ACL: Philadelphia, PA, USA, 2007. [Google Scholar]
- Dai, W.; Xue, G.R.; Yang, Q.; Yu, Y. Transferring naive bayes classifiers for text classification. In Proceedings of the AAAI, Vancouver, BC, Canada, 22–26 July 2007; Volume 7, pp. 540–545. [Google Scholar]
- Asgarian, A.; Sobhani, P.; Zhang, J.C.; Mihailescu, M.; Sibilia, A.; Ashraf, A.B.; Taati, B. A hybrid instance-based transfer learning method. arXiv 2018, arXiv:1812.01063. [Google Scholar]
- Chen, Q.; Xue, B.; Zhang, M. Instance based transfer learning for genetic programming for symbolic regression. In Proceedings of the 2019 IEEE Congress on Evolutionary Computation (CEC), Wellington, New Zealand, 10–13 June 2019; pp. 3006–3013. [Google Scholar]
- Raina, R.; Battle, A.; Lee, H.; Packer, B.; Ng, A.Y. Self-taught learning: Transfer learning from unlabeled data. In Proceedings of the 24th International Conference on Machine Learning, Corvallis, OR, USA, 20–24 June 2007; pp. 759–766. [Google Scholar]
- Daumé III, H. Frustratingly easy domain adaptation. arXiv 2009, arXiv:0907.1815. [Google Scholar]
- Yan, H.; Ding, Y.; Li, P.; Wang, Q.; Xu, Y.; Zuo, W. Mind the class weight bias: Weighted maximum mean discrepancy for unsupervised domain adaptation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 2272–2281. [Google Scholar]
- Blitzer, J.; McDonald, R.; Pereira, F. Domain adaptation with structural correspondence learning. In Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing, Sydney, Australia, 22–23 July 2006; pp. 120–128. [Google Scholar]
- Argyriou, A.; Evgeniou, T.; Pontil, M. Multi-task feature learning. In Proceedings of the Advances in Neural Information Processing Systems 19 (NIPS 2006), Vancouver, BC, Canada, 4–6 December 2006; Volume 19. [Google Scholar]
- Argyriou, A.; Pontil, M.; Ying, Y.; Micchelli, C. A spectral regularization framework for multi-task structure learning. In Proceedings of the Advances in Neural Information Processing Systems 20 (NIPS 2007), Vancouver, BC, Canada, 3–6 December 2007; Volume 20. [Google Scholar]
- Dai, W.; Xue, G.R.; Yang, Q.; Yu, Y. Co-clustering based classification for out-of-domain documents. In Proceedings of the 13th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Jose, CA, USA, 12–15 August 2007; pp. 210–219. [Google Scholar]
- Johnson, R.; Zhang, T. A high-performance semi-supervised learning method for text chunking. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL’05), Ann Arbor, MI, USA, 25–30 June 2005; pp. 1–9. [Google Scholar]
- Bonilla, E.V.; Chai, K.; Williams, C. Multi-task Gaussian process prediction. In Proceedings of the Advances in Neural Information Processing Systems 20 (NIPS 2007), Vancouver, BC, Canada, 3–6 December 2007; Volume 20. [Google Scholar]
- Lawrence, N.D.; Platt, J.C. Learning to learn with the informative vector machine. In Proceedings of the Twenty-First International Conference on Machine Learning, Banff, AB, Canada, 4–8 July 2004; p. 65. [Google Scholar]
- Evgeniou, T.; Pontil, M. Regularized multi–task learning. In Proceedings of the Tenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Seattle, WA, USA, 22–25 August 2004; pp. 109–117. [Google Scholar]
- Duan, L.; Tsang, I.W.; Xu, D.; Chua, T.S. Domain adaptation from multiple sources via auxiliary classifiers. In Proceedings of the 26th Annual International Conference on Machine Learning, Montreal, QC, Canada, 14–18 June 2009; pp. 289–296. [Google Scholar]
- Duan, L.; Xu, D.; Tsang, I.W.H. Domain adaptation from multiple sources: A domain-dependent regularization approach. IEEE Trans. Neural Netw. Learn. Syst. 2012, 23, 504–518. [Google Scholar] [CrossRef]
- Blum, A.; Mitchell, T. Combining labeled and unlabeled data with co-training. In Proceedings of the Eleventh Annual Conference on Computational Learning Theory, Madison, WI, USA, 24–26 July 1998; pp. 92–100. [Google Scholar]
- Zhuang, F.; Luo, P.; Xiong, H.; He, Q.; Xiong, Y.; Shi, Z. Exploiting associations between word clusters and document classes for cross-domain text categorization. Stat. Anal. Data Min. ASA Data Sci. J. 2011, 4, 100–114. [Google Scholar] [CrossRef]
- Oquab, M.; Bottou, L.; Laptev, I.; Sivic, J. Learning and transferring mid-level image representations using convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 1717–1724. [Google Scholar]
- Huang, J.T.; Li, J.; Yu, D.; Deng, L.; Gong, Y. Cross-language knowledge transfer using multilingual deep neural network with shared hidden layers. In Proceedings of the 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, Vancouver, BC, Canada, 26–31 May 2013; pp. 7304–7308. [Google Scholar]
- Long, M.; Zhu, H.; Wang, J.; Jordan, M.I. Unsupervised domain adaptation with residual transfer networks. In Proceedings of the Advances in Neural Information Processing Systems 29 (NIPS 2016), Barcelona, Spain, 5–10 December 2016; Volume 29. [Google Scholar]
- George, D.; Shen, H.; Huerta, E. Deep Transfer Learning: A new deep learning glitch classification method for advanced LIGO. arXiv 2017, arXiv:1706.07446. [Google Scholar]
- Mihalkova, L.; Huynh, T.; Mooney, R.J. Mapping and revising markov logic networks for transfer learning. In Proceedings of the AAAI, Vancouver, BC, Canada, 22–26 July 2007; Volume 7, pp. 608–614. [Google Scholar]
- Mihalkova, L.; Mooney, R.J. Transfer learning by mapping with minimal target data. In Proceedings of the AAAI-08 Workshop on Transfer Learning for Complex Tasks, Chicago, IL, USA, 13–14 July 2008; pp. 31–36. [Google Scholar]
- Davis, J.; Domingos, P. Deep transfer via second-order markov logic. In Proceedings of the 26th Annual International Conference on Machine Learning, Montreal, QC, Canada, 14–18 June 2009; pp. 217–224. [Google Scholar]
- Zonta, T.; Costa, C.A.d.; Righi, R.d.R.; Lima, M.J.; Trindade, E.S.d.; Li, G.P. Predictive maintenance in the Industry 4.0: A systematic literature review. Comput. Ind. Eng. 2020, 150, 106889. [Google Scholar] [CrossRef]
- Ran, Y.; Zhou, X.; Lin, P.; Wen, Y.; Deng, R. A survey of predictive maintenance: Systems, purposes and approaches. arXiv 2019, arXiv:1912.07383. [Google Scholar]
- Jimenez, J.J.M.; Schwartz, S.; Vingerhoeds, R.; Grabot, B.; Salaün, M. Towards multi-model approaches to predictive maintenance: A systematic literature survey on diagnostics and prognostics. J. Manuf. Syst. 2020, 56, 539–557. [Google Scholar] [CrossRef]
- Zheng, H.; Wang, R.; Yang, Y.; Yin, J.; Li, Y.; Li, Y.; Xu, M. Cross-Domain Fault Diagnosis Using Knowledge Transfer Strategy: A Review. IEEE Access 2019, 7, 129260–129290. [Google Scholar] [CrossRef]
- Mao, W.; He, J.; Zuo, M.J. Predicting Remaining Useful Life of rolling bearings based on deep feature representation and transfer learning. IEEE Trans. Instrum. Meas. 2019, 69, 1594–1608. [Google Scholar] [CrossRef]
- Wen, L.; Gao, L.; Li, X. A New Deep Transfer Learning Based on Sparse Auto-Encoder for Fault Diagnosis. IEEE Trans. Syst. Man Cybern. Syst. 2019, 49, 136–144. [Google Scholar] [CrossRef]
- Zhu, J.; Chen, N.; Shen, C. A new data-driven transferable Remaining Useful Life prediction approach for bearing under different working conditions. Mech. Syst. Signal Process. 2020, 139, 106602. [Google Scholar] [CrossRef]
- da Costa, P.R.d.O.; Akçay, A.; Zhang, Y.; Kaymak, U. Remaining Useful Lifetime prediction via deep domain adaptation. Reliab. Eng. Syst. Saf. 2020, 195, 106682. [Google Scholar] [CrossRef]
- Xia, P.; Huang, Y.; Li, P.; Liu, C.; Shi, L. Fault knowledge transfer assisted ensemble method for remaining useful life prediction. IEEE Trans. Ind. Inform. 2021, 18, 1758–1769. [Google Scholar] [CrossRef]
- Ma, X.; Niu, T.; Liu, X.; Luan, H.; Zhao, S. Remaining Useful Lifetime prediction of rolling bearing based on ConvNext and multi-feature fusion. In Proceedings of the 2022 International Conference on Computer Engineering and Artificial Intelligence (ICCEAI), Shijiazhuang, China, 22–24 July 2022; pp. 299–304. [Google Scholar]
- Xu, G.; Liu, M.; Jiang, Z.; Shen, W.; Huang, C. Online Fault Diagnosis Method Based on Transfer Convolutional Neural Networks. IEEE Trans. Instrum. Meas. 2020, 69, 509–520. [Google Scholar] [CrossRef]
- Cheng, H.; Kong, X.; Wang, Q.; Ma, H.; Yang, S.; Chen, G. Deep transfer learning based on dynamic domain adaptation for Remaining Useful Life prediction under different working conditions. J. Intell. Manuf. 2023, 34, 587–613. [Google Scholar] [CrossRef]
- Ong, K.S.H.; Wang, W.; Hieu, N.Q.; Niyato, D.T.; Friedrichs, T. Predictive Maintenance Model for IIoT-Based Manufacturing: A Transferable Deep Reinforcement Learning Approach. IEEE Internet Things J. 2022, 9, 15725–15741. [Google Scholar] [CrossRef]
- Xu, Y.; Sun, Y.; Liu, X.; Zheng, Y. A digital-twin-assisted fault diagnosis using deep transfer learning. IEEE Access 2019, 7, 19990–19999. [Google Scholar] [CrossRef]
- Kim, H.; Youn, B.D. A new parameter repurposing method for parameter transfer with small dataset and its application in fault diagnosis of rolling element bearings. IEEE Access 2019, 7, 46917–46930. [Google Scholar] [CrossRef]
- Shao, S.; McAleer, S.; Yan, R.; Baldi, P. Highly Accurate Machine Fault Diagnosis Using Deep Transfer Learning. IEEE Trans. Ind. Inform. 2019, 15, 2446–2455. [Google Scholar] [CrossRef]
- Samek, W.; Wiegand, T.; Müller, K.R. Explainable artificial intelligence: Understanding, visualizing and interpreting deep learning models. arXiv 2017, arXiv:1708.08296. [Google Scholar]
- Xu, G.; Liu, M.; Wang, J.; Ma, Y.; Wang, J.; Li, F.; Shen, W. Data-driven fault diagnostics and prognostics for predictive maintenance: A brief overview. In Proceedings of the 2019 IEEE 15th International Conference On Automation Science and Engineering (CASE), Vancouver, BC, Canada, 22–26 August 2019; pp. 103–108. [Google Scholar] [CrossRef]
- Arrieta, A.B.; Díaz-Rodríguez, N.; Del Ser, J.; Bennetot, A.; Tabik, S.; Barbado, A.; García, S.; Gil-López, S.; Molina, D.; Benjamins, R. Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Inf. Fusion 2020, 58, 82–115. [Google Scholar] [CrossRef]
- Mahyari, A.G.; Locher, T. Robust predictive maintenance for robotics via unsupervised transfer learning. In Proceedings of the International FLAIRS Conference Proceedings, North Miami Beach, FL, USA, 17–19 May 2021; p. 34. [Google Scholar] [CrossRef]
- Mao, W.; He, J.; Sun, B.; Wang, L. Prediction of bearings Remaining Useful Life across working conditions based on transfer learning and time series clustering. IEEE Access 2021, 9, 135285–135303. [Google Scholar] [CrossRef]
- Shen, F.; Yan, R. A new intermediate-domain SVM-based transfer model for rolling bearing RUL prediction. IEEE/ASME Trans. Mechatron. 2021, 27, 1357–1369. [Google Scholar] [CrossRef]
- Schlag, S.; Schmitt, M.; Schulz, C. Faster support vector machines. J. Exp. Algorithmics (JEA) 2021, 26, 1–21. [Google Scholar] [CrossRef]
- Wang, X.; Zhang, H.H.; Wu, Y. Multiclass probability estimation with support vector machines. J. Comput. Graph. Stat. 2019, 28, 586–595. [Google Scholar] [CrossRef]
- Olson, M.A.; Wyner, A.J. Making sense of random forest probabilities: A kernel perspective. arXiv 2018, arXiv:1812.05792. [Google Scholar]
- Mathew, V.; Toby, T.; Singh, V.; Rao, B.M.; Kumar, M.G. Prediction of Remaining Useful Lifetime (RUL) of turbofan engine using machine learning. In Proceedings of the 2017 IEEE International Conference on Circuits and Systems (ICCS), Thiruvananthapuram, India, 20–21 December 2017; pp. 306–311. [Google Scholar]
- Saxena, A.; Goebel, K.; Simon, D.; Eklund, N. Damage propagation modeling for aircraft engine run-to-failure simulation. In Proceedings of the 2008 International Conference on Prognostics and Health Management, Denver, CO, USA, 6–9 October 2008; pp. 1–9. [Google Scholar]
- Wu, Z.; Yu, S.; Zhu, X.; Ji, Y.; Pecht, M. A weighted deep domain adaptation method for industrial fault prognostics according to prior distribution of complex working conditions. IEEE Access 2019, 7, 139802–139814. [Google Scholar] [CrossRef]
- Xia, M.; Li, T.; Shu, T.; Wan, J.; De Silva, C.W.; Wang, Z. A two-stage approach for the Remaining Useful Life prediction of bearings using deep neural networks. IEEE Trans. Ind. Inform. 2018, 15, 3703–3711. [Google Scholar] [CrossRef]
- Pedregosa, F.; Varoquaux, G.; Gramfort, A.; Michel, V.; Thirion, B.; Grisel, O.; Blondel, M.; Prettenhofer, P.; Weiss, R.; Dubourg, V. Scikit-learn: Machine learning in Python. J. Mach. Learn. Res. 2011, 12, 2825–2830. [Google Scholar]
- Prinzie, A.; Van den Poel, D. Random forests for multiclass classification: Random multinomial logit. Expert Syst. Appl. 2008, 34, 1721–1732. [Google Scholar] [CrossRef]
- Krmar, J.; Vukićević, M.; Kovačević, A.; Protić, A.; Zečević, M.; Otašević, B. Performance comparison of nonlinear and linear regression algorithms coupled with different attribute selection methods for quantitative structure-retention relationships modelling in micellar liquid chromatography. J. Chromatogr. A 2020, 1623, 461146. [Google Scholar] [CrossRef] [PubMed]
- Song, Y.Y.; Ying, L. Decision tree methods: Applications for classification and prediction. Shanghai Arch. Psychiatry 2015, 27, 130. [Google Scholar]
- de Mathelin, A.; Deheeger, F.; Richard, G.; Mougeot, M.; Vayatis, N. Adapt: Awesome domain adaptation python toolbox. arXiv 2021, arXiv:2107.03049. [Google Scholar]
- Lemaître, G.; Nogueira, F.; Aridas, C.K. Imbalanced-learn: A python toolbox to tackle the curse of imbalanced datasets in machine learning. J. Mach. Learn. Res. 2017, 18, 559–563. [Google Scholar]
| Use Case | Focus | ML Method | Deep Learning | Year | Source |
|---|---|---|---|---|---|
| Meta Studies | Generalistic | - | ■ | 2019 | [40] |
| | Cross-Domain Fault Diagnosis | - | ■ | 2019 | [42] |
| | Industry 4.0 | - | ■ | 2023 | [9] |
| Prediction | Fault Datasets | CNN, LSTM | ■ | 2021 | [47] |
| | Feature Distribution | LSTM, DANN | ■ | 2020 | [46] |
| | Feature Distribution | CDAE, SVM | ■ | 2019 | [43] |
| | Feature Distribution | LSTM, CNN | ■ | 2019 | [66] |
| | Feature Distribution and FOT | HMM, MLP | ■ | 2020 | [45] |
| | Multi Feature Fusion | ConvNeXt | ■ | 2022 | [48] |
| | Sample Selection | CNN | ■ | 2023 | [50] |
| | Feature Distribution | HCA, SVM | □ | 2021 | [59] |
| | Low-Quality Features | SVM | □ | 2021 | [60] |
| Diagnosis | Feature Distribution and Extraction | SAE, DNN | ■ | 2019 | [44] |
| | Simulation to Real World | SSAE, DNN | ■ | 2019 | [52] |
| | Training | NN | ■ | 2019 | [53] |
| | Training | CNN | ■ | 2019 | [54] |
| Detection | Resource Allocation | NN | ■ | 2022 | [51] |
| | Robot Tasks | - | □ | 2021 | [58] |
| Detection and Diagnosis | Real Time Requirements | CNN | ■ | 2020 | [49] |
| Class i | 1 | 2 | 3 |
|---|---|---|---|
| Lifetime interval of class i (%) | (0, 50] | (50, 75] | (75, 100] |
| RUL interval of class i (%) | [100, 50) | [50, 25) | [25, 0) |
| Class i | 1 | 2 | 3 | 4 |
|---|---|---|---|---|
| Lifetime interval of class i (%) | (0, 50] | (50, 70] | (70, 90] | (90, 100] |
| RUL interval of class i (%) | [100, 50) | [50, 30] | [30, 10] | [10, 0] |
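The interval schemes above translate directly into a labeling rule. A minimal sketch (the function name is illustrative; the default bounds follow the four-class scheme, and passing `bounds=(50, 75, 100)` yields the three-class scheme):

```python
def lifetime_class(pct_elapsed, bounds=(50, 70, 90, 100)):
    # Map the elapsed share of the total useful lifetime, in percent,
    # to a class label: class i covers the interval (bounds[i-1], bounds[i]].
    if not 0.0 < pct_elapsed <= 100.0:
        raise ValueError("pct_elapsed must lie in (0, 100]")
    for i, upper in enumerate(bounds, start=1):
        if pct_elapsed <= upper:
            return i

def rul_class(rul_pct, bounds=(50, 70, 90, 100)):
    # Equivalent labeling from the remaining useful life in percent,
    # since RUL = 100 - elapsed lifetime.
    return lifetime_class(100.0 - rul_pct, bounds)
```

For example, a bearing at 95% of its lifetime (5% RUL) falls into class 4 under the four-class scheme.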
| Learning Rate | Number Resamplings | Weighting Penalty Term | Number Random Vectors | Iterations per Sampling | Size of Vectors |
|---|---|---|---|---|---|
| | n_splits | | q | epochs | vector_size |
Train data:

| | Precision | Recall | f1 score | Support |
|---|---|---|---|---|
| Class 1 | 1.00 | 0.95 | 0.97 | 113 |
| Class 2 | 0.84 | 0.97 | 0.90 | 32 |
| Class 3 | 0.93 | 1.00 | 0.97 | 14 |
| accuracy | | | 0.96 | 159 |
| macro avg | 0.92 | 0.97 | 0.95 | 159 |
| weighted avg | 0.96 | 0.96 | 0.96 | 159 |

Test data:

| | Precision | Recall | f1 score | Support |
|---|---|---|---|---|
| Class 1 | 0.85 | 1.00 | 0.92 | 45 |
| Class 2 | 0.46 | 0.43 | 0.44 | 14 |
| Class 3 | 1.00 | 0.30 | 0.46 | 10 |
| accuracy | | | 0.78 | 69 |
| macro avg | 0.77 | 0.58 | 0.61 | 69 |
| weighted avg | 0.79 | 0.78 | 0.76 | 69 |
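Reports of this form (presumably produced with scikit-learn's `classification_report`, whose layout they match) can be reproduced with a dependency-free sketch:

```python
from collections import Counter

def per_class_report(y_true, y_pred, labels):
    # Precision, recall, f1 and support per class, as in the tables above.
    support = Counter(y_true)
    report = {}
    for c in labels:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
        report[c] = {"precision": precision, "recall": recall,
                     "f1": f1, "support": support[c]}
    return report

def macro_avg(report, key):
    # Unweighted mean over classes (the 'macro avg' rows).
    return sum(r[key] for r in report.values()) / len(report)
```

The weighted averages in the tables additionally weight each class by its support.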
Train data:

| | Precision | Recall | f1 score | Support |
|---|---|---|---|---|
| Class 1 | 0.80 | 1.00 | 0.89 | 4 |
| Class 2 | 0.00 | 0.00 | 0.00 | 3 |
| Class 3 | 0.88 | 1.00 | 0.93 | 14 |
| accuracy | | | 0.86 | 21 |
| macro avg | 0.56 | 0.67 | 0.61 | 21 |
| weighted avg | 0.74 | 0.86 | 0.79 | 21 |

Test data:

| | Precision | Recall | f1 score | Support |
|---|---|---|---|---|
| Class 1 | 0.33 | 1.00 | 0.50 | 1 |
| Class 2 | 0.00 | 0.00 | 0.00 | 1 |
| Class 3 | 1.00 | 0.50 | 0.67 | 2 |
| accuracy | | | 0.50 | 4 |
| macro avg | 0.44 | 0.50 | 0.39 | 4 |
| weighted avg | 0.58 | 0.50 | 0.46 | 4 |
| Hyperparameter | Search Space | Hyperparameter | Search Space |
|---|---|---|---|
| Learning Rate | 0.6 | q | 50 |
| n splits | 5 | epochs | 120 |
| Weighting Penalty Term | 7 | vector size | 135 |
Train data:

| | Precision | Recall | f1 score | Support |
|---|---|---|---|---|
| Class 1 | 0.93 | 0.94 | 0.93 | 82 |
| Class 2 | 0.76 | 0.90 | 0.83 | 63 |
| Class 3 | 0.77 | 0.84 | 0.80 | 97 |
| Class 4 | 0.96 | 0.76 | 0.85 | 101 |
| accuracy | | | 0.86 | 343 |
| macro avg | 0.86 | 0.86 | 0.85 | 343 |
| weighted avg | 0.86 | 0.85 | 0.85 | 343 |

Test data:

| | Precision | Recall | f1 score | Support |
|---|---|---|---|---|
| Class 1 | 0.81 | 0.94 | 0.87 | 18 |
| Class 2 | 0.86 | 0.75 | 0.80 | 24 |
| Class 3 | 0.53 | 0.85 | 0.65 | 20 |
| Class 4 | 0.98 | 0.75 | 0.85 | 53 |
| accuracy | | | 0.79 | 115 |
| macro avg | 0.79 | 0.82 | 0.79 | 115 |
| weighted avg | 0.85 | 0.80 | 0.81 | 115 |
Train data:

| | Precision | Recall | f1 score | Support |
|---|---|---|---|---|
| Class 1 | 0.40 | 0.50 | 0.44 | 4 |
| Class 2 | 1.00 | 0.67 | 0.80 | 3 |
| Class 3 | 0.86 | 0.86 | 0.86 | 14 |
| accuracy | | | 0.76 | 21 |
| macro avg | 0.75 | 0.67 | 0.70 | 21 |
| weighted avg | 0.79 | 0.76 | 0.77 | 21 |

Test data:

| | Precision | Recall | f1 score | Support |
|---|---|---|---|---|
| Class 1 | 0.00 | 0.00 | 0.00 | 1 |
| Class 2 | 0.00 | 0.00 | 0.00 | 1 |
| Class 3 | 0.67 | 1.00 | 0.80 | 2 |
| accuracy | | | 0.50 | 4 |
| macro avg | 0.22 | 0.33 | 0.27 | 4 |
| weighted avg | 0.33 | 0.50 | 0.40 | 4 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Bott, A.; Schreyer, F.; Puchta, A.; Fleischer, J. Black Box Adversarial Reprogramming for Time Series Feature Classification in Ball Bearings’ Remaining Useful Life Classification. Mach. Learn. Knowl. Extr. 2024, 6, 1969-1996. https://doi.org/10.3390/make6030097