RI2AP: Robust and Interpretable 2D Anomaly Prediction in Assembly Pipelines
Abstract
1. Introduction
- To address challenges (i) and (ii) above, we implemented the following strategies. We model an anomaly as a compositional real-valued number. First, we encode each anomaly class using a monotonically increasing token assignment (e.g., 0 for no anomaly, 1 for the first part falling off, 2 for the second part falling off, and so on). This captures the monotonically increasing severity of anomaly categories in rocket assembly. Next, we represent compositional anomalies using the expected value of their token assignments. We propose a novel model architecture that predicts both the sensor values at the next time step and the value assigned to the compositional anomaly (hence the name 2D prediction). Robustness to rarity is achieved by modeling the problem with a regression objective, which avoids the need for an adequate balance of positive vs. negative class instances or other ad hoc sampling strategies to handle rare occurrences.
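A minimal sketch of this encoding is shown below; the class names and token values are illustrative assumptions rather than the exact assignments used in RI2AP:

```python
# Severity-ordered token assignment: a larger token means a more severe
# anomaly. These particular classes/values are assumed for illustration.
TOKENS = {
    "No_Anomaly": 0,
    "Nosecone_Removed": 1,
    "TopBody_and_Nose_Removed": 2,
    "BothBodies_and_Nose_Removed": 3,
}

def encode_compositional(weights):
    """Map a compositional anomaly (a weighting over anomaly classes)
    to a single real value: the expected value of the severity tokens."""
    return sum(w * TOKENS[cls] for cls, w in weights.items())

# A pure anomaly keeps its integer token...
print(encode_compositional({"Nosecone_Removed": 1.0}))  # 1.0
# ...while a composition of classes lands between their tokens,
# preserving the severity ordering on the real line.
print(encode_compositional({"Nosecone_Removed": 0.5,
                            "TopBody_and_Nose_Removed": 0.5}))  # 1.5
```

Because the target is now a single real value, the model can be trained with an ordinary regression loss, which is what gives the robustness to rarity described above.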
- For challenge (iii), we use the Future Factories dataset. The dataset originates from a manufacturing assembly line specifically designed for rocket assembly, adhering to industrial standards in deploying actuators, control mechanisms, and transducers [19].
- To enable domain-expert-friendly interpretability, we adopt combining rules first introduced in the independence of causal influence framework [20], which were inspired by real-world use cases such as healthcare and allow expressivity beyond traditional explainable AI (XAI) methods (e.g., saliency and heat maps). We note that although XAI methods are useful to the system developer for debugging and verification, they are not end-user friendly and do not give end users the information they want [18]. We demonstrate how combining rules give domain experts a natural, user-friendly way to interpret the influence of individual measurements on the prediction outcome.
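To make the idea concrete, here is a minimal Noisy-OR sketch (one of the combining rules in the independence-of-causal-influence family, and one of the two evaluated later in Section 6.3); the per-sensor probabilities and the leave-one-out influence score are illustrative assumptions, not the exact RI2AP formulation:

```python
def noisy_or(per_sensor_probs):
    """Noisy-OR combining rule: P(anomaly) = 1 - prod_i (1 - p_i),
    where p_i is the probability that sensor i alone signals an anomaly."""
    prod = 1.0
    for p in per_sensor_probs:
        prod *= 1.0 - p
    return 1.0 - prod

def influence(per_sensor_probs):
    """Expert-readable per-sensor influence: the drop in the combined
    probability when sensor i's evidence is removed."""
    ps = list(per_sensor_probs)
    total = noisy_or(ps)
    return [total - noisy_or(ps[:i] + ps[i + 1:]) for i in range(len(ps))]

# Two sensors, each 50% indicative on its own, combine to 75%;
# a sensor reporting 0 contributes no influence at all.
print(noisy_or([0.5, 0.5]))   # 0.75
print(influence([0.5, 0.0]))  # [0.5, 0.0]
```

Noisy-MAX, also evaluated later, generalizes this rule from a binary anomaly/no-anomaly outcome to graded (severity-ordered) outcomes.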
- This investigation aimed to tackle the above challenges, create an adequate model, and fully deploy that model in a real manufacturing system. The results and insights from this deployment showcase the promising potential of RI2AP for anomaly prediction in manufacturing assembly pipelines. Figure 1 shows a summary of the proposed method.
2. Related Work
3. Future Factories Dataset
4. Problem Formulation
4.1. Notations
4.2. Anomaly Encodings
4.3. Why Not Simple “One-Hot” Encoding for Anomaly Types?
4.4. Task Description
5. The RI2AP Method
5.1. Design Motivations
5.1.1. Why Separate Function Approximators and Combining Rules?
5.1.2. Why Not Standard XAI Methods?
5.2. Function Approximation Methods
5.2.1. Long Short-Term Memory Networks (LSTMs)
5.2.2. Transformer Architecture—Decoder Only
5.2.3. Method of Moments
6. Experiments and Results
6.1. Function Approximator Setup Details
6.1.1. LSTM
6.1.2. Transformer (Ours)
6.1.3. TimeGPT
6.1.4. Method of Moments
6.2. Evaluation Results Using Individual Measurements
6.3. Evaluation Results with Combining Rules
Per-variable classification results for the baseline models (P = precision; R = recall; F1 = F1-score; A = accuracy).

Baseline | LSTM | Transformer | TimeGPT | |||||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|
Test samples | 42,189 | 42,299 | 2000 | |||||||||
Variable | P | R | F1 | A | P | R | F1 | A | P | R | F1 | A
V1 | 0.2 | 0.3 | 0.3 | 0.7 | 0.2 | 0.3 | 0.2 | 0.2 | 0.5 | 0.5 | 0.5 | 0.9 |
V2 | 0.8 | 0.7 | 0.8 | 0.9 | 0.2 | 0.1 | 0.2 | 0.3 | 0.5 | 0.5 | 0.5 | 0.9 |
V3 | 0.9 | 0.8 | 0.9 | 0.9 | 0.2 | 0.3 | 0.2 | 0.5 | 0.5 | 0.5 | 0.5 | 1 |
V4 | 0.9 | 0.9 | 0.9 | 0.9 | 0.2 | 0.1 | 0.1 | 0.1 | 0.5 | 0.5 | 0.5 | 1 |
V5 | 1 | 1 | 1 | 1 | 0 | 0.1 | 0 | 0.1 | 0.2 | 0.1 | 0.1 | 0.8 |
V6 | 0.9 | 0.9 | 0.9 | 0.9 | 0.2 | 0.2 | 0.1 | 0.5 | 0.5 | 0.5 | 0.5 | 0.9 |
V7 | 0.9 | 0.9 | 0.9 | 0.9 | 0.2 | 0.3 | 0.2 | 0.6 | 0.2 | 0.1 | 0.16 | 0.9 |
V8 | 0.9 | 1 | 1 | 1 | 0.4 | 0.4 | 0.4 | 0.7 | 0.5 | 0.5 | 0.5 | 1 |
V9 | 1 | 0.9 | 0.9 | 1 | 0.2 | 0.3 | 0.3 | 0.6 | 0.3 | 0.3 | 0.3 | 0.9 |
V10 | 0.9 | 0.9 | 0.9 | 0.9 | 0.3 | 0.3 | 0.2 | 0.5 | 0.2 | 0.1 | 0.1 | 0.7 |
V11 | 0.9 | 0.9 | 0.9 | 0.9 | 0 | 0.1 | 0 | 0 | 0.2 | 0.2 | 0.8 | 0.9 |
V12 | 0.4 | 0.5 | 0.3 | 0.5 | 0.4 | 0.3 | 0.4 | 0.7 | 1 | 1 | 1 | 1 |
V13 | 0.7 | 0.7 | 0.7 | 0.9 | 0.3 | 0.4 | 0.4 | 0.6 | 0.5 | 0.5 | 0.5 | 1 |
V14 | 0.9 | 0.9 | 0.9 | 1 | 0.6 | 0.7 | 0.5 | 0.7 | 0.5 | 0.5 | 0.5 | 0.9 |
V15 | 1 | 1 | 1 | 1 | 0.2 | 0.3 | 0.1 | 0.5 | 0.3 | 0.3 | 0.3 | 1 |
V16 | 0.8 | 0.7 | 0.7 | 0.8 | 0.2 | 0.2 | 0.2 | 0.4 | 0.2 | 0.2 | 0.2 | 0.9 |
V17 | 0.9 | 0.9 | 0.9 | 0.9 | 0.3 | 0.4 | 0.3 | 0.6 | 0.5 | 0.5 | 0.5 | 0.9 |
V18 | 1 | 1 | 1 | 1 | 0.2 | 0.2 | 0 | 0.1 | 0.5 | 0.5 | 0.5 | 1 |
V19 | 0.9 | 0.9 | 0.9 | 0.9 | 0.3 | 0.4 | 0.3 | 0.2 | 0.5 | 0.5 | 0.5 | 0.9 |
V20 | 0.9 | 0.9 | 0.9 | 0.9 | 0.9 | 0.9 | 0.9 | 0.9 | 0.5 | 0.5 | 0.5 | 0.9 |
Baseline | LSTM | Transformer | TimeGPT | ||||||
---|---|---|---|---|---|---|---|---|---|
Test samples | 42,189 | 42,189 | 2000 | ||||||
Variable | |||||||||
V1 | 0.7 | 0.5 | 0.7 | 6.4 | 41.1 | 29.2 | 1.1 | 1.2 | 1.6 |
D | 0.6 | 0.4 | 1.3 | 1.8 | 1.9 | 3.7 | |||
V2 | 0.7 | 0.5 | 0.7 | 0.5 | 0.3 | 2015.4 | 1.7 | 2.9 | 1.3 |
D | 0.8 | 0.6 | 0.6 | 0.4 | 0.8 | 0.7 | |||
V3 | 0.8 | 0.6 | 0.7 | 3.6 | 13.2 | 9.8 | 1.1 | 1.2 | 0.8 |
D | 0.4 | 0.2 | 2.09 | 4.2 | 0 | 0 | |||
V4 | 1 | 1 | 0.7 | 4.1 | 17.1 | 12.4 | 1.2 | 1.5 | 0.9 |
D | 0.2 | 0.4 | 1.9 | 3.8 | 0.3 | 0.1 | |||
V5 | 0.4 | 0.2 | 0.5 | 2.7 | 6.9 | 5.7 | 1 | 1 | 1.7 |
D | 0.6 | 0.4 | 1.8 | 3.5 | 2.2 | 4.7 | |||
V6 | 0.6 | 0.4 | 0.6 | 5.8 | 33.9 | 24.1 | 1 | 1 | 0.8 |
D | 0.5 | 0.3 | 1.8 | 3.4 | 0.5 | 0.3 | |||
V7 | 0.6 | 0.4 | 0.6 | 4.6 | 22 | 16 | 1 | 1 | 1.3 |
D | 0.5 | 0.2 | 1.4 | 2 | 1.6 | 2.6 | |||
V8 | 0.1 | 0.1 | 0.1 | 0 | 0 | 2.8 | 1 | 1.1 | 0.7 |
D | 0.1 | 0.1 | 1 | 1 | 0.2 | 0 | |||
V9 | 0.9 | 0.9 | 0.8 | 6.4 | 41.2 | 29 | 1 | 1 | 1.4 |
D | 0.4 | 0.1 | 1.4 | 2 | 1.7 | 2.8 | |||
V10 | 0.5 | 0.2 | 0.5 | 5.6 | 32.3 | 23 | 1 | 1 | 2.3 |
D | 0.5 | 0.2 | 1.6 | 2.6 | 3.1 | 9.9 | |||
V11 | 0.9 | 0.8 | 0.7 | 2.6 | 6.9 | 16.5 | 1.3 | 1.7 | 1.3 |
D | 0.2 | 0.1 | 1.8 | 3.5 | 0.9 | 0.9 | |||
V12 | 0.9 | 0.8 | 1.3 | 9.2 | 85.1 | 60.3 | 1.3 | 1.6 | 1.2 |
D | 1.7 | 2.9 | 1.04 | 1.1 | 0 | 0 | |||
V13 | 1 | 1 | 0.7 | 7.1 | 50.5 | 35.8 | 1.1 | 1.3 | 0.9 |
D | 0.4 | 0.2 | 1 | 1 | 0.5 | 0.2 | |||
V14 | 0.9 | 0.7 | 0.7 | 7.6 | 58 | 41 | 4.1 | 16.6 | 3.1 |
D | 0.3 | 0.1 | 1.1 | 1.4 | 0.7 | 0.5 | |||
V15 | 1 | 0.9 | 0.7 | 57.2 | 3274 | 2315.1 | 1 | 1 | 0.8 |
D | 0.2 | 0 | 1.7 | 3 | 0.3 | 0.1 | |||
V16 | 1.1 | 1.1 | 1 | 6.4 | 41 | 29 | 2.1 | 4.2 | 1.6 |
D | 0.8 | 0.7 | 1.7 | 3 | 0.8 | 0.7 | |||
V17 | 0.8 | 0.6 | 0.6 | 56.2 | 3166.8 | 2239.2 | 0.8 | 0.7 | 0.9 |
D | 0.4 | 0.2 | 1.4 | 2 | 0.8 | 0.6 | |||
V18 | 0.4 | 0.1 | 0.5 | 30.7 | 948 | 670.3 | 1 | 1 | 0.7 |
D | 0.5 | 0.3 | 2.05 | 4.2 | 0.2 | 0 | |||
V19 | 0.6 | 0.4 | 0.7 | 46.3 | 2141.8 | 1514.4 | 0.9 | 0.8 | 0.9 |
D | 0.8 | 0.7 | 5.9 | 3.5 | 0.7 | 0.4 | |||
V20 | 0.8 | 0.6 | 0.6 | 36.8 | 1352.1 | 956.1 | 1 | 1.1 | 0.9 |
D | 0.3 | 0 | 2.6 | 6.7 | 0.8 | 0.6 |
Model | RI2AP | |||
---|---|---|---|---|
Test samples | 11,927 | |||
Variable | P | R | F1 | A
V1 | 0.6 | 1 | 0.7 | 0.7 |
V2 | 1 | 1 | 1 | 1 |
V3 | 1 | 1 | 1 | 1 |
V4 | 1 | 1 | 1 | 1 |
V5 | 0.8 | 1 | 0.9 | 0.9 |
V6 | 0.7 | 0.8 | 1 | 0.8 |
V7 | 1 | 1 | 1 | 1 |
V8 | 0.8 | 1 | 0.9 | 0.8 |
V9 | 1 | 1 | 1 | 1 |
V10 | 0.8 | 1 | 0.9 | 0.8 |
V11 | 1 | 1 | 1 | 1 |
V12 | 1 | 1 | 1 | 1 |
V13 | 0.8 | 1 | 0.9 | 0.8 |
V14 | 1 | 1 | 1 | 1 |
V15 | 0.8 | 1 | 0.9 | 0.8 |
V16 | 0.8 | 0.9 | 1 | 0.8 |
V17 | 1 | 1 | 1 | 1 |
V18 | 1 | 1 | 1 | 1 |
V19 | 1 | 1 | 1 | 1 |
V20 | 1 | 1 | 1 | 1 |
Anomaly type | LSTM | Transformer | TimeGPT | |||||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|
P | R | F1 | A | P | R | F1 | A | P | R | F1 | A | |
A9 | 0.9 | 0.9 | 0.9 | 0.9 | 0.6 | 0.8 | 0.7 | 0.4 | 0.9 | 1 | 0.9 | 0.9 |
S-382328 | S-38465 | |||||||||||
A1 | 0.9 | 0.8 | 0.8 | 0.2 | 0.3 | 0.2 | 0 | 0 | 0 | 0 | ||
S-224081 | S-699 | |||||||||||
A2 | 0.7 | 0.8 | 0.7 | 0.2 | 0.1 | 0.2 | 0 | 0 | 0 | 0 | ||
S-77556 | S-237 | |||||||||||
A3 | 0.7 | 0.9 | 0.8 | 0.3 | 0.2 | 0.2 | 0 | 0 | 0 | 0 | ||
S-89768 | S-717 | |||||||||||
A4 | 0.8 | 0.8 | 0.8 | 0.3 | 0.2 | 0.2 | 0 | 0 | 0 | 0 | ||
S-64471 | S-169 | |||||||||||
A5 | 0.1 | 0.1 | 0.1 | 0.3 | 0.1 | 0.1 | 0 | 0 | 0 | 0 | ||
S-5576 | S-313 | |||||||||||
Macro avg. | 0.7 | 0.7 | 0.7 | 0.3 | 0.3 | 0.3 | 0.2 | 0.2 | 0.2
Anomaly type | LSTM | Transformer | TimeGPT | |||||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|
P | R | F1 | A | P | R | F1 | A | P | R | F1 | A | |
A9 | 0.7 | 1 | 0.8 | 0.5 | 0.3 | 0.4 | 0.3 | 0.4 | 0.7 | 1 | 0.8 | 0.7 |
S-12626 | S-1354 | |||||||||||
A1 | 0.5 | 0.4 | 0.5 | 0.2 | 0.3 | 0.2 | 0 | 0 | 0 | 0 | ||
S-11377 | S-348 | |||||||||||
A2 | 0.1 | 0.1 | 0.1 | 0.2 | 0.1 | 0.2 | 0 | 0 | 0 | 0 | ||
S-3664 | S-113 | |||||||||||
A3 | 0.1 | 0.1 | 0.1 | 0.2 | 0.1 | 0.2 | 0 | 0 | 0 | 0 | ||
S-4170 | S-56 | |||||||||||
A4 | 0.2 | 0.3 | 0.3 | 0.2 | 0.2 | 0.2 | 0 | 0 | 0 | 0 | ||
S-6762 | S-22 | |||||||||||
A5 | 0.1 | 0.1 | 0.1 | 0.2 | 0.1 | 0.1 | 0 | 0 | 0 | 0 | ||
S-3590 | S-107 | |||||||||||
Macro avg. | 0.4 | 0.5 | 0.4 | 0.2 | 0.2 | 0.2 | 0.2 | 0.2 | 0.2
Anomaly type | Noisy-OR | Noisy-MAX | ||||||||
---|---|---|---|---|---|---|---|---|---|---|
P | R | F1 | A | S | P | R | F1 | A | S | |
A9 | 1 | 1 | 1 | 0.8 | 2000 | 1 | 1 | 1 | 0.6 | 100 |
A6 | 0.5 | 0.6 | 0.5 | 1600 | 0.4 | 0.5 | 0.5 | 100 | ||
A1 | 0.8 | 0.8 | 0.9 | 1989 | 0.4 | 0.5 | 0.5 | 100 | ||
A2 | 0.8 | 0.8 | 0.8 | 2011 | 1 | 0.5 | 0.7 | 200 | ||
A3 | 0.9 | 0.6 | 0.8 | 2800 | 0.3 | 0.4 | 0.5 | 53 | ||
A7 | 1 | 0.9 | 1 | 2200 | 0.6 | 0.4 | 0.5 | 82 | ||
A4 | 0.9 | 0.9 | 0.9 | 1800 | 0.5 | 0.4 | 0.4 | 65 | ||
A5 | 0.9 | 0.9 | 0.9 | 1900 | 1 | 0.9 | 0.9 | 100 | ||
A8 | 1 | 0.9 | 1 | 2100 | 1 | 0.9 | 1 | 100 | ||
Total samples | 18,000 | 900
Baseline | Noisy-MAX | Noisy-OR
---|---|---|
LSTM | 2.04 | 0.99 |
Transformer | 6.64 | 3.24 |
TimeGPT | 3.2 | 1.19 |
RI2AP | 0.57 | 0.23 |
7. Deployment of RI2AP
7.1. Deployment Plan
- Input: The first step involves gathering and organizing saved models for important sensor variables, ensuring that they are ready for deployment. These saved models constitute the baselines and the proposed linear model based on the method of moments. An important task in this step is to verify the availability and compatibility of these models to be deployed in the FF setup.
- Data Preparation: This step integrates real-time data from the server and Programmable Logic Controller (PLC) devices, enabling the collection of real-time data for analysis. Anomaly simulation mechanisms were developed to simulate various anomalies in the FF cell, tailored to each modeling approach, and normal events were also simulated for training and testing purposes.
- Experimentation: This step involves feeding the prepared real-time data into the baseline models to analyze and predict outcomes.
- Output: The output includes generating predictions for normal and anomalous events in the future based on the deployed models.
- Validation: The validation of the results was carried out through expert validation, where domain experts in the FF lab validated the results obtained from the deployed models. The predictions were cross-checked with findings from previous research or empirical observations to ensure their accuracy and reliability.
- Refinement: The refinement of the models was undertaken based on validation results and feedback from domain experts, ensuring that the deployed models were effective and accurate. An iterative improvement process was implemented, involving refinement, testing, and validation cycles to continually enhance the effectiveness and accuracy of the deployed models.
7.2. Technical Details of Deployment
7.3. Results of Deployment and Discussion
7.4. Engineering Challenges Faced in Deployment
8. Conclusion, Future Work, and Broader Impact
8.1. Future Work
8.2. Broader Impact
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Appendix A. Supervised Learning Methods in Time Series Forecasting and Anomaly Detection and Prediction
Appendix B. XGBoost Feature Coverage Plots
Appendix C. Initial Experiments on Anomaly Detection and Anomaly Prediction
Appendix C.1. Anomaly Detection
Appendix C.2. Anomaly Prediction
```python
# One-hot-encoded anomaly dictionary: raw event labels mapped to the
# anomaly-type notations A1-A9 used throughout the paper
labels = {
    "No_Anomaly": "A9",
    "Nosecone_Removed": "A1",
    "BothBodies_and_Nose_Removed": "A2",
    "TopBody_and_Nose_Removed": "A3",
    "Body2_Removed": "A4",
    "Door2_TimedOut": "A5",
    "R04_crashed_nose": "A6",
    "R03_crashed_tail": "A7",
    "ESTOPPED": "A8",
}
```
Anomaly type | P | R | F1 | S | A
---|---|---|---|---|---|
A9 | 0.91 | 1 | 0.95 | 36,383 | 0.91 |
A1 | 0.75 | 0.06 | 0.1 | 1813 | |
A2 | 0.98 | 0.18 | 0.31 | 904 | |
A3 | 0.89 | 0.06 | 0.11 | 1145 | |
A4 | 0.96 | 0.63 | 0.76 | 636 | |
A5 | 1 | 1 | 1 | 792 | |
A6 | 1 | 0.99 | 1 | 332 | |
A7 | 1 | 0.95 | 0.97 | 255 | |
A8 | 0.61 | 0.98 | 0.75 | 49 | |
Macro avg. | 0.9 | 0.65 | 0.66 | 42,309
Class | P | R | F1 | S | A |
---|---|---|---|---|---|
No anomaly | 0.9 | 0.9 | 0.9 | 3516 | 0.97 |
Anomaly | 0.5 | 0.1 | 0.1 | 121 |
Macro avg. | 0.7 | 0.5 | 0.5
Appendix D. More Details on TimeGPT Model
Appendix E. Future Factories Setup
References
- Anumbe, N.; Saidy, C.; Harik, R. A Primer on the Factories of the Future. Sensors 2022, 22, 5834. [Google Scholar] [CrossRef] [PubMed]
- Tao, F.; Qi, Q.; Liu, A.; Kusiak, A. Data-driven smart manufacturing. J. Manuf. Syst. 2018, 48, 157–169. [Google Scholar] [CrossRef]
- Oztemel, E.; Gursev, S. Literature review of Industry 4.0 and related technologies. J. Intell. Manuf. 2020, 31, 127–182. [Google Scholar] [CrossRef]
- Morariu, C.; Borangiu, T. Time series forecasting for dynamic scheduling of manufacturing processes. In Proceedings of the 2018 IEEE International Conference on Automation, Quality and Testing, Robotics (AQTR), Cluj-Napoca, Romania, 24–26 May 2018; pp. 1–6. [Google Scholar] [CrossRef]
- Apostolou, G.; Ntemi, M.; Paraschos, S.; Gialampoukidis, I.; Rizzi, A.; Vrochidis, S.; Kompatsiaris, I. Novel Framework for Quality Control in Vibration Monitoring of CNC Machining. Sensors 2024, 24, 307. [Google Scholar] [CrossRef]
- Shyalika, C.; Wickramarachchi, R.; Sheth, A. A Comprehensive Survey on Rare Event Prediction. arXiv 2023, arXiv:2309.11356. [Google Scholar] [CrossRef]
- Ariyo, A.A.; Adewumi, A.O.; Ayo, C.K. Stock price prediction using the ARIMA model. In Proceedings of the 2014 UKSim-AMSS 16th International Conference on Computer Modelling and Simulation, Cambridge, UK, 26–28 March 2014; pp. 106–112. [Google Scholar] [CrossRef]
- Gardner, E.S., Jr. Exponential smoothing: The state of the art. J. Forecast. 1985, 4, 1–28. [Google Scholar] [CrossRef]
- Harvey, A.C. Forecasting, Structural Time Series Models and the Kalman Filter. 1990. Available online: https://books.google.com/books?hl=en&lr=&id=Kc6tnRHBwLcC&oi=fnd&pg=PR9&ots=I6QTUvUZNC&sig=fXNsvlMyfu0S-zOoOSJfX5gTEBM#v=onepage&q&f=false (accessed on 5 January 2024).
- Ranjan, C.; Reddy, M.; Mustonen, M.; Paynabar, K.; Pourak, K. Dataset: Rare event classification in multivariate time series. arXiv 2018, arXiv:1809.10717. [Google Scholar] [CrossRef]
- Nanduri, A.; Sherry, L. Anomaly detection in aircraft data using Recurrent Neural Networks (RNN). In Proceedings of the 2016 Integrated Communications Navigation and Surveillance (ICNS), Herndon, VA, USA, 19–21 April 2016; p. 5C2-1. [Google Scholar] [CrossRef]
- Wang, X.; Zhao, T.; Liu, H.; He, R. Power consumption predicting and anomaly detection based on long short-term memory neural network. In Proceedings of the 2019 IEEE 4th International Conference on Cloud Computing and Big Data Analysis (ICCCBDA), Chengdu, China, 12–15 April 2019; pp. 487–491. [Google Scholar] [CrossRef]
- Munir, M.; Siddiqui, S.A.; Dengel, A.; Ahmed, S. DeepAnT: A deep learning approach for unsupervised anomaly detection in time series. IEEE Access 2018, 7, 1991–2005. [Google Scholar] [CrossRef]
- Tuli, S.; Casale, G.; Jennings, N.R. Tranad: Deep transformer networks for anomaly detection in multivariate time series data. arXiv 2022, arXiv:2201.07284. [Google Scholar] [CrossRef]
- Xu, J.; Wu, H.; Wang, J.; Long, M. Anomaly transformer: Time series anomaly detection with association discrepancy. International Conference on Learning Representations. arXiv 2021, arXiv:2110.02642. [Google Scholar] [CrossRef]
- Garza, A.; Mergenthaler-Canseco, M. TimeGPT-1. arXiv 2023, arXiv:2310.03589. [Google Scholar] [CrossRef]
- Xue, H.; Salim, F.D. Promptcast: A new prompt-based learning paradigm for time series forecasting. IEEE Trans. Knowl. Data Eng. 2023, 1–14. [Google Scholar] [CrossRef]
- Sheth, A.; Gaur, M.; Roy, K.; Venkataraman, R.; Khandelwal, V. Process knowledge-infused AI: Toward user-level explainability, interpretability, and safety. IEEE Internet Comput. 2022, 26, 76–84. [Google Scholar] [CrossRef]
- Harik, R.; Kalach, F.E.; Samaha, J.; Clark, D.; Sander, D.; Samaha, P.; Burns, L.; Yousif, I.; Gadow, V.; Tarekegne, T.; et al. FF 2023 12 12 Analog Dataset, 2024. Available online: https://www.kaggle.com/datasets/ramyharik/ff-2023-12-12-analog-dataset (accessed on 1 January 2024).
- Koller, D.; Friedman, N. Probabilistic Graphical Models Principles and Techniques; MIT Press: Cambridge, MA, USA, 2012; Available online: https://pdfs.semanticscholar.org/d0a9/b181fc252108de45720d4645ac245e1ba463.pdf (accessed on 5 January 2024).
- Wang, Y.; Perry, M.; Whitlock, D.; Sutherland, J.W. Detecting anomalies in time series data from a manufacturing system using recurrent neural networks. J. Manuf. Syst. 2022, 62, 823–834. [Google Scholar] [CrossRef]
- Tanuska, P.; Spendla, L.; Kebisek, M.; Duris, R.; Stremy, M. Smart anomaly detection and prediction for assembly process maintenance in compliance with industry 4.0. Sensors 2021, 21, 2376. [Google Scholar] [CrossRef] [PubMed]
- Pittino, F.; Puggl, M.; Moldaschl, T.; Hirschl, C. Automatic anomaly detection on in-production manufacturing machines using statistical learning methods. Sensors 2020, 20, 2344. [Google Scholar] [CrossRef] [PubMed]
- Kammerer, K.; Hoppenstedt, B.; Pryss, R.; Stökler, S.; Allgaier, J.; Reichert, M. Anomaly detections for manufacturing systems based on sensor data—Insights into two challenging real-world production settings. Sensors 2019, 19, 5370. [Google Scholar] [CrossRef] [PubMed]
- Abdallah, M.; Joung, B.G.; Lee, W.J.; Mousoulis, C.; Raghunathan, N.; Shakouri, A.; Sutherland, J.W.; Bagchi, S. Anomaly detection and inter-sensor transfer learning on smart manufacturing datasets. Sensors 2023, 23, 486. [Google Scholar] [CrossRef] [PubMed]
- Park, Y.; Yun, I.D. Fast adaptive RNN encoder–decoder for anomaly detection in SMD assembly machine. Sensors 2018, 18, 3573. [Google Scholar] [CrossRef]
- Chen, C.Y.; Chang, S.C.; Liao, D.Y. Equipment anomaly detection for semiconductor manufacturing by exploiting unsupervised learning from sensory data. Sensors 2020, 20, 5650. [Google Scholar] [CrossRef]
- Saci, A.; Al-Dweik, A.; Shami, A. Autocorrelation integrated gaussian based anomaly detection using sensory data in industrial manufacturing. IEEE Sens. J. 2021, 21, 9231–9241. [Google Scholar] [CrossRef]
- Abdallah, M.; Lee, W.J.; Raghunathan, N.; Mousoulis, C.; Sutherland, J.W.; Bagchi, S. Anomaly detection through transfer learning in agriculture and manufacturing IoT systems. arXiv 2021, arXiv:2102.05814. [Google Scholar] [CrossRef]
- Harik, R.; Kalach, F.E.; Samaha, J.; Clark, D.; Sander, D.; Samaha, P.; Burns, L.; Yousif, I.; Gadow, V.; Tarekegne, T.; et al. Analog and Multi-modal Manufacturing Datasets Acquired on the Future Factories Platform. arXiv 2024, arXiv:2401.15544. [Google Scholar] [CrossRef]
- Srinivas, S. A generalization of the noisy-or model. In Proceedings of the Uncertainty in Artificial Intelligence, Washington, DC, USA, 9–11 July 1993; Elsevier: Amsterdam, The Netherlands, 1993; pp. 208–215. [Google Scholar] [CrossRef]
- Vomlel, J. Noisy-or classifier. Int. J. Intell. Syst. 2006, 21, 381–398. [Google Scholar] [CrossRef]
- Pearl, J. Bayesian Networks 2011. UCLA: Department of Statistics. Available online: https://escholarship.org/uc/item/53n4f34m (accessed on 15 May 2024).
- Pearl, J. A probabilistic calculus of actions. In Proceedings of the Tenth International Conference on Uncertainty in Artificial Intelligence, San Francisco, CA, USA, 29–31 July 1994; Elsevier: Amsterdam, The Netherlands, 1994; pp. 454–462. [Google Scholar] [CrossRef]
- Ribeiro, M.T.; Singh, S.; Guestrin, C. “Why should i trust you?” Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, 13–17 August 2016; pp. 1135–1144. [Google Scholar] [CrossRef]
- Lundberg, S.M.; Lee, S.I. A unified approach to interpreting model predictions. Adv. Neural Inf. Process. Syst. 2017, 30, 4765–4774. [Google Scholar] [CrossRef]
- Gramegna, A.; Giudici, P. SHAP and LIME: An evaluation of discriminative power in credit risk. Front. Artif. Intell. 2021, 4, 752558. [Google Scholar] [CrossRef] [PubMed]
- Hochreiter, S.; Schmidhuber, J. Long short-term memory. Neural Comput. 1997, 9, 1735–1780. [Google Scholar] [CrossRef]
- Islam, S.; Elmekki, H.; Elsebai, A.; Bentahar, J.; Drawel, N.; Rjoub, G.; Pedrycz, W. A comprehensive survey on applications of transformers for deep learning tasks. Expert Syst. Appl. 2023, 241, 122666. [Google Scholar] [CrossRef]
- Radford, A.; Narasimhan, K.; Salimans, T.; Sutskever, I. Improving Language Understanding by Generative Pre-Training. 2018. Available online: https://api.semanticscholar.org/CorpusID:49313245 (accessed on 15 May 2024).
- Wynne, G.; Duncan, A.B. A kernel two-sample test for functional data. J. Mach. Learn. Res. 2022, 23, 3159–3209. [Google Scholar] [CrossRef]
- Narayanan, V.; Zhang, W.; Li, J.S. Moment-based ensemble control. arXiv 2020, arXiv:2009.02646. [Google Scholar] [CrossRef]
- Shohat, J.A.; Tamarkin, J.D. The Problem of Moments; American Mathematical Society: Providence, RI, USA, 1950; Volume 1. [Google Scholar]
- Yu, Y.C.; Narayanan, V.; Li, J.S. Moment-based reinforcement learning for ensemble control. IEEE Trans. Neural Netw. Learn. Syst. 2023, 1–12. [Google Scholar] [CrossRef] [PubMed]
- Zeng, A.; Chen, M.; Zhang, L.; Xu, Q. Are transformers effective for time series forecasting? In Proceedings of the AAAI Conference on Artificial Intelligence, Washington, DC, USA, 7–14 February 2023; Volume 37, pp. 11121–11128. [Google Scholar] [CrossRef]
- Manokhin, V. Transformers Are What You Do Not Need. 2023. Available online: https://valeman.medium.com/transformers-are-what-you-do-not-need-cf16a4c13ab7 (accessed on 5 January 2024).
- Lee, S.; Hong, J.; Liu, L.; Choi, W. TS-Fastformer: Fast Transformer for Time-Series Forecasting. ACM Trans. Intell. Syst. Technol. 2024, 15, 1–20. [Google Scholar] [CrossRef]
- Zhou, H.; Zhang, S.; Peng, J.; Zhang, S.; Li, J.; Xiong, H.; Zhang, W. Informer: Beyond efficient transformer for long sequence time-series forecasting. In Proceedings of the AAAI Conference on Artificial Intelligence, Virtual, 2–9 February 2021; Volume 35, pp. 11106–11115. [Google Scholar] [CrossRef]
- Li, S.; Jin, X.; Xuan, Y.; Zhou, X.; Chen, W.; Wang, Y.X.; Yan, X. Enhancing the locality and breaking the memory bottleneck of transformer on time series forecasting. arXiv 2019, arXiv:1907.00235. [Google Scholar] [CrossRef]
- Liu, S.; Yu, H.; Liao, C.; Li, J.; Lin, W.; Liu, A.X.; Dustdar, S. Pyraformer: Low-complexity pyramidal attention for long-range time series modeling and forecasting. In Proceedings of the International Conference on Learning Representations, Vienna, Austria, 4 May 2021. [Google Scholar] [CrossRef]
- Zhou, T.; Ma, Z.; Wen, Q.; Wang, X.; Sun, L.; Jin, R. Fedformer: Frequency enhanced decomposed transformer for long-term series forecasting. In Proceedings of the International Conference on Machine Learning, PMLR, Baltimore, MD, USA, 17–23 July 2022; pp. 27268–27286. [Google Scholar]
- Rasul, K.; Ashok, A.; Williams, A.R.; Khorasani, A.; Adamopoulos, G.; Bhagwatkar, R.; Biloš, M.; Ghonia, H.; Hassen, N.V.; Schneider, A.; et al. Lag-llama: Towards foundation models for time series forecasting. arXiv 2023, arXiv:2310.08278. [Google Scholar] [CrossRef]
- Das, A.; Kong, W.; Sen, R.; Zhou, Y. A decoder-only foundation model for time-series forecasting. arXiv 2023, arXiv:2310.10688. [Google Scholar] [CrossRef]
- Nixtla. TimeGPT Quickstart. 2023. Available online: https://docs.nixtla.io/docs/getting-started-timegpt_quickstart (accessed on 17 January 2024).
Dataset Artifact | Statistic |
---|---|
Rarity percentage | 13.36% |
Frequency | 10 Hz |
Data collection period | 6 h |
Original features | 41 |
Selected features | 20 |
Number of data points | 211,546 |
Train/test split | 80:20 |
Train samples | 169,236 |
Test samples | 42,309 |
Anomaly Type and Notation | Sub Type | Count | Percentage |
---|---|---|---|
Nosecone Removed | Type 1 | 9043 | 4.27% |
BothBodies and Nose Removed | Type 3 | 4405 | 2.08% |
TopBody and Nose Removed | Type 2 | 5904 | 2.79% |
Body2 Removed | Type 1 | 3306 | 1.56% |
Door2_TimedOut | Type 4 | 3711 | 1.75% |
R04 crashed nose | Type 4 | 1631 | 0.77% |
R03 crashed tail | Type 4 | 1426 | 0.67% |
ESTOPPED | Type 4 | 273 | 0.13% |
No anomaly | None | 183,272 | 86.63% |
Variable | Abbreviation | Variable | Abbreviation |
---|---|---|---|
Anomaly Label | D | LoadCell_R04 | V15 |
SJointAngle_R03 | V1 | BJointAngle_R04 | V16 |
Potentiometer_R04 | V2 | Potentiometer_R03 | V17 |
VFD2 | V3 | Potentiometer_R01 | V18 |
LoadCell_R02 | V4 | Potentiometer_R02 | V19 |
LJointAngle_R01 | V5 | LoadCell_R03 | V20 |
BJointAngle_R03 | V6 | Nosecone Removed | A1 |
UJointAngle_R03 | V7 | BothBodies and Nose Removed | A2 |
VFD1 | V8 | TopBody and Nose Removed | A3 |
RJointAngle_R04 | V9 | Body2 Removed | A4 |
SJointAngle_R02 | V10 | Door2_TimedOut | A5 |
LJointAngle_R04 | V11 | R04 crashed nose | A6 |
SJointAngle_R04 | V12 | R03 crashed tail | A7 |
LoadCell_R01 | V13 | ESTOPPED | A8 |
TJointAngle_R04 | V14 | No anomaly | A9 |
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Shyalika, C.; Roy, K.; Prasad, R.; Kalach, F.E.; Zi, Y.; Mittal, P.; Narayanan, V.; Harik, R.; Sheth, A. RI2AP: Robust and Interpretable 2D Anomaly Prediction in Assembly Pipelines. Sensors 2024, 24, 3244. https://doi.org/10.3390/s24103244