Machine Learning and Deep Learning Applications for Anomaly and Fault Detection

A special issue of Algorithms (ISSN 1999-4893). This special issue belongs to the section "Evolutionary Algorithms and Machine Learning".

Deadline for manuscript submissions: closed (1 July 2023) | Viewed by 43208

Special Issue Editor


Dr. Mustafa Demetgül
Guest Editor
Zeiss Innovation Hub@KIT, 76344 Eggenstein-Leopoldshafen, Germany
Interests: machine learning; deep learning; computer vision; signal processing; fault diagnosis

Special Issue Information

Dear Colleagues,

Unmanned factories are now being discussed as part of Industry 4.0, and numerous studies have been conducted on this subject. One of these research directions is the monitoring of machines: in such a manufacturing environment, even rare failures are critical to the operation's performance.

An unnoticed machinery problem is likely to worsen over time and cause further mechanical problems. To address this, new intelligent manufacturing systems are used to predict failures in advance. Technical advances in manufacturing have drawn research interest to integrated, distributed intelligent manufacturing systems, including distributed artificial intelligence theory and its applications. It is critical to monitor machines and forecast problems at an early stage, as well as to repair machine parts on time. Through machine monitoring, many machine faults can be diagnosed in time to prevent more serious damage later. Early detection of machine faults can improve reliability, reduce energy consumption, lower service and maintenance costs, and increase machine lifetime and safety, thereby significantly reducing lifecycle costs.

Much work has been done in this area, and more continues to be done. The aim of this Special Issue is to collect recent developments in machine learning and deep learning algorithms, signal processing techniques, feature extraction, feature selection, and data science. Application-oriented studies are expected. Topics include, but are not limited to, the following:

  • Deep Learning
  • Machine Learning
  • Object Detection
  • Computer Vision
  • Fault Diagnosis
  • Anomaly Detection
  • Machine Monitoring
  • Fault Detection
  • Signal Processing
  • Predictive Maintenance
  • Time-Series and Image-Based Detection

Dr. Mustafa Demetgül
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Algorithms is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found on MDPI's website.

Published Papers (14 papers)


Research

23 pages, 4562 KiB  
Article
Improved YOLOv5-Based Real-Time Road Pavement Damage Detection in Road Infrastructure Management
by Abdullah As Sami, Saadman Sakib, Kaushik Deb and Iqbal H. Sarker
Algorithms 2023, 16(9), 452; https://doi.org/10.3390/a16090452 - 21 Sep 2023
Cited by 13 | Viewed by 3808
Abstract
Deep learning has enabled a straightforward, convenient method of road pavement infrastructure management that facilitates a secure, cost-effective, and efficient transportation network. Manual road pavement inspection is time-consuming and dangerous, making timely road repair difficult. This research showcases You Only Look Once version 5 (YOLOv5), the most commonly employed object detection model, trained on the latest benchmark road damage dataset, Road Damage Detection 2022 (RDD 2022). The RDD 2022 dataset includes four common types of road pavement damage, namely vertical cracks, horizontal cracks, alligator cracks, and potholes. This paper presents an improved deep neural network model based on YOLOv5 for real-time road pavement damage detection in photographic representations of outdoor road surfaces, making it an indispensable tool for efficient, real-time, and cost-effective road infrastructure management. The YOLOv5 model has been modified to incorporate several techniques that improve its accuracy and generalization performance. These techniques include the Efficient Channel Attention module (ECA-Net), label smoothing, the K-means++ algorithm, Focal Loss, and an additional prediction layer. The model attained a 1.9% improvement in mean average precision (mAP) and a 1.29% increase in F1-score in comparison to YOLOv5s, with an increment of 1.1 million parameters. Moreover, the proposed model achieved a 0.11% improvement in mAP and a 0.05% improvement in F1-score compared to YOLOv8s while having 3 million fewer parameters and requiring 12 fewer giga floating-point operations (GFLOPs). Full article
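
As an illustration of the Efficient Channel Attention (ECA-Net) component mentioned in this abstract, the following is a minimal PyTorch sketch of a standard ECA block, not the authors' code; the kernel-size heuristic and channel/feature-map sizes are illustrative assumptions.

```python
import math
import torch
import torch.nn as nn

class ECA(nn.Module):
    """Efficient Channel Attention: a 1D convolution over globally pooled
    channel descriptors produces per-channel attention weights (standard ECA sketch)."""
    def __init__(self, channels: int, gamma: int = 2, b: int = 1):
        super().__init__()
        # Adaptive kernel size following the ECA-Net heuristic.
        t = int(abs((math.log2(channels) + b) / gamma))
        k = t if t % 2 else t + 1
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.conv = nn.Conv1d(1, 1, kernel_size=k, padding=k // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, height, width)
        y = self.pool(x)                                    # (B, C, 1, 1)
        y = y.squeeze(-1).transpose(1, 2)                   # (B, 1, C)
        y = self.conv(y)                                    # local cross-channel interaction
        y = self.sigmoid(y).transpose(1, 2).unsqueeze(-1)   # (B, C, 1, 1)
        return x * y                                        # reweight feature maps

# Example: attach ECA to a 256-channel feature map (sizes are illustrative).
feat = torch.randn(2, 256, 40, 40)
print(ECA(256)(feat).shape)  # torch.Size([2, 256, 40, 40])
```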

14 pages, 5860 KiB  
Article
Regularized Contrastive Masked Autoencoder Model for Machinery Anomaly Detection Using Diffusion-Based Data Augmentation
by Esmaeil Zahedi, Mohamad Saraee, Fatemeh Sadat Masoumi and Mohsen Yazdinejad
Algorithms 2023, 16(9), 431; https://doi.org/10.3390/a16090431 - 8 Sep 2023
Cited by 3 | Viewed by 1947
Abstract
Unsupervised anomalous sound detection, especially self-supervised methods, plays a crucial role in differentiating unknown abnormal sounds of machines from normal sounds. Self-supervised learning can be divided into two main categories: Generative and Contrastive methods. While Generative methods mainly focus on reconstructing data, Contrastive learning methods refine data representations by leveraging the contrast between each sample and its augmented version. However, existing Contrastive learning methods for anomalous sound detection often have two main problems. The first one is that they mostly rely on simple augmentation techniques, such as time or frequency masking, which may introduce biases due to the limited diversity of real-world sounds and noises encountered in practical scenarios (e.g., factory noises combined with machine sounds). The second issue is dimension collapsing, which leads to learning a feature space with limited representation. To address the first shortcoming, we suggest a diffusion-based data augmentation method that employs ChatGPT and AudioLDM. Also, to address the second concern, we put forward a two-stage self-supervised model. In the first stage, we introduce a novel approach that combines Contrastive learning and masked autoencoders to pre-train on the MIMII and ToyADMOS2 datasets. This combination allows our model to capture both global and local features, leading to a more comprehensive representation of the data. In the second stage, we refine the audio representations for each machine ID by employing supervised Contrastive learning to fine-tune the pre-trained model. This process enhances the relationship between audio features originating from the same machine ID. Experiments show that our method outperforms most of the state-of-the-art self-supervised learning methods. Our suggested model achieves an average AUC and pAUC of 94.39% and 87.93% on the DCASE 2020 Challenge Task2 dataset, respectively. Full article
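
The two ingredients described above, masked reconstruction and Contrastive learning on augmented pairs, can be sketched in a few lines. This is an illustrative combination only; the linear encoder/decoder, masking ratio, temperature, and the noise stand-in for diffusion-based augmentation are assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.1):
    """Contrastive loss between two batches of embeddings (positives are row-aligned)."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature      # (B, B) cosine similarities
    targets = torch.arange(z1.size(0))      # positives sit on the diagonal
    return F.cross_entropy(logits, targets)

def masked_recon(x, encoder, decoder, mask_ratio=0.5):
    """Mask random inputs, reconstruct them, and penalize only the masked entries."""
    mask = (torch.rand_like(x) < mask_ratio).float()
    recon = decoder(encoder(x * (1 - mask)))
    return ((recon - x) ** 2 * mask).sum() / mask.sum().clamp(min=1.0)

# Toy demonstration on flattened log-mel frames (dimensions are illustrative).
enc, dec = nn.Linear(128, 32), nn.Linear(32, 128)
x = torch.randn(16, 128)                    # a batch of "normal" machine sounds
x_aug = x + 0.05 * torch.randn_like(x)      # stand-in for a diffusion-based augmentation
loss = masked_recon(x, enc, dec) + 0.5 * nt_xent(enc(x), enc(x_aug))
loss.backward()
```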

13 pages, 4255 KiB  
Article
Robustness of Artificial Neural Networks Based on Weight Alterations Used for Prediction Purposes
by Andreas G. Savva, Theocharis Theocharides and Chrysostomos Nicopoulos
Algorithms 2023, 16(7), 322; https://doi.org/10.3390/a16070322 - 29 Jun 2023
Cited by 1 | Viewed by 1405
Abstract
Nowadays, due to their excellent prediction capabilities, the use of artificial neural networks (ANNs) in software has significantly increased. One of the most important aspects of ANNs is robustness. Most existing studies on robustness focus on adversarial attacks and complete redundancy schemes in ANNs. Such redundancy methods for robustness are not easily applicable in modern embedded systems. This work presents a study, based on simulations, about the robustness of ANNs used for prediction purposes based on weight alterations. We devise a method to increase the robustness of ANNs directly from ANN characteristics. By using this method, only the most important neurons/connections are replicated, keeping the additional hardware overheads to a minimum. For implementation and evaluation purposes, the networks-on-chip (NoC) case, which is the next generation of system-on-chip, was used as a case study. The proposed study/method was validated using simulations and can be used for larger and different types of networks and hardware due to its scalable nature. The simulation results obtained using different PARSEC (Princeton Application Repository for Shared-Memory Computers) benchmark suite traffic show that a high level of robustness can be achieved with minimum hardware requirements in comparison to other works. Full article
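
A rough sketch of the selective-redundancy idea, replicating only the most important connections rather than the whole network, is given below. The magnitude-based importance score, replication fraction, and random fault model are illustrative assumptions rather than the authors' exact criteria.

```python
import numpy as np

def select_critical_weights(weight_matrix, fraction=0.05):
    """Return a boolean mask over the most important connections
    (importance approximated here by absolute weight magnitude)."""
    flat = np.abs(weight_matrix).ravel()
    k = max(1, int(fraction * flat.size))
    threshold = np.partition(flat, -k)[-k]
    return np.abs(weight_matrix) >= threshold

def forward_with_redundancy(x, w, critical_mask, fault_mask=None):
    """Simulate a faulty layer: faulted weights are zeroed, but critical
    connections fall back to their replicated copies."""
    w_faulty = w.copy()
    if fault_mask is not None:
        w_faulty[fault_mask] = 0.0
        # Replicated copies restore only the protected (critical) connections.
        w_faulty[fault_mask & critical_mask] = w[fault_mask & critical_mask]
    return x @ w_faulty

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 32))
mask = select_critical_weights(w, fraction=0.05)
faults = rng.random(w.shape) < 0.01           # 1% random weight faults
y = forward_with_redundancy(rng.normal(size=(1, 64)), w, mask, faults)
print(y.shape, int(mask.sum()), "protected connections")
```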

18 pages, 6442 KiB  
Article
Fault-Diagnosis Method for Rotating Machinery Based on SVMD Entropy and Machine Learning
by Lijun Zhang, Yuejian Zhang and Guangfeng Li
Algorithms 2023, 16(6), 304; https://doi.org/10.3390/a16060304 - 17 Jun 2023
Cited by 8 | Viewed by 1947
Abstract
Rolling bearings and gears are important components of rotating machinery, and their operating condition affects the operation of the equipment. A fault in these components directly leads to equipment downtime or a series of adverse reactions in the system, bringing enormous financial loss to the operator. Hence, it is of great significance to detect the operating status of rolling bearings and gears for fault diagnosis. At present, the vibration method, which analyzes the equipment by collecting vibration signals, is considered the most common approach to fault diagnosis. However, rotating-machinery fault diagnosis is challenging due to the need to select effective fault feature vectors, use appropriate machine-learning classification methods, and achieve accurate fault diagnosis. To solve this problem, this paper presents a new fault-diagnosis method combining successive variational-mode decomposition (SVMD) entropy values and machine learning. First, a simulation signal and a real fault signal are used to analyze and compare the variational-mode decomposition (VMD) and SVMD methods. The comparison results show that SVMD can be a useful method for fault diagnosis. Then, these two methods are used to extract the energy entropy and fuzzy entropy, respectively, of the gearbox dataset of Southeast University (SEU). The feature vectors and multiple machine-learning classification models are constructed for failure-mode identification. The experimental results verify the effectiveness of the combined SVMD entropy and machine-learning approach for rotating-machinery fault diagnosis. Full article
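
The entropy-feature-plus-classifier stage described above can be sketched as follows. SVMD itself is not implemented here; the decomposed-mode array stands in for its output, and the energy-entropy formula and random-forest classifier are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def energy_entropy(modes):
    """Shannon entropy of the per-mode energy distribution of one signal."""
    energies = np.array([np.sum(m ** 2) for m in modes])
    p = energies / energies.sum()
    return -np.sum(p * np.log(p + 1e-12))

# Stand-in for SVMD output: n_signals x n_modes x n_samples (sizes are illustrative).
rng = np.random.default_rng(0)
decomposed = rng.normal(size=(200, 5, 1024))
labels = rng.integers(0, 4, size=200)            # e.g., normal plus three fault classes

# One entropy value per mode set, plus per-mode energies as extra features.
features = np.array([
    np.concatenate(([energy_entropy(modes)],
                    [np.sum(m ** 2) for m in modes]))
    for modes in decomposed
])

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(clf, features, labels, cv=5).mean())
```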

25 pages, 4849 KiB  
Article
Cooperative Attention-Based Learning between Diverse Data Sources
by Harshit Srivastava and Ravi Sankar
Algorithms 2023, 16(5), 240; https://doi.org/10.3390/a16050240 - 4 May 2023
Viewed by 2312
Abstract
Cooperative attention provides a new method to study how epidemic diseases spread. It is derived from social data with the help of survey data. Cooperative attention enables the detection of possible anomalies in an event by formulating the spread variable, which determines the disease spread rate decision score. This work proposes determining the spread variable using a disease spread model and cooperative learning. It is a four-stage model that determines answers by identifying semantic cooperation, using the spread model to identify events, infection factors, location spread, and changes in the spread rate. The proposed model analyses the spread of COVID-19 throughout the United States using a new approach that defines data cooperation through the dynamic variable of the spread rate and the optimal cooperative strategy. Game theory is used to define the cooperative strategy and to analyze the dynamic variable determined with the help of a control algorithm. Our analysis successfully identifies the spread rate of disease from social data with an accuracy of 67% and can dynamically optimize the decision model using a control algorithm with a complexity of order O(n²). Full article

17 pages, 583 KiB  
Article
A Bayesian Multi-Armed Bandit Algorithm for Dynamic End-to-End Routing in SDN-Based Networks with Piecewise-Stationary Rewards
by Pedro Santana and José Moura
Algorithms 2023, 16(5), 233; https://doi.org/10.3390/a16050233 - 28 Apr 2023
Cited by 2 | Viewed by 2106
Abstract
To handle the exponential growth of data-intensive network edge services and automatically solve new challenges in routing management, machine learning is steadily being incorporated into software-defined networking solutions. In this line, the article presents the design of a piecewise-stationary Bayesian multi-armed bandit approach for the online optimum end-to-end dynamic routing of data flows in the context of programmable networking systems. This learning-based approach has been analyzed with simulated and emulated data, showing the proposal’s ability to sequentially and proactively self-discover the end-to-end routing path with minimal delay among a considerable number of alternatives, even when facing abrupt changes in transmission delay distributions due to both variable congestion levels on path network devices and dynamic delays to transmission links. Full article
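
The piecewise-stationary bandit idea can be sketched with a sliding-window Thompson sampling policy over candidate paths, where the reward is the negative measured delay. The window length, Gaussian reward model, and toy delay simulator below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np
from collections import deque

class SlidingWindowThompson:
    """Gaussian Thompson sampling over recent observations only, so the policy
    can re-adapt after abrupt changes in per-path delay distributions."""
    def __init__(self, n_paths, window=50):
        self.history = [deque(maxlen=window) for _ in range(n_paths)]

    def select(self, rng):
        samples = []
        for obs in self.history:
            if len(obs) < 2:
                samples.append(rng.normal(0.0, 1.0))   # force early exploration
            else:
                mu = np.mean(obs)
                sd = np.std(obs) / np.sqrt(len(obs)) + 1e-6
                samples.append(rng.normal(mu, sd))
        return int(np.argmax(samples))                 # highest sampled reward

    def update(self, path, reward):
        self.history[path].append(reward)

rng = np.random.default_rng(0)
true_delay = np.array([10.0, 12.0, 15.0])              # ms, per candidate path
agent = SlidingWindowThompson(n_paths=3, window=50)
for t in range(400):
    if t == 200:                                       # abrupt congestion change
        true_delay = np.array([18.0, 11.0, 15.0])
    path = agent.select(rng)
    delay = rng.normal(true_delay[path], 1.0)
    agent.update(path, -delay)                         # reward = negative delay
```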

24 pages, 8508 KiB  
Article
From Activity Recognition to Simulation: The Impact of Granularity on Production Models in Heavy Civil Engineering
by Anne Fischer, Alexandre Beiderwellen Bedrikow, Iris D. Tommelein, Konrad Nübel and Johannes Fottner
Algorithms 2023, 16(4), 212; https://doi.org/10.3390/a16040212 - 18 Apr 2023
Cited by 10 | Viewed by 4178
Abstract
As in manufacturing with its Industry 4.0 transformation, the enormous potential of artificial intelligence (AI) is also being recognized in the construction industry. Specifically, the equipment-intensive construction industry can benefit from using AI. AI applications can leverage the data recorded by the numerous sensors on machines and mirror them in a digital twin. Analyzing the digital twin can help optimize processes on the construction site and increase productivity. We present a case from special foundation engineering: the machine production of bored piles. We introduce a hierarchical classification for activity recognition and apply a hybrid deep learning model based on convolutional and recurrent neural networks. Then, based on the results from the activity detection, we use discrete-event simulation to predict construction progress. We highlight the difficulty of defining the appropriate modeling granularity. While activity detection requires equipment movement, simulation requires knowledge of the production flow. Therefore, we present a flow-based production model that can be captured in a modularized process catalog. Overall, this paper aims to illustrate modeling using digital-twin technologies to increase construction process improvement in practice. Full article
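
A minimal sketch of a hybrid convolutional-recurrent activity classifier of the kind described above, operating on windows of multichannel equipment sensor data. Layer sizes, window length, and the number of activity classes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ConvLSTMClassifier(nn.Module):
    """1D CNN feature extractor followed by an LSTM and a linear head,
    mapping a sensor window to one of several machine activities."""
    def __init__(self, n_channels=6, n_classes=5):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.lstm = nn.LSTM(input_size=64, hidden_size=64, batch_first=True)
        self.head = nn.Linear(64, n_classes)

    def forward(self, x):
        # x: (batch, channels, time)
        feats = self.cnn(x).transpose(1, 2)      # (batch, time', 64)
        _, (h, _) = self.lstm(feats)
        return self.head(h[-1])                  # class logits

# Two windows of 6 sensor channels, 256 samples each (illustrative).
model = ConvLSTMClassifier()
print(model(torch.randn(2, 6, 256)).shape)       # torch.Size([2, 5])
```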

18 pages, 2886 KiB  
Article
Application of Search Algorithms in Determining Fault Location on Overhead Power Lines According to the Emergency Mode Parameters
by Aleksandr Kulikov, Pavel Ilyushin, Anton Loskutov and Sergey Filippov
Algorithms 2023, 16(4), 189; https://doi.org/10.3390/a16040189 - 30 Mar 2023
Viewed by 1705
Abstract
Identifying fault locations (FL) on overhead power lines (OHPLs) in the shortest possible time reduces the time for which OHPLs must be shut down in case of damage, which helps to improve the reliability of power systems. FL devices on OHPLs based on the emergency mode parameters (EMPs) are widely used, as they have a lower cost; however, they have a larger error than FL devices that record traveling-wave processes. Most well-known algorithms for FL on OHPLs by EMPs assume a uniform distribution of resistivity along the OHPL. In real conditions, this is not the case, and applying these algorithms in FL devices on OHPLs with inhomogeneities leads to significant errors in calculating the distance to the fault location. The use of search algorithms for unconstrained one-dimensional optimization is proposed to increase the speed of the iterative procedures implemented in FL devices on OHPLs by EMPs. Recommendations have been developed for choosing optimization criteria, as well as options for implementing the computational procedures. Using the example of a two-sided FL on an OHPL, it is shown that the use of search algorithms can significantly (from tens to hundreds of times) reduce the number of steps of the computational iterative procedure. The search algorithms can be implemented in the software of typical relay protection and automation terminals, without upgrading their hardware. Full article
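
The one-dimensional search idea can be illustrated with a golden-section search that minimizes a residual over candidate fault distances. The residual model below (a piecewise-resistivity line compared against a two-sided measurement) is a deliberately simplified stand-in for the optimization criteria discussed in the paper.

```python
import math

def golden_section_min(f, a, b, tol=1e-4):
    """Minimize a unimodal function f on [a, b]; returns (x*, iterations)."""
    inv_phi = (math.sqrt(5) - 1) / 2
    c, d = b - inv_phi * (b - a), a + inv_phi * (b - a)
    iters = 0
    while abs(b - a) > tol:
        if f(c) < f(d):
            b, d = d, c
            c = b - inv_phi * (b - a)
        else:
            a, c = c, d
            d = a + inv_phi * (b - a)
        iters += 1
    return (a + b) / 2, iters

def line_drop(x):
    """Cumulative resistive drop up to distance x (km) on a 100 km line whose
    resistivity changes at km 40 (piecewise model, values are illustrative)."""
    return 0.12 * min(x, 40.0) + 0.17 * max(x - 40.0, 0.0)

def residual(x, measured=line_drop(63.0)):
    """Mismatch between the modelled drop and the 'measured' value for a fault at 63 km."""
    return abs(line_drop(x) - measured)

x_star, n = golden_section_min(residual, 0.0, 100.0)
print(f"estimated fault distance: {x_star:.2f} km after {n} iterations")
```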

20 pages, 5295 KiB  
Article
In-Process Monitoring of Hobbing Process Using an Acoustic Emission Sensor and Supervised Machine Learning
by Vivian Schiller, Sandra Klaus, Ali Bilen and Gisela Lanza
Algorithms 2023, 16(4), 183; https://doi.org/10.3390/a16040183 - 28 Mar 2023
Cited by 2 | Viewed by 2807
Abstract
The complexity of products is increasing considerably, and key functions can often only be realized by using high-precision components. Microgears have a particularly complex geometry, and thus the manufacturing requirements often reach technological limits. Their geometric deviations are relatively large in comparison to the small component size and thus have a major impact on the functionality in terms of generating unwanted noise and vibrations in the final product. There are still no readily available production-integrated measuring methods that enable quality control of all produced microgears. Consequently, many manufacturers are not able to measure any geometric gear parameters according to standards such as DIN ISO 21771. If at all, only samples are measured, as this is only possible by means of specialized, sensitive, and cost-intensive tactile or optical measuring technologies. In a novel approach, this paper examines the integration of an acoustic emission sensor into the hobbing process of microgears in order to predict process parameters as well as geometric and functional features of the produced gears. In terms of process parameters, radial feed and tool tumble are investigated, whereas the total profile deviation is used as a representative geometric variable and the overall transmission error as a functional variable. The approach is experimentally validated by means of the design of experiments. Furthermore, different approaches for feature extraction from time-continuous sensor data and different machine-learning approaches for predicting process and geometry parameters are compared with each other and tested for suitability. It is shown that structure-borne sound, in combination with supervised machine learning and data analysis, is suitable for in-process monitoring of microgear hobbing processes. Full article
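
A compact sketch of the feature-extraction-plus-supervised-learning pipeline described above: statistical and spectral features are computed from windows of the acoustic emission signal and regressed onto a geometric target such as the total profile deviation. The feature set, sampling rate, synthetic data, and regressor are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

def ae_features(window, fs=50_000):
    """Simple time- and frequency-domain features of one acoustic emission window."""
    rms = np.sqrt(np.mean(window ** 2))
    kurtosis = np.mean((window - window.mean()) ** 4) / (window.std() ** 4 + 1e-12)
    spectrum = np.abs(np.fft.rfft(window))
    freqs = np.fft.rfftfreq(window.size, d=1.0 / fs)
    centroid = np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-12)
    return np.array([rms, kurtosis, centroid, spectrum.max()])

# Synthetic stand-in for AE windows and one geometric target value per part.
rng = np.random.default_rng(0)
windows = rng.normal(size=(150, 4096))
targets = rng.normal(loc=5.0, scale=0.5, size=150)   # e.g., profile deviation in µm

X = np.array([ae_features(w) for w in windows])
model = GradientBoostingRegressor(random_state=0)
print(cross_val_score(model, X, targets, cv=5, scoring="r2").mean())
```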

15 pages, 433 KiB  
Article
Toward Explainable AutoEncoder-Based Diagnosis of Dynamical Systems
by Gregory Provan
Algorithms 2023, 16(4), 178; https://doi.org/10.3390/a16040178 - 24 Mar 2023
Viewed by 2049
Abstract
Autoencoders have been used widely for diagnosing devices, for example, faults in rotating machinery. However, autoencoder-based approaches lack explainability for their results and can be hard to tune. In this article, we propose an explainable method for applying autoencoders for diagnosis, where we use a metric that maximizes the diagnostics accuracy. Since an autoencoder projects the input into a reduced subspace (the code), we define a theoretically well-understood approach, the subspace principal angle, to define a metric over the possible fault labels. We show how this approach can be used for both single-device diagnostics (e.g., faults in rotating machinery) and complex (multi-device) dynamical systems. We empirically validate the theoretical claims using multiple autoencoder architectures. Full article
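
The subspace-principal-angle metric referred to above is available directly in SciPy. A small sketch follows, assuming each operating condition or fault label is represented by the span of autoencoder code vectors collected under it; the code dimension and data are illustrative.

```python
import numpy as np
from scipy.linalg import subspace_angles, orth

rng = np.random.default_rng(0)

# Autoencoder code vectors (columns) collected under two operating conditions.
codes_nominal = rng.normal(size=(16, 40))       # 16-dimensional code, 40 samples
codes_faulty = 0.3 * (codes_nominal @ rng.normal(size=(40, 40))) + rng.normal(size=(16, 40))

# Orthonormal bases of the two code subspaces, truncated to rank r.
r = 5
basis_nominal = orth(codes_nominal)[:, :r]
basis_faulty = orth(codes_faulty)[:, :r]

# Principal angles: small angles mean the observed codes lie close to the
# nominal subspace; large angles indicate a candidate fault label.
angles = subspace_angles(basis_nominal, basis_faulty)
print(np.degrees(angles))
```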

26 pages, 3019 KiB  
Article
A Real-Time Novelty Recognition Framework Based on Machine Learning for Fault Detection
by Umberto Albertin, Giuseppe Pedone, Matilde Brossa, Giovanni Squillero and Marcello Chiaberge
Algorithms 2023, 16(2), 61; https://doi.org/10.3390/a16020061 - 17 Jan 2023
Cited by 6 | Viewed by 2575
Abstract
With the ascent of the Industry 4.0 paradigm, new technologies are being developed inside today's companies; Artificial Intelligence applied to Predictive Maintenance is one of these, helping factories automate their systems for detecting anomalies. A common investigation technique is to monitor the deviation of statistical features, computed on the collected data, from standard operating conditions. A problem with this approach is the information loss caused by the transformation from raw data to extracted features. Furthermore, a common Predictive Maintenance framework requires historical data about failures, which often do not exist, precluding its application. This paper uses Artificial Intelligence, in the form of Machine Learning models, to recognize when something changes in the behavior of the data collected up to that moment, also helping companies to gather a preliminary dataset for a future Predictive Maintenance implementation. The aim is a framework in which several sensors are used to collect data through a sensor fusion approach. The architecture is composed of an optimized software system able to enhance computational scalability and response time with regard to novelty detection. This article analyzes the proposed architecture, then explains a proof-of-concept development using a digital model; finally, two real cases are studied to show how the framework behaves in a real environment. The analysis in this paper takes an application-oriented approach; hence, a company can directly use the framework in its systems. Full article
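
The framework's core loop, learning what "normal" looks like from the data collected so far and flagging departures from it, can be sketched with a one-class model fitted on a reference buffer. The choice of Isolation Forest, the buffer size, and the simulated behaviour change are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Reference buffer: fused sensor features gathered during known-normal operation.
reference = rng.normal(loc=0.0, scale=1.0, size=(500, 8))
detector = IsolationForest(contamination=0.01, random_state=0).fit(reference)

# Streaming phase: score each new fused sample as it arrives.
for t in range(1000):
    drift = 0.0 if t < 600 else 2.5        # behaviour change at t = 600
    sample = rng.normal(loc=drift, scale=1.0, size=(1, 8))
    if detector.predict(sample)[0] == -1:  # -1 = novelty w.r.t. the reference data
        print(f"novelty flagged at sample {t}")
        break
```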

21 pages, 2149 KiB  
Article
Packet-Level and Flow-Level Network Intrusion Detection Based on Reinforcement Learning and Adversarial Training
by Bin Yang, Muhammad Haseeb Arshad and Qing Zhao
Algorithms 2022, 15(12), 453; https://doi.org/10.3390/a15120453 - 30 Nov 2022
Cited by 5 | Viewed by 2799
Abstract
Powered by advances in information and internet technologies, network-based applications have developed rapidly, and cybersecurity has grown more critical. Inspired by Reinforcement Learning (RL) success in many domains, this paper proposes an Intrusion Detection System (IDS) to improve cybersecurity. The IDS based on two RL algorithms, i.e., Deep Q-Learning and Policy Gradient, is carefully formulated, strategically designed, and thoroughly evaluated at the packet-level and flow-level using the CICDDoS2019 dataset. Compared to other research work in a similar line of research, this paper is focused on providing a systematic and complete design paradigm of IDS based on RL algorithms, at both the packet and flow levels. For the packet-level RL-based IDS, first, the session data are transformed into images via an image embedding method proposed in this work. A comparison between 1D-Convolutional Neural Networks (1D-CNN) and CNN for extracting features from these images (for further RL agent training) is drawn from the quantitative results. In addition, an anomaly detection module is designed to detect unknown network traffic. For flow-level IDS, a Conditional Generative Adversarial Network (CGAN) and the ε-greedy strategy are adopted in designing the exploration module for RL agent training. To improve the robustness of the intrusion detection, a sample agent with a complement reward policy of the RL agent is introduced for the purpose of adversarial training. The experimental results of the proposed RL-based IDS show improved results over the state-of-the-art algorithms presented in the literature for packet-level and flow-level IDS. Full article
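
The Deep Q-Learning half of the design can be sketched as a standard ε-greedy agent with a replay buffer and a temporal-difference target. The flow-feature dimensionality, the two-action verdict space, and the hyperparameters are illustrative assumptions rather than the paper's exact configuration.

```python
import random
from collections import deque
import torch
import torch.nn as nn

n_features, n_actions = 32, 2              # flow features; verdicts: benign / attack
q_net = nn.Sequential(nn.Linear(n_features, 64), nn.ReLU(), nn.Linear(64, n_actions))
target_net = nn.Sequential(nn.Linear(n_features, 64), nn.ReLU(), nn.Linear(64, n_actions))
target_net.load_state_dict(q_net.state_dict())
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
replay = deque(maxlen=10_000)
gamma, epsilon = 0.99, 0.1

def act(state):
    """ε-greedy action: mostly the argmax Q-value, sometimes a random verdict."""
    if random.random() < epsilon:
        return random.randrange(n_actions)
    with torch.no_grad():
        return int(q_net(state).argmax())

def train_step(batch_size=64):
    """One temporal-difference update on a random minibatch from the replay buffer."""
    if len(replay) < batch_size:
        return
    batch = random.sample(replay, batch_size)
    s, a, r, s_next = (torch.stack(x) for x in zip(*batch))
    q = q_net(s).gather(1, a.long().unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        target = r + gamma * target_net(s_next).max(dim=1).values
    loss = nn.functional.mse_loss(q, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Toy interaction: random "flows" with a reward of +1 for the assumed-correct verdict.
for _ in range(500):
    state = torch.randn(n_features)
    action = act(state)
    reward = torch.tensor(1.0 if action == 1 else 0.0)   # pretend every flow is an attack
    replay.append((state, torch.tensor(action), reward, torch.randn(n_features)))
    train_step()
```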

22 pages, 4021 KiB  
Article
An Auto-Encoder with Genetic Algorithm for High Dimensional Data: Towards Accurate and Interpretable Outlier Detection
by Jiamu Li, Ji Zhang, Mohamed Jaward Bah, Jian Wang, Youwen Zhu, Gaoming Yang, Lingling Li and Kexin Zhang
Algorithms 2022, 15(11), 429; https://doi.org/10.3390/a15110429 - 15 Nov 2022
Cited by 5 | Viewed by 3580
Abstract
When dealing with high-dimensional data, such as in biometric, e-commerce, or industrial applications, it is extremely hard to capture the abnormalities in full space due to the curse of dimensionality. Furthermore, it is becoming increasingly complicated but essential to provide interpretations for outlier detection results in high-dimensional space as a consequence of the large number of features. To alleviate these issues, we propose a new model based on a Variational AutoEncoder and Genetic Algorithm (VAEGA) for detecting outliers in subspaces of high-dimensional data. The proposed model employs a neural network to create a probabilistic dimensionality-reduction variational autoencoder (VAE) that uses its low-dimensional hidden space to characterize the high-dimensional inputs. Then, the hidden vector is sampled randomly from the hidden space to reconstruct the data so that it closely matches the input data. The reconstruction error is then computed to determine an outlier score, and samples exceeding the threshold are tentatively identified as outliers. In the second step, a genetic algorithm (GA) is used as a basis for examining and analyzing the abnormal subspaces of the outlier set obtained by the VAE layer. After encoding the outlier dataset's subspaces, the degree of anomaly for the detected subspaces is calculated using the redefined fitness function. Finally, the abnormal subspace is determined for each detected point by selecting the subspace with the highest degree of anomaly. The clustering of abnormal subspaces helps filter outliers that are mislabeled (false positives), and the VAE layer adjusts the network weights based on the false positives. When compared with other methods on five public datasets, the VAEGA outlier detection model produces highly interpretable results and outperforms, or is competitive with, contemporary methods. Full article
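
The first stage described above, scoring samples by VAE reconstruction error and flagging those past a threshold, can be sketched compactly; the genetic-algorithm subspace search is omitted. The toy architecture, training loop, and percentile threshold are illustrative assumptions.

```python
import torch
import torch.nn as nn

class TinyVAE(nn.Module):
    """Minimal VAE: Gaussian latent space, MSE reconstruction (toy dimensions)."""
    def __init__(self, d_in=30, d_latent=5):
        super().__init__()
        self.enc = nn.Linear(d_in, 2 * d_latent)     # outputs [mu, log_var]
        self.dec = nn.Linear(d_latent, d_in)

    def forward(self, x):
        mu, log_var = self.enc(x).chunk(2, dim=1)
        z = mu + torch.randn_like(mu) * (0.5 * log_var).exp()   # reparameterization
        return self.dec(z), mu, log_var

torch.manual_seed(0)
x_train = torch.randn(512, 30)                       # mostly normal training data
vae = TinyVAE()
opt = torch.optim.Adam(vae.parameters(), lr=1e-3)
for _ in range(200):
    recon, mu, log_var = vae(x_train)
    kl = -0.5 * torch.mean(1 + log_var - mu.pow(2) - log_var.exp())
    loss = nn.functional.mse_loss(recon, x_train) + 1e-3 * kl
    opt.zero_grad()
    loss.backward()
    opt.step()

# Outlier score = per-sample reconstruction error; threshold at a high percentile.
with torch.no_grad():
    errors = ((vae(x_train)[0] - x_train) ** 2).mean(dim=1)
threshold = errors.quantile(0.99)
x_new = torch.cat([torch.randn(5, 30), 6 * torch.ones(1, 30)])  # last row is anomalous
with torch.no_grad():
    scores = ((vae(x_new)[0] - x_new) ** 2).mean(dim=1)
print(scores > threshold)
```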

38 pages, 1009 KiB  
Article
Anomaly Detection in Financial Time Series by Principal Component Analysis and Neural Networks
by Stéphane Crépey, Noureddine Lehdili, Nisrine Madhar and Maud Thomas
Algorithms 2022, 15(10), 385; https://doi.org/10.3390/a15100385 - 19 Oct 2022
Cited by 7 | Viewed by 5286
Abstract
A major concern when dealing with financial time series involving a wide variety of market risk factors is the presence of anomalies. These induce a miscalibration of the models used to quantify and manage risk, resulting in potential erroneous risk measures. We propose an approach that aims to improve anomaly detection in financial time series, overcoming most of the inherent difficulties. Valuable features are extracted from the time series by compressing and reconstructing the data through principal component analysis. We then define an anomaly score using a feedforward neural network. A time series is considered to be contaminated when its anomaly score exceeds a given cutoff value. This cutoff value is not a hand-set parameter but rather is calibrated as a neural network parameter throughout the minimization of a customized loss function. The efficiency of the proposed approach compared to several well-known anomaly detection algorithms is numerically demonstrated on both synthetic and real data sets, with high and stable performance being achieved with the PCA NN approach. We show that value-at-risk estimation errors are reduced when the proposed anomaly detection model is used with a basic imputation approach to correct the anomaly. Full article
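
A minimal sketch of the pipeline described above, assuming labeled (e.g., synthetically contaminated) series are available for calibration: PCA reconstruction residuals serve as features, a small feedforward network produces the anomaly score, and the cutoff is a trainable parameter optimized jointly through the loss. Dimensions and the exact loss form are illustrative, not the authors' construction.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
normal = rng.normal(size=(400, 20))
anomalous = rng.normal(loc=3.0, size=(40, 20))
series = np.vstack([normal, anomalous]).astype(np.float32)
labels = torch.tensor([0.0] * 400 + [1.0] * 40)

# PCA compression/reconstruction features: per-sample reconstruction residual profile.
pca = PCA(n_components=5).fit(normal)
recon = pca.inverse_transform(pca.transform(series))
features = torch.tensor(np.abs(series - recon), dtype=torch.float32)

# Scoring network plus a cutoff that is itself a trainable parameter.
scorer = nn.Sequential(nn.Linear(20, 16), nn.ReLU(), nn.Linear(16, 1))
cutoff = nn.Parameter(torch.tensor(0.0))
opt = torch.optim.Adam(list(scorer.parameters()) + [cutoff], lr=1e-2)
for _ in range(300):
    scores = scorer(features).squeeze(1)
    # Anomalous series should score above the cutoff, normal series below it.
    loss = nn.functional.binary_cross_entropy_with_logits(scores - cutoff, labels)
    opt.zero_grad()
    loss.backward()
    opt.step()

with torch.no_grad():
    flagged = (scorer(features).squeeze(1) > cutoff).float()
print("detection rate on injected anomalies:", flagged[-40:].mean().item())
```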
