Engineering Applications of Artificial Intelligence for Sensors

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Intelligent Sensors".

Deadline for manuscript submissions: closed (30 September 2024) | Viewed by 18,377

Special Issue Editors


Prof. Dr. Jarosław Kurek
Guest Editor
Department of Artificial Intelligence, Institute of Information Technology, Warsaw University of Life Sciences, Nowoursynowska 159, 02-776 Warsaw, Poland
Interests: artificial intelligence; machine learning; deep learning; applications of AI for sensors

Prof. Dr. Bartosz Świderski
Guest Editor
Department of Artificial Intelligence, Institute of Information Technology, Warsaw University of Life Sciences, Nowoursynowska 159, 02-776 Warsaw, Poland
Interests: artificial intelligence; machine learning; deep learning; applications of AI for sensors

Special Issue Information

Dear Colleagues,

Artificial intelligence (AI) is currently one of the most rapidly developing techniques in the engineering world and plays an important role worldwide. Thanks to the continuous development of machine learning, deep learning, and related fields, we constantly observe new applications of these techniques. Engineers widely use artificial intelligence to solve a whole range of previously intractable problems.

Artificial intelligence methods require large amounts of data, which are often obtained from many types of sensors and sensing technology. The combination of these two areas, data obtained from sensors and artificial intelligence algorithms, creates an extremely interesting, future-proof, and promising interdisciplinary research field.

This Special Issue, "Engineering Applications of Artificial Intelligence for Sensors", provides an international forum for the publication of papers describing the practical application of AI methods, such as machine learning and deep learning, in all aspects of sensor engineering. Artificial intelligence techniques implemented as either open-source or closed-source software are acceptable, and solutions based on cloud computing are particularly welcome. This Special Issue aims to report innovative algorithms and applications of artificial intelligence, machine learning, and deep learning that improve quality of life. Submitted articles should present interesting engineering applications of artificial intelligence in which the data are derived from sensors.

Potential topics include, but are not limited to:

  • Machine learning applications
  • Deep learning applications
  • Internet of things (IoT) and cyber-physical systems
  • Intelligent transportation systems & smart vehicles
  • Big data analytics, understanding complex networks
  • Neural networks, fuzzy systems, neuro-fuzzy systems
  • Deep learning and real-world applications
  • Self-organizing, emerging, or bio-inspired systems
  • Global optimization, meta-heuristics, and their applications: evolutionary algorithms, swarm intelligence, nature- and biologically inspired meta-heuristics, etc.
  • Architectures, algorithms and techniques for distributed AI systems, including multi-agent-based control and holonic control
  • Decision-support systems
  • Real-time intelligent automation, and their associated supporting methodologies and techniques, including control theory and industrial informatics
  • Knowledge processing, knowledge elicitation and acquisition, knowledge representation, knowledge compaction, knowledge bases, expert systems
  • Perception, e.g., image processing, pattern recognition, vision systems, tactile systems, speech recognition and synthesis
  • Aspects of software engineering, e.g., intelligent programming environments, verification and validation of AI-based software, software and hardware architectures for the real-time use of AI techniques, safety and reliability
  • Intelligent fault detection, fault analysis, diagnostics and monitoring
  • Industrial experiences in the application of the above techniques, e.g., case studies or benchmarking exercises
  • Robotics

Prof. Dr. Jarosław Kurek
Prof. Dr. Bartosz Świderski 
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • artificial intelligence
  • machine learning
  • deep learning
  • simulation
  • applications of AI for sensors
  • neural network
  • decision-support systems
  • sensors

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (7 papers)

Research

15 pages, 7277 KiB  
Article
Leak Event Diagnosis for Power Plants: Generative Anomaly Detection Using Prototypical Networks
by Jaehyeok Jeong, Doyeob Yeo, Seungseo Roh, Yujin Jo and Minsuk Kim
Sensors 2024, 24(15), 4991; https://doi.org/10.3390/s24154991 - 1 Aug 2024
Viewed by 868
Abstract
Anomaly detection systems based on artificial intelligence (AI) have demonstrated high performance and efficiency in a wide range of applications such as power plants and smart factories. However, due to the inherent reliance of AI systems on the quality of training data, they still demonstrate poor performance in certain environments. Especially in hazardous facilities with constrained data collection, deploying these systems remains a challenge. In this paper, we propose Generative Anomaly Detection using Prototypical Networks (GAD-PN), designed to detect anomalies using only a limited number of normal samples. GAD-PN is a structure that integrates CycleGAN with Prototypical Networks (PNs), learning from metadata similar to the target environment. This approach enables the collection of data that are difficult to gather in real-world environments by using simulation or demonstration models, thus providing opportunities to learn a variety of environmental parameters under ideal and normal conditions. During the inference phase, PNs can classify normal and leak samples using only a small number of normal samples from the target environment, via prototypes that represent normal and abnormal features. We also address the challenge of collecting anomaly data by generating anomaly data from normal data using CycleGAN trained on anomaly features. The approach can also be adapted to various environments that have similar anomalous scenarios, regardless of differences in environmental parameters. To validate the proposed structure, data were collected specifically targeting pipe leakage scenarios, which are significant problems in environments such as power plants. In addition, acoustic ultrasound signals were collected from the pipe nozzles in three different environments. As a result, the proposed model achieved a leak detection accuracy of over 90% in all environments, even with only a small number of normal samples. This performance shows an average improvement of approximately 30% compared with traditional unsupervised learning models trained with a limited dataset. Full article
(This article belongs to the Special Issue Engineering Applications of Artificial Intelligence for Sensors)
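
As an illustrative aside, the prototypical-network inference step the abstract describes can be sketched in a few lines: class prototypes are mean embeddings of the support samples, and a query is assigned to the class of its nearest prototype. This is a minimal sketch with assumed shapes and random stand-in data, not the authors' GAD-PN implementation.

```python
# Minimal prototypical-network classification sketch (hypothetical data).
import torch

def compute_prototypes(embeddings: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """Average the support embeddings of each class into one prototype."""
    return torch.stack([embeddings[labels == c].mean(dim=0) for c in labels.unique()])

def classify(queries: torch.Tensor, prototypes: torch.Tensor) -> torch.Tensor:
    """Assign each query embedding to the class of its nearest prototype."""
    dists = torch.cdist(queries, prototypes) ** 2  # squared Euclidean distance
    return dists.argmin(dim=1)

# Toy usage: 2 classes (0 = normal, 1 = leak), 5 support samples each, 64-d embeddings.
support = torch.randn(10, 64)
support_labels = torch.tensor([0] * 5 + [1] * 5)
prototypes = compute_prototypes(support, support_labels)
print(classify(torch.randn(3, 64), prototypes))  # e.g. tensor([0, 1, 0])
```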

29 pages, 1265 KiB  
Article
Machine Learning Model Development to Predict Power Outage Duration (POD): A Case Study for Electric Utilities
by Bita Ghasemkhani, Recep Alp Kut, Reyat Yilmaz, Derya Birant, Yiğit Ahmet Arıkök, Tugay Eren Güzelyol and Tuna Kut
Sensors 2024, 24(13), 4313; https://doi.org/10.3390/s24134313 - 2 Jul 2024
Viewed by 2083
Abstract
In the face of increasing climate variability and the complexities of modern power grids, managing power outages in electric utilities has emerged as a critical challenge. This paper introduces a novel predictive model employing machine learning algorithms, including decision tree (DT), random forest (RF), k-nearest neighbors (KNN), and extreme gradient boosting (XGBoost). Leveraging historical sensor-based and non-sensor-based outage data from a Turkish electric utility company, the model demonstrates adaptability to diverse grid structures, considers meteorological and non-meteorological outage causes, and provides real-time feedback to customers to effectively address the problem of power outage duration. The XGBoost algorithm with minimum redundancy maximum relevance (MRMR) feature selection attained 98.433% accuracy in predicting outage durations, surpassing state-of-the-art methods, which showed 85.511% accuracy on average over various datasets, a 12.922% improvement. This paper contributes a practical solution to enhance outage management and customer communication, showcasing the potential of machine learning to transform electric utility responses and improve grid resilience and reliability. Full article
(This article belongs to the Special Issue Engineering Applications of Artificial Intelligence for Sensors)
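
To make the MRMR-plus-XGBoost pipeline from the abstract concrete, the sketch below implements one simplified greedy MRMR variant (mutual information for relevance, mean absolute correlation with already-selected features for redundancy) and feeds the chosen features to an XGBoost classifier. The dataset, feature names, and duration classes are invented placeholders; the paper's exact MRMR formulation may differ.

```python
# Simplified greedy MRMR feature selection followed by XGBoost (stand-in data).
import numpy as np
import pandas as pd
from sklearn.feature_selection import mutual_info_classif
from xgboost import XGBClassifier

def mrmr_select(X: pd.DataFrame, y: pd.Series, k: int) -> list[str]:
    # Relevance: mutual information of each feature with the target.
    relevance = pd.Series(mutual_info_classif(X, y), index=X.columns)
    selected, remaining = [], list(X.columns)
    for _ in range(k):
        def score(col):
            if not selected:
                return relevance[col]
            # Redundancy: mean absolute correlation with already-chosen features.
            redundancy = X[selected].corrwith(X[col]).abs().mean()
            return relevance[col] - redundancy
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected

# Hypothetical outage data: rows are outage events, target is a duration class.
X = pd.DataFrame(np.random.rand(200, 8), columns=[f"f{i}" for i in range(8)])
y = pd.Series(np.random.randint(0, 3, 200))  # e.g. short / medium / long outage
features = mrmr_select(X, y, k=4)
model = XGBClassifier(n_estimators=200, max_depth=4).fit(X[features], y)
print(features, model.score(X[features], y))
```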

22 pages, 6526 KiB  
Article
Optimizing Image Enhancement: Feature Engineering for Improved Classification in AI-Assisted Artificial Retinas
by Asif Mehmood, Jungbeom Ko, Hyunchul Kim and Jungsuk Kim
Sensors 2024, 24(9), 2678; https://doi.org/10.3390/s24092678 - 23 Apr 2024
Viewed by 1780
Abstract
Artificial retinas have revolutionized the lives of many blind people by enabling them to perceive vision via an implanted chip. Despite significant advancements, some limitations cannot be ignored. Presenting all objects captured in a scene makes their identification difficult. Addressing this limitation is necessary because the artificial retina can utilize only a very limited number of pixels to represent vision information. This problem in a multi-object scenario can be mitigated by enhancing images such that only the major objects are shown. Although simple techniques like edge detection are used, they fall short in representing identifiable objects in complex scenarios, suggesting the idea of integrating primary object edges. To support this idea, the proposed classification model aims at identifying the primary objects based on a suggested set of selective features. The proposed classification model can then be incorporated into the artificial retina system to filter multiple primary objects and enhance vision. The ability to handle multiple objects enables the system to cope with complex real-world scenarios. The proposed classification model is based on a multi-label deep neural network, specifically designed to leverage the selective feature set. Initially, the enhanced images proposed in this research are compared with those that utilize an edge detection technique for single, dual, and multi-object images. These enhancements are also verified through an intensity profile analysis. Subsequently, the proposed classification model’s performance is evaluated to show the significance of utilizing the suggested features. This includes evaluating the model’s ability to correctly classify the top five, four, three, two, and one object(s), with respective accuracies of up to 84.8%, 85.2%, 86.8%, 91.8%, and 96.4%. Several comparisons, such as training/validation loss and accuracy, precision, recall, specificity, and area under the curve, indicate reliable results. Based on the overall evaluation of this study, it is concluded that using the suggested set of selective features not only improves the classification model’s performance but also aligns with the specific problem, addressing the challenge of correctly identifying objects in multi-object scenarios. Therefore, the proposed classification model designed on the basis of selective features is considered a very useful tool for supporting the idea of optimizing image enhancement. Full article
(This article belongs to the Special Issue Engineering Applications of Artificial Intelligence for Sensors)
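
The multi-label design mentioned in the abstract, where several object labels can be active at once, is typically realized with independent sigmoid outputs trained using binary cross-entropy. Below is a minimal sketch with assumed layer sizes and random stand-in data; it is not the authors' architecture.

```python
# Minimal multi-label classifier sketch: one independent sigmoid per label.
import torch
import torch.nn as nn

NUM_FEATURES, NUM_LABELS = 32, 5  # e.g. 5 candidate primary-object classes

model = nn.Sequential(
    nn.Linear(NUM_FEATURES, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, NUM_LABELS),  # raw logits, one per label
)
criterion = nn.BCEWithLogitsLoss()  # multi-label: independent sigmoid per label
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One toy training step on random stand-in data.
x = torch.randn(16, NUM_FEATURES)
y = torch.randint(0, 2, (16, NUM_LABELS)).float()  # several labels may be 1
loss = criterion(model(x), y)
optimizer.zero_grad()
loss.backward()
optimizer.step()

# At inference, threshold the per-label probabilities independently.
probs = torch.sigmoid(model(x))
predicted = (probs > 0.5).int()
```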

32 pages, 1698 KiB  
Article
Anomaly Detection in Railway Sensor Data Environments: State-of-the-Art Methods and Empirical Performance Evaluation
by Michał Bałdyga, Kacper Barański, Jakub Belter, Mateusz Kalinowski and Paweł Weichbroth
Sensors 2024, 24(8), 2633; https://doi.org/10.3390/s24082633 - 20 Apr 2024
Viewed by 1866
Abstract
To date, significant progress has been made in the field of railway anomaly detection using technologies such as real-time data analytics, the Internet of Things, and machine learning. As technology continues to evolve, the ability to detect and respond to anomalies in railway systems is once again in the spotlight. However, railway anomaly detection faces challenges related to the vast infrastructure, dynamic conditions, aging infrastructure, and adverse environmental conditions on the one hand, and the scale, complexity, and critical safety implications of railway systems on the other. Our study is underpinned by three objectives. Specifically, we aim to identify time series anomaly detection methods applied to railway sensor device data, recognize the advantages and disadvantages of these methods, and evaluate their effectiveness. To address these objectives, the study involved a systematic literature review followed by a series of controlled experiments. For the former, we adopted well-established guidelines to structure and visualize the review. In the latter, we investigated the effectiveness of selected machine learning methods. To evaluate the predictive performance of each method, a five-fold cross-validation approach was applied to ensure accuracy and generality. Based on the calculated accuracy, the results show that the top three methods are CatBoost (96%), Random Forest (91%), and XGBoost (90%), whereas the lowest accuracy is observed for One-Class Support Vector Machines (48%), Local Outlier Factor (53%), and Isolation Forest (55%). As the industry moves toward a zero-defect paradigm on a global scale, ongoing research efforts are focused on improving existing methods and developing new ones that contribute to the safety and quality of rail transportation. In this sense, there are at least three avenues for future research worth considering: testing richer data sets, hyperparameter optimization, and implementing methods not included in the current study. Full article
(This article belongs to the Special Issue Engineering Applications of Artificial Intelligence for Sensors)
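
The five-fold cross-validated comparison described in the abstract can be reproduced in outline with off-the-shelf implementations of the three top-scoring methods. The random stand-in data below only demonstrates the evaluation mechanics; scores on it are meaningless.

```python
# Five-fold cross-validated comparison of three classifiers (stand-in data).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score
from xgboost import XGBClassifier
from catboost import CatBoostClassifier

X = np.random.rand(500, 12)        # stand-in railway sensor features
y = np.random.randint(0, 2, 500)   # 1 = anomaly, 0 = normal

models = {
    "CatBoost": CatBoostClassifier(verbose=0),
    "Random Forest": RandomForestClassifier(),
    "XGBoost": XGBClassifier(),
}
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=cv, scoring="accuracy")
    print(f"{name}: {scores.mean():.3f} ± {scores.std():.3f}")
```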

28 pages, 9008 KiB  
Article
Multi-Sensor Data Fusion and CNN-LSTM Model for Human Activity Recognition System
by Haiyang Zhou, Yixin Zhao, Yanzhong Liu, Sichao Lu, Xiang An and Qiang Liu
Sensors 2023, 23(10), 4750; https://doi.org/10.3390/s23104750 - 14 May 2023
Cited by 8 | Viewed by 4364
Abstract
Human activity recognition (HAR) is becoming increasingly important, especially with the growing number of elderly people living at home. However, most sensors, such as cameras, do not perform well in low-light environments. To address this issue, we designed a HAR system that combines a camera and a millimeter-wave radar, taking advantage of each sensor and a fusion algorithm to distinguish between confusing human activities and to improve accuracy in low-light settings. To extract the spatial and temporal features contained in the multisensor fusion data, we designed an improved CNN-LSTM model. In addition, three data fusion algorithms were studied and investigated. Compared to camera data in low-light environments, the fusion data significantly improved the HAR accuracy by at least 26.68%, 19.87%, and 21.92% under the data-level fusion algorithm, feature-level fusion algorithm, and decision-level fusion algorithm, respectively. Moreover, the data-level fusion algorithm also reduced the best misclassification rate to 2–6%. These findings suggest that the proposed system has the potential to enhance the accuracy of HAR in low-light environments and to decrease human activity misclassification rates. Full article
(This article belongs to the Special Issue Engineering Applications of Artificial Intelligence for Sensors)
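
As a rough sketch of the CNN-LSTM pattern the abstract refers to, the model below applies a small CNN to each frame of a sensor-data clip and runs an LSTM across the resulting per-frame features. All shapes, layer sizes, and the six-class output are illustrative assumptions rather than the authors' design.

```python
# Toy CNN-LSTM: per-frame spatial features, then temporal modeling.
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    def __init__(self, num_classes: int = 6):
        super().__init__()
        self.cnn = nn.Sequential(             # per-frame spatial features
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
            nn.Flatten(),                     # -> 32 * 4 * 4 = 512
        )
        self.lstm = nn.LSTM(512, 128, batch_first=True)  # temporal features
        self.head = nn.Linear(128, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, channels, height, width)
        b, t = x.shape[:2]
        feats = self.cnn(x.flatten(0, 1)).view(b, t, -1)  # fold time into batch
        out, _ = self.lstm(feats)
        return self.head(out[:, -1])          # classify from the last time step

model = CNNLSTM()
logits = model(torch.randn(2, 10, 1, 32, 32))  # 2 clips of 10 frames each
print(logits.shape)  # torch.Size([2, 6])
```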

15 pages, 4253 KiB  
Article
Vehicle Detection and Recognition Approach in Multi-Scale Traffic Monitoring System via Graph-Based Data Optimization
by Grzegorz Wieczorek, Sheikh Badar ud din Tahir, Israr Akhter and Jaroslaw Kurek
Sensors 2023, 23(3), 1731; https://doi.org/10.3390/s23031731 - 3 Feb 2023
Cited by 7 | Viewed by 3892
Abstract
Over the past few years, significant investments in smart traffic monitoring systems have been made. The most important step in machine learning is detecting and recognizing objects relative to vehicles. Due to variations in vision and different lighting conditions, the recognition and tracking of vehicles under varying extreme conditions has become one of the most challenging tasks. To deal with this, our proposed system presents an adaptive method for robustly recognizing several existing automobiles in dense traffic settings. Additionally, this research presents a broad framework for effective on-road vehicle recognition and detection. Furthermore, the proposed system focuses on challenges typically noticed in analyzing traffic scenes captured by in-vehicle cameras, such as consistent extraction of features. First, we performed frame conversion, background subtraction, and object shape optimization as preprocessing steps. Next, two important features (energy and dense optical flow) were extracted. The incorporation of energy and dense optical flow features in distance-adaptive window areas and subsequent processing over the fused features resulted in a greater capacity for discrimination. Next, a graph-mining-based approach was applied to select optimal features. Finally, an artificial neural network (ANN) was adopted for detection and classification. The experimental results show significant performance on two benchmark datasets, the LISA and KITTI 7 databases. The system achieved a mean recognition rate of 93.75% on the LISA LDB1 and LDB2 databases, whereas it attained 82.85% accuracy on KITTI with separately trained ANNs. Full article
(This article belongs to the Special Issue Engineering Applications of Artificial Intelligence for Sensors)
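
One ingredient named in the abstract, dense optical flow, is commonly computed with OpenCV's Farneback method; the sketch below derives simple per-window flow statistics of the kind that could feed a downstream classifier. The frame file names and window choice are hypothetical, and this is not the authors' full feature-extraction pipeline.

```python
# Dense optical flow between two consecutive frames, summarized per window.
import cv2
import numpy as np

prev = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)  # hypothetical frames
curr = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

flow = cv2.calcOpticalFlowFarneback(
    prev, curr, None,
    pyr_scale=0.5, levels=3, winsize=15,
    iterations=3, poly_n=5, poly_sigma=1.2, flags=0,
)
magnitude, _ = cv2.cartToPolar(flow[..., 0], flow[..., 1])

# Simple statistics over a central window as stand-in discriminative features.
h, w = magnitude.shape
win = magnitude[h // 4 : 3 * h // 4, w // 4 : 3 * w // 4]
features = np.array([win.mean(), win.std(), win.max()])
print(features)
```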

20 pages, 1386 KiB  
Article
Improved Drill State Recognition during Milling Process Using Artificial Intelligence
by Jarosław Kurek, Artur Krupa, Izabella Antoniuk, Arlan Akhmet, Ulan Abdiomar, Michał Bukowski and Karol Szymanowski
Sensors 2023, 23(1), 448; https://doi.org/10.3390/s23010448 - 1 Jan 2023
Cited by 5 | Viewed by 1929
Abstract
In this article, an automated method for tool condition monitoring is presented. When producing items in large quantities, pinpointing the exact time at which the tool needs to be exchanged is crucial. If performed too early, the operator discards a still-usable drill, and repeating this operation too often also increases production downtime. On the other hand, continuing production with a worn tool might result in a poor-quality product and financial loss for the manufacturer. In the presented approach, drill wear is classified using three states representing decreasing quality: green, yellow, and red. A series of signals was collected as training data for the classification algorithms. Measurements were saved in separate data sets with corresponding time windows. A total of ten methods were evaluated in terms of overall accuracy and the number of misclassification errors. Three solutions obtained an acceptable accuracy rate above 85%. These algorithms were able to assign states without the most undesirable red-green and green-red errors. The best results were achieved by the Extreme Gradient Boosting algorithm, which attained an overall accuracy of 93.33%, with the only misclassification being a yellow sample assigned as green. The presented solution achieves good results and can be applied in industrial applications related to tool condition monitoring. Full article
(This article belongs to the Special Issue Engineering Applications of Artificial Intelligence for Sensors)
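
A minimal sketch of the three-state drill-wear classification with Extreme Gradient Boosting, including a check for the red-green and green-red confusions the abstract singles out, might look as follows. The feature matrix is a random stand-in for the paper's sensor signals.

```python
# Three-state (green / yellow / red) wear classification with XGBoost.
import numpy as np
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

STATES = ["green", "yellow", "red"]          # 0, 1, 2: decreasing tool quality
X = np.random.rand(300, 20)                  # stand-in signal features
y = np.random.randint(0, 3, 300)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
model = XGBClassifier().fit(X_tr, y_tr)      # multiclass handled automatically
pred = model.predict(X_te)

cm = confusion_matrix(y_te, pred, labels=[0, 1, 2])
# Off-diagonal corners are the critical mistakes: worn tools called fresh
# (red -> green) and fresh tools called worn (green -> red).
print("accuracy:", (pred == y_te).mean())
print(f"{STATES[2]}->{STATES[0]} errors:", cm[2, 0],
      f"| {STATES[0]}->{STATES[2]} errors:", cm[0, 2])
```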
