
Visual Inspection Using Machine Learning and Artificial Intelligence

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: closed (20 May 2023) | Viewed by 18083

Special Issue Editors


Guest Editor
AutoAI Lab, Clemson University, Greenville, SC 29607, USA
Interests: sensing; perception; visual inspection; 3D inspection; machine learning; deep learning; artificial intelligence; multi-modal learning; non-destructive testing (NDT); inspection robots

Co-Guest Editor
Kimia Lab, University of Waterloo, Waterloo, ON, Canada
Interests: digital histopathology; multi-instance learning; federated learning

Special Issue Information

Dear Colleagues,

We are inviting submissions for the Special Issue entitled “Visual Inspection Using Machine Learning and Artificial Intelligence”.

Visual inspection is the process of identifying targeted information, anomalies, or events to support better decision making by acquiring and analyzing data from perceptive sensors, including, but not limited to, cameras and depth, thermal, ultrasonic, X-ray, infrared, LiDAR, radar, ground-penetrating radar, and satellite imaging sensors. Visual inspection has broad applications and impacts across many fields.

Machine learning and artificial intelligence approaches are essential for efficient visual inspection, and the success of deep neural networks in computer vision unlocks great potential for advancing visual inspection performance.

This Special Issue is devoted to both theoretical and experimental studies in the field of visual inspection from the application, data, machine learning, and artificial intelligence (AI) perspectives. Here, AI refers to both conventional machine learning and deep learning methods. The inspection target may be visible or invisible, and visual inspection normally refers only to non-invasive, non-destructive inspection, with or without contact.

The scope of this Special Issue includes, but is not limited to:

  • Visual inspection in situ applications, such as manufacturing, automation, civil construction, medical/clinical, surveillance, remote sensing, and agriculture;
  • Visual inspection datasets in various domains for target or event detection;
  • Novel AI model design for perceptive sensor data analysis;
  • Novel AI model design leveraging the uniqueness of perceptive sensor data, such as spatial–temporal continuity, frequencies, and multiple modalities;
  • Novel AI model design tackling the challenges in visual inspection, such as data imbalance, domain adaptation, data-efficient (weakly/semi-/self-/un-supervised) models, online adaptation, and high-resolution estimation;
  • Novel geometric or AI model design fusing multiple 2D perceptive data sources for 3D visual inspection;
  • Human-in-the-loop or novel bio-inspired methods in visual inspection;
  • Real-time visual inspection on edge AI mobile devices, in AR/VR and robot systems, or with cloud-aided settings;
  • Non-destructive testing (NDT) for visual inspection;
  • Comprehensive review and survey papers on visual inspection.

Prof. Dr. Bing Li
Dr. Shivam Kalra
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, submit your manuscript through the online submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • sensing
  • perception
  • visual inspection
  • 3D inspection
  • machine learning
  • deep learning
  • artificial intelligence
  • multi-modal learning
  • non-destructive testing (NDT)
  • inspection robots

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies is available on the MDPI website.

Published Papers (6 papers)


Research

17 pages, 1553 KiB  
Article
Detecting Underwater Concrete Cracks with Machine Learning: A Clear Vision of a Murky Problem
by Ugnė Orinaitė, Viltė Karaliūtė, Mayur Pal and Minvydas Ragulskis
Appl. Sci. 2023, 13(12), 7335; https://doi.org/10.3390/app13127335 - 20 Jun 2023
Cited by 7 | Viewed by 3137
Abstract
This paper presents the development of an underwater crack detection system for structural integrity assessment of submerged structures, such as offshore oil and gas installations, underwater pipelines, underwater foundations for bridges, dams, etc. Our focus is on the use of machine-learning-based approaches. First, a detailed literature review of the state of the current methods for underwater surface crack detection is presented, highlighting challenges and opportunities. An overview of the image augmentation approach for the creation of underwater optical effects is also presented. Experimental results using a standard network-based machine learning approach, which is used for surface crack detection in onshore environments, are presented. A series of test cases is presented in which existing networks’ performance is improved using augmented images for underwater conditions. The effectiveness and accuracy of the proposed approach in detecting cracks in underwater concrete structures are demonstrated. The proposed approach has the potential to improve the safety and reliability of underwater structures and prevent catastrophic failures.
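As an illustration of the augmentation idea described in this abstract, the following is a minimal NumPy sketch of simulating underwater optical effects (wavelength-dependent colour attenuation, scattering haze, and sensor noise) on an RGB image. The attenuation and haze coefficients are illustrative assumptions, not the authors' actual parameters:

```python
import numpy as np

def underwater_augment(img, cast=(0.4, 0.9, 1.0), haze=0.25, rng=None):
    """Apply a crude underwater colour cast and haze to an RGB image.

    img:  float array in [0, 1], shape (H, W, 3)
    cast: per-channel attenuation (red is absorbed fastest under water)
    haze: blend factor toward a flat blue-green veil
    """
    rng = rng or np.random.default_rng(0)
    out = img * np.asarray(cast)             # wavelength-dependent attenuation
    veil = np.array([0.2, 0.5, 0.6])         # blue-green scattering veil
    out = (1.0 - haze) * out + haze * veil   # additive haze / reduced contrast
    out += rng.normal(0.0, 0.02, img.shape)  # sensor noise
    return np.clip(out, 0.0, 1.0)
```

Applying such a transform to onshore crack-detection training images is one way to approximate the murky conditions the paper targets.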
(This article belongs to the Special Issue Visual Inspection Using Machine Learning and Artificial Intelligence)

20 pages, 22469 KiB  
Article
Application of Deep Learning Techniques in Water Level Measurement: Combining Improved SegFormer-UNet Model with Virtual Water Gauge
by Zhifeng Xie, Jianhui Jin, Jianping Wang, Rongxing Zhang and Shenghong Li
Appl. Sci. 2023, 13(9), 5614; https://doi.org/10.3390/app13095614 - 2 May 2023
Cited by 7 | Viewed by 2296
Abstract
Most computer vision algorithms for water level measurement rely on a physical water gauge in the image, which can pose challenges when the gauge is partially or fully obscured. To overcome this issue, we propose a novel method that combines semantic segmentation with a virtual water gauge. Initially, we compute the perspective transformation matrix between the pixel coordinate system and the virtual water gauge coordinate system based on the projection relationship. We then use an improved SegFormer-UNet segmentation network to accurately segment the water body and background in the image, and determine the water level line based on their boundaries. Finally, we transform the water level line from the pixel coordinate system to the virtual gauge coordinate system using the perspective transformation matrix to obtain the final water level value. Experimental results show that the improved SegFormer-UNet segmentation network achieves an average pixel accuracy of 99.10% and an Intersection Over Union of 98.34%. Field tests confirm that the proposed method can accurately measure the water level with an error of less than 1 cm, meeting the practical application requirements.
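The pixel-to-virtual-gauge mapping described in this abstract rests on a standard planar perspective transformation. The sketch below is a generic direct linear transform (DLT) implementation in NumPy, not the paper's code; the calibration correspondences are assumed:

```python
import numpy as np

def homography(src, dst):
    """Direct linear transform: 3x3 matrix H with dst ~ H @ src (homogeneous).

    src, dst: four (x, y) point correspondences, no three collinear.
    """
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The null vector of A (smallest singular vector) holds H up to scale.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    return Vt[-1].reshape(3, 3)

def to_gauge(H, px):
    """Map a pixel coordinate to virtual-gauge coordinates."""
    p = H @ np.array([px[0], px[1], 1.0])
    return p[:2] / p[2]
```

Given four image points of a known planar region and their virtual-gauge coordinates, the detected water-line point can then be mapped through `to_gauge` to read off the level.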
(This article belongs to the Special Issue Visual Inspection Using Machine Learning and Artificial Intelligence)

13 pages, 10702 KiB  
Article
Low-Quality Integrated Circuits Image Verification Based on Low-Rank Subspace Clustering with High-Frequency Texture Components
by Guoliang Tan, Zexiao Liang, Yuan Chi, Qian Li, Bin Peng, Yuan Liu and Jianzhong Li
Appl. Sci. 2023, 13(1), 155; https://doi.org/10.3390/app13010155 - 22 Dec 2022
Viewed by 1683
Abstract
With the vigorous development of integrated circuit (IC) manufacturing, the harmfulness of defects and hardware Trojans is also rising. Therefore, chip verification becomes more and more important. At present, the accuracy of most existing chip verification methods depends on high-precision sample data of ICs. Paradoxically, such high-precision, noiseless data are difficult to obtain efficiently. Thus, we recently proposed a fusion clustering framework based on low-quality chip images named High-Frequency Low-Rank Subspace Clustering (HFLRSC), which can provide the data foundation for the verification task by effectively clustering noisy, low-resolution partial images of multiple target ICs into the correct categories. The first step of the framework is to extract high-frequency texture components. Subsequently, the extracted texture components are integrated into subspace learning so that the algorithm can not only learn the low-rank space but also retain high-frequency information with texture characteristics. In comparison with benchmark and state-of-the-art methods, the presented approach can more effectively process simulated low-quality IC images and achieves better performance.
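The first step of the framework described here, extracting high-frequency texture components, can be approximated with a simple high-pass filter (image minus its local mean). This is a generic sketch under that assumption, not the paper's actual formulation:

```python
import numpy as np

def box_blur(img, k=3):
    """Mean filter over a k-by-k window (odd k, edge-padded)."""
    pad = k // 2
    p = np.pad(img.astype(float), pad, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def high_freq(img, k=3):
    """High-frequency texture component: image minus its local mean."""
    return img.astype(float) - box_blur(img, k)
```

Flat regions map to near-zero values while edges and fine texture survive, which is the kind of signal a texture-aware subspace-learning step could then retain.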
(This article belongs to the Special Issue Visual Inspection Using Machine Learning and Artificial Intelligence)

16 pages, 6641 KiB  
Article
Vision Transformer in Industrial Visual Inspection
by Nils Hütten, Richard Meyes and Tobias Meisen
Appl. Sci. 2022, 12(23), 11981; https://doi.org/10.3390/app122311981 - 23 Nov 2022
Cited by 13 | Viewed by 3770
Abstract
Artificial intelligence as an approach to visual inspection in industrial applications has been considered for decades. Recent successes, driven by advances in deep learning, present a potential paradigm shift and have the potential to facilitate automated visual inspection, even under complex environmental conditions. Convolutional neural networks (CNNs) have been the de facto standard in deep-learning-based computer vision (CV) for the last 10 years. Recently, attention-based vision transformer architectures emerged and surpassed the performance of CNNs on benchmark datasets for regular CV tasks, such as image classification, object detection, or segmentation. Nevertheless, despite their outstanding results, the application of vision transformers to real-world visual inspection is sparse. We suspect that this is likely due to the assumption that they require enormous amounts of data to be effective. In this study, we evaluate this assumption. For this, we perform a systematic comparison of seven widely used state-of-the-art CNN- and transformer-based architectures trained on three different use cases in the domain of visual damage assessment for railway freight car maintenance. We show that vision transformer models achieve at least equivalent performance to CNNs in industrial applications with sparse data available, and significantly surpass them in increasingly complex tasks.
(This article belongs to the Special Issue Visual Inspection Using Machine Learning and Artificial Intelligence)

16 pages, 4623 KiB  
Article
Soldering Data Classification with a Deep Clustering Approach: Case Study of an Academic-Industrial Cooperation
by Kinga Bettina Faragó, Joul Skaf, Szabolcs Forgács, Bence Hevesi and András Lőrincz
Appl. Sci. 2022, 12(14), 6927; https://doi.org/10.3390/app12146927 - 8 Jul 2022
Cited by 2 | Viewed by 1786
Abstract
Modern industries still commonly use traditional methods to visually inspect products, even though automation has many advantages over the skills of human labour. The automation of redundant tasks is one of the greatest successes of Artificial Intelligence (AI). It employs human annotation and finds possible relationships between features within a particular dataset. However, until recently, this has always been the responsibility of AI specialists with a specific type of knowledge that is not available to the industrial domain experts. We documented the joint research of AI and domain experts as a case study on processing a soldering-related industrial dataset. Our image classification approach relies on the latent space representations of neural networks already trained on other databases. We perform dimensionality reduction of the representations of the new data and cluster the outputs in the lower dimension. This method requires little to no knowledge of the underlying architecture of neural networks by the domain experts, meaning it is easily manageable by them, supporting generalization to other use cases that can be investigated in future work. We also suggest a misclassification detection method. We were able to achieve near-perfect test accuracy with minimal annotation work.
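The pipeline this abstract describes (pretrained-network embeddings, dimensionality reduction, then clustering in the lower-dimensional space) can be sketched generically as PCA followed by k-means. This is a minimal NumPy illustration of the pattern, assuming the embeddings already exist; the paper's actual reduction and clustering algorithms may differ:

```python
import numpy as np

def pca(X, d):
    """Project the rows of X onto the top-d principal components."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:d].T

def kmeans(X, k, iters=50, seed=0):
    """Plain Lloyd's algorithm; returns a cluster label per row of X."""
    rng = np.random.default_rng(seed)
    C = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - C) ** 2).sum(-1), axis=1)
        C = np.array([X[labels == j].mean(0) if (labels == j).any() else C[j]
                      for j in range(k)])
    return labels
```

Here `X` would hold the latent vectors a pretrained network produced for the new soldering images; clustering the reduced vectors groups visually similar solder joints with no architecture knowledge required from the domain expert.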
(This article belongs to the Special Issue Visual Inspection Using Machine Learning and Artificial Intelligence)

21 pages, 1597 KiB  
Article
Inline Defective Laser Weld Identification by Processing Thermal Image Sequences with Machine and Deep Learning Techniques
by Domenico Buongiorno, Michela Prunella, Stefano Grossi, Sardar Mehboob Hussain, Alessandro Rennola, Nicola Longo, Giovanni Di Stefano, Vitoantonio Bevilacqua and Antonio Brunetti
Appl. Sci. 2022, 12(13), 6455; https://doi.org/10.3390/app12136455 - 25 Jun 2022
Cited by 19 | Viewed by 3414
Abstract
Non-destructive testing methods offer great benefits in detecting and classifying weld defects. Among these, infrared (IR) thermography stands out for the inspection, characterization, and analysis of defects from camera image sequences, particularly with the recent advent of deep learning. However, in IR, defect classification becomes a cumbersome task because of exposure to inconsistent and unbalanced heat sources, which requires additional supervision. In light of this, the authors present a fully automated system capable of detecting defective welds according to their electrical resistance properties in inline mode. The welding process is captured by an IR camera that generates a video sequence. A set of features extracted from this video feeds supervised machine learning and deep learning algorithms in order to build an industrial diagnostic framework for weld defect detection. The experimental study validates the aptitude of a customized convolutional neural network architecture to classify malfunctioning weld joints with a mean accuracy of 99% and a median F1 score of 73% across five-fold cross-validation on a locally acquired real-world dataset. The outcome encourages the integration of thermography-based quality control frameworks in all applications where fast, accurate recognition and safety assurance are crucial industrial requirements across the production line.
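The feature-extraction stage described here (summary statistics from a thermal video that feed classical machine-learning classifiers) might look like the following generic sketch. The specific features chosen (peak temperature, mean temperature, cooling slope) are assumptions for illustration, not the authors' published feature set:

```python
import numpy as np

def thermal_features(seq):
    """Summary features from a thermal video clip of shape (T, H, W).

    Returns per-clip statistics of the kind often fed to classical
    classifiers: peak temperature, overall mean temperature, and the
    frame-to-frame trend (cooling/heating rate) of the hottest pixel.
    """
    per_frame_max = seq.max(axis=(1, 2))
    per_frame_mean = seq.mean(axis=(1, 2))
    t = np.arange(len(seq))
    slope = np.polyfit(t, per_frame_max, 1)[0]  # linear trend over time
    return np.array([per_frame_max.max(), per_frame_mean.mean(), slope])
```

A defective joint with abnormal electrical resistance would heat and cool differently from a sound one, so such trend features give a classifier a physically meaningful signal.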
(This article belongs to the Special Issue Visual Inspection Using Machine Learning and Artificial Intelligence)
