Computer Vision in Automatic Detection and Identification

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Robotics and Automation".

Deadline for manuscript submissions: 10 March 2025 | Viewed by 11358

Special Issue Editors


Dr. Yongliang Qiao
Guest Editor
Australian Institute of Machine Learning (AIML), University of Adelaide, South Australia 5005, Australia
Interests: field robotics; intelligent perception; visual localization; artificial intelligence; image processing; pattern recognition

Dr. Daobilige Su
Guest Editor
College of Engineering, China Agricultural University, Beijing 100083, China
Interests: field robotics; SLAM; robot audition; computer vision; machine learning

Prof. Dr. Meili Wang
Guest Editor
College of Information and Engineering, Northwest A&F University, Xi’an 712100, China
Interests: computer graphics; computer vision; virtual reality

Special Issue Information

Dear Colleagues,

Recent advances in computer vision and artificial intelligence (AI) have demonstrated great promise in Industry 4.0, smart agriculture, medicine, and other fields. In particular, the development and application of big data and AI approaches are boosting computer-vision-based detection and identification, and many theories, algorithms, and application approaches have been proposed to address challenges in science, engineering, and society. The purpose of this Special Issue is to report on advances and applications in computer-vision-based detection and identification. We welcome original research and review articles.

Potential topics include but are not limited to the following:

  • Detection and identification;
  • Image processing;
  • Object detection and segmentation;
  • Computer vision tools and applications;
  • Pattern recognition;
  • Digital image techniques;
  • Multispectral image-based detection

Dr. Yongliang Qiao
Dr. Daobilige Su
Prof. Dr. Meili Wang
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • detection and identification
  • image processing
  • object detection and segmentation
  • computer vision tools and applications
  • pattern recognition
  • digital image techniques
  • multispectral image-based detection

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (8 papers)

Research

16 pages, 2064 KiB  
Article
Approach for Tattoo Detection and Identification Based on YOLOv5 and Similarity Distance
by Gabija Pocevičė, Pavel Stefanovič, Simona Ramanauskaitė and Ernest Pavlov
Appl. Sci. 2024, 14(13), 5576; https://doi.org/10.3390/app14135576 - 26 Jun 2024
Viewed by 1044
Abstract
The growing number of images in different areas, together with advances in technology, has led to various image-based automation solutions. In this paper, tattoo detection and identification were analyzed. The combination of YOLOv5 object detection methods and similarity measures was investigated. During the experimental research, various parameters were investigated to determine the best combination for tattoo detection: the influence of data augmentation parameters, the size of the YOLOv5 models (n, s, m, l, x), and the three main hyperparameters of YOLOv5 were analyzed. The efficiency of the most popular similarity distances, cosine and Euclidean, was also analyzed in the tattoo identification process, with the purpose of matching the detected tattoo with the person's tattoo in the database. Experiments were performed using the deMSI dataset, where images were manually labeled to be suitable for the YOLOv5 algorithm. To validate the results obtained, a newly collected tattoo dataset was used. The results show that the highest average accuracy of all tattoo detection experiments was obtained using the YOLOv5l model, where mAP@0.5:0.95 is equal to 0.60 and mAP@0.5 is equal to 0.79. The accuracy for tattoo identification reaches 0.98, and the F-score is up to 0.52 when the tattoo with the highest cosine similarity is associated. Meanwhile, to ensure that no suspects are missed, a cosine similarity threshold value of 0.15 should be applied, so that only photos with higher similarity scores need to be analyzed. This would lead to a recall of 1.0 and would reduce manual tattoo comparison by 20%.
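
As an illustration of the identification step described in this abstract, the hedged Python sketch below ranks database tattoos by cosine similarity to a detected tattoo's feature vector and applies the 0.15 threshold mentioned above; the function and variable names are illustrative and not taken from the paper.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_tattoo(query: np.ndarray, database: dict[str, np.ndarray],
                 threshold: float = 0.15) -> list[tuple[str, float]]:
    """Return all database entries whose cosine similarity to the query exceeds
    the threshold, sorted so the most similar candidate comes first."""
    scores = [(person_id, cosine_similarity(query, feat))
              for person_id, feat in database.items()]
    candidates = [(pid, s) for pid, s in scores if s >= threshold]
    return sorted(candidates, key=lambda c: c[1], reverse=True)
```
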
(This article belongs to the Special Issue Computer Vision in Automatic Detection and Identification)

19 pages, 9524 KiB  
Article
ODGNet: Robotic Grasp Detection Network Based on Omni-Dimensional Dynamic Convolution
by Xinghong Kuang and Bangsheng Tao
Appl. Sci. 2024, 14(11), 4653; https://doi.org/10.3390/app14114653 - 28 May 2024
Viewed by 771
Abstract
In this article, to further improve the accuracy and speed of grasp detection for unknown objects, a new omni-dimensional dynamic convolution grasp detection network (ODGNet) is proposed. The ODGNet includes two key designs. Firstly, it integrates omni-dimensional dynamic convolution to enhance the feature extraction of the graspable region. Secondly, it employs a grasping region feature enhancement fusion module to refine the features of the graspable region and promote the separation of the graspable region from the background. The ODGNet attained an accuracy of 98.4% and 97.8% on the image-wise and object-wise subsets of the Cornell dataset, respectively. Moreover, the ODGNet’s detection speed can reach 50 fps. A comparison with previous algorithms shows that the ODGNet not only improves the grasp detection accuracy, but also satisfies the requirement of real-time grasping. The grasping experiments in the simulation environment verify the effectiveness of the proposed algorithm.
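
The core idea of dynamic convolution used in ODGNet can be sketched as follows. This is a deliberately simplified, hedged example: it applies attention only over a set of candidate kernels (one of the four dimensions that omni-dimensional dynamic convolution attends over) and is not the authors' implementation; all names are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleDynamicConv(nn.Module):
    """Convolution whose kernel is an input-conditioned mixture of K candidate
    kernels. This illustrates only the 'kernel number' dimension; ODGNet's
    omni-dimensional variant additionally attends over the spatial,
    input-channel, and output-channel dimensions."""
    def __init__(self, in_ch, out_ch, kernel_size=3, num_kernels=4, padding=1):
        super().__init__()
        self.weight = nn.Parameter(
            torch.randn(num_kernels, out_ch, in_ch, kernel_size, kernel_size) * 0.02)
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),            # global context per sample
            nn.Conv2d(in_ch, num_kernels, 1),   # one logit per candidate kernel
        )
        self.padding = padding

    def forward(self, x):
        b, c, h, w = x.shape
        alpha = torch.softmax(self.attn(x).flatten(1), dim=1)        # (B, K)
        k, oc, ic, kh, kw = self.weight.shape
        # Mix candidate kernels per sample, then apply them with a grouped conv.
        mixed = torch.einsum('bk,koihw->boihw', alpha, self.weight)  # (B, O, I, kh, kw)
        x = x.reshape(1, b * c, h, w)
        mixed = mixed.reshape(b * oc, ic, kh, kw)
        out = F.conv2d(x, mixed, padding=self.padding, groups=b)
        return out.reshape(b, oc, out.shape[-2], out.shape[-1])
```
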
(This article belongs to the Special Issue Computer Vision in Automatic Detection and Identification)

21 pages, 5253 KiB  
Article
Using Voxelisation-Based Data Analysis Techniques for Porosity Prediction in Metal Additive Manufacturing
by Abraham George, Marco Trevisan Mota, Conor Maguire, Ciara O’Callaghan, Kevin Roche and Nikolaos Papakostas
Appl. Sci. 2024, 14(11), 4367; https://doi.org/10.3390/app14114367 - 22 May 2024
Viewed by 993
Abstract
Additive manufacturing workflows generate large amounts of data in each phase, which can be very useful for monitoring process performance and predicting the quality of the finished part if used correctly. In this paper, a framework is presented that utilises machine learning methods to predict porosity defects in printed parts. Data from process settings, in-process sensor readings, and post-process computed tomography scans are first aligned and discretised using a voxelisation approach to create a training dataset. A multi-step classification system is then proposed to classify the presence and type of porosity in a voxel, which can then be used to find the distribution of porosity within the build volume. Titanium parts were printed using a laser powder bed fusion system. Two voxelisation-based discretisation techniques were utilised: a defect-centric and a uniform discretisation method. Different machine learning models, feature sets, and other parameters were also tested. Promising results were achieved in identifying porous voxels; however, the accuracy of the classification requires improvement before the approach can be applied industrially. The potential of the voxelisation-based framework for this application and its ability to incorporate data from different stages of the additive manufacturing workflow, as well as different machine learning models, were clearly demonstrated.
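
A hedged sketch of the uniform voxel-based discretisation described above: 3-D point data (e.g., CT-detected pore locations and in-process sensor reading locations) are binned into a voxel grid, and a simple per-voxel feature/label table is assembled for training a classifier. The feature choice (mean sensor value) and all names are illustrative assumptions, not the paper's exact pipeline.

```python
import numpy as np

def voxelise(points: np.ndarray, voxel_size: float, origin=None) -> np.ndarray:
    """Assign each 3-D point to an integer voxel index on a uniform grid."""
    origin = points.min(axis=0) if origin is None else np.asarray(origin)
    return np.floor((points - origin) / voxel_size).astype(int)

def build_voxel_table(pore_points, sensor_points, sensor_values, voxel_size):
    """Aggregate one row per voxel: mean sensor value as the feature,
    labelled 'porous' if any CT-detected pore falls inside the voxel."""
    origin = np.vstack([pore_points, sensor_points]).min(axis=0)
    porous = {tuple(v) for v in voxelise(pore_points, voxel_size, origin)}
    rows = {}
    for idx, val in zip(voxelise(sensor_points, voxel_size, origin), sensor_values):
        rows.setdefault(tuple(idx), []).append(val)
    features = np.array([np.mean(v) for v in rows.values()]).reshape(-1, 1)
    labels = np.array([key in porous for key in rows.keys()])
    return features, labels
```
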
(This article belongs to the Special Issue Computer Vision in Automatic Detection and Identification)

20 pages, 4271 KiB  
Article
The Efficiency of YOLOv5 Models in the Detection of Similar Construction Details
by Tautvydas Kvietkauskas, Ernest Pavlov, Pavel Stefanovič and Birutė Pliuskuvienė
Appl. Sci. 2024, 14(9), 3946; https://doi.org/10.3390/app14093946 - 6 May 2024
Cited by 3 | Viewed by 2350
Abstract
Computer vision solutions have become widely used in various industries and in daily applications. One task of computer vision is object detection. With the development of object detection algorithms and the growing number of various kinds of image data, different problems arise when building models suitable for various solutions. This paper investigates the influence of the parameters used in the training process on the detection of similar kinds of objects, i.e., the hyperparameters of the algorithm and the training parameters. The experimental investigation focuses on the widely used YOLOv5 algorithm and analyses the performance of the different YOLOv5 models (n, s, m, l, x). In the research, a newly collected construction-details dataset (22 categories) is used. Experiments are performed using pre-trained YOLOv5 models. A total of 185 YOLOv5 models are trained and evaluated. All models are tested on 3300 images photographed against three different backgrounds: mixed, neutral, and white. Additionally, the best-obtained models are evaluated using 150 new images, each of which contains several dozen construction details and is photographed against different backgrounds. The deep analysis of the different YOLOv5 models and hyperparameters shows the influence of the various parameters on the detection of similar objects. The best model was obtained with YOLOv5l and the following settings: coloured images; image size 320; batch size 32; 300 epochs; layer-freeze option set to 10; data augmentation on; learning rate 0.001; momentum 0.95; and weight decay 0.0007. These results may be useful for various tasks in which small and similar objects are analysed.
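
For orientation, the best-performing configuration above could be launched roughly as sketched below, assuming the public ultralytics/yolov5 train.py interface and its --img/--batch/--epochs/--freeze/--weights/--data/--hyp flags; the dataset and hyperparameter file names are placeholders, and lr0, momentum, and weight decay are assumed to be edited into the hyperparameter YAML passed via --hyp.

```python
# Sketch only: reproduces the reported settings via the ultralytics/yolov5
# training script; paths and YAML contents are placeholders, not from the paper.
import subprocess

subprocess.run([
    "python", "train.py",
    "--weights", "yolov5l.pt",          # pre-trained YOLOv5l model
    "--data", "construction.yaml",      # 22-category construction-details dataset
    "--hyp", "hyp.construction.yaml",   # lr0 0.001, momentum 0.95, weight decay 0.0007
    "--img", "320",                     # image size 320
    "--batch", "32",                    # batch size 32
    "--epochs", "300",                  # 300 epochs
    "--freeze", "10",                   # freeze the first 10 layers
], check=True)
```
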
(This article belongs to the Special Issue Computer Vision in Automatic Detection and Identification)

14 pages, 10759 KiB  
Article
A Robust Texture-Less Gray-Scale Surface Matching Method Applied to a Liquid Crystal Display TV Diffuser Plate Assembly System
by Sicong Li, Feng Zhu and Qingxiao Wu
Appl. Sci. 2024, 14(5), 2019; https://doi.org/10.3390/app14052019 - 29 Feb 2024
Viewed by 810
Abstract
In most liquid crystal display (LCD) backlight modules (BLMs), diffuser plates (DPs) play an essential role in blurring the backlight. A common BLM consists of multiple superimposed optical films. In a vision-based automated assembly system, to ensure sufficient accuracy, each of multiple cameras usually shoots a local corner of the DP, and the target pose is estimated jointly from these views, guiding the robot to assemble the DP on the BLM. In general, DPs are typical texture-less objects with simple shapes. Because of the image background of superimposed multilayer optical films, the robustness of the most common detection methods must be improved to meet industrial needs. To solve this problem, a texture-less surface matching method based on gray-scale images is proposed. An augmented and normalized gray-scale vector represents the texture-less gray-scale surface in a low-dimensional space. The cosine distance is then used to calculate the similarity between the template and matching vectors; combined with shape-based matching (SBM), the proposed method achieves high robustness when detecting DPs. An image database from actual production lines was used in the experiments. In comparative tests with the NCC, SBM, YOLOv5s, and YOLOv5x methods, the proposed method had the best precision at all confidence thresholds. Although its recall was slightly inferior to SBM, its comprehensive F1-score reached 0.826, significantly outperforming the other methods. Regarding localization accuracy, the algorithm also performed best, reaching 5.7 pixels. Although the time consumption of a single prediction is about 0.6 s, it can still meet industrial needs. These experimental results show that the proposed method has high robustness in detecting DPs and is especially suitable for vision-based automatic assembly tasks for BLMs.
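
A minimal, hedged sketch of the matching idea described above: a gray-scale patch is flattened into a vector, augmented with a constant element, normalized to unit length, and compared to a template by cosine similarity. The exact augmentation used in the paper may differ; the names and the augmentation choice here are assumptions.

```python
import numpy as np

def grayscale_descriptor(patch: np.ndarray) -> np.ndarray:
    """Flatten a gray-scale patch, append a constant element (one illustrative
    form of 'augmentation'), and normalize to unit length."""
    v = np.append(patch.astype(np.float64).ravel(), 1.0)
    return v / np.linalg.norm(v)

def cosine_match_score(template: np.ndarray, candidate: np.ndarray) -> float:
    """Cosine similarity between a template patch and a candidate window of the
    same size; higher means a better match."""
    return float(grayscale_descriptor(template) @ grayscale_descriptor(candidate))
```
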
(This article belongs to the Special Issue Computer Vision in Automatic Detection and Identification)

29 pages, 11588 KiB  
Article
Transmission Tower Re-Identification Algorithm Based on Machine Vision
by Lei Chen, Zuowei Yang, Fengyun Huang, Yiwei Dai, Rui Liu and Jiajia Li
Appl. Sci. 2024, 14(2), 539; https://doi.org/10.3390/app14020539 - 8 Jan 2024
Cited by 1 | Viewed by 1420
Abstract
Transmission tower re-identification refers to recognizing the location and identity of transmission towers, facilitating their rapid localization during power system inspection. Although there are established methods for defect detection on transmission towers and accessories (such as crossarms and insulators), there is a lack of automated methods for transmission tower identity matching. This paper proposes an identity-matching method for transmission towers that integrates machine vision and deep learning. Initially, the method requires the creation of a template library. Firstly, the YOLOv8 object detection algorithm is employed to extract transmission tower images, which are then mapped into a d-dimensional feature vector through a matching network; during the training of the matching network, a strategy for the online generation of triplet samples is introduced. Secondly, a template library is built upon these d-dimensional feature vectors, which forms the basis of transmission tower re-identification. Subsequently, the method re-identifies input images. Firstly, the proposed YOLOv5n-conv head detects and crops the transmission towers in images. Secondly, images without transmission towers are skipped; for those with transmission towers, the matching network maps the transmission tower instances into feature vectors. Ultimately, transmission tower re-identification is realized by comparing feature vectors with those in the template library using the Euclidean distance; this comparison can also be combined with GPS information to narrow down the search range. Experiments show that the YOLOv5n-conv head model achieved a mean Average Precision at an Intersection over Union threshold of 0.5 (mAP@0.5) of 0.974 in transmission tower detection, with detection only 2.4 ms slower than the original YOLOv5n. Integrating online triplet sample generation into the matching network training, with Inception-ResNet-v1 (d = 128) as the backbone, enhanced the network's rank-1 performance by 3.86%.
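
The matching stage described above can be illustrated with the hedged sketch below: a query embedding (e.g., the d = 128 vector produced by the matching network) is compared against a template library by Euclidean distance, with an optional GPS-radius filter to narrow the search. All names, and the crude planar GPS approximation, are illustrative assumptions.

```python
import numpy as np

def reidentify(query_emb: np.ndarray,
               template_ids: list,
               template_embs: np.ndarray,        # shape (N, d), e.g. d = 128
               template_gps: np.ndarray = None,  # shape (N, 2): lat/lon, optional
               query_gps: np.ndarray = None,
               gps_radius_m: float = None):
    """Return the identity of the template closest to the query embedding in
    Euclidean distance, optionally restricted to templates within a GPS radius."""
    mask = np.ones(len(template_ids), dtype=bool)
    if gps_radius_m is not None and template_gps is not None and query_gps is not None:
        # Crude planar approximation (degrees * ~111 km); a real system would
        # use a proper geodesic distance.
        approx_m = np.linalg.norm((template_gps - query_gps) * 111_000.0, axis=1)
        mask = approx_m <= gps_radius_m
    dists = np.linalg.norm(template_embs - query_emb, axis=1)
    dists[~mask] = np.inf                      # exclude towers outside the radius
    return template_ids[int(np.argmin(dists))]
```
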
(This article belongs to the Special Issue Computer Vision in Automatic Detection and Identification)

17 pages, 13133 KiB  
Article
The Quality Control System of Planks Using Machine Vision
by Mariusz Cinal, Andrzej Sioma and Bartosz Lenty
Appl. Sci. 2023, 13(16), 9187; https://doi.org/10.3390/app13169187 - 12 Aug 2023
Cited by 8 | Viewed by 1327
Abstract
This article presents a vision method for identifying and measuring wood surface parameters to detect defects resulting from errors occurring during machining. The paper presents a method of recording a three-dimensional image of the wood surface using the laser triangulation method. It discusses parameters related to imaging resolution and the impact of vision system configuration parameters on the measurement resolution and image acquisition time. For the recorded image, the proposed algorithms detect defects such as wane and bark at the board edges. Algorithms for measuring characteristic parameters describing the surface of the wood are presented. Validation tests performed with the prepared system under industrial conditions are provided and discussed. The proposed solution makes it possible to detect board defects in flow mode on belt conveyors operating at speeds of up to 1000 mm/s.
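
As a back-of-the-envelope illustration (not taken from the paper) of how the 1000 mm/s conveyor speed constrains laser-triangulation acquisition, the sketch below computes the laser-line profile rate needed to maintain a given along-travel resolution; the resolution values are hypothetical.

```python
def required_profile_rate(speed_mm_s: float, step_mm: float) -> float:
    """Laser-line profiles per second needed to keep an along-travel sampling
    step of `step_mm` at belt speed `speed_mm_s`."""
    return speed_mm_s / step_mm

if __name__ == "__main__":
    for step in (0.5, 1.0, 2.0):                      # hypothetical steps in mm
        rate = required_profile_rate(1000.0, step)    # 1000 mm/s from the abstract
        print(f"step {step} mm -> {rate:.0f} profiles/s")
```
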
(This article belongs to the Special Issue Computer Vision in Automatic Detection and Identification)

20 pages, 16693 KiB  
Article
Rapid and Accurate Crayfish Sorting by Size and Maturity Based on Improved YOLOv5
by Xuhui Ye, Yuxiang Liu, Daode Zhang, Xinyu Hu, Zhuang He and Yan Chen
Appl. Sci. 2023, 13(15), 8619; https://doi.org/10.3390/app13158619 - 26 Jul 2023
Cited by 4 | Viewed by 1486
Abstract
In response to the issues of high-intensity labor, low efficiency, and potential damage to crayfish associated with traditional manual sorting methods, an automated and non-contact sorting approach based on an improved YOLOv5 algorithm is proposed for the rapid sorting of crayfish by maturity and size. To address the difficulty of focusing on small crayfish, the Backbone is augmented with Coordinate Attention to boost its capability to extract features. Additionally, to improve overall algorithm efficiency and reduce feature redundancy, the Bottleneck Transformer is integrated into both the Backbone and the Neck, which improves the accuracy, generalization performance, and computational proficiency of the model. A dataset of 3464 crayfish images collected from a crayfish breeding farm is used for the experiments. The dataset is partitioned randomly, with 80% of the data used for training and the remaining 20% used for testing. The results indicate that the proposed algorithm achieves an mAP of 98.8%. Finally, the model is deployed using TensorRT, and the processing time for an image is reduced to just 2 ms, which greatly improves the processing speed of the model. In conclusion, this approach provides an accurate, efficient, fast, and automated solution for crayfish sorting.
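
A hedged sketch of a coordinate-attention block of the kind added to the backbone above (following Hou et al., 2021, rather than the authors' code): features are pooled separately along the height and width axes, passed through a shared bottleneck, and turned into per-axis attention maps that reweight the input.

```python
import torch
import torch.nn as nn

class CoordinateAttention(nn.Module):
    """Direction-aware channel attention: pool along H and W, share a
    bottleneck, then apply per-axis sigmoid attention maps to the input."""
    def __init__(self, channels: int, reduction: int = 32):
        super().__init__()
        hidden = max(8, channels // reduction)
        self.pool_h = nn.AdaptiveAvgPool2d((None, 1))   # -> (B, C, H, 1)
        self.pool_w = nn.AdaptiveAvgPool2d((1, None))   # -> (B, C, 1, W)
        self.shared = nn.Sequential(
            nn.Conv2d(channels, hidden, kernel_size=1),
            nn.BatchNorm2d(hidden),
            nn.ReLU(inplace=True),
        )
        self.attn_h = nn.Conv2d(hidden, channels, kernel_size=1)
        self.attn_w = nn.Conv2d(hidden, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        xh = self.pool_h(x)                            # (B, C, H, 1)
        xw = self.pool_w(x).permute(0, 1, 3, 2)        # (B, C, W, 1)
        y = self.shared(torch.cat([xh, xw], dim=2))    # (B, hidden, H+W, 1)
        yh, yw = torch.split(y, [h, w], dim=2)
        ah = torch.sigmoid(self.attn_h(yh))                      # (B, C, H, 1)
        aw = torch.sigmoid(self.attn_w(yw.permute(0, 1, 3, 2)))  # (B, C, 1, W)
        return x * ah * aw
```
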
(This article belongs to the Special Issue Computer Vision in Automatic Detection and Identification)
