Applications of Machine Vision in Robotics

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Computer Science & Engineering".

Deadline for manuscript submissions: closed (15 October 2024) | Viewed by 4831

Special Issue Editors


Guest Editor
Faculty of Electrical Engineering, West Pomeranian University of Technology, 70-313 Szczecin, Poland
Interests: applied computer science, particularly image processing and analysis; computer vision and machine vision in automation and robotics; image quality assessment; video and signal processing applications in intelligent transportation systems

Guest Editor
Faculty of Electrical Engineering, West Pomeranian University of Technology, 70-313 Szczecin, Poland
Interests: applied computer science, particularly for Industry 4.0 solutions; computer networks; image processing and analysis in automation and robotics; IoT systems; machine vision for mobile robotics

Special Issue Information

Dear Colleagues,

The application of machine vision in modern robotic systems remains one of the most dynamically growing areas of research in automation and robotics. Modern image analysis methods have become an integral part of many robotic systems, ranging from video feedback in industrial systems such as robotic arms, through vision-based navigation of mobile robots and Video Simultaneous Localization and Mapping (VSLAM) methods, to video analysis in Unmanned Aerial Vehicles (UAVs).

Machine vision systems in robotics have also become one of the leading areas of practical application for recently developed artificial intelligence solutions based on computer vision algorithms. Nevertheless, with the rapid development of deep neural networks, one of the most relevant issues concerns the “explainability” of algorithms, as well as their universality and their ability to recognize and classify previously unknown objects. Moreover, given the limited amount of training data typical of robotics, important challenges also arise in unsupervised learning.

Another key factor influencing the final results of image analysis in robotics is the quality of the input data; hence, no-reference image and video quality assessment methods also play an important role. Applying them makes it possible to exclude distorted video frames from the image analysis used for further control of, e.g., robot motion.
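As a rough illustration of such a quality gate, the sketch below scores each frame with the variance of the Laplacian, a common sharpness proxy; it is a minimal example under assumed settings, not a full no-reference metric such as BRISQUE, and the threshold value is purely illustrative.

    import cv2

    def frame_is_usable(frame_bgr, sharpness_thresh=100.0):
        # Crude no-reference quality gate: variance of the Laplacian as a blur proxy.
        # The threshold is an assumption and should be tuned per camera and scene.
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        score = cv2.Laplacian(gray, cv2.CV_64F).var()
        return score >= sharpness_thresh, score

    # Example: drop blurred frames before they are used for robot motion control;
    # run_visual_servoing below is a hypothetical downstream step.
    # ok, score = frame_is_usable(frame)
    # if ok:
    #     run_visual_servoing(frame)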

This Special Issue on “Applications of Machine Vision in Robotics” will bring together research communities interested in computer and machine vision from various departments and universities, focusing on both robotics and applied computer science.

Topics of interest for this Special Issue include but are not limited to the following:

  • Video simultaneous localization and mapping (VSLAM) solutions;
  • Image-based navigation of unmanned aerial vehicles (UAVs) and other mobile robots;
  • Applications of computer vision in autonomous vehicles;
  • Fast algorithms useful for embedded solutions, e.g., based on the Monte Carlo method;
  • No-reference image quality assessment;
  • Texture analysis and shape recognition for UAVs and mobile robotics;
  • Novel image descriptors that are useful for image-based classification of objects;
  • Feature extraction and image registration, including remote sensing images.

Prof. Krzysztof Okarma
Dr. Piotr Lech
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • image analysis
  • machine vision
  • video analysis
  • visual SLAM
  • visual inspection and diagnostics
  • robotic vision systems
  • video feedback in robotics

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (5 papers)

Research

19 pages, 10918 KiB  
Article
Multi-Scale Encoding Method with Spectral Shape Information for Hyperspectral Images
by Dong Zhao and Gong Zhang
Electronics 2024, 13(16), 3199; https://doi.org/10.3390/electronics13163199 - 13 Aug 2024
Viewed by 777
Abstract
Spectral encoding is an important way of describing spectral features and patterns. Traditional methods focused on encoding the spectral amplitude information (SAI), while abundant spectral shape information (SSI) was wasted. In addition, traditional statistical encoding methods might only gain local adaptability, since different objects should have their own best encoding scales. In order to obtain differential signals from hyperspectral images (HSI) for detecting ground objects correctly, a multi-scale encoding (MSE) method with SSI and two optimization strategies were proposed in this research. The proposed method concentrated on describing the SAI and SSI of the spectral reflectance signals. Four widely used open data sets were adopted to validate the performance of the proposed method. Experimental results indicated that the MSE method with SSI could describe the details of spectral signals accurately and could obtain excellent performance for detecting similar objects with a small number of samples. In addition, the optimization strategies contributed to obtaining the best result from dynamic encoding scales.
(This article belongs to the Special Issue Applications of Machine Vision in Robotics)
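For readers unfamiliar with shape-oriented spectral encoding, the toy sketch below illustrates the general idea of encoding spectral shape at several scales by thresholding band-to-band slopes; it is only a generic illustration with assumed scale values, not the MSE method proposed by Zhao and Zhang.

    import numpy as np

    def spectral_shape_code(spectrum, scales=(1, 2, 4)):
        # Encode spectral *shape* as the sign of band-to-band slopes computed at
        # several step sizes (scales). Generic illustration only.
        spectrum = np.asarray(spectrum, dtype=float)
        codes = []
        for s in scales:
            slope = spectrum[s:] - spectrum[:-s]          # differences at step s
            codes.append((slope > 0).astype(np.uint8))    # 1 = rising, 0 = falling/flat
        return np.concatenate(codes)

    def code_agreement(c1, c2):
        # Fraction of matching bits; higher means more similar spectral shape.
        # Codes must come from spectra with the same number of bands.
        return float(np.mean(c1 == c2))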

18 pages, 4498 KiB  
Article
Selective Grasping for Complex-Shaped Parts Using Topological Skeleton Extraction
by Andrea Pennisi, Monica Sileo, Domenico Daniele Bloisi and Francesco Pierri
Electronics 2024, 13(15), 3021; https://doi.org/10.3390/electronics13153021 - 31 Jul 2024
Viewed by 618
Abstract
To enhance the autonomy and flexibility of robotic systems, a crucial role is played by the capacity to perceive and grasp objects. More specifically, robot manipulators must detect the presence of objects within their workspace, identify the grasping point, and compute a trajectory for approaching the objects with an end-effector pose suitable for performing the task. These can be challenging tasks in the presence of complex geometries, where multiple grasping-point candidates can be detected. In this paper, we present a novel approach for dealing with complex-shaped automotive parts, consisting of a deep-learning-based method for topological skeleton extraction and an active grasping pose selection mechanism. In particular, we use a modified version of the well-known Lightweight OpenPose algorithm to estimate the topological skeleton of real-world automotive parts. The estimated skeleton is used to select the best grasping pose for the object at hand. Our approach is designed to be more computationally efficient than other existing grasping pose detection methods. Quantitative experiments conducted with a 7-DoF manipulator on different real-world automotive components demonstrate the effectiveness of the proposed approach, with a success rate of 87.04%.
(This article belongs to the Special Issue Applications of Machine Vision in Robotics)
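As a small companion example (not the selection mechanism of Pennisi et al.), the sketch below shows the standard geometric step such pipelines rely on: back-projecting a 2-D skeleton keypoint and its measured depth into a 3-D grasp point in the camera frame using known pinhole intrinsics.

    import numpy as np

    def keypoint_to_grasp_point(u, v, depth_m, fx, fy, cx, cy):
        # Pinhole back-projection: pixel (u, v) with depth in metres -> 3-D point
        # in the camera frame. Intrinsics fx, fy, cx, cy come from camera calibration.
        x = (u - cx) * depth_m / fx
        y = (v - cy) * depth_m / fy
        return np.array([x, y, depth_m])

    # The resulting point would still have to be transformed into the robot base
    # frame (via hand-eye calibration) before being sent to the motion planner.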

14 pages, 2793 KiB  
Article
A MobileFaceNet-Based Face Anti-Spoofing Algorithm for Low-Quality Images
by Jianyu Xiao, Wei Wang, Lei Zhang and Huanhua Liu
Electronics 2024, 13(14), 2801; https://doi.org/10.3390/electronics13142801 - 16 Jul 2024
Viewed by 928
Abstract
Face Anti-Spoofing (FAS) methods play a very important role in ensuring the security of face recognition systems. Existing FAS methods perform well in short-distance scenarios, e.g., phone unlocking, face payment, etc. However, it is still challenging to improve the generalization of FAS in long-distance scenarios (e.g., surveillance) due to the varying image quality. In order to address the lack of low-quality images in real scenarios, we build a Low-Quality Face Anti-Spoofing Dataset (LQFA-D) by using Hikvision’s surveillance cameras. In order to deploy the model on an edge device with limited computation, we propose a lightweight FAS network based on MobileFaceNet, in which a Coordinate Attention (CA) module is introduced to capture important spatial information. Then, we propose a multi-scale FAS framework for low-quality images to explore multi-scale features, which includes three multi-scale models. The experimental results on the LQFA-D show that the Average Classification Error Rate (ACER) and detection time of the proposed method are 1.39% and 45 ms per image for the low-quality images, respectively, demonstrating the effectiveness of the proposed method.
(This article belongs to the Special Issue Applications of Machine Vision in Robotics)
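Coordinate Attention itself is a published, general-purpose module (Hou et al., 2021); a minimal PyTorch sketch of a CA block is given below for orientation. It is a generic implementation with an assumed reduction ratio and ReLU activation, not the exact configuration used in the paper's MobileFaceNet variant.

    import torch
    import torch.nn as nn

    class CoordinateAttention(nn.Module):
        # Generic Coordinate Attention block: directional pooling along height and
        # width, a shared 1x1 bottleneck, then per-direction attention maps.
        def __init__(self, channels, reduction=32):
            super().__init__()
            mid = max(8, channels // reduction)
            self.pool_h = nn.AdaptiveAvgPool2d((None, 1))   # (B, C, H, 1)
            self.pool_w = nn.AdaptiveAvgPool2d((1, None))   # (B, C, 1, W)
            self.conv1 = nn.Conv2d(channels, mid, kernel_size=1)
            self.bn1 = nn.BatchNorm2d(mid)
            self.act = nn.ReLU(inplace=True)                # the original paper uses h-swish
            self.conv_h = nn.Conv2d(mid, channels, kernel_size=1)
            self.conv_w = nn.Conv2d(mid, channels, kernel_size=1)

        def forward(self, x):
            _, _, h, w = x.shape
            x_h = self.pool_h(x)                            # (B, C, H, 1)
            x_w = self.pool_w(x).permute(0, 1, 3, 2)        # (B, C, W, 1)
            y = self.act(self.bn1(self.conv1(torch.cat([x_h, x_w], dim=2))))
            y_h, y_w = torch.split(y, [h, w], dim=2)
            a_h = torch.sigmoid(self.conv_h(y_h))           # attention over height
            a_w = torch.sigmoid(self.conv_w(y_w.permute(0, 1, 3, 2)))  # attention over width
            return x * a_h * a_w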

20 pages, 3670 KiB  
Article
Enhancing Visual Odometry with Estimated Scene Depth: Leveraging RGB-D Data with Deep Learning
by Aleksander Kostusiak and Piotr Skrzypczyński
Electronics 2024, 13(14), 2755; https://doi.org/10.3390/electronics13142755 - 13 Jul 2024
Viewed by 1021
Abstract
Advances in visual odometry (VO) systems have benefited from the widespread use of affordable RGB-D cameras, improving indoor localization and mapping accuracy. However, older sensors like the Kinect v1 face challenges due to depth inaccuracies and incomplete data. This study compares indoor VO systems that use RGB-D images, exploring methods to enhance depth information. We examine conventional image inpainting techniques and a deep learning approach, utilizing newer depth data from devices like the Kinect v2. Our research highlights the importance of refining data from lower-quality sensors, which is crucial for cost-effective VO applications. By integrating deep learning models with richer context from RGB images and more comprehensive depth references, we demonstrate improved trajectory estimation compared to standard methods. This work advances budget-friendly RGB-D VO systems for indoor mobile robots, emphasizing deep learning’s role in leveraging connections between image appearance and depth data.
(This article belongs to the Special Issue Applications of Machine Vision in Robotics)
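As a point of reference for the conventional techniques mentioned in the abstract, the sketch below fills missing Kinect-style depth pixels with OpenCV's Navier–Stokes inpainting; it is a classical baseline under simplifying assumptions (zero-valued holes, lossy 8-bit rescaling), not the deep learning approach evaluated in the paper.

    import cv2
    import numpy as np

    def fill_depth_holes(depth_mm):
        # Classical hole filling: inpaint zero-valued (missing) depth pixels.
        # cv2.inpaint expects an 8-bit image, so the 16-bit depth map is rescaled,
        # which loses precision; a learned model would replace this step.
        mask = (depth_mm == 0).astype(np.uint8)
        scale = 255.0 / max(float(depth_mm.max()), 1.0)
        depth_8u = (depth_mm * scale).astype(np.uint8)
        filled_8u = cv2.inpaint(depth_8u, mask, 3, cv2.INPAINT_NS)
        filled = (filled_8u / scale).astype(depth_mm.dtype)
        return np.where(mask == 1, filled, depth_mm)   # keep original valid pixels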

14 pages, 4340 KiB  
Article
Methods for Magnetic Signature Comparison Evaluation in Vehicle Re-Identification Context
by Juozas Balamutas, Dangirutis Navikas, Vytautas Markevicius, Mindaugas Cepenas, Algimantas Valinevicius, Mindaugas Zilys, Michal Prauzek, Jaromir Konecny, Michal Frivaldsky, Zhixiong Li and Darius Andriukaitis
Electronics 2024, 13(14), 2722; https://doi.org/10.3390/electronics13142722 - 11 Jul 2024
Viewed by 623
Abstract
Intelligent transportation systems represent innovative solutions for traffic congestion minimization, mobility improvement and safety enhancement. These systems require various inputs about vehicles and the traffic state. Vehicle re-identification systems based on video cameras are the most popular; however, stricter privacy policies necessitate depersonalized vehicle re-identification systems. A promising research direction for depersonalized vehicle re-identification involves leveraging the unique distortions induced in the Earth’s magnetic field by passing vehicles. A system employing anisotropic magneto-resistive sensors embedded in the road surface captures vehicle magnetic signatures for similarity evaluation. A novel vehicle re-identification algorithm utilizing Euclidean distances and Pearson correlation coefficients is analyzed, and its performance is evaluated. Initial processing is applied to the registered magnetic signatures, useful features for decision making are extracted, different classification algorithms are applied, and prediction accuracy is checked. The results demonstrate the effectiveness of our approach, achieving 97% accuracy in vehicle re-identification for a subset of 300 different vehicles passing the sensor a few times.
(This article belongs to the Special Issue Applications of Machine Vision in Robotics)
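The two similarity measures named in the abstract are straightforward to compute; the sketch below compares two equal-length magnetic signatures with the Euclidean distance and the Pearson correlation coefficient. Any resampling, feature extraction, or decision thresholds are assumptions left out here, not the authors' settings.

    import numpy as np

    def signature_similarity(sig_a, sig_b):
        # Both signatures are assumed to be 1-D arrays of equal length
        # (resampled/aligned beforehand).
        sig_a = np.asarray(sig_a, dtype=float)
        sig_b = np.asarray(sig_b, dtype=float)
        euclidean = np.linalg.norm(sig_a - sig_b)      # lower = more similar
        pearson = np.corrcoef(sig_a, sig_b)[0, 1]      # closer to 1 = more similar
        return euclidean, pearson

    # A naive re-identification rule could then combine both measures, e.g.
    # same_vehicle = (pearson > 0.95) and (euclidean < distance_threshold),
    # where the threshold values are illustrative, not the published ones.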
