Intelligent Recognition and Detection for Unmanned Systems

A special issue of Drones (ISSN 2504-446X). This special issue belongs to the section "Drone Communications".

Deadline for manuscript submissions: closed (31 December 2023) | Viewed by 41160

Printed Edition Available!
A printed edition of this Special Issue is available here.

Special Issue Editors


Guest Editor
School of Electronics and Information, Northwestern Polytechnical University, Xi’an 710072, China
Interests: intelligent decision and control of UAVs; deep reinforcement learning; uncertain information processing; image processing

Guest Editor
School of Engineering, London South Bank University, London SE1 0AA, UK
Interests: neural networks and artificial intelligence; machine learning; data science

Guest Editor
School of Information and Communications Engineering, Communication University of China, Beijing 100024, China
Interests: computer vision; convolutional neural networks; learning (artificial intelligence); object detection; 5G mobile communication; cache storage; feature extraction; mobile computing; object recognition; Markov processes

Special Issue Information

Dear Colleagues,

Unmanned systems (e.g., drones, robots, and other intelligent platforms) play important roles in many fields, such as disaster relief, intelligent transportation, intelligent medical services, and space exploration, and object recognition and detection have extensive applications in these tasks. However, due to complex application environments, artificial intelligence techniques suffer from challenges in terms of robustness and flexibility. Thus, designing efficient and stable CNNs and other AI algorithms for object recognition and detection in unmanned systems is critical.

With this in mind, we are hosting this Special Issue to bring together research accomplishments from academia and industry. A further goal is to present the latest results in deep learning for object recognition and detection and to understand how governance strategies can influence this field. We encourage prospective authors to submit distinguished research papers on the subject, covering both theoretical approaches and practical case studies.

Prof. Dr. Bo Li
Dr. Chunwei Tian
Dr. Daqing Chen
Dr. Ming Yan
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Drones is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • object recognition (e.g., image recognition and speech recognition)
  • object detection
  • flexible CNNs
  • deep learning
  • NLP
  • drone
  • smart robot

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (12 papers)

Research

17 pages, 16073 KiB  
Article
Multiple-Target Matching Algorithm for SAR and Visible Light Image Data Captured by Multiple Unmanned Aerial Vehicles
by Hang Zhang, Jiangbin Zheng and Chuang Song
Drones 2024, 8(3), 83; https://doi.org/10.3390/drones8030083 - 27 Feb 2024
Cited by 1 | Viewed by 1541
Abstract
Unmanned aerial vehicle (UAV) technology has witnessed widespread utilization in target surveillance activities. However, cooperative multiple UAVs for the identification of multiple targets poses a significant challenge due to the susceptibility of individual UAVs to false positive (FP) and false negative (FN) target detections. Specifically, the primary challenge addressed in this study stems from the weak discriminability of features in Synthetic Aperture Radar (SAR) imaging targets, leading to a high false alarm rate in SAR target detection. Additionally, the uncontrollable false alarm rate during electro-optical proximity detection results in an elevated false alarm rate as well. Consequently, a cumulative error propagation problem arises when SAR and electro-optical observations of the same target from different perspectives occur at different times. This paper delves into the target association problem within the realm of collaborative detection involving multiple unmanned aerial vehicles. We first propose an improved triplet loss function to effectively assess the similarity of targets detected by multiple UAVs, mitigating false positives and negatives. Then, a consistent discrimination algorithm is described for targets in multi-perspective scenarios using distributed computing. We established a multi-UAV multi-target detection database to alleviate training and validation issues for algorithms in this complex scenario. Our proposed method demonstrates a superior correlation performance compared to state-of-the-art networks. Full article
(This article belongs to the Special Issue Intelligent Recognition and Detection for Unmanned Systems)
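
As a point of reference for the similarity learning described above, a minimal standard triplet loss over detection embeddings is sketched below in PyTorch; the margin value, embedding size, and normalization are illustrative assumptions, not the authors' improved formulation.

```python
import torch
import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, margin=0.3):
    """Standard triplet loss over L2-normalized target embeddings.

    anchor/positive: embeddings of the same physical target seen by
    different UAVs (e.g., SAR vs. electro-optical views);
    negative: embedding of a different target.
    """
    anchor = F.normalize(anchor, dim=-1)
    positive = F.normalize(positive, dim=-1)
    negative = F.normalize(negative, dim=-1)
    d_pos = (anchor - positive).pow(2).sum(dim=-1)   # squared distance to the positive
    d_neg = (anchor - negative).pow(2).sum(dim=-1)   # squared distance to the negative
    return F.relu(d_pos - d_neg + margin).mean()

# toy usage with random 128-D embeddings for a batch of 8 detections
a, p, n = (torch.randn(8, 128) for _ in range(3))
print(triplet_loss(a, p, n).item())
```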

40 pages, 15848 KiB  
Article
Cooperative Standoff Target Tracking using Multiple Fixed-Wing UAVs with Input Constraints in Unknown Wind
by Zhong Liu, Lingshuang Xiang and Zemin Zhu
Drones 2023, 7(9), 593; https://doi.org/10.3390/drones7090593 - 20 Sep 2023
Cited by 3 | Viewed by 1603
Abstract
This paper investigates the problem of cooperative standoff tracking using multiple fixed-wing unmanned aerial vehicles (UAVs) with control input constraints. In order to achieve accurate moving target tracking in the presence of unknown background wind, a coordinated standoff target tracking algorithm is proposed. The objective of the research is to steer multiple UAVs to fly a circular orbit around a moving target with prescribed intervehicle angular spacing. To achieve this goal, two control laws are proposed, including relative range regulation and space phase separation. On one hand, a heading rate control law based on a Lyapunov guidance vector field is proposed. The convergence analysis shows that the UAVs can asymptotically converge to a desired circular orbit around the target, regardless of their initial position and heading. Through a rigorous theoretical proof, it is concluded that the command signal of the proposed heading rate controller will not violate the boundary constraint on the heading rate. On the other hand, a temporal phase is introduced to represent the phase separation and avoid discontinuity of the wrapped space phase angle. On this basis, a speed controller is developed to achieve equal phase separation. The proposed airspeed controller meets the requirements of the airspeed constraint. Furthermore, to improve the robustness of the aircraft during target tracking, an estimator is developed to estimate the composition velocity of the unknown wind and target motion. The proposed estimator uses the offset vector between the UAV’s actual flight path and the desired orbit, which is defined by the Lyapunov guidance vector field, to estimate the composition velocity. The stability of the estimator is proved. Simulations are conducted under different scenarios to demonstrate the effectiveness of the proposed cooperative standoff target tracking algorithm. The simulation results indicate that the temporal-phase-based speed controller can achieve a fast convergence speed and small phase separation error. Additionally, the composition velocity estimator exhibits a fast response speed and high estimation accuracy. Full article
(This article belongs to the Special Issue Intelligent Recognition and Detection for Unmanned Systems)
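
For intuition about the guidance law described above, the following sketch implements a classic Lyapunov guidance vector field heading-rate command (without the paper's wind/target-motion estimator or formal constraint analysis); the airspeed, standoff radius, gain, and rate limit are illustrative assumptions.

```python
import numpy as np

def lgvf_heading_rate(p_uav, p_target, psi, v=20.0, r_d=100.0, k=1.0):
    """Heading-rate command from a classic Lyapunov guidance vector field.

    p_uav, p_target: 2-D positions [m]; psi: current heading [rad];
    v: airspeed [m/s]; r_d: desired standoff radius [m]; k: heading gain.
    """
    x, y = p_uav - p_target
    r = np.hypot(x, y)
    # desired velocity field that converges to a circle of radius r_d around the target
    vx = -v / (r * (r**2 + r_d**2)) * (x * (r**2 - r_d**2) + y * (2 * r * r_d))
    vy = -v / (r * (r**2 + r_d**2)) * (y * (r**2 - r_d**2) - x * (2 * r * r_d))
    psi_d = np.arctan2(vy, vx)                                   # desired heading
    err = np.arctan2(np.sin(psi_d - psi), np.cos(psi_d - psi))   # wrapped heading error
    return np.clip(k * err, -0.5, 0.5)                           # saturate to a rate limit

# UAV 150 m east of the target, currently heading north
print(lgvf_heading_rate(np.array([150.0, 0.0]), np.zeros(2), psi=np.pi / 2))
```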

24 pages, 3898 KiB  
Article
Research on Drone Fault Detection Based on Failure Mode Databases
by Defei Hou, Qingran Su, Yi Song and Yongfeng Yin
Drones 2023, 7(8), 486; https://doi.org/10.3390/drones7080486 - 25 Jul 2023
Cited by 5 | Viewed by 2761
Abstract
Drones are widely used in a number of key fields and are having a profound impact on all walks of life. Working out how to improve drone safety through fault detection is key to ensuring the smooth execution of tasks. At present, most research focuses on fault detection at the component level as it is not possible to locate faults quickly from the global system state of a UAV. Moreover, most methods are offline detection methods, which cannot achieve real-time monitoring of UAV faults. To remedy this, this paper proposes a fault detection method based on a fault mode database and runtime verification. Firstly, a large body of historical fault information is analyzed to generate a summary of fault modes, including fault modes at the system level. The key safety properties of UAVs during operation are further studied in terms of system-level fault modes. Next, a monitor generation algorithm and code instrumentation framework are designed to monitor whether a certain safety attribute is violated during the operation of a UAV in real time. The experimental results show that the fault detection method proposed in this paper can detect abnormal situations in a timely and accurate manner. Full article
(This article belongs to the Special Issue Intelligent Recognition and Detection for Unmanned Systems)
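
The paper's monitor generation algorithm and instrumentation framework are not reproduced here; purely as an illustration of runtime verification of a single safety property, a hand-written monitor for a hypothetical rule ("descent rate must stay within a limit below a minimum altitude") could look like this sketch.

```python
from dataclasses import dataclass

@dataclass
class Telemetry:
    altitude_m: float           # above-ground altitude
    vertical_speed_mps: float   # negative = descending

class DescentRateMonitor:
    """Runtime monitor for one safety property (illustrative, not the paper's
    generated monitors): below MIN_ALT, descent rate must stay above -MAX_DESCENT."""
    MIN_ALT = 10.0
    MAX_DESCENT = 2.0

    def __init__(self):
        self.violations = []

    def step(self, t, sample: Telemetry):
        if sample.altitude_m < self.MIN_ALT and sample.vertical_speed_mps < -self.MAX_DESCENT:
            self.violations.append((t, sample))
            return False   # property violated at time t
        return True

monitor = DescentRateMonitor()
stream = [(0.0, Telemetry(50.0, -3.0)), (1.0, Telemetry(8.0, -3.5))]
for t, s in stream:
    if not monitor.step(t, s):
        print(f"safety property violated at t={t}s")
```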

20 pages, 5797 KiB  
Article
Hierarchical Maneuver Decision Method Based on PG-Option for UAV Pursuit-Evasion Game
by Bo Li, Haohui Zhang, Pingkuan He, Geng Wang, Kaiqiang Yue and Evgeny Neretin
Drones 2023, 7(7), 449; https://doi.org/10.3390/drones7070449 - 6 Jul 2023
Cited by 5 | Viewed by 1802
Abstract
Aiming at the autonomous decision-making problem in an Unmanned aerial vehicle (UAV) pursuit-evasion game, this paper proposes a hierarchical maneuver decision method based on the PG-option. Firstly, considering various situations of the relationship of both sides comprehensively, this paper designs four maneuver decision options: advantage game, quick escape, situation change and quick pursuit, and the four options are trained by Soft Actor-Critic (SAC) to obtain the corresponding meta-policy. In addition, to avoid high dimensions in the state space in the hierarchical model, this paper combines the policy gradient (PG) algorithm with the traditional hierarchical reinforcement learning algorithm based on the option. The PG algorithm is used to train the policy selector as the top-level strategy. Finally, to solve the problem of frequent switching of meta-policies, this paper sets the delay selection of the policy selector and introduces the expert experience to design the termination function of the meta-policies, which improves the flexibility of switching policies. Simulation experiments show that the PG-option algorithm has a good effect on UAV pursuit-evasion game and adapts to various environments by switching corresponding meta-policies according to current situation. Full article
(This article belongs to the Special Issue Intelligent Recognition and Detection for Unmanned Systems)
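
The meta-policies in the abstract are trained with SAC and selected by a learned policy-gradient selector; the toy sketch below only illustrates the idea of a top-level selector that holds an option for a minimum number of steps to avoid frequent switching. The option labels are taken from the abstract, while the scoring, hold time, and class names are assumptions.

```python
import random

OPTIONS = ["advantage_game", "quick_escape", "situation_change", "quick_pursuit"]

class DelayedOptionSelector:
    """Top-level policy that picks one of four meta-policies and holds it for a
    minimum number of steps to avoid frequent switching (toy sketch; in the paper
    the selector is trained with a policy-gradient method)."""
    def __init__(self, hold_steps=20):
        self.hold_steps = hold_steps
        self.current = None
        self.steps_in_option = 0

    def select(self, state):
        # placeholder scoring; a learned policy would score options from the state
        scores = {name: random.random() for name in OPTIONS}
        if self.current is None or self.steps_in_option >= self.hold_steps:
            self.current = max(scores, key=scores.get)
            self.steps_in_option = 0
        self.steps_in_option += 1
        return self.current

selector = DelayedOptionSelector()
for step in range(3):
    print(step, selector.select(state=None))
```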

15 pages, 3407 KiB  
Article
PFFNET: A Fast Progressive Feature Fusion Network for Detecting Drones in Infrared Images
by Ziqiang Han, Cong Zhang, Hengzhen Feng, Mingkai Yue and Kangnan Quan
Drones 2023, 7(7), 424; https://doi.org/10.3390/drones7070424 - 26 Jun 2023
Cited by 5 | Viewed by 1600
Abstract
The rampant misuse of drones poses a serious threat to national security and human life. Currently, convolutional neural networks (CNNs) are widely used to detect drones. However, small drone targets often have reduced amplitude or even lose their features in infrared images, which traditional CNNs cannot overcome. This paper proposes a Progressive Feature Fusion Network (PFFNET) and designs a Pooling Pyramid Fusion (PFM) to provide more effective global contextual priors for the highest downsampling output. Then, the Feature Selection Model (FSM) is designed to improve the use of the output coding graph and enhance the feature representation of the target in the network. Finally, a lightweight segmentation head is designed to achieve progressive feature fusion with multi-layer outputs. Experimental results show that the proposed algorithm has good real-time performance and high accuracy in drone target detection. On the public dataset, the intersection over union (IOU) is improved by 2.5% and the detection time is reduced by 81%. Full article
(This article belongs to the Special Issue Intelligent Recognition and Detection for Unmanned Systems)
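
The PFM is only described at a high level above; a generic pooling-pyramid context module in the spirit of PSPNet (pool sizes and channel counts below are assumptions, not the paper's configuration) shows how multi-scale context can be attached to the deepest feature map.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PoolingPyramid(nn.Module):
    """Generic pooling-pyramid context module (illustrative, not the exact PFM):
    pool the deepest feature map at several scales, project, upsample, concatenate."""
    def __init__(self, in_ch=256, pool_sizes=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(nn.AdaptiveAvgPool2d(s),
                          nn.Conv2d(in_ch, in_ch // len(pool_sizes), 1))
            for s in pool_sizes
        ])
        self.fuse = nn.Conv2d(in_ch + in_ch // len(pool_sizes) * len(pool_sizes),
                              in_ch, 3, padding=1)

    def forward(self, x):
        h, w = x.shape[-2:]
        ctx = [F.interpolate(b(x), size=(h, w), mode="bilinear", align_corners=False)
               for b in self.branches]
        return self.fuse(torch.cat([x] + ctx, dim=1))

feat = torch.randn(1, 256, 20, 20)   # deepest downsampling output
print(PoolingPyramid()(feat).shape)  # torch.Size([1, 256, 20, 20])
```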

20 pages, 1977 KiB  
Article
UAV-Assisted Traffic Speed Prediction via Gray Relational Analysis and Deep Learning
by Yanliu Zheng, Juan Luo, Ying Qiao and Han Gao
Drones 2023, 7(6), 372; https://doi.org/10.3390/drones7060372 - 2 Jun 2023
Cited by 3 | Viewed by 1692
Abstract
Accurate traffic prediction is crucial to alleviating traffic congestion in cities. Existing physical sensor-based traffic data acquisition methods have high transmission costs, serious traffic information redundancy, and large calculation volumes for spatiotemporal data processing, thus making it difficult to ensure accuracy and real-time traffic prediction. With the increasing resolution of UAV imagery, the use of unmanned aerial vehicles (UAV) imagery to obtain traffic information has become a hot spot. Still, analyzing and predicting traffic status after extracting traffic information is neglected. We develop a framework for traffic speed extraction and prediction based on UAV imagery processing, which consists of two parts: a traffic information extraction module based on UAV imagery recognition and a traffic speed prediction module based on deep learning. First, we use deep learning methods to automate the extraction of road information, implement vehicle recognition using convolutional neural networks and calculate the average speed of road sections based on panchromatic and multispectral image matching to construct a traffic prediction dataset. Then, we propose an attention-enhanced traffic speed prediction module that considers the spatiotemporal characteristics of traffic data and increases the weights of key roads by extracting important fine-grained spatiotemporal features twice to improve the prediction accuracy of the target roads. Finally, we validate the effectiveness of the proposed method on real data. Compared with the baseline algorithm, our algorithm achieves the best prediction performance regarding accuracy and stability. Full article
(This article belongs to the Special Issue Intelligent Recognition and Detection for Unmanned Systems)
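
As a rough sketch of the attention-enhanced prediction idea (not the paper's two-stage fine-grained module), the snippet below weights road segments with learned attention scores before a GRU predicts the target road's next-step speed; all dimensions and layer choices are assumptions.

```python
import torch
import torch.nn as nn

class AttnSpeedPredictor(nn.Module):
    """Toy attention-enhanced speed predictor: attention over road segments,
    GRU over time, linear head for the next-step speed of the target road."""
    def __init__(self, n_roads=12, hidden=32):
        super().__init__()
        self.attn = nn.Linear(n_roads, n_roads)   # per-step scores for each road segment
        self.gru = nn.GRU(n_roads, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                          # x: (batch, time, n_roads) speeds
        w = torch.softmax(self.attn(x), dim=-1)    # per-step road weights
        h, _ = self.gru(x * w)                     # weighted spatial input through time
        return self.head(h[:, -1])                 # next-step speed of the target road

speeds = torch.randn(4, 10, 12)            # 4 samples, 10 time steps, 12 road segments
print(AttnSpeedPredictor()(speeds).shape)  # torch.Size([4, 1])
```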

14 pages, 1423 KiB  
Article
Research on the Intelligent Construction of UAV Knowledge Graph Based on Attentive Semantic Representation
by Yi Fan, Baigang Mi, Yu Sun and Li Yin
Drones 2023, 7(6), 360; https://doi.org/10.3390/drones7060360 - 30 May 2023
Cited by 5 | Viewed by 2021
Abstract
Accurate target recognition of unmanned aerial vehicles (UAVs) in the intelligent warfare mode relies on a highly standardized UAV knowledge base, and thus it is crucial to construct a knowledge graph suitable for UAV multi-source information fusion. However, due to the lack of domain knowledge and the cumbersome and inefficient construction techniques, the intelligent construction approaches of knowledge graphs for UAVs are relatively backward. To this end, this paper proposes a framework for the construction and application of a standardized knowledge graph from large-scale UAV unstructured data. First, UAV concept classes and relations are defined to form specialized ontology, and UAV knowledge extraction triples are labeled. Then, a two-stage knowledge extraction model based on relational attention-based contextual semantic representation (UASR) is designed based on the characteristics of the UAV knowledge extraction corpus. The contextual semantic representation is then applied to the downstream task as a key feature through the Multilayer Perceptron (MLP) attention method, while the relation attention mechanism-based approach is used to calculate the relational-aware contextual representation in the subject–object entity extraction stage. Extensive experiments were carried out on the final annotated dataset, and the model F1 score reached 70.23%. Based on this, visual presentation is achieved based on the UAV knowledge graph, which lays the foundation for the back-end application of the UAV knowledge graph intelligent construction technology. Full article
(This article belongs to the Special Issue Intelligent Recognition and Detection for Unmanned Systems)
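
The two-stage UASR extraction model is specific to the paper; as a small illustration of MLP-attention pooling over contextual token representations (one ingredient mentioned above), a generic sketch with assumed dimensions is given below.

```python
import torch
import torch.nn as nn

class MLPAttentionPool(nn.Module):
    """Generic MLP attention pooling: score each token representation with a small
    MLP, softmax the scores, and return the weighted sum (illustrative of attentive
    semantic representation, not the paper's exact UASR module)."""
    def __init__(self, dim=256, hidden=64):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(), nn.Linear(hidden, 1))

    def forward(self, tokens, mask=None):           # tokens: (batch, seq, dim)
        s = self.score(tokens).squeeze(-1)          # (batch, seq) attention logits
        if mask is not None:
            s = s.masked_fill(~mask, float("-inf"))
        w = torch.softmax(s, dim=-1)
        return (w.unsqueeze(-1) * tokens).sum(dim=1)   # pooled (batch, dim) feature

tokens = torch.randn(2, 20, 256)             # e.g., encoder outputs for 2 sentences
print(MLPAttentionPool()(tokens).shape)      # torch.Size([2, 256])
```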

15 pages, 3171 KiB  
Article
Improved Radar Detection of Small Drones Using Doppler Signal-to-Clutter Ratio (DSCR) Detector
by Jiangkun Gong, Jun Yan, Huiping Hu, Deyong Kong and Deren Li
Drones 2023, 7(5), 316; https://doi.org/10.3390/drones7050316 - 10 May 2023
Cited by 8 | Viewed by 5102
Abstract
The detection of drones using radar presents challenges due to their small radar cross-section (RCS) values, slow velocities, and low altitudes. Traditional signal-to-noise ratio (SNR) detectors often fail to detect weak radar signals from small drones, resulting in high “Missed Target” rates due to the dependence of SNR values on RCS and detection range. To overcome this issue, we propose the use of a Doppler signal-to-clutter ratio (DSCR) detector that can extract both amplitude and Doppler information from drone signals. Theoretical calculations suggest that the DSCR of a target is less dependent on the detection range than the SNR. Experimental results using a Ku-band pulsed-Doppler surface surveillance radar and an X-band marine surveillance radar demonstrate that the DSCR detector can effectively extract radar signals from small drones, even when the signals are similar to clutter levels. Compared to the SNR detector, the DSCR detector reduces missed target rates by utilizing a lower detection threshold. Our tests include quad-rotor, fixed-wing, and hybrid vertical take-off and landing (VTOL) drones, with mean SNR values comparable to the surrounding clutter but with DSCR values above 10 dB, significantly higher than the clutter. The simplicity and low radar requirements of the DSCR detector make it a promising solution for drone detection in radar engineering applications. Full article
(This article belongs to the Special Issue Intelligent Recognition and Detection for Unmanned Systems)
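
A minimal numeric illustration of the DSCR idea on a synthetic range-Doppler map is sketched below; the clutter-estimation rule, guard region, and map size are assumptions rather than the paper's radar processing chain.

```python
import numpy as np

def dscr_db(range_doppler, target_cell, guard=1):
    """Doppler signal-to-clutter ratio in dB for one range-Doppler cell:
    target-cell power over the mean clutter power in the same range bin,
    excluding a small guard region around the target Doppler bin."""
    r, d = target_cell
    row = np.abs(range_doppler[r]) ** 2
    clutter = np.delete(row, range(max(0, d - guard), d + guard + 1))
    return 10 * np.log10(row[d] / clutter.mean())

# synthetic map: clutter-level noise plus a weak mover at (range bin 30, Doppler bin 50)
rng = np.random.default_rng(0)
rd = rng.normal(size=(64, 128)) + 1j * rng.normal(size=(64, 128))
rd[30, 50] += 8.0
print(f"DSCR = {dscr_db(rd, (30, 50)):.1f} dB")
```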

17 pages, 5553 KiB  
Article
A Real-Time UAV Target Detection Algorithm Based on Edge Computing
by Qianqing Cheng, Hongjun Wang, Bin Zhu, Yingchun Shi and Bo Xie
Drones 2023, 7(2), 95; https://doi.org/10.3390/drones7020095 - 30 Jan 2023
Cited by 22 | Viewed by 5473
Abstract
Small UAV target detection plays an important role in maintaining the security of cities and citizens. UAV targets have the characteristics of low-altitude flights, slow speeds, and miniaturization. Taking these characteristics into account, we present a real-time UAV target detection algorithm called Fast-YOLOv4 based on edge computing. By adopting Fast-YOLOv4 in the edge computing platform NVIDIA Jetson Nano, intelligent analysis can be performed on the video to realize the fast detection of UAV targets. However, the current iteration of the edge-embedded detection algorithm has low accuracy and poor real-time performance. To solve these problems, this paper introduces the lightweight networks MobileNetV3, Multiscale-PANet, and soft-merge to improve YOLOv4, thus obtaining the Fast-YOLOv4 model. The backbone of the model uses depth-wise separable convolution and an inverse residual structure to simplify the network’s structure and to improve its detection speed. The neck of the model adds a scale fusion branch to improve the feature extraction ability and strengthen small-scale target detection. Then, the predicted boxes filtering algorithm uses the soft-merge function to replace the traditionally used NMS (non-maximum suppression). Soft-merge can improve the model’s detection accuracy by fusing the information of predicted boxes. Finally, the experimental results show that the mAP (mean average precision) and FPS (frames per second) of Fast-YOLOv4 reach 90.62% and 54 f/s, respectively, in the workstation. In the NVIDIA Jetson Nano platform, the FPS of Fast-YOLOv4 is 2.5 times that of YOLOv4. This improved model performance meets the requirements for real-time detection and thus has theoretical significance and application value. Full article
(This article belongs to the Special Issue Intelligent Recognition and Detection for Unmanned Systems)
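
The abstract does not give the exact soft-merge function; the sketch below shows one plausible reading in which overlapping predicted boxes are fused by confidence-weighted averaging instead of being suppressed. The IoU threshold and weighting scheme are assumptions.

```python
import numpy as np

def iou(a, b):
    """IoU of two boxes in (x1, y1, x2, y2) format."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def soft_merge(boxes, scores, iou_thr=0.5):
    """Fuse overlapping boxes by confidence-weighted averaging
    (one plausible 'soft-merge', not necessarily the paper's exact rule)."""
    order = np.argsort(scores)[::-1]
    boxes, scores = boxes[order], scores[order]
    merged, used = [], np.zeros(len(boxes), dtype=bool)
    for i in range(len(boxes)):
        if used[i]:
            continue
        group = [j for j in range(i, len(boxes))
                 if not used[j] and iou(boxes[i], boxes[j]) >= iou_thr]
        used[group] = True
        w = scores[group] / scores[group].sum()          # confidence weights
        merged.append((w[:, None] * boxes[group]).sum(axis=0))
    return np.array(merged)

boxes = np.array([[10, 10, 50, 50], [12, 11, 52, 49], [200, 200, 240, 240]], float)
scores = np.array([0.9, 0.6, 0.8])
print(soft_merge(boxes, scores))   # two fused boxes
```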

29 pages, 7545 KiB  
Article
Multidomain Joint Learning of Pedestrian Detection for Application to Quadrotors
by Yuan-Kai Wang, Jonathan Guo and Tung-Ming Pan
Drones 2022, 6(12), 430; https://doi.org/10.3390/drones6120430 - 19 Dec 2022
Cited by 1 | Viewed by 2706
Abstract
Pedestrian detection and tracking are critical functions in the application of computer vision for autonomous driving in terms of accident avoidance and safety. Extending the application to drones expands the monitoring space from 2D to 3D but complicates the task. Images captured from various angles pose a great challenge for pedestrian detection, because image features from different angles tremendously vary and the detection performance of deep neural networks deteriorates. In this paper, this multiple-angle issue is treated as a multiple-domain problem, and a novel multidomain joint learning (MDJL) method is proposed to train a deep neural network using drone data from multiple domains. Domain-guided dropout, a critical mechanism in MDJL, is developed to self-organize domain-specific features according to neuron impact scores. After training and fine-tuning the network, the accuracy of the obtained model improved in all the domains. In addition, we also combined the MDJL with Markov decision-process trackers to create a multiobject tracking system for flying drones. Experiments are conducted on many benchmarks, and the proposed method is compared with several state-of-the-art methods. Experimental results show that the MDJL effectively tackles many scenarios and significantly improves tracking performance. Full article
(This article belongs to the Special Issue Intelligent Recognition and Detection for Unmanned Systems)
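
Domain-guided dropout is described above in terms of neuron impact scores; the simplified sketch below approximates the impact of each neuron by its mean absolute activation over a domain's samples (an assumption) and builds a per-domain feature mask from it.

```python
import numpy as np

def domain_masks(activations_by_domain, keep_ratio=0.7):
    """Build a binary feature mask per domain from neuron impact scores, here
    approximated by the mean |activation| over that domain's samples
    (a simplification of the paper's impact measure)."""
    masks = {}
    for domain, acts in activations_by_domain.items():
        impact = np.abs(acts).mean(axis=0)           # one score per neuron
        k = int(keep_ratio * impact.size)
        keep = np.argsort(impact)[::-1][:k]           # keep the highest-impact neurons
        mask = np.zeros_like(impact)
        mask[keep] = 1.0
        masks[domain] = mask
    return masks

rng = np.random.default_rng(1)
acts = {"nadir_view": rng.normal(size=(100, 64)),
        "oblique_view": rng.normal(size=(100, 64)) * np.linspace(0.1, 2.0, 64)}
masks = domain_masks(acts)
print({d: int(m.sum()) for d, m in masks.items()})    # neurons kept per domain
```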

18 pages, 3441 KiB  
Article
FEC: Fast Euclidean Clustering for Point Cloud Segmentation
by Yu Cao, Yancheng Wang, Yifei Xue, Huiqing Zhang and Yizhen Lao
Drones 2022, 6(11), 325; https://doi.org/10.3390/drones6110325 - 27 Oct 2022
Cited by 22 | Viewed by 6707
Abstract
Segmentation from point cloud data is essential in many applications, such as remote sensing, mobile robots, or autonomous cars. However, the point clouds captured by the 3D range sensor are commonly sparse and unstructured, challenging efficient segmentation. A fast solution for point cloud instance segmentation with small computational demands is lacking. To this end, we propose a novel fast Euclidean clustering (FEC) algorithm which applies a point-wise scheme over the cluster-wise scheme used in existing works. The proposed method avoids traversing every point constantly in each nested loop, which is time and memory-consuming. Our approach is conceptually simple, easy to implement (40 lines in C++), and achieves two orders of magnitudes faster against the classical segmentation methods while producing high-quality results. Full article
(This article belongs to the Special Issue Intelligent Recognition and Detection for Unmanned Systems)
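
The point-wise scheme can be approximated, without the paper's optimizations, by a single labeling pass over a KD-tree with cluster merging, as in the sketch below; the distance tolerance and the merge-by-relabeling strategy are illustrative choices.

```python
import numpy as np
from scipy.spatial import cKDTree

def euclidean_cluster(points, tol=0.5):
    """Point-wise Euclidean clustering sketch: scan the points once, merge each
    point into neighboring clusters within `tol`, and relabel clusters that touch
    (illustrative; the paper's FEC is more refined)."""
    tree = cKDTree(points)
    labels = -np.ones(len(points), dtype=int)
    next_label = 0
    for i in range(len(points)):
        neigh = tree.query_ball_point(points[i], tol)
        neigh_labels = {labels[j] for j in neigh if labels[j] >= 0}
        if not neigh_labels:
            labels[i] = next_label
            next_label += 1
        else:
            target = min(neigh_labels)
            labels[i] = target
            for other in neigh_labels - {target}:     # merge touching clusters
                labels[labels == other] = target
    return labels

pts = np.vstack([np.random.rand(50, 3), np.random.rand(40, 3) + 5.0])
print(len(set(euclidean_cluster(pts))), "clusters found")
```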

18 pages, 10943 KiB  
Article
Weld Seam Identification and Tracking of Inspection Robot Based on Deep Learning Network
by Jie Li, Beibei Li, Linjie Dong, Xingsong Wang and Mengqian Tian
Drones 2022, 6(8), 216; https://doi.org/10.3390/drones6080216 - 20 Aug 2022
Cited by 17 | Viewed by 4344
Abstract
The weld seams of large spherical tank equipment should be regularly inspected. Autonomous inspection robots can greatly enhance inspection efficiency and save costs. However, the accurate identification and tracking of weld seams by inspection robots remains a challenge. Based on the designed wall-climbing robot, an intelligent inspection robotic system based on deep learning is proposed to achieve the weld seam identification and tracking in this study. The inspection robot used mecanum wheels and permanent magnets to adsorb metal walls. In the weld seam identification, Mask R-CNN was used to segment the instance of weld seams. Through image processing combined with Hough transform, weld paths were extracted with a high accuracy. The robotic system efficiently completed the weld seam instance segmentation through training and learning with 2281 weld seam images. Experimental results indicated that the robotic system based on deep learning was faster and more accurate than previous methods, and the average time of identifying and calculating weld paths was about 180 ms, and the mask average precision (AP) was about 67.6%. The inspection robot could automatically track seam paths, and the maximum drift angle and offset distance were 3° and 10 mm, respectively. This intelligent weld seam identification system will greatly promote the application of inspection robots. Full article
(This article belongs to the Special Issue Intelligent Recognition and Detection for Unmanned Systems)
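
Downstream of the Mask R-CNN segmentation, the path-extraction step can be approximated with OpenCV's probabilistic Hough transform applied to the predicted mask, as in the sketch below; the edge-detection and Hough parameters are assumptions, not the paper's tuned values.

```python
import cv2
import numpy as np

def extract_weld_path(mask):
    """Approximate weld-path extraction from a binary segmentation mask:
    detect edges, then fit line segments with the probabilistic Hough transform
    (parameters are illustrative)."""
    mask = (mask > 0).astype(np.uint8) * 255
    edges = cv2.Canny(mask, 50, 150)
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=40,
                            minLineLength=30, maxLineGap=10)
    return [] if lines is None else [tuple(l[0]) for l in lines]

# synthetic mask with a diagonal weld seam
mask = np.zeros((240, 320), np.uint8)
cv2.line(mask, (20, 200), (300, 40), 255, thickness=8)
for x1, y1, x2, y2 in extract_weld_path(mask):
    print("segment:", (x1, y1), "->", (x2, y2))
```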
