UAV Imagery for Engineering Applications Using Artificial Intelligence Techniques (UAV-AI)

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Remote Sensors".

Deadline for manuscript submissions: closed (30 September 2020) | Viewed by 50603

Special Issue Editors


Guest Editor
Institute for Infocomm Research (I2R), Agency for Science, Technology and Research (A*STAR), Singapore, Singapore
Interests: deep learning; reinforcement learning; optimizations; multiagent systems; materials informatics; remote sensing

Special Issue Information

Dear Colleagues,

This Special Issue focuses on the development of UAV imagery for engineering applications using Artificial Intelligence techniques (UAV-AI). UAV imagery broadens the set of engineering application domains to which AI techniques can be applied. With the growing volume of data generated by UAVs, there is a need for new AI technologies, such as machine learning and optimization in image processing, that can solve a wide range of engineering problems effectively and efficiently. UAV-AI systems have the advantage of automatic processing at short temporal scales: they can learn from the environment and past experience and adapt to accommodate fast-changing environments and goals. Each task in a UAV-AI system is interesting and valuable in its own right, but building such a system can facilitate a fundamental shift in the way we approach complex problems. This requires combining UAV imaging platforms (visible RGB, multispectral, hyperspectral, thermal IR, LiDAR, etc.) with AI technologies in machine learning, specifically shallow artificial neural networks, deep learning neural networks, spiking neural networks, online learning, and neurofuzzy networks, as well as gradient-free optimization techniques such as genetic algorithms, particle swarm optimization, ant colony optimization, cuckoo search algorithms, and firefly algorithms, for decision-making and modeling in various engineering applications. UAV-AI invites authors to submit contributions in areas including, but not limited to, the following:
• Time-series data for short-term change detection in disaster monitoring;
• Time-series data for long-term change detection in land-surface phenology and forest monitoring;
• Radiometric calibration of cameras under different imaging conditions;
• Multispectral/hyperspectral image processing for agricultural monitoring;
• Anomaly detection, such as suspicious-activity detection;
• Generalized road extraction systems;
• Power line monitoring;
• Target detection for forest fire mitigation;
• Radiation monitoring;
• Water/air pollution monitoring.

Prof. Jon Atli Benediktsson
Dr. Senthilnath Jayavelu
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (6 papers)


Research

16 pages, 5432 KiB  
Article
Solar Panel Detection within Complex Backgrounds Using Thermal Images Acquired by UAVs
by Jhon Jairo Vega Díaz, Michiel Vlaminck, Dionysios Lefkaditis, Sergio Alejandro Orjuela Vargas and Hiep Luong
Sensors 2020, 20(21), 6219; https://doi.org/10.3390/s20216219 - 31 Oct 2020
Cited by 44 | Viewed by 8428
Abstract
The number of solar plants installed worldwide increases year by year. Automated diagnostic methods are needed to inspect these plants and to identify anomalies in their photovoltaic panels. The inspection is usually carried out by unmanned aerial vehicles (UAVs) using thermal imaging sensors, and the first step in the whole process is to detect the solar panels in those images. However, standard image processing techniques fail for low-contrast images or images with complex backgrounds, and the shadows of power lines or structures similar to solar panels impede the automated detection process. In this research, two self-developed methods for detecting panels in this context are compared, one based on classical techniques and one based on deep learning, both with a common post-processing step. The first method is based on edge detection and classification, whereas the second is based on training a region-based convolutional neural network to identify panels. The first method corrects for the low contrast of the thermal image using several preprocessing techniques; subsequently, edge detection, segmentation, and segment classification are applied, the latter using a support vector machine trained with an optimized texture descriptor vector. The second method is based on deep learning trained with images that have been subjected to three different pre-processing operations. The common post-processing step uses the detected panels to infer the location of panels that were not detected: it selects contours from detected panels based on the panel area and the angle of rotation, and new panels are then determined by the extrapolation of these contours. The panels in 100 random images taken from eleven UAV flights over three solar plants are labeled and used to evaluate the detection methods. The method based on classical techniques reaches a precision of 0.997, a recall of 0.970, and an F1 score of 0.983; the deep learning method reaches a precision of 0.996, a recall of 0.981, and an F1 score of 0.989. Both panel detection methods are highly effective in the presence of complex backgrounds.

22 pages, 13196 KiB  
Article
Scaling Effects on Chlorophyll Content Estimations with RGB Camera Mounted on a UAV Platform Using Machine-Learning Methods
by Yahui Guo, Guodong Yin, Hongyong Sun, Hanxi Wang, Shouzhi Chen, J. Senthilnath, Jingzhe Wang and Yongshuo Fu
Sensors 2020, 20(18), 5130; https://doi.org/10.3390/s20185130 - 9 Sep 2020
Cited by 62 | Viewed by 6290
Abstract
Timely monitoring and precise estimation of the leaf chlorophyll contents of maize are crucial for agricultural practice. Scale effects are important because the calculated vegetation indices (VIs) are crucial for quantitative remote sensing. In this study, scale effects were investigated by analyzing the linear relationships between VIs calculated from red–green–blue (RGB) images acquired by unmanned aerial vehicles (UAVs) and ground leaf chlorophyll contents of maize measured using a SPAD-502. The scale impacts were assessed by applying different flight altitudes, and the highest coefficient of determination (R2) reached 0.85. We found that VIs from images acquired at a flight altitude of 50 m were better for estimating the leaf chlorophyll contents using the DJI UAV platform with this specific camera (5472 × 3648 pixels). Moreover, three machine-learning (ML) methods, backpropagation neural network (BP), support vector machine (SVM), and random forest (RF), were applied for grid-based chlorophyll content estimation based on the common VIs. The average root mean square error (RMSE) values of the chlorophyll content estimations were 3.85, 3.11, and 2.90 for BP, SVM, and RF, respectively; similarly, the mean absolute error (MAE) values were 2.947, 2.460, and 2.389. Thus, the ML methods estimated chlorophyll content from VIs with relatively high precision; in particular, RF performed better than BP and SVM. Our findings suggest that ML methods integrated with RGB images from this camera acquired at a flight altitude of 50 m (spatial resolution 0.018 m) can be effectively applied for the estimation of leaf chlorophyll content in agriculture.

19 pages, 3209 KiB  
Article
A UAV-Based Framework for Semi-Automated Thermographic Inspection of Belt Conveyors in the Mining Industry
by Regivaldo Carvalho, Richardson Nascimento, Thiago D’Angelo, Saul Delabrida, Andrea G. C. Bianchi, Ricardo A. R. Oliveira, Héctor Azpúrua and Luis G. Uzeda Garcia
Sensors 2020, 20(8), 2243; https://doi.org/10.3390/s20082243 - 15 Apr 2020
Cited by 49 | Viewed by 6450
Abstract
Frequent and accurate inspections of industrial components and equipment are essential because failures can cause unscheduled downtimes, massive material and financial losses, or even endanger workers. In the mining industry, belt idlers or rollers are examples of such critical components. Although there are many precise laboratory techniques to assess the condition of a roller, companies still have trouble implementing a reliable and scalable procedure to inspect their field assets. This article enumerates and discusses the existing roller inspection techniques and presents a novel approach based on an unmanned aerial vehicle (UAV) integrated with a thermal imaging camera. Our preliminary results indicate that, using a signal processing technique, we are able to identify roller failures automatically. We also propose and implement a back-end platform that enables field and cloud connectivity with enterprise systems. Finally, we have cataloged the anomalies detected during the extensive field tests in order to build a structured dataset that will allow for future experimentation.

20 pages, 10276 KiB  
Article
Automatic Pixel-Level Crack Detection on Dam Surface Using Deep Convolutional Network
by Chuncheng Feng, Hua Zhang, Haoran Wang, Shuang Wang and Yonglong Li
Sensors 2020, 20(7), 2069; https://doi.org/10.3390/s20072069 - 7 Apr 2020
Cited by 101 | Viewed by 8524
Abstract
Crack detection on dam surfaces is an important task in the safety inspection of hydropower stations. More and more object detection methods based on deep learning are being applied to crack detection; however, most of these methods only achieve the classification and rough localization of cracks. Pixel-level crack detection can provide more intuitive and accurate results for dam health assessment. To realize pixel-level detection, a method for crack detection on dam surfaces (CDDS) using a deep convolutional network is proposed. First, we use an unmanned aerial vehicle (UAV) to collect dam surface images along a predetermined trajectory. Second, the raw images are cropped. Then, crack regions are manually labelled on the cropped images to create the crack dataset, and the architecture of the CDDS network is designed. Finally, the CDDS network is trained, validated, and tested using the crack dataset. To validate its performance, the predicted results are compared with those of ResNet152-based, SegNet, UNet, and fully convolutional network (FCN) models. In terms of crack segmentation, the recall, precision, F-measure, and IoU are 80.45%, 80.31%, 79.16%, and 66.76%, respectively. The results on the test dataset show that the CDDS network performs better for crack detection on dam surfaces.

20 pages, 7255 KiB  
Article
Applying Fully Convolutional Architectures for Semantic Segmentation of a Single Tree Species in Urban Environment on High Resolution UAV Optical Imagery
by Daliana Lobo Torres, Raul Queiroz Feitosa, Patrick Nigri Happ, Laura Elena Cué La Rosa, José Marcato Junior, José Martins, Patrik Olã Bressan, Wesley Nunes Gonçalves and Veraldo Liesenberg
Sensors 2020, 20(2), 563; https://doi.org/10.3390/s20020563 - 20 Jan 2020
Cited by 83 | Viewed by 8206
Abstract
This study proposes and evaluates five deep fully convolutional networks (FCNs) for the semantic segmentation of a single tree species: SegNet, U-Net, FC-DenseNet, and two DeepLabv3+ variants. The performance of the FCN designs is evaluated experimentally in terms of classification accuracy and computational load. We also verify the benefits of fully connected conditional random fields (CRFs) as a post-processing step to improve the segmentation maps. The analysis is conducted on a set of images captured by an RGB camera aboard a UAV flying over an urban area. The dataset also contains a mask that indicates the occurrence of an endangered species called Dipteryx alata Vogel, also known as cumbaru, taken as the species to be identified. The experimental analysis shows the effectiveness of each design and reports average overall accuracy ranging from 88.9% to 96.7%, F1-scores between 87.0% and 96.1%, and IoU from 77.1% to 92.5%. We also find that the CRF consistently improves performance, but at a high computational cost.

23 pages, 19120 KiB  
Article
Improved UAV Opium Poppy Detection Using an Updated YOLOv3 Model
by Jun Zhou, Yichen Tian, Chao Yuan, Kai Yin, Guang Yang and Meiping Wen
Sensors 2019, 19(22), 4851; https://doi.org/10.3390/s19224851 - 7 Nov 2019
Cited by 39 | Viewed by 9725
Abstract
Rapid detection of illicit opium poppy plants using unmanned aerial vehicle (UAV) imagery has become an important means of preventing and combating crimes related to drug cultivation. However, current methods rely on time-consuming visual image interpretation. Here, the You Only Look Once version 3 (YOLOv3) network structure was used to assess the influence of different backbone networks on the average precision and detection speed for a UAV-derived dataset of poppy imagery, with MobileNetv2 (MN) selected as the most suitable backbone network. A Spatial Pyramid Pooling (SPP) unit was introduced, and Generalized Intersection over Union (GIoU) was used to calculate the coordinate loss. The resulting SPP-GIoU-YOLOv3-MN model improved the average precision by 1.62% (from 94.75% to 96.37%) without decreasing the detection speed, which was 29 FPS on an RTX 2080Ti platform. The sliding window method was used for detection in complete UAV images, taking approximately 2.2 s/image, roughly 10× faster than visual interpretation. The proposed technique significantly improved the efficiency of poppy detection in UAV images while maintaining high detection accuracy, and it is thus suitable for the rapid detection of illicit opium poppy cultivation in residential areas and farmland, where UAVs with ordinary visible-light cameras can be operated at low altitudes (relative height < 200 m).
