
UAV Imagery and Its Applications Using Artificial Intelligence Techniques and Explainability-Based Models

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Remote Sensors".

Deadline for manuscript submissions: closed (28 February 2022) | Viewed by 19852

Special Issue Editor


Prof. Dr. Biswajeet Pradhan
Guest Editor
Centre for Advanced Modelling and Geospatial Information Systems (CAMGIS), University of Technology Sydney, Sydney, NSW 2007, Australia
Interests: radar image processing; remote sensing and GIS applications; GIS for engineers; disaster hazard forecasting; stochastic analysis and modelling; natural hazards; environmental engineering modelling; geospatial information systems; photogrammetry and remote sensing; unmanned aerial vehicles (UAVs).

Special Issue Information

Dear Colleagues,

This Special Issue will focus on the development of artificial-intelligence-based models for environmental applications including, but not limited to, natural hazard management, urban planning, and urbanization problems. The availability of higher-resolution images captured from UAVs has opened new directions in several fields, and there is therefore a need to build more robust and advanced AI-based models. These models should provide better solutions to existing environmental issues, ranging from the development of novel architectures and their optimization to the explanation of model outcomes. UAV-based AI models benefit from automatic processing and short revisit intervals, allowing them to learn from historical experience and respond to fast-changing environments and goals. The tasks in a UAV–AI system are each valuable in a specific domain, and a common aim is to better explain and justify model results, thereby working towards the ultimate goal of explainable AI. There is a growing need to combine UAV imaging platforms, including image types such as hyperspectral, multispectral, and LiDAR, with AI technologies such as machine learning (in particular, deep neural networks), knowledge graphs, and neuro-fuzzy models, along with optimization techniques such as genetic algorithms, particle swarm optimization, and firefly algorithms, for decision-making and modeling purposes. This Special Issue invites authors to submit their contributions in the following areas:

  • Building footprint detection;
  • Urban and peri-urban grass mapping;
  • Damage detection using UAV images;
  • Multispectral/hyperspectral image processing for environmental problems;
  • Road extraction and segmentation;
  • Time series analysis for short- and long-term change detection in disaster monitoring and environmental monitoring;
  • Power line monitoring;
  • Target detection for forest fire mitigation;
  • Radiometric calibration of cameras under different imaging conditions;
  • Multispectral/hyperspectral image processing for agricultural monitoring;
  • Anomaly detection, such as the detection of suspicious objects or activities;
  • Water/air pollution monitoring.

Prof. Dr. Biswajeet Pradhan
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (3 papers)


Research

17 pages, 2832 KiB  
Article
Mapping Tree Canopy in Urban Environments Using Point Clouds from Airborne Laser Scanning and Street Level Imagery
by Francisco Rodríguez-Puerta, Carlos Barrera, Borja García, Fernando Pérez-Rodríguez and Angel M. García-Pedrero
Sensors 2022, 22(9), 3269; https://doi.org/10.3390/s22093269 - 24 Apr 2022
Cited by 7 | Viewed by 4076
Abstract
Resilient cities incorporate a social, ecological, and technological systems perspective through their trees, both in urban and peri-urban forests and linear street trees, and help promote and understand the concept of ecosystem resilience. Urban tree inventories usually involve the collection of field data on the location, genus, species, crown shape and volume, diameter, height, and health status of these trees. In this work, we have developed a multi-stage methodology to update urban tree inventories in a fully automatic way, and we have applied it in the city of Pamplona (Spain). We have compared and combined two of the most common data sources for updating urban tree inventories: Airborne Laser Scanning (ALS) point clouds combined with aerial orthophotographs, and street-level imagery from Google Street View (GSV). Depending on the data source, different methodologies were used to identify the trees. In the first stage, the use of individual tree detection techniques in ALS point clouds was compared with the detection of objects (trees) on street-level images using computer vision (CV) techniques. In both cases, a high success rate or recall (number of true positives with respect to all detectable trees) was obtained, where between 85.07% and 86.42% of the trees were well-identified, although many false positives (FPs), i.e., trees that did not exist or had been confused with other objects, were also identified. In order to reduce these errors or FPs, a second stage was designed, where FP debugging was performed through two methodologies: (a) based on the automatic checking of all possible trees with street-level images, and (b) through a machine learning binary classification model trained with spectral data from orthophotographs. After this second stage, the recall decreased to about 75% (between 71.43% and 78.18%, depending on the procedure used), but most of the false positives were eliminated. The results obtained with both data sources were robust and accurate. We can conclude that the results obtained with the different methodologies are very similar, where the main difference resides in the access to the starting information. While the use of street-level images only allows for the detection of trees growing in trafficable streets and is a source of information that is usually paid for, the use of ALS and aerial orthophotographs allows for the location of trees anywhere in the city, including public and private parks and gardens, and in many countries, these data are freely available.
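
The false-positive pruning step described in this abstract lends itself to a compact illustration. The sketch below is not the authors' code: it trains a binary classifier on synthetic spectral features standing in for orthophoto samples at each candidate crown, discards low-confidence candidates, and recomputes recall (true positives over all detectable trees). The feature layout, classifier choice, and 0.5 decision threshold are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for per-candidate spectral features (e.g., R, G, B, NIR means
# sampled from the orthophoto around each detected crown); label 1 = real tree.
X = rng.normal(size=(1000, 4))
y = (X[:, 3] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Binary classifier that scores each candidate detection as tree / not tree.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)

# Keep only candidates the classifier considers likely trees (threshold is an assumption).
proba = clf.predict_proba(X_test)[:, 1]
kept = proba > 0.5

# Recall = true positives / all detectable trees; here the positives in the test set
# play the role of detectable trees and the negatives play the role of false positives.
recall_after = y_test[kept].sum() / y_test.sum()
fp_after = int((1 - y_test[kept]).sum())
print(f"recall after pruning: {recall_after:.2%}, remaining false positives: {fp_after}")
```

As in the paper, pruning trades a small loss of recall for a much lower false-positive count.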

16 pages, 8314 KiB  
Article
Urban Vegetation Mapping from Aerial Imagery Using Explainable AI (XAI)
by Abolfazl Abdollahi and Biswajeet Pradhan
Sensors 2021, 21(14), 4738; https://doi.org/10.3390/s21144738 - 11 Jul 2021
Cited by 74 | Viewed by 8083
Abstract
Urban vegetation mapping is critical in many applications, e.g., preserving biodiversity, maintaining ecological balance, and minimizing the urban heat island effect. It is still challenging to extract accurate vegetation covers from aerial imagery using traditional classification approaches, because urban vegetation categories have complex spatial structures and similar spectral properties. Deep neural networks (DNNs) have shown a significant improvement in remote sensing image classification outcomes during the last few years. These methods are promising in this domain, yet unreliable for various reasons, such as the use of irrelevant descriptor features in the building of the models and a lack of quality in the labeled images. Explainable AI (XAI) can help us gain insight into these limits and, as a result, adjust the training dataset and model as needed. Thus, in this work, we explain how an explanation model called Shapley additive explanations (SHAP) can be utilized for interpreting the output of the DNN model that is designed for classifying vegetation covers. We want to not only produce high-quality vegetation maps, but also rank the input parameters and select appropriate features for classification. Therefore, we test our method on vegetation mapping from aerial imagery based on spectral and textural features. Texture features can help overcome the limitations of poor spectral resolution in aerial imagery for vegetation mapping. The model was capable of obtaining an overall accuracy (OA) of 94.44% for vegetation cover mapping. The conclusions derived from SHAP plots demonstrate the high contribution of features such as Hue, Brightness, GLCM_Dissimilarity, GLCM_Homogeneity, and GLCM_Mean to the output of the proposed model for vegetation mapping. Therefore, the study indicates that existing vegetation mapping strategies based only on spectral characteristics are insufficient to appropriately classify vegetation covers.
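
As a rough illustration of the workflow described in this abstract, the following sketch (not the authors' implementation) fits a small neural-network classifier on synthetic samples of the spectral and texture features named above and uses the model-agnostic KernelExplainer from the shap package to rank them by mean absolute SHAP value. The data, network size, and number of explained samples are assumptions made for brevity.

```python
import numpy as np
import shap
from sklearn.neural_network import MLPClassifier

feature_names = ["Hue", "Brightness", "GLCM_Dissimilarity", "GLCM_Homogeneity", "GLCM_Mean"]

# Synthetic samples standing in for per-object spectral/texture descriptors; label 1 = vegetation.
rng = np.random.default_rng(42)
X = rng.normal(size=(500, len(feature_names)))
y = (X[:, 0] + 0.8 * X[:, 4] + rng.normal(scale=0.3, size=500) > 0).astype(int)

model = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0)
model.fit(X, y)

# KernelExplainer is model-agnostic: it needs only a prediction function and a
# background sample that defines the "feature absent" baseline.
background = X[:100]
explainer = shap.KernelExplainer(lambda a: model.predict_proba(a)[:, 1], background)
shap_values = explainer.shap_values(X[:50])  # explain 50 samples to keep the run fast

# Global ranking: mean absolute SHAP value per feature.
importance = np.abs(shap_values).mean(axis=0)
for name, score in sorted(zip(feature_names, importance), key=lambda t: -t[1]):
    print(f"{name:22s} {score:.4f}")

# shap.summary_plot(shap_values, X[:50], feature_names=feature_names)  # beeswarm-style summary
```

The per-feature ranking printed at the end corresponds to the kind of SHAP summary used in the paper to argue that texture features complement spectral ones.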

22 pages, 4479 KiB  
Article
Earthquake-Induced Building-Damage Mapping Using Explainable AI (XAI)
by Sahar S. Matin and Biswajeet Pradhan
Sensors 2021, 21(13), 4489; https://doi.org/10.3390/s21134489 - 30 Jun 2021
Cited by 44 | Viewed by 6571
Abstract
Building-damage mapping using remote sensing images plays a critical role in providing quick and accurate information for the first responders after major earthquakes. In recent years, there has been an increasing interest in generating post-earthquake building-damage maps automatically using different artificial intelligence (AI)-based frameworks. These frameworks are promising, yet not reliable, for several reasons, including but not limited to the site-specific design of the methods, the lack of transparency in the AI model, the lack of quality in the labelled images, and the use of irrelevant descriptor features in building the AI model. Using explainable AI (XAI) can lead us to gain insight into identifying these limitations and, therefore, to modify the training dataset and the model accordingly. This paper proposes the use of SHAP (Shapley additive explanation) to interpret the outputs of a multilayer perceptron (MLP), a machine learning model, and analyse the impact of each feature descriptor included in the model for building-damage assessment to examine the reliability of the model. In this study, a post-event satellite image from the 2018 Palu earthquake was used. The results show that the MLP can classify the collapsed and non-collapsed buildings with an overall accuracy of 84% after removing the redundant features. Further, spectral features are found to be more important than texture features in distinguishing the collapsed and non-collapsed buildings. Finally, we argue that constructing an explainable model would help to understand the model's decision to classify the buildings as collapsed and non-collapsed and open avenues to build a transferable AI model.
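
The feature-screening loop described in this abstract can be sketched as follows. This minimal example is not the authors' code: it trains an MLP on synthetic per-building descriptors, ranks the features, drops the weakest ones, and retrains to compare overall accuracy. Permutation importance is used here as a simple stand-in for the SHAP ranking in the paper, and the data and feature names are invented for illustration.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(7)
feature_names = ["mean_R", "mean_G", "mean_B", "GLCM_contrast", "GLCM_entropy", "noise_band"]

# Synthetic per-building descriptors; label 1 = collapsed, 0 = non-collapsed.
X = rng.normal(size=(800, len(feature_names)))
y = (X[:, 0] - 0.7 * X[:, 2] + rng.normal(scale=0.4, size=800) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

mlp = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=1000, random_state=0).fit(X_tr, y_tr)
print("OA with all features:", mlp.score(X_te, y_te))

# Rank features and keep only those whose importance is clearly positive
# (the 0.01 cut-off is an illustrative assumption).
imp = permutation_importance(mlp, X_te, y_te, n_repeats=10, random_state=0).importances_mean
keep = imp > 0.01
print("kept features:", [n for n, k in zip(feature_names, keep) if k])

# Retrain on the reduced feature set and compare overall accuracy.
mlp_reduced = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=1000, random_state=0)
mlp_reduced.fit(X_tr[:, keep], y_tr)
print("OA after removing redundant features:", mlp_reduced.score(X_te[:, keep], y_te))
```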
