Advanced Artificial Intelligence for Remote Sensing: Methodology and Application

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: closed (1 June 2021) | Viewed by 20923

Special Issue Editors

Dr. Hongkai Yu
Guest Editor
Cleveland Vision & AI Lab, Department of Electrical Engineering and Computer Science, Cleveland State University, Cleveland, OH, USA
Interests: artificial intelligence; deep learning; computer vision; remote sensing

Prof. Dr. Dan Simon
Guest Editor
Department of Electrical Engineering and Computer Science, Cleveland State University, Cleveland, OH, USA
Interests: signal processing; computer intelligence; evolutionary algorithms; state estimation

Dr. Jianwu Fang
Guest Editor
Department of Big Data Management and Applications, Chang’an University, Xi'an, China
Interests: UAV tracking; scene understanding of remote sensing

Dr. Sudipan Saha
Guest Editor
Technical University of Munich, Germany
Interests: remote sensing; change detection; domain adaptation; deep learning

Special Issue Information

Dear Colleagues,

With the rapid popularization of remote sensing data in applications such as transportation, smart cities, agriculture, geophysics, and urban planning, remote sensing has entered a big data era. The demand for fine-grained data understanding in remote sensing has grown rapidly in recent years. However, because of the large scale and extremely complex feature distributions of remote sensing data, previous research still faces severe difficulties in obtaining ideal results for remote sensing data understanding. Moreover, detailed domain knowledge remains insufficiently embodied in many remote sensing applications. Under these circumstances, advanced artificial intelligence (AI) models, especially those incorporating the domain knowledge specific to different remote sensing applications, are promising and enable machines to solve more targeted tasks in specialized environments with large-scale remote sensing data. Principled solutions to fulfill this goal are still understudied. Currently, novel AI models such as convolutional neural networks, graph convolutional networks, transformers, generative adversarial networks, transfer learning, and AutoML are emerging rapidly, creating new opportunities to seek advanced AI solutions for data understanding in remote sensing applications. We welcome high-quality original submissions promoting cutting-edge research along this direction.

Topics of interests include but are not limited to:

  • Advanced AI models for remote sensing data understanding, such as convolutional neural networks, generative adversarial networks, transformers, sparse coding, adversarial attacks, AutoML, etc.;
  • Novel applications of AI models for remote sensing, such as transportation, smart city, agriculture, UAV, geophysics, urban planning, etc.;
  • Emerging computer vision, signal processing, and evolutionary algorithms for remote sensing;
  • Transfer learning and domain adaptation for remote sensing with limited data;  
  • Weakly supervised learning for remote sensing with incomplete or noisy labels;
  • AI methods and applications for satellite, multispectral, hyperspectral, and UAV images;
  • Semantic remote sensing image segmentation;
  • Object detection and change detection for remote sensing.

Dr. Hongkai Yu
Prof. Dr. Dan Simon
Dr. Jianwu Fang
Dr. Sudipan Saha
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • remote sensing data understanding
  • artificial intelligence
  • neural networks

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (5 papers)


Research


21 pages, 2691 KiB  
Article
Geometry-Aware Discriminative Dictionary Learning for PolSAR Image Classification
by Yachao Zhang, Xuan Lai, Yuan Xie, Yanyun Qu and Cuihua Li
Remote Sens. 2021, 13(6), 1218; https://doi.org/10.3390/rs13061218 - 23 Mar 2021
Cited by 5 | Viewed by 2622
Abstract
In this paper, we propose a new discriminative dictionary learning method based on Riemannian geometric perception for polarimetric synthetic aperture radar (PolSAR) image classification. We formulated an optimization model for geometry-aware discriminative dictionary learning (GADDL) in which dictionary learning is generalized from Euclidean space to Riemannian manifolds and dictionary atoms are composed of manifold data. An efficient optimization algorithm based on the alternating direction method of multipliers was developed to solve the model. Experiments were conducted on three public datasets: Flevoland-1989, San Francisco and Flevoland-1991. The experimental results show that the proposed method learned a discriminative dictionary with accuracies better than those of comparative methods. The convergence of the model and the robustness of the initial dictionary were also verified through experiments. Full article
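To make the Riemannian setting above concrete, the sketch below classifies synthetic SPD (coherency-like) matrices by mapping them into a log-Euclidean vector space and learning one small dictionary per class with scikit-learn, assigning labels by reconstruction error. It is a minimal stand-in for the idea, not the authors' GADDL/ADMM model; the synthetic data, the random_spd helper, and all parameter values are illustrative assumptions.

```python
# Minimal log-Euclidean sketch of dictionary-based classification of SPD matrices.
# NOT the paper's GADDL/ADMM model: data, dictionary learner, and classifier rule
# are stand-ins used only to illustrate the overall idea.
import numpy as np
from scipy.linalg import logm
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)

def random_spd(dim, scale):
    a = rng.normal(scale=scale, size=(dim, dim))
    return a @ a.T + dim * np.eye(dim)            # symmetric positive definite

def log_vec(spd):
    # Log-Euclidean embedding: matrix log, then flatten the upper triangle.
    l = logm(spd)
    iu = np.triu_indices(spd.shape[0])
    return np.real(l[iu])

classes = {0: 0.5, 1: 2.0}                        # two synthetic "terrain" classes
train = {c: np.array([log_vec(random_spd(3, s)) for _ in range(60)])
         for c, s in classes.items()}
test = [(c, log_vec(random_spd(3, s))) for c, s in classes.items() for _ in range(10)]

# One small dictionary per class; classify by which dictionary reconstructs best.
dicts = {c: DictionaryLearning(n_components=8, alpha=0.5, random_state=0).fit(x)
         for c, x in train.items()}

def residual(model, v):
    code = model.transform(v[None, :])
    return np.linalg.norm(v - code @ model.components_)

pred = [min(dicts, key=lambda c: residual(dicts[c], v)) for _, v in test]
acc = np.mean([p == c for p, (c, _) in zip(pred, test)])
print(f"toy accuracy: {acc:.2f}")
```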

24 pages, 65348 KiB  
Article
Deep Neural Networks for Road Sign Detection and Embedded Modeling Using Oblique Aerial Images
by Zhu Mao, Fan Zhang, Xianfeng Huang, Xiangyang Jia, Yiping Gong and Qin Zou
Remote Sens. 2021, 13(5), 879; https://doi.org/10.3390/rs13050879 - 26 Feb 2021
Cited by 5 | Viewed by 3745
Abstract
Oblique photogrammetry-based three-dimensional (3D) urban models are widely used for smart cities. In 3D urban models, road signs are small but provide valuable information for navigation. However, due to the problems of sliced shape features, blurred texture and high incline angles, road signs cannot be fully reconstructed in oblique photogrammetry, even with state-of-the-art algorithms. The poor reconstruction of road signs commonly leads to less informative guidance and unsatisfactory visual appearance. In this paper, we present a pipeline for embedding road sign models based on deep convolutional neural networks (CNNs). First, we present an end-to-end balanced-learning framework for small object detection that takes advantage of the region-based CNN and a data synthesis strategy. Second, under the geometric constraints placed by the bounding boxes, we use the scale-invariant feature transform (SIFT) to extract the corresponding points on the road signs. Third, we obtain the coarse location of a single road sign by triangulating the corresponding points and refine the location via outlier removal. Least-squares fitting is then applied to the refined point cloud to fit a plane for orientation prediction. Finally, we replace the road signs with computer-aided design models in the 3D urban scene with the predicted location and orientation. The experimental results show that the proposed method achieves a high mAP in road sign detection and produces visually plausible embedded results, which demonstrates its effectiveness for road sign modeling in oblique photogrammetry-based 3D scene reconstruction. Full article
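As an illustration of the orientation-prediction step described above (least-squares plane fitting on the refined point cloud), the short sketch below fits a plane to a synthetic triangulated point set via SVD and reads the sign's normal off the smallest singular vector. The synthetic points and the median-residual outlier pass are assumptions for the example; the paper's full detection, matching, and triangulation pipeline is not reproduced.

```python
# Plane fitting for orientation estimation: SVD-based least squares on a
# synthetic, noisy, partly contaminated 3D point cloud of a planar sign.
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "triangulated" points: a tilted planar sign plus noise and outliers.
normal_true = np.array([0.2, 0.1, 1.0]); normal_true /= np.linalg.norm(normal_true)
u, v = np.linalg.svd(normal_true[None, :])[2][1:]         # two in-plane directions
pts = (rng.uniform(-0.5, 0.5, (200, 1)) * u +
       rng.uniform(-0.5, 0.5, (200, 1)) * v +
       rng.normal(scale=0.005, size=(200, 3)))
pts[:5] += rng.normal(scale=0.5, size=(5, 3))             # gross outliers

def fit_plane(points):
    """Least-squares plane: centroid plus normal from the smallest singular vector."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    return centroid, vt[-1]

# Coarse fit, reject points with large orthogonal residuals, then refit.
c, n = fit_plane(pts)
resid = np.abs((pts - c) @ n)
inliers = pts[resid < 3 * np.median(resid)]
c, n = fit_plane(inliers)

cosang = np.clip(abs(n @ normal_true), -1.0, 1.0)
print("estimated normal:", np.round(n, 3))
print("angle to true normal (deg):", np.degrees(np.arccos(cosang)).round(2))
```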

23 pages, 9505 KiB  
Article
Geo-Object-Based Vegetation Mapping via Machine Learning Methods with an Intelligent Sample Collection Scheme: A Case Study of Taibai Mountain, China
by Tianjun Wu, Jiancheng Luo, Lijing Gao, Yingwei Sun, Wen Dong, Ya’nan Zhou, Wei Liu, Xiaodong Hu, Jiangbo Xi, Changpeng Wang and Yun Yang
Remote Sens. 2021, 13(2), 249; https://doi.org/10.3390/rs13020249 - 13 Jan 2021
Cited by 9 | Viewed by 4771
Abstract
Precise vegetation maps of mountainous areas are of great significance for grasping the state of the ecological environment and forest resources. Although multi-source geospatial data can now generally be obtained quickly, samples for vegetation mapping in mountainous areas are difficult to collect because of perilous terrain and inaccessible deep forest. To realize effective vegetation mapping under these conditions, we propose a novel and intelligent sample collection method for machine-learning (ML)-based vegetation mapping. First, we employ geo-objects (i.e., polygons) from topographic partitioning and constrained segmentation as basic mapping units and formalize the problem as a supervised classification process using ML algorithms. Second, a previously available vegetation map with rough-scale label information is overlaid on the geo-object-level polygons, and candidate geo-object-based samples are identified when all the grid labels of vegetation types within a geo-object are the same. Third, various geo-object-level features are extracted from high-spatial-resolution remote sensing (HSR-RS) images and multi-source geospatial data. Unreliable geo-object-based samples are rejected from the candidate set by comparing their features against rules based on local expert knowledge. Finally, based on these automatically collected samples, we train a random forest (RF) model and classify all the geo-objects with labels of vegetation types. A case experiment on Taibai Mountain in China shows that the methodology achieves good vegetation mapping results with a rapid and convenient sample collection scheme. The resulting map, with its finer geographic distribution pattern of vegetation, can clearly support vegetation resource investigation and monitoring in the study area; the methodological framework is therefore worth extending to other regions, such as mountainous areas, where field survey sampling is difficult to implement. Full article
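A compact sketch of the final classification step is given below: train a random forest on automatically collected geo-object samples and then label every geo-object. The synthetic feature table, the feature names, and the purity threshold used to mimic the "all grid labels agree" rule are assumptions for illustration; the segmentation and expert-knowledge filtering stages are omitted.

```python
# Geo-object classification with a random forest trained on automatically
# collected samples. Features and the purity rule are synthetic stand-ins.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
n_objects = 500

# One row per geo-object: spectral/terrain features plus coarse-map label statistics.
objects = pd.DataFrame({
    "ndvi_mean": rng.uniform(0.1, 0.9, n_objects),
    "elevation_mean": rng.uniform(800, 3700, n_objects),
    "slope_mean": rng.uniform(0, 45, n_objects),
    "coarse_label": rng.integers(0, 3, n_objects),        # from the old vegetation map
    "label_purity": rng.uniform(0.5, 1.0, n_objects),     # share of agreeing grid labels
})

# Sample-collection stand-in: keep only geo-objects whose coarse labels are
# (nearly) unanimous across the overlaid grid cells.
samples = objects[objects["label_purity"] > 0.95]
features = ["ndvi_mean", "elevation_mean", "slope_mean"]

rf = RandomForestClassifier(n_estimators=200, random_state=0)
rf.fit(samples[features], samples["coarse_label"])

# Classify every geo-object with the trained model to produce the vegetation map.
objects["predicted_type"] = rf.predict(objects[features])
print(objects["predicted_type"].value_counts())
```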

17 pages, 4359 KiB  
Article
Augmenting Crop Detection for Precision Agriculture with Deep Visual Transfer Learning—A Case Study of Bale Detection
by Wei Zhao, William Yamada, Tianxin Li, Matthew Digman and Troy Runge
Remote Sens. 2021, 13(1), 23; https://doi.org/10.3390/rs13010023 - 23 Dec 2020
Cited by 39 | Viewed by 5152
Abstract
In recent years, precision agriculture has been researched as a promising means to increase crop production with fewer inputs and meet the growing demand for agricultural products. Computer vision-based crop detection with unmanned aerial vehicle (UAV)-acquired images is a critical tool for precision agriculture. However, object detection using deep learning algorithms relies on a significant amount of manually prelabeled training data as ground truth. Detecting field objects such as bales is especially difficult because of (1) long-period image acquisition under different illumination conditions and seasons; (2) limited existing prelabeled data; and (3) few pretrained models and studies available as references. This work increases bale detection accuracy based on limited data collection and labeling by building an innovative algorithm pipeline. First, an object detection model is trained using 243 images captured under good illumination conditions in fall from croplands. In addition, domain adaptation (DA), a kind of transfer learning, is applied to synthesize training data under diverse environmental conditions with automatic labels. Finally, the object detection model is optimized with the synthesized datasets. The case study shows that the proposed method improves bale detection performance, including the recall, mean average precision (mAP), and F measure (F1 score), from averages of 0.59, 0.7, and 0.7 (object detection alone) to averages of 0.93, 0.94, and 0.89 (object detection + DA), respectively. This approach could easily be scaled to many other crop field objects and will significantly contribute to precision agriculture. Full article
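The sketch below illustrates the transfer-learning ingredient in generic form: start from a COCO-pretrained Faster R-CNN from torchvision and fine-tune its box predictor for a single "bale" class. The random tensors stand in for the labeled fall images and the DA-synthesized images; the paper's balanced-learning framework and its data-synthesis strategy are not reproduced here.

```python
# Generic transfer-learning sketch for the detection stage (torchvision >= 0.13).
# Placeholder data; replace fake_batch with real UAV imagery and bale labels.
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

def make_model(num_classes=2):                    # background + bale
    model = fasterrcnn_resnet50_fpn(weights="DEFAULT")      # COCO-pretrained backbone
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
    return model

def fake_batch(batch_size=2):
    """Placeholder images/targets standing in for labeled and synthesized data."""
    images = [torch.rand(3, 256, 256) for _ in range(batch_size)]
    targets = [{"boxes": torch.tensor([[30.0, 40.0, 120.0, 110.0]]),
                "labels": torch.tensor([1])} for _ in range(batch_size)]
    return images, targets

model = make_model()
optimizer = torch.optim.SGD([p for p in model.parameters() if p.requires_grad],
                            lr=0.005, momentum=0.9, weight_decay=5e-4)

model.train()
for step in range(2):                             # a couple of illustrative steps
    images, targets = fake_batch()
    loss_dict = model(images, targets)            # detection losses in train mode
    loss = sum(loss_dict.values())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    print(f"step {step}: loss = {loss.item():.3f}")
```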

Other


15 pages, 9776 KiB  
Technical Note
A Nonlinear Radiometric Normalization Model for Satellite Image Time Series Based on Artificial Neural Networks and Greedy Algorithm
by Zhaohui Yin, Lejun Zou, Jiayu Sun, Haoran Zhang, Wenyi Zhang and Xiaohua Shen
Remote Sens. 2021, 13(5), 933; https://doi.org/10.3390/rs13050933 - 2 Mar 2021
Cited by 5 | Viewed by 2997
Abstract
Satellite Image Time Series (SITS) is a data set that includes satellite images acquired across several years at a high acquisition rate. Radiometric normalization is a fundamental and important preprocessing step for remote sensing applications using SITS because of the radiometric distortion caused by noise between images. Normalizing the subject image based on a reference image is the general strategy of traditional radiometric normalization methods for multi-temporal imagery (usually two or three scenes in different time phases). However, these methods are unsuitable for calibrating SITS because they cannot minimize the radiometric distortion between any pair of images in the series. The existing relative radiometric normalization methods for SITS are based on linear assumptions, which cannot effectively reduce the nonlinear radiometric distortion caused by continuously changing noise in SITS. To overcome this problem and obtain a more accurate SITS, we propose a nonlinear radiometric normalization model (NMAG) for SITS based on Artificial Neural Networks (ANN) and a Greedy Algorithm (GA). In this method, GA is used to determine the correction order of the SITS and to calculate the error between the image to be corrected and the already normalized images, which avoids the selection of a single reference image. ANN is used to obtain the optimal solution of the error function, which minimizes the radiometric distortion between different images in the SITS. A SITS composed of 21 Landsat-8 images of Tianjin, China, acquired from October 2017 to January 2019 was selected to test the method. We compared NMAG with two other methods (Contrast Method 1 (CM1) and Contrast Method 2 (CM2)) and found that the average root mean square error (μRMSE) of NMAG (497.22) is significantly smaller than those of CM1 (641.39) and CM2 (543.47), and that the accuracy of the normalized SITS obtained using NMAG increases by 22.4% and 8.5% compared with CM1 and CM2, respectively. These experimental results confirm the effectiveness of NMAG in reducing the radiometric distortion caused by continuously changing noise between images in SITS. Full article
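To make the nonlinear-fitting idea tangible, the sketch below learns a band-wise mapping from a subject image to a reference image with a small neural network (scikit-learn's MLPRegressor) on synthetic reflectance values. It covers only a single image pair and a single band under an assumed distortion model; NMAG's greedy ordering over the 21-image series and its specific error function are not reproduced.

```python
# Single-pair, single-band nonlinear radiometric fit with a small MLP.
# Synthetic reflectance values; a stand-in for the nonlinear step only.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)

# Co-located pixel values in one band of two acquisitions: the "subject" image
# differs from the "reference" by an assumed smooth nonlinear distortion.
reference = rng.uniform(0.05, 0.6, 5000)
subject = 0.8 * reference + 0.3 * reference**2 + rng.normal(scale=0.01, size=reference.size)

# Fit the inverse mapping subject -> reference with a small neural network.
mlp = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
mlp.fit(subject.reshape(-1, 1), reference)

normalized = mlp.predict(subject.reshape(-1, 1))
rmse_before = np.sqrt(np.mean((subject - reference) ** 2))
rmse_after = np.sqrt(np.mean((normalized - reference) ** 2))
print(f"RMSE before: {rmse_before:.4f}, after: {rmse_after:.4f}")
```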
