

Semantic Interpretation of Remotely Sensed Images

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: closed (31 March 2022) | Viewed by 36746

Special Issue Editors


Guest Editor
Department of Spatial Information Engineering, Pukyong National University, Busan 48513, Republic of Korea
Interests: artificial intelligence; semantic segmentation; remote sensing of disaster; applications in agriculture, forest, hydrology, and meteorology

Guest Editor
Department of Civil, Urban, Earth, and Environmental Engineering, UNIST (Ulsan National Institute of Science and Technology), Ulsan, Republic of Korea
Interests: satellite remote sensing; aerosols; air quality; wild fire; urban heatwave; drought; artificial intelligence; machine learning; deep learning

Guest Editor
Department of Applied Plant Science, Chonnam National University, 77 Yongbong-ro, Gwangju 61186, Korea
Interests: remote sensing of vegetation; applications in agriculture, hydrology, and micrometeorology; interactions between atmosphere and biosphere

Guest Editor
Innovative Meteorological Research Department, Korea Meteorological Administration, Seoul, Korea
Interests: satellite meteorology; global atmosphere watch using in situ and satellite observations; applications in meteorology and climate change (e.g., cloud characteristics, aerosols, atmospheric composition, surface information)

Special Issue Information

Dear Colleagues,

The environmental changes currently taking place on Earth may unfortunately accelerate further. As the old adage goes, the Earth is not ours: it is borrowed from our descendants, and we must return it to them safe and clean. Numerous long-standing efforts have been made to monitor the Earth's health, and remote sensing is the most promising technology for accurately monitoring both its health and its safety. Recently, a variety of satellites have been launched to observe the Earth from space, providing spatially and temporally continuous information on disasters, meteorology, air quality, vegetation, hydrology, the ocean, and the polar regions. In addition, recent artificial intelligence technologies can facilitate the analysis and prediction of the Earth's environmental phenomena by coping with their complexity and nonlinearity using advanced computing power. In this Special Issue on “Semantic Interpretation of Remotely Sensed Images”, we invite insights and contributions from across the broad field of remote sensing, dealing with all types of remotely sensed images from sensors onboard satellites and drones. Papers can be focused on, but are not limited to:

* Semantic segmentation with artificial intelligence methods;

* Spatiotemporal data fusion using advanced statistical methods;

* Change detection via semantic interpretation of remotely sensed images;

* Interpretation of remotely sensed images for disaster management;

* Interpretation of high-resolution images for agriculture, forest, and hydrology applications;

* Spatiotemporal analysis of remotely sensed images;

* Knowledge discovery for the Earth environment using remotely sensed images;

* UAV applications for disaster monitoring;

* UAV applications for agriculture and forest.

Prof. Dr. Yang-Won Lee
Prof. Dr. Jungho Im
Prof. Dr. Jaeil Cho
Dr. Chu-Yong Chung
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • artificial intelligence
  • image processing
  • data fusion
  • change detection
  • disaster monitoring
  • agriculture
  • forest
  • hydrology
  • meteorology
  • UAV

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (9 papers)


Research


20 pages, 7345 KiB  
Article
An Advanced Operational Approach for Tropical Cyclone Center Estimation Using Geostationary-Satellite-Based Water Vapor and Infrared Channels
by Yeji Shin, Juhyun Lee, Jungho Im and Seongmun Sim
Remote Sens. 2022, 14(19), 4800; https://doi.org/10.3390/rs14194800 - 26 Sep 2022
Cited by 2 | Viewed by 1886
Abstract
Tropical cyclones (TCs) are destructive natural disasters. Accurate prediction and monitoring are important to mitigate the effects of natural disasters. Although remarkable efforts have been made to understand TCs, operational monitoring information still depends on the experience and knowledge of forecasters. In this study, a fully automated geostationary-satellite-based TC center estimation approach is proposed. The proposed approach consists of two improved methods: the setting of regions of interest (ROI) using a score matrix (SCM) and a TC center determination method using an enhanced logarithmic spiral band (LSB) and SCM. The former enables prescreening of the regions that may be misidentified as TC centers during the ROI setting step, and the latter contributes to the determination of an accurate TC center, considering the size and length of the TC rainband in relation to its intensity. Two schemes, A and B, were examined depending on whether forecasting data or real-time observations were used to determine the initial guess of the TC centers. For each scheme, two models were evaluated to discern whether SCM was combined with LSB for TC center determination. The results were investigated based on TC intensity and phase to determine the impact of TC structural characteristics on TC center determination. While both proposed models improved detection performance over the existing approach, the best-performing model (i.e., LSB combined with SCM) achieved skill scores (SSs) of +17.4% and +20.8% for the two schemes. In particular, the model resulted in a significant improvement for strong TCs (categories 4 and 5), with SSs of +47.8% and +72.8% and +41.2% and +72.3% for schemes A and B, respectively. The research findings provide an improved understanding of the intensity- and phase-wise spatial characteristics of TCs, which contributes to objective TC center estimation. Full article
(This article belongs to the Special Issue Semantic Interpretation of Remotely Sensed Images)
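The enhanced logarithmic spiral band above matches the shape of a TC rainband against a logarithmic spiral centered on each candidate center. As a hypothetical illustration (the growth parameters `a` and `b` and the sampling scheme below are assumptions, not the authors' operational settings), points along a spiral r = a·e^(b·θ) can be sampled like this:

```python
import math

def log_spiral_points(a, b, theta_max, n=100):
    """Sample points along a logarithmic spiral r = a * e^(b * theta).

    Returns (x, y) pairs in Cartesian coordinates, spiralling
    outward from the origin (the candidate TC center).
    """
    points = []
    for i in range(n):
        theta = theta_max * i / (n - 1)
        r = a * math.exp(b * theta)
        points.append((r * math.cos(theta), r * math.sin(theta)))
    return points
```

In an LSB-style approach, such spiral templates would be scored against the observed cloud band, and the candidate center whose spiral best overlaps the rainband pixels would be selected.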

22 pages, 6021 KiB  
Article
Controllable Fused Semantic Segmentation with Adaptive Edge Loss for Remote Sensing Parsing
by Xudong Sun, Min Xia and Tianfang Dai
Remote Sens. 2022, 14(1), 207; https://doi.org/10.3390/rs14010207 - 3 Jan 2022
Cited by 13 | Viewed by 3349
Abstract
High-resolution remote sensing images are now widely used in remote sensing parsing, but general parsing methods based on semantic segmentation still have limitations: tiny objects are frequently neglected, image understanding is highly complex, and samples are imbalanced. We therefore propose a controllable fusion module (CFM) to alleviate the problem of implicitly understanding complicated categories, together with an adaptive edge loss function (AEL) to improve the recognition of tiny objects and mitigate sample imbalance. The proposed combination of CFM and AEL optimizes edge features and body features in a coupled mode. Verification on the Potsdam and Vaihingen datasets shows that our method significantly improves the parsing of satellite images in terms of mIoU and MPA. Full article
(This article belongs to the Special Issue Semantic Interpretation of Remotely Sensed Images)
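The mIoU and MPA scores reported above are standard segmentation metrics. As a minimal stdlib sketch (not the authors' evaluation code), both can be computed from flattened label maps:

```python
from collections import defaultdict

def miou_mpa(y_true, y_pred, num_classes):
    """Mean IoU and mean pixel accuracy from flat label lists."""
    tp = defaultdict(int)   # correctly labelled pixels per class
    fp = defaultdict(int)   # pixels wrongly assigned to a class
    fn = defaultdict(int)   # pixels of a class assigned elsewhere
    for t, p in zip(y_true, y_pred):
        if t == p:
            tp[t] += 1
        else:
            fp[p] += 1
            fn[t] += 1
    ious, accs = [], []
    for c in range(num_classes):
        denom = tp[c] + fp[c] + fn[c]
        if denom:                 # skip classes absent from both maps
            ious.append(tp[c] / denom)
        if tp[c] + fn[c]:         # class present in the ground truth
            accs.append(tp[c] / (tp[c] + fn[c]))
    return sum(ious) / len(ious), sum(accs) / len(accs)
```

For a perfect prediction both metrics are 1.0; mIoU penalizes false positives as well as misses, so it is the stricter of the two.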

20 pages, 1355 KiB  
Article
Multi-Scale Feature Aggregation Network for Water Area Segmentation
by Kai Hu, Meng Li, Min Xia and Haifeng Lin
Remote Sens. 2022, 14(1), 206; https://doi.org/10.3390/rs14010206 - 3 Jan 2022
Cited by 41 | Viewed by 4395
Abstract
Water area segmentation is an important branch of remote sensing image segmentation, but most real water area images have complex and diverse backgrounds. Traditional detection methods cannot accurately identify small tributaries, because semantic information is incompletely mined and insufficiently utilized, and the segmented edges are rough. To solve these problems, we propose a multi-scale feature aggregation network. To improve the network's handling of boundary information, we design a deep feature extraction module that extracts features with a multi-scale pyramid, combined with a designed attention mechanism and strip convolution, to extract multi-scale deep semantic information and enhance spatial and location information. A multi-branch aggregation module then lets features at different scales interact, strengthening the positioning information of the pixels. Finally, the two high-performance branches of the Feature Fusion Upsample module extract deep semantic information, which is fused with the shallow information generated by the multi-branch module, so that global and local features together determine the location distribution of each image category. Experimental results show that the proposed method segments more accurately than previous detection methods, which has practical significance for real water area segmentation. Full article
(This article belongs to the Special Issue Semantic Interpretation of Remotely Sensed Images)

27 pages, 5809 KiB  
Article
Attentively Learning Edge Distributions for Semantic Segmentation of Remote Sensing Imagery
by Xin Li, Tao Li, Ziqi Chen, Kaiwen Zhang and Runliang Xia
Remote Sens. 2022, 14(1), 102; https://doi.org/10.3390/rs14010102 - 26 Dec 2021
Cited by 18 | Viewed by 3321
Abstract
Semantic segmentation is a fundamental task in interpreting remote sensing imagery (RSI) for various downstream applications. Because of high intra-class variance and inter-class similarity, inflexibly transferring networks designed for natural images to RSI is inadvisable. To enhance the distinguishability of learnt representations, attention modules have been developed and applied to RSI with satisfactory improvements. However, these designs capture contextual information by treating all pixels equally, regardless of whether they lie near edges; blurry boundaries are therefore generated, raising high uncertainty when classifying the many adjacent pixels. We hereby propose an edge distribution attention module (EDA) that highlights the edge distributions of learnt feature maps in a self-attentive fashion. In this module, we first formulate and model column-wise and row-wise edge attention maps based on covariance matrix analysis. Furthermore, a hybrid attention module (HAM) that emphasizes edge distributions and position-wise dependencies is devised by combining EDA with a non-local block. Consequently, a conceptually end-to-end neural network, termed EDENet, integrates HAM hierarchically to strengthen multi-level representations in detail. EDENet implicitly learns representative and discriminative features, providing available and reasonable cues for dense prediction. Experimental results on the ISPRS Vaihingen, Potsdam, and DeepGlobe datasets show its efficacy and superiority over state-of-the-art methods in overall accuracy (OA) and mean intersection over union (mIoU). An ablation study further validates the effect of EDA. Full article
(This article belongs to the Special Issue Semantic Interpretation of Remotely Sensed Images)

25 pages, 18504 KiB  
Article
A Dual Network for Super-Resolution and Semantic Segmentation of Sentinel-2 Imagery
by Saüc Abadal, Luis Salgueiro, Javier Marcello and Verónica Vilaplana
Remote Sens. 2021, 13(22), 4547; https://doi.org/10.3390/rs13224547 - 12 Nov 2021
Cited by 13 | Viewed by 4528
Abstract
There is a growing interest in the development of automated data processing workflows that provide reliable, high spatial resolution land cover maps. However, high-resolution remote sensing images are not always affordable. Taking into account the free availability of Sentinel-2 satellite data, in this work we propose a deep learning model to generate high-resolution segmentation maps from low-resolution inputs in a multi-task approach. Our proposal is a dual-network model with two branches: the Single Image Super-Resolution branch, that reconstructs a high-resolution version of the input image, and the Semantic Segmentation Super-Resolution branch, that predicts a high-resolution segmentation map with a scaling factor of 2. We performed several experiments to find the best architecture, training and testing on a subset of the S2GLC 2017 dataset. We based our model on the DeepLabV3+ architecture, enhancing the model and achieving an improvement of 5% on IoU and almost 10% on the recall score. Furthermore, our qualitative results demonstrate the effectiveness and usefulness of the proposed approach. Full article
(This article belongs to the Special Issue Semantic Interpretation of Remotely Sensed Images)

22 pages, 8258 KiB  
Article
CDUNet: Cloud Detection UNet for Remote Sensing Imagery
by Kai Hu, Dongsheng Zhang and Min Xia
Remote Sens. 2021, 13(22), 4533; https://doi.org/10.3390/rs13224533 - 11 Nov 2021
Cited by 33 | Viewed by 4548
Abstract
Cloud detection is a key step in the preprocessing of optical satellite remote sensing images. In the existing literature, cloud detection methods are roughly divided into threshold methods and deep-learning methods. Most traditional threshold methods are based on the spectral characteristics of clouds, so they easily lose spatial location information in high-reflection areas, resulting in misclassification. Moreover, owing to a lack of generalization, a traditional deep-learning network applied directly to cloud detection also easily loses detail and spatial information. To solve these problems, we propose a deep-learning model, Cloud Detection UNet (CDUNet), that refines the division boundary of the cloud layer and captures its spatial position information. In the proposed model, we introduce a High-frequency Feature Extractor (HFE) and a Multiscale Convolution (MSC) to refine the cloud boundary and predict fragmented clouds. To improve the accuracy of thin-cloud detection, a Spatial Prior Self-Attention (SPSA) mechanism is introduced to establish the cloud's spatial position information, and a dual-attention mechanism is proposed to reduce the proportion of redundant information in the model and improve its overall performance. Experimental results show that our model copes with complex cloud cover scenes and performs excellently on the cloud and SPARCS datasets; its segmentation accuracy surpasses existing methods, which is of great significance for cloud-detection-related work. Full article
(This article belongs to the Special Issue Semantic Interpretation of Remotely Sensed Images)

18 pages, 15230 KiB  
Article
Semantic Segmentation of Urban Buildings Using a High-Resolution Network (HRNet) with Channel and Spatial Attention Gates
by Seonkyeong Seong and Jaewan Choi
Remote Sens. 2021, 13(16), 3087; https://doi.org/10.3390/rs13163087 - 5 Aug 2021
Cited by 54 | Viewed by 6011
Abstract
In this study, building extraction in aerial images was performed using csAG-HRNet by applying HRNet-v2 in combination with channel and spatial attention gates. HRNet-v2 consists of transition and fusion processes based on subnetworks according to various resolutions. The channel and spatial attention gates were applied in the network to efficiently learn important features. A channel attention gate assigns weights in accordance with the importance of each channel, and a spatial attention gate assigns weights in accordance with the importance of each pixel position for the entire channel. In csAG-HRNet, csAG modules consisting of a channel attention gate and a spatial attention gate were applied to each subnetwork of stage and fusion modules in the HRNet-v2 network. In experiments using two datasets, it was confirmed that csAG-HRNet could minimize false detections based on the shapes of large buildings and small nonbuilding objects compared to existing deep learning models. Full article
(This article belongs to the Special Issue Semantic Interpretation of Remotely Sensed Images)
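As a rough illustration of the two gate types described above, the sketch below computes per-channel and per-pixel sigmoid weights from pooled activations using plain Python lists. This is only a conceptual stand-in: the real csAG module learns these weights with convolutional layers and a small learned transform, rather than using the pooled values directly.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def channel_gate(fmap):
    """Weight each channel by a sigmoid of its global average.

    fmap is a C x H x W nested list. A learned channel attention gate
    would pass the pooled vector through a small MLP; here the pooled
    value itself stands in for that output.
    """
    gated = []
    for channel in fmap:
        pooled = sum(sum(row) for row in channel) / (len(channel) * len(channel[0]))
        w = sigmoid(pooled)
        gated.append([[v * w for v in row] for row in channel])
    return gated

def spatial_gate(fmap):
    """Weight each pixel position by a sigmoid of its cross-channel mean."""
    c, h, w = len(fmap), len(fmap[0]), len(fmap[0][0])
    gated = [[[0.0] * w for _ in range(h)] for _ in range(c)]
    for i in range(h):
        for j in range(w):
            weight = sigmoid(sum(fmap[k][i][j] for k in range(c)) / c)
            for k in range(c):
                gated[k][i][j] = fmap[k][i][j] * weight
    return gated
```

The key contrast survives the simplification: the channel gate applies one weight per channel across all positions, while the spatial gate applies one weight per position across all channels.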

19 pages, 9485 KiB  
Article
Tar Spot Disease Quantification Using Unmanned Aircraft Systems (UAS) Data
by Sungchan Oh, Da-Young Lee, Carlos Gongora-Canul, Akash Ashapure, Joshua Carpenter, A. P. Cruz, Mariela Fernandez-Campos, Brenden Z. Lane, Darcy E. P. Telenko, Jinha Jung and C. D. Cruz
Remote Sens. 2021, 13(13), 2567; https://doi.org/10.3390/rs13132567 - 30 Jun 2021
Cited by 9 | Viewed by 4322
Abstract
Tar spot is a foliar disease of corn characterized by fungal fruiting bodies that resemble tar spots. The disease emerged in the U.S. in 2015, and severe outbreaks in 2018 caused an economic impact on corn yields throughout the Midwest. Adequate epidemiological surveillance and disease quantification are necessary to develop immediate and long-term management strategies. This study presents a measurement framework that evaluates the disease severity of tar spot using unmanned aircraft systems (UAS)-based plant phenotyping and regression techniques. UAS-based plant phenotypic information, such as canopy cover, canopy volume, and vegetation indices, were used as explanatory variables. Visual estimations of disease severity were performed by expert plant pathologists per experiment plot basis and used as response variables. Three regression methods, namely ordinary least squares (OLS), support vector regression (SVR), and multilayer perceptron (MLP), were used to determine an optimal regression method for UAS-based tar spot measurement. The cross-validation results showed that the regression model based on MLP provides the highest accuracy of disease measurements. By training and testing the model with spatially separated datasets, the proposed regression model achieved a Lin’s concordance correlation coefficient (ρc) of 0.82 and a root mean square error (RMSE) of 6.42. This study demonstrated that we could use the proposed UAS-based method for the disease quantification of tar spot, which shows a gradual spectral response as the disease develops. Full article
(This article belongs to the Special Issue Semantic Interpretation of Remotely Sensed Images)
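Lin's concordance correlation coefficient (ρc) used above measures agreement, not just correlation: it penalizes shifts in location and scale as well as scatter. A minimal stdlib implementation of the standard formula ρc = 2·cov(x, y) / (σx² + σy² + (μx − μy)²), with population (biased) variances as in Lin's original definition, might look like:

```python
def lins_ccc(x, y):
    """Lin's concordance correlation coefficient between two sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / n      # population variance
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return 2 * cov / (vx + vy + (mx - my) ** 2)
```

ρc equals 1 only for perfect agreement (y = x), so it is well suited to comparing UAS-based disease estimates against the pathologists' visual ratings.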

Other


16 pages, 1663 KiB  
Technical Note
RSSGG_CS: Remote Sensing Image Scene Graph Generation by Fusing Contextual Information and Statistical Knowledge
by Zhiyuan Lin, Feng Zhu, Qun Wang, Yanzi Kong, Jianyu Wang, Liang Huang and Yingming Hao
Remote Sens. 2022, 14(13), 3118; https://doi.org/10.3390/rs14133118 - 29 Jun 2022
Cited by 4 | Viewed by 2707
Abstract
To semantically understand remote sensing images, it is necessary not only to detect the objects in them but also to recognize the semantic relationships between the instances. Scene graph generation aims to represent an image as a semantic structural graph, where objects and the relationships between them are described as nodes and edges, respectively. Some existing methods rely only on visual features to sequentially predict the relationships between objects, ignoring contextual information and making it difficult to generate high-quality scene graphs, especially for remote sensing images. We therefore propose a novel model for remote sensing image scene graph generation that fuses contextual information and statistical knowledge, namely RSSGG_CS. To integrate contextual information and calculate attention among all objects, the RSSGG_CS model adopts a filter module (FiM) based on an adjusted transformer architecture. Moreover, to reduce the blindness of the model when searching the semantic space, statistical knowledge of relational predicates between objects, drawn from the training dataset and cleaned Wikipedia text, is used as supervision during training. Experiments show that fusing contextual information and statistical knowledge allows the model to generate more complete scene graphs of remote sensing images and facilitates their semantic understanding. Full article
(This article belongs to the Special Issue Semantic Interpretation of Remotely Sensed Images)
