
Deep Learning Approaches for Urban Sensing Data Analytics

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Urban Remote Sensing".

Deadline for manuscript submissions: closed (31 January 2020) | Viewed by 29785

Special Issue Editors

Guest Editor
School of Engineering, Newcastle University, Newcastle Upon Tyne NE1 7RU, UK
Interests: machine learning; smart cities; remote sensing; geographic information science; geospatial cyber-infrastructure

Guest Editor
School of Geography and Information Engineering, China University of Geosciences, Wuhan 430074, China
Interests: lidar mapping; 3D vision; change detection

Guest Editor
State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, Wuhan 430079, China
Interests: mathematical models for visual information; graph matching problem and its applications; computer vision and machine learning; large-scale 3D reconstruction of visual scenes; information processing, fusion, and scene understanding in unmanned intelligent systems; interpretation and information mining of remote sensing images

Guest Editor
State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, Wuhan 430079, China
Interests: pattern analysis and machine learning; image processing engineering; application of remote sensing; computational intelligence and its application in remote sensing image processing

Special Issue Information

Dear Colleagues,

Deep Learning (DL) has attracted burgeoning research interest in the past few years, owing to its strength in automatically learning hierarchical features from big data. At the same time, different types of remote sensing, such as satellite and airborne imagery and video systems, as well as ground-level mobile mapping systems (e.g., mobile laser scanning systems), have been widely used in urban environment monitoring and analytics at various scales. In addition, existing sensing infrastructures (e.g., CCTV) can be harnessed to extract new information (e.g., pedestrian/vehicle movement patterns) with the help of DL. Although DL is rapidly gaining popularity in remote sensing (Zhang et al., 2016), applying it to urban sensing data raises numerous challenges, such as noisy training datasets, incompatible spatial scales, dense mixtures of image objects, short update intervals, onerous hyperparameter tuning, and limited prior knowledge. All of these challenges require us to develop specialized DL approaches for urban sensing data analytics.

This Special Issue aims to provide new DL methods that can transform big urban sensing data into knowledge with limited human intervention. Given the high variety of urban sensing systems, how to develop common deep neural network architectures is a major concern of this Special Issue. Topics of interest include, but are not limited to:

  • New deep neural network models for urban scene classification;
  • 3D deep learning for urban scene understanding;
  • New recurrent neural network algorithms for urban change detection;
  • Advanced training and testing of deep learning methods;
  • Real-time urban sensing data analytics using deep learning algorithms;
  • Generative adversarial networks for remote sensing data fusion;
  • Innovative reinforcement learning algorithms for transportation management.

Dr. Jin Xing
Dr. Wen Xiao
Prof. Gui-Song Xia
Prof. Liangpei Zhang
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Convolutional neural network
  • Recurrent neural network
  • Deep belief network
  • Remote sensing
  • Lidar data analytics
  • Smart city
  • Sensor network
  • Transfer learning

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (6 papers)


Research


24 pages, 7597 KiB  
Article
A New Framework for Automatic Airports Extraction from SAR Images Using Multi-Level Dual Attention Mechanism
by Lifu Chen, Siyu Tan, Zhouhao Pan, Jin Xing, Zhihui Yuan, Xuemin Xing and Peng Zhang
Remote Sens. 2020, 12(3), 560; https://doi.org/10.3390/rs12030560 - 7 Feb 2020
Cited by 24 | Viewed by 4395
Abstract
The detection of airports from Synthetic Aperture Radar (SAR) images is of great significance in various research fields. However, it is challenging to distinguish an airport from surrounding objects in SAR images. In this paper, a new framework, the multi-level and densely dual attention (MDDA) network, is proposed to extract airport runway areas (runways, taxiways, and parking lots) from SAR images and thereby achieve automatic airport detection. The framework consists of three parts: down-sampling of the original SAR images, the MDDA network for feature extraction and classification, and up-sampling of the airport extraction results. First, down-sampling is employed to obtain medium-resolution SAR images from the high-resolution SAR images, ensuring that the samples (500 × 500 pixels) contain adequate information about airports. The dataset is then input to the MDDA network, which contains an encoder and a decoder. The encoder uses ResNet_101 to extract four levels of features at different resolutions, and the decoder performs fusion and further feature extraction on these features. The decoder integrates the chained residual pooling network (CRP_Net) and the dual attention fusion and extraction (DAFE) module. The CRP_Net module mainly uses chained residual pooling and multi-feature fusion to extract advanced semantic features. In the DAFE module, the position attention module (PAM) and the channel attention mechanism (CAM) are combined with weighted filtering. The entire decoding network is constructed in a densely connected manner to enhance gradient transmission among features and take full advantage of them. Finally, the airport results extracted by the decoding network are up-sampled by bilinear interpolation to accomplish airport extraction from high-resolution SAR images. To verify the proposed framework, experiments were performed using Gaofen-3 SAR images with 1 m resolution, and three different airports were selected for accuracy evaluation. The results showed that the mean pixel accuracy (MPA) and mean intersection over union (MIoU) of the MDDA network were 0.98 and 0.97, respectively, much higher than those of RefineNet and DeepLabV3. Therefore, MDDA can achieve automatic airport extraction from high-resolution SAR images with satisfactory accuracy.
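
The dual attention design mentioned in the abstract (a position attention module working alongside a channel attention module) follows a widely used pattern. The following is a minimal PyTorch sketch of that generic pattern, not the authors' MDDA code; the class names, layer sizes, and the simple additive fusion are assumptions made for illustration.

```python
import torch
import torch.nn as nn

class PositionAttention(nn.Module):
    """Spatial (position) attention: every pixel attends to every other pixel."""
    def __init__(self, channels):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 8, 1)
        self.key = nn.Conv2d(channels, channels // 8, 1)
        self.value = nn.Conv2d(channels, channels, 1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learnable residual weight

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)   # (b, hw, c//8)
        k = self.key(x).flatten(2)                      # (b, c//8, hw)
        attn = torch.softmax(q @ k, dim=-1)             # (b, hw, hw)
        v = self.value(x).flatten(2)                    # (b, c, hw)
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x

class ChannelAttention(nn.Module):
    """Channel attention: every feature map attends to every other feature map."""
    def __init__(self):
        super().__init__()
        self.gamma = nn.Parameter(torch.zeros(1))

    def forward(self, x):
        b, c, h, w = x.shape
        feat = x.flatten(2)                                         # (b, c, hw)
        attn = torch.softmax(feat @ feat.transpose(1, 2), dim=-1)   # (b, c, c)
        out = (attn @ feat).view(b, c, h, w)
        return self.gamma * out + x

# One common choice is to fuse the two branches by simple addition.
x = torch.randn(2, 64, 32, 32)
fused = PositionAttention(64)(x) + ChannelAttention()(x)
```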

19 pages, 7909 KiB  
Article
A New Deep Learning Network for Automatic Bridge Detection from SAR Images Based on Balanced and Attention Mechanism
by Lifu Chen, Ting Weng, Jin Xing, Zhouhao Pan, Zhihui Yuan, Xuemin Xing and Peng Zhang
Remote Sens. 2020, 12(3), 441; https://doi.org/10.3390/rs12030441 - 31 Jan 2020
Cited by 32 | Viewed by 3993
Abstract
Bridge detection from Synthetic Aperture Radar (SAR) images has very important strategic significance and practical value, but there are still many challenges in end-to-end bridge detection. In this paper, a new deep learning-based network is proposed to identify bridges from SAR images, namely, the multi-resolution attention and balance network (MABN). It mainly includes three parts: the attention and balanced feature pyramid (ABFP) network, the region proposal network (RPN), and the classification and regression module. First, the ABFP network extracts various features from SAR images by integrating the ResNeXt backbone, a balanced feature pyramid, and the attention mechanism. Second, the extracted features are used by the RPN to generate candidate boxes at different resolutions, which are then fused. Furthermore, the candidate boxes are combined with the features extracted by the ABFP network through the region of interest (ROI) pooling strategy. Finally, the detection results for the bridges are produced by the classification and regression module. In addition, intersection over union (IoU)-balanced sampling and the balanced L1 loss are introduced for optimal training of the classification and regression network. In the experiments, TerraSAR data with 3 m resolution and Gaofen-3 data with 1 m resolution are used, and the results are compared with Faster R-CNN and SSD. The proposed network achieved the highest detection precision (P) and average precision (AP) among the three networks, at 0.877 and 0.896, respectively, with a recall rate (RR) of 0.917. Compared with the other two networks, the proposed network greatly reduces false alarms and missed targets, and therefore the precision is greatly improved.
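
The balanced L1 loss referred to above is the loss popularized by Libra R-CNN. Below is a minimal sketch of how it is commonly computed, assuming the usual default parameters (alpha = 0.5, gamma = 1.5); this is an illustrative reimplementation rather than code from the MABN paper.

```python
import math
import torch

def balanced_l1_loss(pred, target, alpha=0.5, gamma=1.5, beta=1.0):
    """Balanced L1 loss: promotes gradients from inliers (small errors)
    while keeping the loss continuous and its gradient bounded by gamma."""
    diff = torch.abs(pred - target)
    b = math.exp(gamma / alpha) - 1  # chosen so the two pieces meet smoothly at |diff| = beta
    return torch.where(
        diff < beta,
        alpha / b * (b * diff + 1) * torch.log(b * diff / beta + 1) - alpha * diff,
        gamma * diff + gamma / b - alpha * beta,
    ).mean()

# Example: regression offsets for a handful of candidate boxes.
loss = balanced_l1_loss(torch.randn(8, 4), torch.randn(8, 4))
```

The piecewise definition is continuous at |diff| = beta, and choosing b = exp(gamma/alpha) - 1 caps the gradient at gamma, which is what shifts the emphasis toward inlier samples and makes the loss "balanced".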

25 pages, 6711 KiB  
Article
Online Semantic Subspace Learning with Siamese Network for UAV Tracking
by Yufei Zha, Min Wu, Zhuling Qiu, Jingxian Sun, Peng Zhang and Wei Huang
Remote Sens. 2020, 12(2), 325; https://doi.org/10.3390/rs12020325 - 19 Jan 2020
Cited by 8 | Viewed by 3883
Abstract
In urban environment monitoring, visual tracking on unmanned aerial vehicles (UAVs) can enable many applications owing to its inherent advantages, but it also brings new challenges for existing visual tracking approaches, such as complex background clutter, rotation, fast motion, small objects, and real-time requirements under camera motion and viewpoint changes. Based on the Siamese network, tracking can be conducted efficiently on recent UAV datasets. Unfortunately, the learned convolutional neural network (CNN) features are not discriminative enough to separate the target from the background and clutter, in particular from distractors, and cannot capture appearance variations over time. Additionally, occlusion and disappearance of the target are further causes of tracking failure. In this paper, a semantic subspace module is designed and integrated into a Siamese network tracker to encode the local fine-grained details of the target for UAV tracking. More specifically, the target's semantic subspace is learned online to adapt to the target in the temporal domain. Additionally, the pixel-wise response of the semantic subspace can be used to detect occlusion and disappearance of the target, which enables reasonable updating to relieve model drift. Substantial experiments conducted on challenging UAV benchmarks illustrate that the proposed method obtains competitive results in both accuracy and efficiency when applied to UAV videos.
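
At the core of Siamese trackers such as the one described here is a cross-correlation between template features and search-region features. The sketch below shows that step in a SiamFC-like form; the function name, tensor shapes, and feature extractor are assumptions for illustration, and the paper's online semantic subspace module is not reproduced.

```python
import torch
import torch.nn.functional as F

def siamese_response(template_feat, search_feat):
    """Cross-correlate a template embedding with a search-region embedding.

    template_feat: (B, C, h, w) features of the target exemplar
    search_feat:   (B, C, H, W) features of the current search region
    returns:       (B, 1, H-h+1, W-w+1) similarity response map
    """
    b, c, h, w = template_feat.shape
    # Treat each sample's template as a convolution kernel over its own search map.
    search = search_feat.reshape(1, b * c, *search_feat.shape[2:])
    kernel = template_feat.reshape(b * c, 1, h, w)
    resp = F.conv2d(search, kernel, groups=b * c)            # per-channel correlation
    resp = resp.reshape(b, c, *resp.shape[2:]).sum(dim=1, keepdim=True)
    return resp

# The peak of the response map indicates the most likely target location.
resp = siamese_response(torch.randn(2, 256, 6, 6), torch.randn(2, 256, 22, 22))
```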

17 pages, 5433 KiB  
Article
Detecting Building Changes between Airborne Laser Scanning and Photogrammetric Data
by Zhenchao Zhang, George Vosselman, Markus Gerke, Claudio Persello, Devis Tuia and Michael Ying Yang
Remote Sens. 2019, 11(20), 2417; https://doi.org/10.3390/rs11202417 - 18 Oct 2019
Cited by 53 | Viewed by 5989
Abstract
Detecting topographic changes in an urban environment and keeping city-level point clouds up-to-date are important tasks for urban planning and monitoring. In practice, remote sensing data are often available only in different modalities for the two epochs. Change detection between airborne laser scanning data and photogrammetric data is challenging due to the multi-modality of the input data and dense matching errors. This paper proposes a method to detect building changes between such multimodal acquisitions. The multimodal inputs are converted and fed into a lightweight pseudo-Siamese convolutional neural network (PSI-CNN) for change detection. Different network configurations and fusion strategies are compared. Our experiments on a large urban dataset demonstrate the effectiveness of the proposed method. Our change map achieves a recall rate of 86.17%, a precision rate of 68.16%, and an F1-score of 76.13%. The comparison between the Siamese and feed-forward architectures yields many interesting findings and suggestions for the design of networks for multimodal data processing.
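
A pseudo-Siamese network differs from a standard Siamese network in that its two branches do not share weights, which suits the two modalities here (laser scanning versus dense image matching). The sketch below illustrates that general idea only; the layer sizes, patch size, and the two-class change head are assumptions, not the authors' PSI-CNN configuration.

```python
import torch
import torch.nn as nn

def make_branch(in_channels):
    """A small, independent feature extractor for one modality."""
    return nn.Sequential(
        nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(),
        nn.MaxPool2d(2),
        nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1),
    )

class PseudoSiameseChangeNet(nn.Module):
    """Two non-weight-sharing branches; fused features feed a change classifier."""
    def __init__(self, ch_a=1, ch_b=1):
        super().__init__()
        self.branch_a = make_branch(ch_a)   # e.g., laser-scanning-derived patch
        self.branch_b = make_branch(ch_b)   # e.g., photogrammetry-derived patch
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, 2),               # changed / unchanged
        )

    def forward(self, patch_a, patch_b):
        feat = torch.cat([self.branch_a(patch_a), self.branch_b(patch_b)], dim=1)
        return self.classifier(feat)

logits = PseudoSiameseChangeNet()(torch.randn(4, 1, 64, 64), torch.randn(4, 1, 64, 64))
```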

19 pages, 6971 KiB  
Article
Deep Learning Based Fossil-Fuel Power Plant Monitoring in High Resolution Remote Sensing Images: A Comparative Study
by Haopeng Zhang and Qin Deng
Remote Sens. 2019, 11(9), 1117; https://doi.org/10.3390/rs11091117 - 10 May 2019
Cited by 22 | Viewed by 5650
Abstract
The frequent hazy weather with air pollution in North China has aroused wide attention in the past few years. One of the most important pollution sources is anthropogenic emission from fossil-fuel power plants. To relieve the pollution and assist urban environment monitoring, it is necessary to continuously monitor the working status of power plants. Satellite or airborne remote sensing provides high-quality data for such tasks. In this paper, we design a power plant monitoring framework based on deep learning to automatically detect power plants and determine their working status in high-resolution remote sensing images (RSIs). To this end, we collected a dataset named BUAA-FFPP60 containing RSIs of over 60 fossil-fuel power plants in the Beijing-Tianjin-Hebei region of North China, covering about 123 km² of urban area. We compared eight state-of-the-art deep learning models and comprehensively analyzed their performance in terms of accuracy, speed, and hardware cost. Experimental results illustrate that our deep learning-based framework can effectively detect fossil-fuel power plants and determine their working status with a mean average precision of up to 0.8273, showing good potential for urban environment monitoring.
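
As a reminder of how the reported mean average precision is typically obtained, the sketch below computes per-class average precision from ranked detections by integrating the raw precision-recall curve. This is a generic, non-interpolated AP computation for illustration only and is not taken from the paper's evaluation code.

```python
import numpy as np

def average_precision(scores, is_true_positive, num_gt):
    """Area under the precision-recall curve for one class,
    given per-detection confidence scores and TP/FP flags."""
    order = np.argsort(-np.asarray(scores))
    tp = np.asarray(is_true_positive, dtype=float)[order]
    cum_tp = np.cumsum(tp)
    precision = cum_tp / (np.arange(len(tp)) + 1)
    recall = cum_tp / num_gt
    # Rectangle-rule integration of precision over recall (no interpolation).
    ap, prev_recall = 0.0, 0.0
    for p, r in zip(precision, recall):
        ap += p * (r - prev_recall)
        prev_recall = r
    return ap

# Example: 4 detections, 3 of which match a ground-truth object, 3 ground truths in total.
print(average_precision([0.9, 0.8, 0.6, 0.3], [1, 0, 1, 1], num_gt=3))
```

Mean average precision (mAP) is then the mean of these per-class AP values over all classes (here, the working-status categories).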

Other


14 pages, 8464 KiB  
Technical Note
Obtaining Urban Waterlogging Depths from Video Images Using Synthetic Image Data
by Jingchao Jiang, Cheng-Zhi Qin, Juan Yu, Changxiu Cheng, Junzhi Liu and Jingzhou Huang
Remote Sens. 2020, 12(6), 1014; https://doi.org/10.3390/rs12061014 - 22 Mar 2020
Cited by 17 | Viewed by 4529
Abstract
Reference objects in video images can be used to indicate urban waterlogging depths. The detection of reference objects is the key step in obtaining waterlogging depths from video images. Object detection models based on convolutional neural networks (CNNs) have been utilized to detect reference objects. These models require a large number of labeled images as training data to ensure applicability at the city scale. However, it is hard to collect a sufficient number of urban flooding images containing valuable reference objects, and manually labeling images is time-consuming and expensive. To solve this problem, we present a method to synthesize image data for training. First, original images containing reference objects and original images with water surfaces are collected from open data sources, and the reference objects and water surfaces are cropped from these original images. Second, the reference objects and water surfaces are further enriched via data augmentation techniques to ensure diversity. Finally, the enriched reference objects and water surfaces are combined to generate a synthetic image dataset with annotations. The synthetic image dataset is then used to train a CNN-based object detection model. Waterlogging depths are calculated based on the reference objects detected by the trained model. A real video dataset and an artificial image dataset are used to evaluate the effectiveness of the proposed method. The results show that the detection model trained on the synthetic image dataset can effectively detect reference objects in images and can achieve acceptable accuracy for waterlogging depths based on the detected reference objects. The proposed method has the potential to monitor waterlogging depths at a city scale.
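
As an illustration of the compositing step the abstract describes, the sketch below pastes a cropped reference object onto a water-surface background and records a bounding-box annotation, using Pillow. The file names, paste position, submersion handling, and annotation format are assumptions made for illustration, not the authors' pipeline.

```python
from PIL import Image

def composite_sample(object_path, background_path, paste_xy, submerge_px):
    """Paste a cropped reference object onto a water-surface background.

    submerge_px simulates partial flooding by letting the water background
    cover the lower part of the object; returns the image and a bounding box.
    """
    obj = Image.open(object_path).convert("RGBA")
    bg = Image.open(background_path).convert("RGBA")
    x, y = paste_xy
    visible_h = obj.height - submerge_px
    visible = obj.crop((0, 0, obj.width, visible_h))   # keep only the part above water
    bg.alpha_composite(visible, dest=(x, y))
    bbox = (x, y, x + obj.width, y + visible_h)        # annotation for the detector
    return bg.convert("RGB"), bbox

# Hypothetical usage with placeholder file names:
# img, bbox = composite_sample("traffic_sign.png", "water_surface.png", (120, 80), submerge_px=40)
# img.save("synthetic_0001.jpg")
```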
