Editorial

Editorial for Special Issue: “Remote Sensing based Building Extraction”

1. Institute for Integrated and Intelligent Systems, Griffith University, Nathan QLD 4111, Australia
2. School of Remote Sensing and Information Engineering, Wuhan University, 129 Luoyu Road, Wuhan, Hubei 430079, China
3. State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing (LIESMARS), Wuhan University, Wuhan, Hubei 430072, China
4. Remote Sensing Technology Institute, German Aerospace Center (DLR), Muenchener Strasse 20, 82234 Wessling, Germany
* Author to whom correspondence should be addressed.
Remote Sens. 2020, 12(3), 549; https://doi.org/10.3390/rs12030549
Submission received: 21 January 2020 / Accepted: 4 February 2020 / Published: 7 February 2020
(This article belongs to the Special Issue Remote Sensing based Building Extraction)
Building extraction from remote sensing data plays an important role in urban planning, disaster management, navigation, the updating of geographic databases, and several other geospatial applications [1]. Although significant research has been carried out for more than two decades, the success of automatic building extraction and modelling is still largely impeded by scene complexity, incomplete cue extraction and the sensor dependency of data. Most recently, deep neural networks (DNNs) have been widely applied to achieve high classification accuracy in various areas, including land-cover and land-use classification [2]. Intelligent and innovative algorithms are therefore urgently needed for successful automatic building extraction and modelling. This Special Issue focuses on newly developed methods for classification and feature extraction from remote sensing data for automatic building extraction and 3D roof modelling.
In the Special Issue, the published papers cover a wide range of related topics including building detection [3], boundary extraction [4] and regularization [5], 3D indoor space (room) modelling [6], land cover classification [7], building height model extraction [8], 3D roof modelling [6,9] and change detection [9].
In terms of datasets, some of the published works use publicly available benchmark datasets, e.g., ISPRS (International Society for Photogrammetry and Remote Sensing) urban object extraction and modelling datasets [4,5,10]; ISPRS 2D semantic labelling datasets [1]; Inria aerial image labelling benchmark datasets [11,12,13]; and IEEE (Institute of Electrical and Electronics Engineers) DeepGlobe Satellite Challenge datasets [14].
The proposed methods fall into two main categories depending on the input data sources: methods based on single-source data, and methods that use multi-source data. Methods based on single-source data can use point cloud data [9], aerial imagery [4] or digital surface models (DSMs) [8]. The multi-source methods combine several data types, e.g., the panchromatic band and multispectral imagery [7], or optical imagery and light detection and ranging (LiDAR) data [4].
Recently, the rapid development of DNNs has attracted considerable attention in remote sensing, and these networks have achieved remarkable progress in image classification and segmentation tasks [11]. The majority of the articles published in the Special Issue propose DNN-based classification [1,2,3,4,5,6,8,11,12,13]. A small number of methods are instead based on segmentation [6] and morphological filtering [15].
Using aerial LiDAR data, Awrangjeb et al. [10] introduce a new 3D roof reconstruction technique that constructs an adjacency matrix to define the topological relationships among the roof planes. This method then uses the generated building models to detect 3D changes in buildings.
Among the methods that integrate data from multiple sources, Lai et al. [16] apply a particle swarm optimization algorithm for building extraction based on the fusion of the LiDAR point cloud and texture features from an elevation map generated from that point cloud. Sun et al. [4] combine optical imagery and LiDAR data in a robust classification framework using convolutional neural networks (CNNs) and an active contour model (ACM) to overcome current limitations (e.g., salt-and-pepper artefacts) in building boundary extraction algorithms; the influence of vegetation and salt-and-pepper artefacts on the extracted buildings is reduced. Li et al. [14] propose a DNN that fuses high-resolution satellite images and multi-source GIS data for building footprint extraction; this method offers better results than the top three solutions in the SpaceNet building detection competition. Dong et al. [9] present a framework for detecting and regularizing the boundaries of individual buildings using a feature-level-fusion strategy based on features from dense image-matching point clouds, orthophotos and the original aerial images. Song et al. [7] present a comparative study of image fusion methods that exploit the complementary information of the panchromatic and multispectral bands in high-spatial-resolution remote sensing images.
Using optical imagery only, Lu et al. [3] propose a building edge detection model based on a richer convolutional features (RCF) network; the RCF-building model detects building edges accurately and completely, with at least 5% better performance than the baseline methods. Wu et al. [17] present a boundary-regulated network, called BR-Net, for accurate aerial image segmentation and building outline extraction; BR-Net achieves significantly higher performance than the state-of-the-art U-Net model. Yang et al. [1] propose a novel deep network based on DenseNets and the attention mechanism, called the dense-attention network (DAN), to overcome the difficulty of using both high-level and low-level feature maps in the same network; the results show that DAN outperforms other deep networks. Yi et al. [14] perform urban building segmentation from high-resolution imagery using a DNN and generate accurate segmentation results; their method outperforms six existing methods and shows particularly good results for irregularly shaped and small buildings. Zhang et al. [11] use a nested network architecture for building extraction from aerial imagery that can even extract building areas covered by shadows. Kang et al. [12] design a dense spatial pyramid pooling module to extract dense and multi-scale features simultaneously, facilitating the extraction of buildings at all scales. He et al. [18] present an effective approach to extracting buildings from unmanned aerial vehicle (UAV) images through the incorporation of superpixel segmentation and semantic recognition. Pan et al. [13] propose a generative adversarial network with spatial and channel attention mechanisms (GAN-SCA) for the robust segmentation of buildings in remote sensing images; experimental results show that GAN-SCA achieves higher accuracy than several state-of-the-art approaches.
Among the other published papers, Cui et al. [6] present a novel method coupling linear structures with three-dimensional geometric surfaces to automatically reconstruct 3D models from mobile laser scanning point clouds. A new morphological attribute building index (MABI) and shadow index (MASI) are proposed by Ma et al. [15] for automatically extracting building features from high-resolution remote sensing satellite images; in experiments, this method outperforms two widely used supervised classifiers, namely the support vector machine (SVM) and random forest (RF). Misra et al. [8] compare digital building height models extracted from four freely available but coarse-resolution global DSMs; such DSMs can help to cost-effectively analyse the vertical urban growth of rapidly growing cities. Xie et al. [5] propose a hierarchical regularization method for noisy building boundary points in aerial laser scanning and photogrammetric point clouds; the problem is formulated as a Markov random field and solved efficiently via graph cut.

Acknowledgments

We would like to thank the authors who contributed to this Special Issue on “Remote Sensing based Building Extraction”, as well as the reviewers who provided the authors with very constructive feedback.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Yang, H.; Wu, P.; Yao, X.; Wu, Y.; Wang, B.; Xu, Y. Building Extraction in Very High Resolution Imagery by Dense-Attention Networks. Remote Sens. 2018, 10, 1768.
  2. Jahan, F.; Zhou, J.; Awrangjeb, M.; Gao, Y. Fusion of Hyperspectral and LiDAR Data Using Discriminant Correlation Analysis for Land Cover Classification. IEEE J. Sel. Topics Appl. Earth Obs. Remote Sens. 2018, 11, 3905–3917.
  3. Lu, T.; Ming, D.; Lin, X.; Hong, Z.; Bai, X.; Fang, J. Detecting Building Edges from High Spatial Resolution Remote Sensing Imagery Using Richer Convolution Features Network. Remote Sens. 2018, 10, 1496.
  4. Sun, Y.; Zhang, X.; Zhao, X.; Xin, Q. Extracting Building Boundaries from High Resolution Optical Images and LiDAR Data by Integrating the Convolutional Neural Network and the Active Contour Model. Remote Sens. 2018, 10, 1459.
  5. Xie, L.; Zhu, Q.; Hu, H.; Wu, B.; Li, Y.; Zhang, Y.; Zhong, R. Hierarchical Regularization of Building Boundaries in Noisy Aerial Laser Scanning and Photogrammetric Point Clouds. Remote Sens. 2018, 10, 1996.
  6. Cui, Y.; Li, Q.; Dong, Z. Structural 3D Reconstruction of Indoor Space for 5G Signal Simulation with Mobile Laser Scanning Point Clouds. Remote Sens. 2019, 11, 2262.
  7. Song, S.; Liu, J.; Pu, H.; Liu, Y.; Luo, J. The Comparison of Fusion Methods for HSRRSI Considering the Effectiveness of Land Cover (Features) Object Recognition Based on Deep Learning. Remote Sens. 2019, 11, 1435.
  8. Misra, P.; Avtar, R.; Takeuchi, W. Comparison of Digital Building Height Models Extracted from AW3D, TanDEM-X, ASTER, and SRTM Digital Surface Models over Yangon City. Remote Sens. 2018, 10, 2008.
  9. Dong, Y.; Zhang, L.; Cui, X.; Ai, H.; Xu, B. Extraction of Buildings from Multiple-View Aerial Images Using a Feature-Level-Fusion Strategy. Remote Sens. 2018, 10, 1947.
  10. Awrangjeb, M.; Gilani, S.A.N.; Siddiqui, F.U. An Effective Data-Driven Method for 3-D Building Roof Reconstruction and Robust Change Detection. Remote Sens. 2018, 10, 1512.
  11. Zhang, Y.; Gong, W.; Sun, J.; Li, W. Web-Net: A Novel Nest Networks with Ultra-Hierarchical Sampling for Building Extraction from Aerial Imageries. Remote Sens. 2019, 11, 1897.
  12. Kang, W.; Xiang, Y.; Wang, F.; You, H. EU-Net: An Efficient Fully Convolutional Network for Building Extraction from Optical Remote Sensing Images. Remote Sens. 2019, 11, 2813.
  13. Pan, X.; Yang, F.; Gao, L.; Chen, Z.; Zhang, B.; Fan, H.; Ren, J. Building Extraction from High-Resolution Aerial Imagery Using a Generative Adversarial Network with Spatial and Channel Attention Mechanisms. Remote Sens. 2019, 11, 917.
  14. Yi, Y.; Zhang, Z.; Zhang, W.; Zhang, C.; Li, W.; Zhao, T. Semantic Segmentation of Urban Buildings from VHR Remote Sensing Imagery Using a Deep Convolutional Neural Network. Remote Sens. 2019, 11, 1774.
  15. Ma, W.; Wan, Y.; Li, J.; Zhu, S.; Wang, M. An Automatic Morphological Attribute Building Extraction Approach for Satellite High Spatial Resolution Imagery. Remote Sens. 2019, 11, 337.
  16. Lai, X.; Yang, J.; Li, Y.; Wang, M. A Building Extraction Approach Based on the Fusion of LiDAR Point Cloud and Elevation Map Texture Features. Remote Sens. 2019, 11, 1636.
  17. Wu, G.; Guo, Z.; Shi, X.; Chen, Q.; Xu, Y.; Shibasaki, R.; Shao, X. A Boundary Regulated Network for Accurate Roof Segmentation and Outline Extraction. Remote Sens. 2018, 10, 1195.
  18. He, H.; Zhou, J.; Chen, M.; Chen, T.; Li, D.; Cheng, P. Building Extraction from UAV Images Jointly Using 6D-SLIC and Multiscale Siamese Convolutional Networks. Remote Sens. 2019, 11, 1040.

