Remote Sensing based Building Extraction

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Urban Remote Sensing".

Deadline for manuscript submissions: closed (31 October 2019) | Viewed by 131,400

Printed Edition Available!
A printed edition of this Special Issue is available here.

Special Issue Editors


Dr. Mohammad Awrangjeb
Guest Editor
Institute for Integrated and Intelligent Systems, Griffith University, Nathan, QLD 4111, Australia
Interests: deep learning; remote sensing image processing; point cloud processing; change detection; object recognition; object modelling; remote sensing data registration; remote sensing of environment

Prof. Xiangyun Hu
Guest Editor
School of Remote Sensing and Information Engineering, Wuhan University, 129 Luoyu Road, Wuhan 430079, Hubei Province, China
Interests: feature extraction; computer vision; pattern recognition; LiDAR data processing; machine learning

Prof. Bisheng Yang
Guest Editor
State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing (LIESMARS), Wuhan University, Wuhan 430072, China
Interests: laser scanning; mobile mapping; UAV mapping; point cloud processing; 3D scene understanding; GIS applications

Dr. Jiaojiao Tian
Guest Editor
Remote Sensing Technology Institute, German Aerospace Center (DLR), Muenchener Strasse 20, 82234 Wessling, Germany
Interests: forest remote sensing; building extraction; 2D/3D change detection; data fusion; time-series image analysis; semantic 3D point cloud segmentation; computer vision; 3D reconstruction

Special Issue Information

Dear Colleagues,

The rapid growth of sensor technologies, such as airborne and terrestrial laser scanning, and satellite and aerial imaging systems, poses unique challenges in the detection, extraction and modelling of buildings from remote sensing data. In fact, building detection, boundary extraction, and rooftop modelling from remotely-sensed data are important to various applications, such as the real estate industry, city planning, homeland security, automatic solar potential estimation, and disaster (flood or bushfire) management. The automated extraction of building boundaries is a crucial step towards generating city models. In addition, automatic building change detection is vital for monitoring urban growth and locating illegal building extensions.

Despite the fact that significant research has been ongoing for more than two decades, the success of automatic building extraction and modelling is still largely impeded by scene complexity, incomplete cue extraction, and the sensor dependency of data. Vegetation, and especially trees, can be the prime cause of scene complexity and incomplete cue extraction. The situation becomes complex in hilly and densely-vegetated areas where only a few buildings are present, these being surrounded by trees. Important building cues can be completely or partially missed due to occlusions and shadowing from trees. Trees also change colour in different seasons and may be deciduous. Moreover, image quality may vary for the same scene even if images are captured by the same sensor but at different dates and times. Thus, when the same detection and modelling algorithms are applied to two different sets of data of the same scene, the outcomes may well be different. In particular, small building structures, such as garden sheds and small roof planes, are often missed in low-resolution data. The automatically-generated models then either require significant human interaction to fix inaccuracies (as a post-processing step) or are useless in practical applications.

Therefore, intelligent and innovative algorithms are direly needed for automatic building extraction and modelling to succeed. This Special Issue will focus on newly-developed methods for classification and feature extraction from remote sensing data and will cover (but is not limited to) the following topics:

  • Aerial and satellite data collected from different sensors (VHR, hyperspectral, SAR, LiDAR, UAV, thermal imagery, oblique imagery, etc.);
  • Data analysis and data fusion for building detection, boundary extraction, rooftop modelling, and change detection;
  • Data analysis and data fusion for land cover classification (semantic segmentation, buildings/roads extraction, vehicle detection, land use/cover mapping, etc.).

Moreover, we cordially welcome application papers, covering areas such as change detection, urban growth monitoring, and disaster management, as well as technical reviews.

Dr. Mohammad Awrangjeb
Prof. Xiangyun Hu
Prof. Bisheng Yang
Dr. Jiaojiao Tian
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Building detection
  • Building extraction
  • Roof reconstruction
  • 3D building modelling
  • Building change detection
  • Remote sensing data
  • LiDAR
  • VHR
  • Hyperspectral imagery
  • Multispectral imagery
  • SAR
  • Data fusion
  • Point cloud
  • Aerial imagery
  • Satellite imagery

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (19 papers)


Editorial


3 pages, 156 KiB  
Editorial
Editorial for Special Issue: “Remote Sensing based Building Extraction”
by Mohammad Awrangjeb, Xiangyun Hu, Bisheng Yang and Jiaojiao Tian
Remote Sens. 2020, 12(3), 549; https://doi.org/10.3390/rs12030549 - 7 Feb 2020
Cited by 6 | Viewed by 2613
Abstract
Building extraction from remote sensing data plays an important role in urban planning, disaster management, navigation, updating geographic databases, and several other geospatial applications [...]

Research


26 pages, 81090 KiB  
Article
EU-Net: An Efficient Fully Convolutional Network for Building Extraction from Optical Remote Sensing Images
by Wenchao Kang, Yuming Xiang, Feng Wang and Hongjian You
Remote Sens. 2019, 11(23), 2813; https://doi.org/10.3390/rs11232813 - 27 Nov 2019
Cited by 81 | Viewed by 6121
Abstract
Automatic building extraction from high-resolution remote sensing images has many practical applications, such as urban planning and supervision. However, fine details and the various scales of building structures in high-resolution images bring new challenges to building extraction. An increasing number of neural network-based models have been proposed to handle these issues, but they are not efficient enough and still suffer from erroneous ground-truth labels. To this end, we propose an efficient end-to-end model, EU-Net, in this paper. We first design the dense spatial pyramid pooling (DSPP) module to extract dense, multi-scale features simultaneously, which facilitates the extraction of buildings at all scales. Then, the focal loss is used in reverse to suppress the impact of erroneous ground-truth labels, making the training stage more stable. To assess the universality of the proposed model, we tested it on three public aerial remote sensing datasets: the WHU aerial imagery dataset, the Massachusetts buildings dataset, and the Inria aerial image labeling dataset. Experimental results show that the proposed EU-Net is superior to state-of-the-art models on all three datasets and increases prediction efficiency by a factor of two to four.
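
A quick sketch may help make the "focal loss used in reverse" idea concrete. The version below is one plausible reading, assuming PyTorch and binary labels: inverting the usual modulating factor down-weights pixels whose prediction strongly disagrees with the label, which is how mislabeled ground truth would manifest. It is not necessarily the authors' exact formulation.

    import torch
    import torch.nn.functional as F

    def reversed_focal_loss(logits, targets, gamma=2.0):
        # Standard focal loss weights each pixel by (1 - p_t)**gamma to stress
        # hard examples; using p_t**gamma instead suppresses pixels that
        # contradict their label, i.e. likely annotation errors (assumption).
        p = torch.sigmoid(logits)
        p_t = torch.where(targets > 0.5, p, 1.0 - p)  # prob. of the labeled class
        ce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
        return ((p_t ** gamma) * ce).mean()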

30 pages, 8432 KiB  
Article
Structural 3D Reconstruction of Indoor Space for 5G Signal Simulation with Mobile Laser Scanning Point Clouds
by Yang Cui, Qingquan Li and Zhen Dong
Remote Sens. 2019, 11(19), 2262; https://doi.org/10.3390/rs11192262 - 27 Sep 2019
Cited by 22 | Viewed by 4355
Abstract
3D modelling of indoor environments is essential in smart city applications such as building information modelling (BIM), spatial location services, energy consumption estimation, and signal simulation. Fast and stable reconstruction of 3D models from point clouds has already attracted considerable research interest. However, in complex indoor environments, automated reconstruction of detailed 3D models remains a serious challenge. To address these issues, this paper presents a novel method that couples linear structures with three-dimensional geometric surfaces to automatically reconstruct 3D models from mobile laser scanning point clouds. In the proposed approach, fully automatic room segmentation is performed on the unstructured point clouds via multi-label graph cuts with semantic constraints, which overcomes over-segmentation in long corridors. Then, the horizontal slices of the point cloud for each room are projected onto a plane to form a binary image, followed by line extraction and regularization to generate floorplan lines. The 3D structured models are reconstructed by multi-label graph cuts designed to combine the segmented room, line, and surface elements as semantic constraints. Finally, the paper proposes a novel application in which 5G signal simulation is performed on the output structural model to determine optimal locations for 5G small base stations in large-scale indoor scenes. Four datasets collected with handheld and backpack laser scanning systems in different locations were used to evaluate the proposed method. The results indicate that the proposed methodology provides accurate and efficient reconstruction of detailed structured models from complex indoor scenes.
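
The slice-to-image step in this pipeline is simple enough to sketch. A minimal numpy version, assuming coordinates in metres and an arbitrary 5 cm raster cell (not a value taken from the paper):

    import numpy as np

    def slice_to_binary_image(points, z_min, z_max, cell=0.05):
        # Keep points whose height falls inside the horizontal slice, then
        # rasterize their x-y coordinates into a binary occupancy image from
        # which floorplan lines can be extracted.
        sl = points[(points[:, 2] >= z_min) & (points[:, 2] < z_max)]
        ij = np.floor((sl[:, :2] - sl[:, :2].min(axis=0)) / cell).astype(int)
        img = np.zeros(ij.max(axis=0) + 1, dtype=bool)
        img[ij[:, 0], ij[:, 1]] = True
        return img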

23 pages, 5129 KiB  
Article
Web-Net: A Novel Nest Networks with Ultra-Hierarchical Sampling for Building Extraction from Aerial Imageries
by Yan Zhang, Weiguo Gong, Jingxi Sun and Weihong Li
Remote Sens. 2019, 11(16), 1897; https://doi.org/10.3390/rs11161897 - 14 Aug 2019
Cited by 38 | Viewed by 4265
Abstract
How to efficiently utilize vast amounts of easily accessed aerial imagery is a critical challenge for researchers, given the proliferation of high-resolution remote sensing sensors and platforms. Recently, the rapid development of deep neural networks (DNNs) has been a focus in remote sensing, and such networks have achieved remarkable progress in image classification and segmentation tasks. However, current DNN models inevitably lose local cues during downsampling operations. Additionally, even with skip connections, upsampling methods cannot properly recover structural information such as edge intersections, parallelism, and symmetry. In this paper, we propose Web-Net, a nested network architecture with hierarchical dense connections, to handle these issues. We design the Ultra-Hierarchical Sampling (UHS) block to absorb and fuse the inter-level feature maps and propagate feature maps among different levels. The position-wise downsampling/upsampling methods in the UHS iteratively change the shape of the inputs while preserving the number of their elements, so that low-level local cues and high-level semantic cues are properly preserved. We verify the effectiveness of the proposed Web-Net on the Inria Aerial Dataset and the WHU Dataset. The proposed Web-Net achieves an overall accuracy of 96.97% and an IoU (Intersection over Union) of 80.10% on the Inria Aerial Dataset, surpassing the state-of-the-art SegNet by 1.8% and 9.96%, respectively; the results on the WHU Dataset also support the effectiveness of the proposed Web-Net. Additionally, benefiting from the nested network architecture and the UHS block, the extracted buildings on the prediction maps are noticeably sharper and more accurately identified, and even building areas covered by shadows can be correctly extracted. The verified results indicate that the proposed Web-Net is both effective and efficient for building extraction from high-resolution remote sensing images.
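
The position-wise resampling described for the UHS block, changing spatial shape without discarding values, can be illustrated with PyTorch's space-to-depth pair. This is an analogue of the idea under that assumption, not the authors' implementation:

    import torch
    import torch.nn as nn

    down = nn.PixelUnshuffle(2)  # (B, C, H, W) -> (B, 4C, H/2, W/2)
    up = nn.PixelShuffle(2)      # exact inverse of the above

    x = torch.randn(1, 16, 64, 64)
    assert down(x).numel() == x.numel()   # nothing is thrown away
    assert up(down(x)).shape == x.shape   # the shape change is fully reversible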

19 pages, 7862 KiB  
Article
Semantic Segmentation of Urban Buildings from VHR Remote Sensing Imagery Using a Deep Convolutional Neural Network
by Yaning Yi, Zhijie Zhang, Wanchang Zhang, Chuanrong Zhang, Weidong Li and Tian Zhao
Remote Sens. 2019, 11(15), 1774; https://doi.org/10.3390/rs11151774 - 28 Jul 2019
Cited by 183 | Viewed by 11703
Abstract
Urban building segmentation is a prevalent research domain for very high resolution (VHR) remote sensing; however, the varied appearance and complicated backgrounds of VHR remote sensing imagery make accurate semantic segmentation of urban buildings a challenge in relevant applications. Following the basic architecture of U-Net, an end-to-end deep convolutional neural network (denoted DeepResUnet) was proposed, which can effectively perform urban building segmentation at the pixel scale from VHR imagery and generate accurate segmentation results. The method contains two sub-networks: one is a cascade down-sampling network for extracting feature maps of buildings from the VHR image, and the other is an up-sampling network for reconstructing those extracted feature maps back to the same size as the input VHR image. The deep residual learning approach was adopted to facilitate training and to alleviate the degradation problem that often occurs during model training. The proposed DeepResUnet was tested with aerial images at a spatial resolution of 0.075 m and was compared under identical conditions with six other state-of-the-art networks: FCN-8s, SegNet, DeconvNet, U-Net, ResUNet, and DeepUNet. Results of extensive experiments indicated that the proposed DeepResUnet outperformed the other six networks in semantic segmentation of urban buildings in terms of visual and quantitative evaluation, especially in labeling irregularly shaped and small buildings with higher accuracy and completeness. Compared with U-Net, the F1 score, Kappa coefficient, and overall accuracy of DeepResUnet were improved by 3.52%, 4.67%, and 1.72%, respectively. Moreover, the proposed DeepResUnet required far fewer parameters than U-Net, highlighting a significant improvement among U-Net variants. Nevertheless, the inference time of DeepResUnet is slightly longer than that of U-Net, which remains to be improved.
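
The residual learning that DeepResUnet relies on reduces to identity shortcuts around small convolutional stacks. A generic pre-activation residual unit in PyTorch, offered as a sketch of the idea rather than the paper's exact layer configuration:

    import torch.nn as nn

    class ResidualBlock(nn.Module):
        def __init__(self, channels):
            super().__init__()
            self.body = nn.Sequential(
                nn.BatchNorm2d(channels), nn.ReLU(inplace=True),
                nn.Conv2d(channels, channels, 3, padding=1),
                nn.BatchNorm2d(channels), nn.ReLU(inplace=True),
                nn.Conv2d(channels, channels, 3, padding=1),
            )

        def forward(self, x):
            # The identity shortcut lets gradients bypass the convolutions,
            # alleviating the degradation problem mentioned in the abstract.
            return x + self.body(x)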

18 pages, 2813 KiB  
Article
A Building Extraction Approach Based on the Fusion of LiDAR Point Cloud and Elevation Map Texture Features
by Xudong Lai, Jingru Yang, Yongxu Li and Mingwei Wang
Remote Sens. 2019, 11(14), 1636; https://doi.org/10.3390/rs11141636 - 10 Jul 2019
Cited by 27 | Viewed by 5424
Abstract
Building extraction is an important way to obtain information for urban planning, land management, and other fields. As remote sensing has various advantages, such as large coverage and real-time capability, it has become an essential approach for building extraction. Among remote sensing technologies, the capability of providing 3D features makes the LiDAR point cloud a crucial means for building extraction. However, the LiDAR point cloud has difficulty distinguishing objects of similar height, whereas texture features can separate different objects in a 2D image. In this paper, a building extraction method based on the fusion of point cloud and texture features is proposed, in which the texture features are extracted from an elevation map that expresses the height of each point. The experimental results show that the proposed method obtains better extraction results than other texture feature extraction methods and ENVI software in all experimental areas, and the extraction accuracy is always higher than 87%, which is satisfactory for some practical work.
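
The elevation map itself is straightforward to construct. A minimal numpy sketch that keeps the highest return per grid cell; the 1 m cell size is an assumed parameter, not one reported in the paper:

    import numpy as np

    def elevation_map(points, cell=1.0):
        # Bin each LiDAR point into a 2D grid and keep the maximum height per
        # cell; 2D texture features are then computed on this image.
        xy, z = points[:, :2], points[:, 2]
        ij = np.floor((xy - xy.min(axis=0)) / cell).astype(int)
        emap = np.full(ij.max(axis=0) + 1, -np.inf)
        np.maximum.at(emap, (ij[:, 0], ij[:, 1]), z)
        emap[np.isinf(emap)] = np.nan  # cells with no returns
        return emap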

30 pages, 40517 KiB  
Article
The Comparison of Fusion Methods for HSRRSI Considering the Effectiveness of Land Cover (Features) Object Recognition Based on Deep Learning
by Shiran Song, Jianhua Liu, Heng Pu, Yuan Liu and Jingyan Luo
Remote Sens. 2019, 11(12), 1435; https://doi.org/10.3390/rs11121435 - 17 Jun 2019
Cited by 14 | Viewed by 4451
Abstract
The efficient and accurate application of deep learning in the remote sensing field largely depends on the pre-processing of remote sensing images. In particular, image fusion is the essential way to achieve complementarity between the panchromatic band and the multispectral bands of high spatial resolution remote sensing images. In this paper, we pay attention not only to the visual effect of the fused images, but also to the subsequent effectiveness of information extraction and feature recognition based on them. Using WorldView-3 images of Tongzhou District, Beijing, we apply the fusion results to experiments on deep learning-based object recognition of typical urban features. Furthermore, we perform a quantitative analysis of the mainstream pixel-based fusion methods IHS (Intensity-Hue-Saturation), PCS (Principal Component Substitution), GS (Gram-Schmidt), ELS (Ehlers), HPF (High-Pass Filtering), and HCS (Hyperspherical Color Space) from the perspectives of spectrum, geometric features, and recognition accuracy. The results show apparent differences in visual effect and quantitative indices among the fusion methods, and the PCS fusion method delivers the most satisfactory overall effectiveness for deep learning-based object recognition of land cover (features).
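
Among the compared methods, IHS-style component substitution is compact enough to sketch. A simplified "fast IHS" variant in numpy, where intensity is approximated by the band mean; actual packages implement more careful band weighting, so treat this as illustrative only:

    import numpy as np

    def ihs_pansharpen(ms, pan):
        # ms: (H, W, B) multispectral image upsampled to the pan grid, in [0, 1]
        # pan: (H, W) panchromatic band, in [0, 1]
        intensity = ms.mean(axis=2)       # crude intensity component
        detail = pan - intensity          # spatial detail missing from the MS bands
        return np.clip(ms + detail[..., None], 0.0, 1.0)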

33 pages, 15428 KiB  
Article
Building Extraction from UAV Images Jointly Using 6D-SLIC and Multiscale Siamese Convolutional Networks
by Haiqing He, Junchao Zhou, Min Chen, Ting Chen, Dajun Li and Penggen Cheng
Remote Sens. 2019, 11(9), 1040; https://doi.org/10.3390/rs11091040 - 1 May 2019
Cited by 24 | Viewed by 5181
Abstract
Automatic building extraction using a single data type, either 2D remotely-sensed images or light detection and ranging (LiDAR) 3D point clouds, remains insufficient to accurately delineate building outlines for automatic mapping, despite active research in this area and the significant progress achieved in the past decade. This paper presents an effective approach to extracting buildings from Unmanned Aerial Vehicle (UAV) images through the incorporation of superpixel segmentation and semantic recognition. A framework for building extraction is constructed by jointly using an improved Simple Linear Iterative Clustering (SLIC) algorithm and Multiscale Siamese Convolutional Networks (MSCNs). The SLIC algorithm, improved by additionally imposing a digital surface model for superpixel segmentation and hence named 6D-SLIC, is suited to building boundary detection where buildings and the image background have similar radiometric signatures. The proposed MSCNs, comprising a feature learning network and a binary decision network, are used to automatically learn a multiscale hierarchical feature representation and detect building objects under various complex backgrounds. In addition, a gamma-transform green leaf index is proposed to truncate vegetation superpixels before further processing, improving the robustness and efficiency of building detection; the Douglas–Peucker algorithm and iterative optimization are then used to eliminate the jagged details that superpixel segmentation generates around small structures. In the experiments, UAV datasets containing many buildings in urban and rural areas, with irregular shapes and different heights and partly obscured by trees, were collected to evaluate the proposed method. The experimental results based on qualitative and quantitative measures confirm the effectiveness and high accuracy of the proposed framework relative to the digitized results. The proposed framework performs better than state-of-the-art building extraction methods, given its higher values of recall, precision, and Intersection over Union (IoU).
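
The vegetation-truncation index is easy to state explicitly. A sketch under stated assumptions: GLI = (2G - R - B) / (2G + R + B), computed after a gamma transform of RGB values scaled to [0, 1]; both the ordering of the two operations and the exponent are my reading, not details specified in the abstract:

    import numpy as np

    def gamma_green_leaf_index(rgb, gamma=2.2):
        # rgb: (H, W, 3) array scaled to [0, 1]; superpixels whose mean index
        # is high can be truncated as vegetation before building detection.
        r, g, b = np.moveaxis(rgb.astype(float) ** gamma, -1, 0)
        return (2 * g - r - b) / np.maximum(2 * g + r + b, 1e-9)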

18 pages, 6913 KiB  
Article
Building Extraction from High-Resolution Aerial Imagery Using a Generative Adversarial Network with Spatial and Channel Attention Mechanisms
by Xuran Pan, Fan Yang, Lianru Gao, Zhengchao Chen, Bing Zhang, Hairui Fan and Jinchang Ren
Remote Sens. 2019, 11(8), 917; https://doi.org/10.3390/rs11080917 - 15 Apr 2019
Cited by 121 | Viewed by 7649
Abstract
Segmentation of high-resolution remote sensing images is an important challenge with wide practical applications. Increasing spatial resolution provides fine details for image segmentation but also incurs segmentation ambiguities. In this paper, we propose a generative adversarial network with spatial and channel attention mechanisms (GAN-SCA) for the robust segmentation of buildings in remote sensing images. The segmentation network (generator) of the proposed framework is composed of the well-known semantic segmentation architecture (U-Net) and the spatial and channel attention mechanisms (SCA). The adoption of SCA enables the segmentation network to selectively enhance more useful features in specific positions and channels, yielding results closer to the ground truth. The discriminator is an adversarial network with channel attention mechanisms that can properly discriminate between the outputs of the generator and the ground truth maps. The segmentation network and adversarial network are trained in an alternating fashion on the Inria aerial image labeling dataset and the Massachusetts buildings dataset. Experimental results show that the proposed GAN-SCA achieves higher scores (overall accuracy and Intersection over Union on the Inria aerial image labeling dataset of 96.61% and 77.75%, respectively, and an F1-measure on the Massachusetts buildings dataset of 96.36%) and outperforms several state-of-the-art approaches.
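
Channel attention of the kind used in both the generator and the discriminator follows the familiar squeeze-and-excitation pattern. A generic PyTorch sketch, not the authors' exact module:

    import torch.nn as nn

    class ChannelAttention(nn.Module):
        def __init__(self, channels, reduction=16):
            super().__init__()
            self.gate = nn.Sequential(
                nn.AdaptiveAvgPool2d(1),                       # squeeze: global context
                nn.Conv2d(channels, channels // reduction, 1),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels // reduction, channels, 1),
                nn.Sigmoid(),                                  # per-channel weights
            )

        def forward(self, x):
            return x * self.gate(x)  # selectively enhance useful channels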

19 pages, 7881 KiB  
Article
Semantic Segmentation-Based Building Footprint Extraction Using Very High-Resolution Satellite Images and Multi-Source GIS Data
by Weijia Li, Conghui He, Jiarui Fang, Juepeng Zheng, Haohuan Fu and Le Yu
Remote Sens. 2019, 11(4), 403; https://doi.org/10.3390/rs11040403 - 16 Feb 2019
Cited by 182 | Viewed by 16099
Abstract
Automatic extraction of building footprints from high-resolution satellite imagery has become an important and challenging research issue, receiving growing attention. Many recent studies have explored different deep learning-based semantic segmentation methods for improving the accuracy of building extraction. Although they record substantial land cover and land use information (e.g., buildings, roads, and water), public geographic information system (GIS) map datasets have rarely been utilized to improve building extraction results in existing studies. In this research, we propose a U-Net-based semantic segmentation method for the extraction of building footprints from high-resolution multispectral satellite images using the SpaceNet building dataset provided in the DeepGlobe Satellite Challenge of the IEEE Conference on Computer Vision and Pattern Recognition 2018 (CVPR 2018). We explore the potential of multiple public GIS map datasets (OpenStreetMap, Google Maps, and MapWorld) through integration with WorldView-3 satellite imagery of four cities (Las Vegas, Paris, Shanghai, and Khartoum). Several strategies are designed and combined with the U-Net-based semantic segmentation model, including data augmentation, post-processing, and integration of the GIS map data and satellite images. The proposed method achieves a total F1-score of 0.704, an improvement of 1.1% to 12.5% over the top three solutions in the SpaceNet Building Detection Competition and of 3.0% to 9.2% over the standard U-Net-based method. Moreover, the effect of each proposed strategy, and the possible reasons for the resulting building footprint extraction quality, are analyzed in depth with regard to the actual situations of the four cities.
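
One simple way to integrate GIS map data with satellite imagery, and possibly not the strategy the authors chose, is to rasterize the map layers onto the image grid and stack them as extra input channels so the first convolution sees them alongside the spectral bands:

    import numpy as np

    image = np.random.rand(650, 650, 8)      # stand-in for WorldView-3 bands (650 px chips assumed)
    gis_roads = np.zeros((650, 650, 1))      # rasterized OpenStreetMap road layer
    gis_buildings = np.zeros((650, 650, 1))  # rasterized building polygons
    model_input = np.concatenate([image, gis_roads, gis_buildings], axis=-1)
    assert model_input.shape[-1] == 10       # the first conv layer must accept 10 channels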

24 pages, 12319 KiB  
Article
An Automatic Morphological Attribute Building Extraction Approach for Satellite High Spatial Resolution Imagery
by Weixuan Ma, Youchuan Wan, Jiayi Li, Sa Zhu and Mingwei Wang
Remote Sens. 2019, 11(3), 337; https://doi.org/10.3390/rs11030337 - 8 Feb 2019
Cited by 22 | Viewed by 4318
Abstract
A new morphological attribute building index (MABI) and shadow index (MASI) are proposed here for automatically extracting building features from very high-resolution (VHR) remote sensing satellite images. By investigating the associated attributes in morphological attribute filters (AFs), the proposed method establishes a relationship between AFs and the characteristics of buildings/shadows in VHR images (e.g., high local contrast, internal homogeneity, shape, and size). In the pre-processing step, attribute filtering is conducted on the original VHR spectral reflectance data to obtain an input with high homogeneity and to suppress elongated objects (potential non-buildings). The MABI and MASI are then calculated by taking the obtained input as a base image. Dark buildings are considered separately in the MABI to reduce the omission of dark roofs. To better detect buildings from the MABI feature image, object-oriented analysis and building-shadow concurrence relationships are utilized to further filter out non-building land covers, such as roads and bare ground, that can be confused with buildings. Three VHR datasets from two satellite sensors, i.e., WorldView-2 and QuickBird, were tested to assess detection performance. In terms of both visual inspection and quantitative assessment, the results of the proposed method are superior to those of a recent automatic building index and a supervised binary classification approach.

25 pages, 17485 KiB  
Article
Comparison of Digital Building Height Models Extracted from AW3D, TanDEM-X, ASTER, and SRTM Digital Surface Models over Yangon City
by Prakhar Misra, Ram Avtar and Wataru Takeuchi
Remote Sens. 2018, 10(12), 2008; https://doi.org/10.3390/rs10122008 - 11 Dec 2018
Cited by 47 | Viewed by 8534
Abstract
Vertical urban growth in the form of urban volume or building height is increasingly being seen as a significant indicator and constituent of the urban environment. Although high-resolution digital surface models can provide valuable information, various places lack access to such resources. The objective of this study is to explore the feasibility of using open digital surface models (DSMs), such as the AW3D30, ASTER, and SRTM datasets, for extracting digital building height models (DBHs) and comparing their accuracy. A multidirectional processing and slope-dependent filtering approach for DBH extraction was used. Yangon was chosen as the study location since it represents a rapidly developing Asian city where urban changes can be observed during the acquisition period of the aforementioned open DSM datasets (2001-2011). The effect of resolution degradation on the accuracy of the coarse AW3D30 DBH with respect to the high-resolution AW3D5 DBH was also examined. It is concluded that AW3D30 is the most suitable open DSM for DBH generation and for observing buildings taller than 9 m. Furthermore, the AW3D30 DBH, ASTER DBH, and SRTM DBH are suitable for observing vertical changes in urban structures.
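
The core arithmetic of a digital building height model is a difference of surfaces: DBH = DSM - DTM. A first-order sketch using a grey-scale morphological opening to approximate the terrain; the paper's multidirectional, slope-dependent filtering is more elaborate, and the window size here is an assumption:

    import numpy as np
    from scipy.ndimage import grey_opening

    def building_height_model(dsm, window=15):
        # Opening removes raised objects narrower than the window, leaving an
        # approximate terrain surface; the residual is above-ground height.
        dtm = grey_opening(dsm, size=(window, window))
        return np.clip(dsm - dtm, 0.0, None)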

22 pages, 34534 KiB  
Article
Hierarchical Regularization of Building Boundaries in Noisy Aerial Laser Scanning and Photogrammetric Point Clouds
by Linfu Xie, Qing Zhu, Han Hu, Bo Wu, Yuan Li, Yeting Zhang and Ruofei Zhong
Remote Sens. 2018, 10(12), 1996; https://doi.org/10.3390/rs10121996 - 10 Dec 2018
Cited by 26 | Viewed by 5195
Abstract
Aerial laser scanning or photogrammetric point clouds are often noisy at building boundaries. In order to produce regularized polygons from such noisy point clouds, this study proposes a hierarchical regularization method for the boundary points. Beginning with planar structures detected from the raw point clouds, two stages of regularization are employed. In the first stage, the boundary points of an individual plane are consolidated locally, by shifting them along their refined normal vectors to resist noise, and then grouped into piecewise smooth segments. In the second stage, global regularities among different segments from different planes are softly enforced through a labeling process, in which the same label represents parallel or orthogonal segments. This is formulated as a Markov random field and solved efficiently via graph cut. The performance of the proposed method is evaluated by extracting 2D footprints and 3D polygons of buildings in a metropolitan area. The results reveal that the proposed method is superior to state-of-the-art methods both qualitatively and quantitatively in compactness. The simplified polygons fit the original boundary points with an average residual of 0.2 m while reducing edge complexity by up to 90%. The satisfactory performance of the proposed method shows promising potential for 3D reconstruction of polygonal models from noisy point clouds.
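
The global regularities stage can be imitated crudely without a graph cut. A greedy stand-in that snaps each boundary-segment direction to the nearest multiple of 90 degrees from a dominant building orientation; the paper's Markov random field enforces the same parallel/orthogonal relations softly and jointly rather than per segment:

    import numpy as np

    def snap_segment_angles(angles_deg, dominant_deg):
        # Round each segment direction to dominant + k * 90 degrees, making
        # segments exactly parallel or orthogonal to the main orientation.
        k = np.round((np.asarray(angles_deg) - dominant_deg) / 90.0)
        return dominant_deg + 90.0 * k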

30 pages, 20293 KiB  
Article
Extraction of Buildings from Multiple-View Aerial Images Using a Feature-Level-Fusion Strategy
by Youqiang Dong, Li Zhang, Ximin Cui, Haibin Ai and Biao Xu
Remote Sens. 2018, 10(12), 1947; https://doi.org/10.3390/rs10121947 - 4 Dec 2018
Cited by 13 | Viewed by 3895
Abstract
Aerial images are widely used for building detection. However, the performance of building detection methods based on aerial images alone is typically poorer than that of methods using both LiDAR and image data. To overcome these limitations, we present a framework for detecting and regularizing the boundaries of individual buildings using a feature-level fusion strategy based on features from dense image matching (DIM) point clouds, an orthophoto, and the original aerial images. The proposed framework is divided into three stages. In the first stage, features from the original aerial image and the DIM points are fused to detect buildings and obtain the so-called blob of an individual building. Then, a feature-level fusion strategy is applied to match straight-line segments from the original aerial images, so that the matched straight-line segments can be used in the later stage. Finally, a new footprint generation algorithm is proposed to generate the building footprint by combining the matched straight-line segments with the boundary of the blob of the individual building. The performance of our framework is evaluated on a vertical aerial image dataset (Vaihingen) and two oblique aerial image datasets (Potsdam and Lunen). The experimental results reveal 89% to 96% per-area completeness, with accuracy of almost 93% or above. Relative to six existing methods, our proposed method is not only more robust but also achieves performance similar to that of methods based on both LiDAR and images.

16 pages, 4759 KiB  
Article
Building Extraction in Very High Resolution Imagery by Dense-Attention Networks
by Hui Yang, Penghai Wu, Xuedong Yao, Yanlan Wu, Biao Wang and Yongyang Xu
Remote Sens. 2018, 10(11), 1768; https://doi.org/10.3390/rs10111768 - 8 Nov 2018
Cited by 103 | Viewed by 7223
Abstract
Building extraction from very high resolution (VHR) imagery plays an important role in urban planning, disaster management, navigation, updating geographic databases, and several other geospatial applications. Compared with traditional building extraction approaches, deep learning networks have recently shown outstanding performance in this task by using both high-level and low-level feature maps. However, with present deep learning networks it is difficult to utilize features at different levels rationally. To tackle this problem, a novel network based on DenseNets and the attention mechanism is proposed, called the dense-attention network (DAN). The DAN contains an encoder part and a decoder part, which are composed of lightweight DenseNets and a spatial attention fusion module, respectively. The proposed encoder-decoder architecture can strengthen feature propagation and effectively use higher-level feature information to suppress low-level features and noise. Experimental results on public International Society for Photogrammetry and Remote Sensing (ISPRS) datasets with only red-green-blue (RGB) images demonstrate that the proposed DAN achieves higher scores (96.16% overall accuracy (OA), 92.56% F1 score, and 90.56% mean intersection over union (MIoU)), with less training and response time and a higher quality value, than other deep learning methods.

31 pages, 9376 KiB  
Article
An Effective Data-Driven Method for 3-D Building Roof Reconstruction and Robust Change Detection
by Mohammad Awrangjeb, Syed Ali Naqi Gilani and Fasahat Ullah Siddiqui
Remote Sens. 2018, 10(10), 1512; https://doi.org/10.3390/rs10101512 - 21 Sep 2018
Cited by 54 | Viewed by 4951
Abstract
Three-dimensional (3-D) reconstruction of building roofs can be an essential prerequisite for 3-D building change detection, which is important for detecting informal buildings or extensions and for updating 3-D map databases. However, automatic 3-D roof reconstruction from remote sensing data is still in its development stage for a number of reasons. For instance, there are difficulties in determining the neighbourhood relationships among the planes of a complex building roof, locating step edges from point cloud data often requires additional information or imposes constraints, and missing roof planes demand human interaction and often produce high reconstruction errors. This research introduces a new 3-D roof reconstruction technique that constructs an adjacency matrix to define the topological relationships among roof planes. It identifies any missing planes through an analysis of the 3-D plane intersection lines between adjacent planes. Then, it generates new planes to fill the gaps left by missing planes. Finally, it obtains complete building models through the insertion of approximate wall planes and the building floor. The paper then uses the generated building models to detect 3-D changes in buildings. Connections between neighbouring planes are first defined to establish their relationships. Then, each building in the reference and test model sets is represented using a graph data structure. Finally, the height intensity images, and if required the graph representations, of the reference and test models are directly compared to find 3-D changes and categorise them into five groups: new, unchanged, demolished, modified, and partially-modified planes. Experimental results on two Australian datasets show high object- and pixel-based accuracy in terms of completeness, correctness, and quality for both the 3-D roof reconstruction and change detection techniques. The proposed change detection technique is robust to various changes, including the addition of a new veranda to, or removal of an existing veranda from, a building and an increase in the height of a building.
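
The plane intersection lines used to reason about missing planes come from elementary geometry: two non-parallel planes intersect in a line whose direction is the cross product of their normals. A small numpy sketch:

    import numpy as np

    def plane_intersection_line(n1, d1, n2, d2):
        # Planes are given as n . x = d. The line direction is n1 x n2; a
        # third constraint (direction . x = 0) pins down one point on the line.
        direction = np.cross(n1, n2)
        if np.linalg.norm(direction) < 1e-9:
            raise ValueError("planes are parallel")
        A = np.vstack([n1, n2, direction])
        point = np.linalg.solve(A, np.array([d1, d2, 0.0]))
        return point, direction / np.linalg.norm(direction)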

19 pages, 5112 KiB  
Article
Detecting Building Edges from High Spatial Resolution Remote Sensing Imagery Using Richer Convolution Features Network
by Tingting Lu, Dongping Ming, Xiangguo Lin, Zhaoli Hong, Xueding Bai and Ju Fang
Remote Sens. 2018, 10(9), 1496; https://doi.org/10.3390/rs10091496 - 19 Sep 2018
Cited by 64 | Viewed by 7579
Abstract
As a basic feature of buildings, building edges play an important role in many fields, such as urbanization monitoring, city planning, and surveying and mapping. Building edge detection from high spatial resolution remote sensing (HSRRS) imagery is a long-standing problem. Inspired by the recent success of deep-learning-based edge detection, a building edge detection model using a richer convolutional features (RCF) network is employed in this paper. Firstly, a dataset for building edge detection is constructed by the proposed most peripheral constraint conversion algorithm. Then, the RCF network is retrained on this dataset. Finally, the edge probability map is obtained by the RCF-building model, and a geomorphological concept is employed to refine it according to a geometric morphological analysis of the topographic surface. The experimental results suggest that the RCF-building model can detect building edges accurately and completely, and that its edge detection F-measure is at least 5% higher than those of three other typical building extraction methods. In addition, an ablation experiment proves that the most peripheral constraint conversion algorithm generates a superior dataset, and the refinement algorithm shows a higher F-measure and better visual effect than non-maximal suppression.
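
Whatever the authors' exact conversion algorithm, the notion of a "most peripheral" label can be approximated morphologically: the one-pixel outer contour of a filled building mask is the mask minus its erosion. A minimal sketch:

    import numpy as np
    from scipy.ndimage import binary_erosion

    def mask_to_peripheral_edges(mask):
        # Pixels that disappear under a one-pixel erosion are exactly the
        # outermost ring of the building mask.
        mask = np.asarray(mask, dtype=bool)
        return mask & ~binary_erosion(mask)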

17 pages, 9969 KiB  
Article
Extracting Building Boundaries from High Resolution Optical Images and LiDAR Data by Integrating the Convolutional Neural Network and the Active Contour Model
by Ying Sun, Xinchang Zhang, Xiaoyang Zhao and Qinchuan Xin
Remote Sens. 2018, 10(9), 1459; https://doi.org/10.3390/rs10091459 - 12 Sep 2018
Cited by 65 | Viewed by 8848
Abstract
Identifying and extracting building boundaries from remote sensing data has been one of the hot topics in photogrammetry for decades. The active contour model (ACM) is a robust segmentation method that has been widely used in building boundary extraction, but it often yields biased results due to mixtures of trees and background. Although classification methods can efficiently improve this by separating buildings from other objects, ineluctable salt-and-pepper artifacts often remain. In this paper, we combine robust convolutional neural network (CNN) classification with the ACM to overcome the current limitations of building boundary extraction algorithms. We conduct two types of experiments: the first integrates the ACM into the CNN construction process, whereas the second starts building footprint detection with a CNN and then uses the ACM for post-processing. Assessments at three levels demonstrate that the proposed methods can efficiently extract building boundaries in five test scenes from two datasets. The mean accuracies achieved in terms of the F1 score for the first type (and the second type) of experiment are 96.43 ± 3.34% (95.68 ± 3.22%), 88.60 ± 3.99% (89.06 ± 3.96%), and 91.62 ± 1.61% (91.47 ± 2.58%) at the scene, object, and pixel levels, respectively. The combined CNN and ACM solutions are shown to be effective at extracting building boundaries from high-resolution optical images and LiDAR data.
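
The second experiment type (CNN first, ACM as post-processing) can be sketched with an off-the-shelf contour evolver. Here scikit-image's morphological Chan-Vese variant stands in for the ACM, seeded by thresholding the CNN probability map; the paper does not prescribe this particular implementation:

    from skimage.segmentation import morphological_chan_vese

    def refine_with_acm(prob_map, iterations=50):
        # The CNN output both initializes the contour and serves as the image
        # the contour evolves on, smoothing salt-and-pepper artifacts.
        init = prob_map > 0.5
        return morphological_chan_vese(prob_map, iterations, init_level_set=init)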

19 pages, 29239 KiB  
Article
A Boundary Regulated Network for Accurate Roof Segmentation and Outline Extraction
by Guangming Wu, Zhiling Guo, Xiaodan Shi, Qi Chen, Yongwei Xu, Ryosuke Shibasaki and Xiaowei Shao
Remote Sens. 2018, 10(8), 1195; https://doi.org/10.3390/rs10081195 - 30 Jul 2018
Cited by 53 | Viewed by 9738
Abstract
The automatic extraction of building outlines from aerial imagery for the purposes of navigation and urban planning is a long-standing problem in the field of remote sensing. Currently, most methods utilize variants of fully convolutional networks (FCNs), which have significantly improved model performance for this task. However, pursuing more accurate segmentation results is still critical for additional applications, such as automatic mapping and building change detection. In this study, we propose a boundary-regulated network called BR-Net, which utilizes both local and global information, to perform roof segmentation and outline extraction. The BR-Net method consists of a shared backend utilizing a modified U-Net and a multitask framework that generates predictions for segmentation maps and building outlines based on a consistent feature representation from the shared backend. Because of the restriction and regulation imposed by the additional boundary information, the proposed model achieves superior performance compared to existing methods. Experiments on an aerial image dataset covering 32 km² and containing more than 58,000 buildings indicate that our method performs well at both roof segmentation and outline extraction. The proposed BR-Net method significantly outperforms the classic FCN8s model. Compared to the state-of-the-art U-Net model, our BR-Net achieves improvements of 6.2% (0.869 vs. 0.818), 10.6% (0.772 vs. 0.698), and 8.7% (0.840 vs. 0.773) in F1 score, Jaccard index, and kappa coefficient, respectively.
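
The multitask coupling at the heart of a boundary-regulated network amounts to two heads trained against one shared representation. A PyTorch-style sketch of such an objective, with the boundary weight as an assumed hyperparameter rather than a value from the paper:

    import torch.nn.functional as F

    def boundary_regulated_loss(seg_logits, seg_gt, edge_logits, edge_gt, w_edge=0.5):
        # Segmentation and outline predictions come from a shared backend;
        # the extra boundary term regulates the segmentation features.
        seg_loss = F.binary_cross_entropy_with_logits(seg_logits, seg_gt)
        edge_loss = F.binary_cross_entropy_with_logits(edge_logits, edge_gt)
        return seg_loss + w_edge * edge_loss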
