3D City Modelling and Change Detection Using Remote Sensing Data

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Urban Remote Sensing".

Deadline for manuscript submissions: closed (1 November 2021) | Viewed by 40267

Special Issue Editor

Dr. Ben Gorte, Research Associate, Faculty of Built Environment, UNSW, Sydney, Australia
Interests: remote sensing; photogrammetry; 3D reconstruction; change detection; development and application of spatial algorithms

Special Issue Information

Dear Colleagues,

Countless articles in the remote sensing literature open their introductions by articulating the importance of the field in a changing world. Detecting changes and keeping track of them over time is a key motivation for remote sensing activities. Data of ever-increasing accuracy and resolution, both spatial and temporal, from terrestrial, airborne, and space-borne sensors are being made widely available, and in recent years, the production of three-dimensional information has received much attention in geoinformation research and production. A large base of 3D city models already exists. Rather than producing new information from scratch when new data become available, taking existing information into account as much as possible may help to reduce cost and effort. In unchanged areas, the existing information becomes more valuable when confirmed by new data; in changed situations, updating the information may still be a challenge, although to a much more limited extent.

Research and development addressing this kind of scenario is sought for this Special Issue on 3D City Modelling and Change Detection Using Remote Sensing Data. Other examples of research within its scope include work in which changes are detected in multitemporal 3D datasets, such as point clouds from LiDAR and photogrammetry. Furthermore, work on integrating (existing) 3D models with (new) designs in building information models (BIM) will certainly be considered, as well as contributions on the application side, where 3D changes trigger actions concerning planning, policy, and management.

Dr. Ben Gorte
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found on the MDPI website.

Published Papers (10 papers)


Research

24 pages, 5274 KiB  
Article
An Efficient Lightweight Neural Network for Remote Sensing Image Change Detection
by Kaiqiang Song, Fengzhi Cui and Jie Jiang
Remote Sens. 2021, 13(24), 5152; https://doi.org/10.3390/rs13245152 - 18 Dec 2021
Cited by 20 | Viewed by 3885
Abstract
Remote sensing (RS) image change detection (CD) is a critical technique for detecting land surface changes in Earth observation. Deep learning (DL)-based approaches have gained popularity and made remarkable progress in change detection. Recent advances in DL-based methods mainly focus on enhancing the feature representation ability to improve performance. However, deeper networks incorporating attention-based or multiscale context-based modules involve a large number of network parameters and require more inference time. In this paper, we first propose an effective network called 3M-CDNet, which requires about 3.12 M parameters, for accuracy improvement. Furthermore, a lightweight variant called 1M-CDNet, which requires only about 1.26 M parameters, is proposed for computational efficiency under limited computing power. 3M-CDNet and 1M-CDNet share the same backbone network architecture but use different classifiers. Specifically, the application of deformable convolutions (DConv) in the lightweight backbone gives the model a good geometric transformation modeling capacity for change detection. A two-level feature fusion strategy is applied to improve the feature representation. In addition, the classifier, which has a plain design to facilitate inference speed, applies dropout regularization to improve generalization ability. Online data augmentation (DA) is also applied to alleviate overfitting during model training. Extensive experiments have been conducted on several public datasets for performance evaluation, and ablation studies have proved the effectiveness of the core components. The results demonstrate that the proposed networks achieve performance improvements over state-of-the-art methods. Specifically, 3M-CDNet achieved the best F1-score on two datasets, i.e., LEVIR-CD (0.9161) and Season-Varying (0.9749). Compared with existing methods, 1M-CDNet also achieved a higher F1-score, i.e., LEVIR-CD (0.9118) and Season-Varying (0.9680). In addition, the runtime of 1M-CDNet is superior to that of most methods, exhibiting a better trade-off between accuracy and efficiency. Full article
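As a hedged illustration (not the authors' code), the F1-score reported above for binary change maps can be computed from pixel-wise true positives, false positives, and false negatives:

```python
import numpy as np

def change_f1(pred: np.ndarray, ref: np.ndarray) -> float:
    """F1-score of a predicted binary change mask against a reference mask."""
    pred = pred.astype(bool)
    ref = ref.astype(bool)
    tp = np.logical_and(pred, ref).sum()    # correctly detected changes
    fp = np.logical_and(pred, ~ref).sum()   # falsely flagged as changed
    fn = np.logical_and(~pred, ref).sum()   # missed changes
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

pred = np.array([[1, 0], [1, 1]])
ref  = np.array([[1, 0], [0, 1]])
score = change_f1(pred, ref)   # tp=2, fp=1, fn=0 -> P=2/3, R=1, F1=0.8
```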
(This article belongs to the Special Issue 3D City Modelling and Change Detection Using Remote Sensing Data)

24 pages, 13296 KiB  
Article
Urban Building Mesh Polygonization Based on 1-Ring Patch and Topology Optimization
by Li Yan, Yao Li and Hong Xie
Remote Sens. 2021, 13(23), 4777; https://doi.org/10.3390/rs13234777 - 25 Nov 2021
Cited by 5 | Viewed by 2440
Abstract
With the development of UAV and oblique photogrammetry technology, multi-view stereo imagery has become an important data source for 3D urban reconstruction, and the surface meshes generated from it have become a common way to represent building surface models due to their high geometric similarity and strong shape representation ability. However, owing to data quality problems and the lack of building structure information in multi-view stereo image sources, generating simplified polygonal models from building surface meshes with high data redundancy and fuzzy structural boundaries remains a huge challenge, suffering from high time consumption, low accuracy, and poor robustness. In this paper, an improved mesh representation strategy based on 1-ring patches is proposed, and topological validity is improved on this basis. Experimental results show that our method can reconstruct concise, manifold, and watertight surface models of different buildings while improving processing efficiency, parameter adaptability, and model quality. Full article
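For readers unfamiliar with the 1-ring terminology, here is a minimal sketch (illustrative names only, not the paper's implementation) of collecting the 1-ring patch of a vertex in a triangle mesh given as a face list: the faces incident to the vertex and its ring of neighbouring vertices.

```python
from collections import defaultdict

faces = [(0, 1, 2), (0, 2, 3), (0, 3, 1)]   # tiny triangle fan around vertex 0

# index: vertex -> list of incident face ids
vertex_faces = defaultdict(list)
for fi, f in enumerate(faces):
    for v in f:
        vertex_faces[v].append(fi)

def one_ring(v):
    """Faces incident to v (the 1-ring patch) and the ring of neighbour vertices."""
    incident = vertex_faces[v]
    ring = {u for fi in incident for u in faces[fi]} - {v}
    return incident, ring

incident, ring = one_ring(0)   # all three faces; neighbours {1, 2, 3}
```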
(This article belongs to the Special Issue 3D City Modelling and Change Detection Using Remote Sensing Data)

15 pages, 6105 KiB  
Article
Dual-Task Semantic Change Detection for Remote Sensing Images Using the Generative Change Field Module
by Shao Xiang, Mi Wang, Xiaofan Jiang, Guangqi Xie, Zhiqi Zhang and Peng Tang
Remote Sens. 2021, 13(16), 3336; https://doi.org/10.3390/rs13163336 - 23 Aug 2021
Cited by 28 | Viewed by 3733
Abstract
With the advent of very-high-resolution remote sensing images, semantic change detection (SCD) based on deep learning has become a research hotspot in recent years. SCD aims to observe changes on the Earth's land surface and plays a vital role in monitoring the ecological environment, land use and land cover. Existing research mainly focuses on single-task semantic change detection, and such methods are incapable of identifying which change type has occurred in each multi-temporal image. In addition, few methods use the binary change region to help train a deep SCD-based network. Hence, we propose a dual-task semantic change detection network (GCF-SCD-Net) that uses the generative change field (GCF) module to locate and segment the change region; moreover, the proposed network is end-to-end trainable. To counter the influence of imbalanced labels, we also propose a separable loss function to alleviate the over-fitting problem. Extensive experiments are conducted in this work to validate the performance of our method. Finally, our work achieves a 69.9% mIoU and a 17.9 SeK on the SECOND dataset. Compared with traditional networks, GCF-SCD-Net achieves the best results and a promising performance. Full article
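The mIoU figure quoted above is a standard semantic segmentation score. As a hedged sketch (not the authors' evaluation code), it can be computed from a class confusion matrix:

```python
import numpy as np

def mean_iou(conf: np.ndarray) -> float:
    """Mean intersection-over-union over classes, from a confusion matrix
    whose rows are reference classes and columns are predicted classes."""
    tp = np.diag(conf).astype(float)
    union = conf.sum(axis=0) + conf.sum(axis=1) - tp
    iou = np.divide(tp, union, out=np.zeros_like(tp), where=union > 0)
    return float(iou[union > 0].mean())   # average over classes that occur

conf = np.array([[5, 1],
                 [2, 4]])
miou = mean_iou(conf)   # IoU = 5/8 and 4/7, mean ~ 0.598
```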
(This article belongs to the Special Issue 3D City Modelling and Change Detection Using Remote Sensing Data)

29 pages, 7735 KiB  
Article
Change Detection in Urban Point Clouds: An Experimental Comparison with Simulated 3D Datasets
by Iris de Gélis, Sébastien Lefèvre and Thomas Corpetti
Remote Sens. 2021, 13(13), 2629; https://doi.org/10.3390/rs13132629 - 4 Jul 2021
Cited by 25 | Viewed by 5135
Abstract
In the context of rapid urbanization, monitoring the evolution of cities is crucial. To do so, 3D change detection and characterization is of capital importance since, unlike 2D images, 3D data contain vertical information of utmost importance to monitoring city evolution (which occurs along both horizontal and vertical axes). Urban 3D change detection has thus received growing attention, and various methods have been published on the topic. Nevertheless, no quantitative comparison on a public dataset has been reported yet. This study presents an experimental comparison of six methods: three traditional (difference of DSMs, C2C and M3C2), one machine learning with hand-crafted features (a random forest model with a stability feature) and two deep learning (feed-forward and Siamese architectures). In order to compare these methods, we prepared five sub-datasets containing simulated pairs of 3D annotated point clouds with different characteristics: from high to low resolution, with various levels of noise. The methods were tested on each sub-dataset for binary and multi-class segmentation. For supervised methods, we also assessed the transfer learning capacity and the influence of the training set size. The methods we used provide various kinds of results (2D pixels, 2D patches or 3D points), and each of them is impacted by the resolution of the point clouds. However, while the performance of deep learning methods depends highly on the size of the training set, they seem to be less impacted by training on datasets with different characteristics. Conversely, conventional machine learning methods exhibit stable results, even with smaller training sets, but offer limited transfer learning capacity. While the main changes in our datasets were usually identified, there were still numerous instances of false detection, especially in dense urban areas, thereby calling for further development in this field. 
To assist such developments, we provide a public dataset composed of pairs of point clouds of different qualities together with their change-related annotations. This dataset was built with an original simulation tool which allows one to generate bi-temporal urban point clouds under various conditions. Full article
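As a hedged sketch of the simplest baseline compared above, the cloud-to-cloud (C2C) approach assigns each point of the newer epoch the distance to its nearest neighbour in the older epoch; large distances suggest change. The synthetic data and the 1 m threshold here are purely illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
xy = rng.uniform(0, 10, size=(500, 2))
epoch1 = np.column_stack([xy, np.zeros(500)])   # flat ground at z = 0
epoch2 = epoch1.copy()
epoch2[:50, 2] = 5.0                            # simulate a new 5 m structure

dist, _ = cKDTree(epoch1).query(epoch2)         # C2C nearest-neighbour distances
changed = dist > 1.0                            # binary change mask
```

Note that C2C, as discussed in the paper, has no notion of change direction or class; it only flags geometric discrepancy.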
(This article belongs to the Special Issue 3D City Modelling and Change Detection Using Remote Sensing Data)

29 pages, 8260 KiB  
Article
A Disparity Refinement Algorithm for Satellite Remote Sensing Images Based on Mean-Shift Plane Segmentation
by Zhihui Li, Jiaxin Liu, Yang Yang and Jing Zhang
Remote Sens. 2021, 13(10), 1903; https://doi.org/10.3390/rs13101903 - 13 May 2021
Cited by 1 | Viewed by 2495
Abstract
Objects in satellite remote sensing image sequences often undergo large deformations, making stereo matching of such images so difficult that the matching rate generally drops. A disparity refinement method is therefore needed to correct and fill the disparity. This paper proposes a disparity refinement method based on the results of plane segmentation. The plane segmentation algorithm includes two steps: initial segmentation based on mean-shift, followed by alpha-expansion-based energy minimization. According to the results of plane segmentation and fitting, the disparity is refined by filling missed matching regions and removing outliers. The experimental results showed that the proposed plane segmentation method can not only accurately fit planes in the presence of noise but also approximate curved surfaces by plane combinations. After the proposed plane segmentation method was applied to the disparity refinement of remote sensing images, many missed matches were filled and the elevation errors were reduced, demonstrating that the proposed algorithm is effective. Because evaluation is difficult owing to the significant variations among remote sensing images from different satellites, the edge matching rate and the edge matching map are proposed as new stereo matching evaluation and analysis tools. Experimental results showed that they are easy to use, intuitive, and effective. Full article
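The "fit a plane per segment, then fill holes" idea can be sketched as follows (a minimal illustration, not the paper's algorithm): fit d = a·x + b·y + c to the valid disparities of one segment and evaluate the plane at pixels where matching failed (marked NaN).

```python
import numpy as np

def refine_segment(x, y, d):
    """Least-squares plane fit to valid disparities; fill NaN holes from the plane."""
    valid = ~np.isnan(d)
    A = np.column_stack([x[valid], y[valid], np.ones(valid.sum())])
    coef, *_ = np.linalg.lstsq(A, d[valid], rcond=None)   # [a, b, c]
    filled = d.copy()
    B = np.column_stack([x[~valid], y[~valid], np.ones((~valid).sum())])
    filled[~valid] = B @ coef
    return filled, coef

x = np.array([0., 1., 2., 3.])
y = np.array([0., 0., 1., 1.])
d = np.array([1., 3., np.nan, 9.])        # samples of the plane d = 2x + 2y + 1
filled, coef = refine_segment(x, y, d)    # hole filled with 2*2 + 2*1 + 1 = 7
```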
(This article belongs to the Special Issue 3D City Modelling and Change Detection Using Remote Sensing Data)

22 pages, 9525 KiB  
Article
Snake-Based Model for Automatic Roof Boundary Extraction in the Object Space Integrating a High-Resolution Aerial Images Stereo Pair and 3D Roof Models
by Michelle S. Y. Ywata, Aluir P. Dal Poz, Milton H. Shimabukuro and Henrique C. de Oliveira
Remote Sens. 2021, 13(8), 1429; https://doi.org/10.3390/rs13081429 - 7 Apr 2021
Cited by 5 | Viewed by 2483
Abstract
The accelerated urban development over the last decades has made it necessary to update spatial information rapidly and constantly. Therefore, cities’ three-dimensional models have been widely used as a study base for various urban problems. However, although many efforts have been made to develop new building extraction methods, reliable and automatic extraction is still a major challenge for the remote sensing and computer vision communities, mainly due to the complexity and variability of urban scenes. This paper presents a method to extract building roof boundaries in the object space by integrating a high-resolution aerial images stereo pair, three-dimensional roof models reconstructed from light detection and ranging (LiDAR) data, and contextual information of the scenes involved. The proposed method focuses on overcoming three types of common problems that can disturb the automatic roof extraction in the urban environment: perspective occlusions caused by high buildings, occlusions caused by vegetation covering the roof, and shadows that are adjacent to the roofs, which can be misinterpreted as roof edges. For this, an improved Snake-based mathematical model is developed considering the radiometric and geometric properties of roofs to represent the roof boundary in the image space. A new approach for calculating the corner response and a shadow compensation factor was added to the model. The created model is then adapted to represent the boundaries in the object space considering a stereo pair of aerial images. Finally, the optimal polyline, representing a selected roof boundary, is obtained by optimizing the proposed Snake-based model using a dynamic programming (DP) approach considering the contextual information of the scene. 
The results showed that the proposed method works properly in the boundary extraction of roofs with occlusion and shadow areas, presenting completeness and correctness average values above 90%, RMSE average values below 0.5 m for the E and N components, and below 1 m for the H component. Full article
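For orientation, the internal energy of a discrete snake penalizes stretching (first differences) and bending (second differences); the paper's model adds image terms, a corner response and a shadow compensation factor not reproduced in this hedged sketch.

```python
import numpy as np

def internal_energy(pts, alpha=1.0, beta=1.0):
    """Discrete snake internal energy: elasticity (alpha) plus rigidity (beta)."""
    d1 = np.diff(pts, axis=0)          # stretching between consecutive points
    d2 = np.diff(pts, n=2, axis=0)     # bending at interior points
    return alpha * (d1 ** 2).sum() + beta * (d2 ** 2).sum()

straight = np.array([[0., 0.], [1., 0.], [2., 0.]])
bent = np.array([[0., 0.], [1., 1.], [2., 0.]])
e_straight = internal_energy(straight)   # 2.0: no bending term
e_bent = internal_energy(bent)           # 8.0: stretching and bending both higher
```

Minimizing this energy (together with image-derived external terms) is what pulls the polyline onto a smooth roof boundary.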
(This article belongs to the Special Issue 3D City Modelling and Change Detection Using Remote Sensing Data)

22 pages, 9615 KiB  
Article
Unsupervised Building Instance Segmentation of Airborne LiDAR Point Clouds for Parallel Reconstruction Analysis
by Yongjun Zhang, Wangshan Yang, Xinyi Liu, Yi Wan, Xianzhang Zhu and Yuhui Tan
Remote Sens. 2021, 13(6), 1136; https://doi.org/10.3390/rs13061136 - 17 Mar 2021
Cited by 18 | Viewed by 3491
Abstract
Efficient building instance segmentation is necessary for many applications such as parallel reconstruction, management and analysis. However, most of the existing instance segmentation methods still suffer from low completeness, low correctness and low quality for building instance segmentation, which is especially obvious for complex building scenes. This paper proposes a novel unsupervised building instance segmentation (UBIS) method for airborne Light Detection and Ranging (LiDAR) point clouds for parallel reconstruction analysis, which combines a clustering algorithm and a novel model consistency evaluation method. The proposed method first divides building point clouds into building instances using the improved kd tree 2D shared nearest neighbor clustering algorithm (Ikd-2DSNN). Then, the geometric feature of the building instance is obtained using the model consistency evaluation method, which is used to determine whether the building instance is a single building instance or a multi-building instance. Finally, for multi-building instances, the improved kd tree 3D shared nearest neighbor clustering algorithm (Ikd-3DSNN) is used to divide them again to improve the accuracy of building instance segmentation. Our experimental results demonstrate that the proposed UBIS method achieved good performance for various buildings in different scenes, such as high-rise buildings, podium buildings, and a residential area with detached houses. A comparative analysis confirms that the proposed UBIS method performed better than state-of-the-art methods. Full article
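The shared-nearest-neighbour (SNN) idea behind the clustering step can be sketched as follows (a generic illustration, not the paper's Ikd-2DSNN/Ikd-3DSNN algorithms): two points are considered similar when their k-nearest-neighbour sets overlap strongly.

```python
import numpy as np
from scipy.spatial import cKDTree

def snn_similarity(points, k=3):
    """Pairwise shared-nearest-neighbour counts over k-NN sets."""
    _, idx = cKDTree(points).query(points, k=k + 1)   # +1 because self is included
    neigh = [set(row[1:]) for row in idx]             # drop self from each set
    n = len(points)
    sim = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(i + 1, n):
            sim[i, j] = sim[j, i] = len(neigh[i] & neigh[j])
    return sim

pts = np.array([[0., 0.], [0., 1.], [1., 0.],
                [10., 10.], [10., 11.], [11., 10.]])   # two tight clusters
sim = snn_similarity(pts, k=2)
# points within a cluster share neighbours; points across clusters share none
```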
(This article belongs to the Special Issue 3D City Modelling and Change Detection Using Remote Sensing Data)

24 pages, 7248 KiB  
Article
Combined Rule-Based and Hypothesis-Based Method for Building Model Reconstruction from Photogrammetric Point Clouds
by Linfu Xie, Han Hu, Qing Zhu, Xiaoming Li, Shengjun Tang, You Li, Renzhong Guo, Yeting Zhang and Weixi Wang
Remote Sens. 2021, 13(6), 1107; https://doi.org/10.3390/rs13061107 - 14 Mar 2021
Cited by 25 | Viewed by 3572
Abstract
Three-dimensional (3D) building models play an important role in digital cities and have numerous potential applications in environmental studies. In recent years, the photogrammetric point clouds obtained from aerial oblique images have become a major source of data for 3D building reconstruction. Aiming to reconstruct 3D building models at Level of Detail (LoD) 2 and even LoD3 with preferred geometric accuracy and affordable computational expense, in this paper we propose a novel method for the efficient reconstruction of building models from photogrammetric point clouds that combines rule-based and hypothesis-based methods using a two-stage topological recovery process. Given the point clouds of a single building, planar primitives and their corresponding boundaries are extracted and regularized to obtain abstracted building contours. In the first stage, we take advantage of the regularity and adjacency of the building contours to recover parts of the topological relationships between different primitives. Three constraints, namely the pairwise constraint, triplet constraint, and nearby constraint, are utilized to form an initial reconstruction with candidate faces in ambiguous areas. In the second stage, the topologies in ambiguous areas are removed and reconstructed by solving an integer linear optimization problem based on the initial constraints while considering the data-fitting degree. Experiments using real datasets reveal that, compared with state-of-the-art methods, the proposed method can efficiently reconstruct 3D building models in seconds with geometric accuracy at the decimeter level. Full article
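The second-stage selection problem has the shape of a constrained binary optimization: keep or discard each candidate face so as to minimize a data-fitting cost under hard topological constraints. The paper solves this with integer linear programming; the toy brute-force search below (costs, reward, and conflict pairs all invented for illustration) only shows the structure of such a problem.

```python
from itertools import product

cost = [0.2, 0.9, 0.4, 0.1]        # data-fitting cost if the face is kept
keep_reward = 0.5                  # reward for covering the surface
conflicts = [(0, 1)]               # faces 0 and 1 cannot both be kept

best, best_obj = None, float("inf")
for labels in product([0, 1], repeat=len(cost)):
    if any(labels[a] and labels[b] for a, b in conflicts):
        continue                   # violates a topological constraint
    # net objective: keeping a face pays off when its cost is below the reward
    obj = sum(l * (c - keep_reward) for l, c in zip(labels, cost))
    if obj < best_obj:
        best, best_obj = labels, obj

# best == (1, 0, 1, 1): faces 0, 2, 3 kept; face 1 too costly and in conflict
```

A real solver scales this to many faces; the exhaustive loop here is only viable for a handful of candidates.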
(This article belongs to the Special Issue 3D City Modelling and Change Detection Using Remote Sensing Data)

13 pages, 11306 KiB  
Article
Deep-Learning-Based Classification of Point Clouds for Bridge Inspection
by Hyunsoo Kim and Changwan Kim
Remote Sens. 2020, 12(22), 3757; https://doi.org/10.3390/rs12223757 - 16 Nov 2020
Cited by 40 | Viewed by 6076
Abstract
Conventional bridge maintenance requires significant time and effort because it involves manual inspection, and two-dimensional drawings are used to record any damage. For this reason, a process that identifies the location of the damage in three-dimensional space and classifies the bridge components involved is required. In this study, three deep-learning models—PointNet, PointCNN, and Dynamic Graph Convolutional Neural Network (DGCNN)—were compared to classify the components of bridges. Point cloud data were acquired from three types of bridge (Rahmen, girder, and gravity bridges) to determine the optimal model for use across all three types. Three-fold cross-validation was employed, with overall accuracy and intersection over union used as the performance measures. The mean intersection over union value of DGCNN is 86.85%, which is higher than those of PointNet (84.29%) and PointCNN (74.68%). The accurate classification of a bridge component based on its relationship with the surrounding components may assist in identifying whether the damage to a bridge affects a structurally important main component. Full article
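The three-fold cross-validation protocol mentioned above can be sketched generically (illustrative names, not the study's pipeline): each fold serves once as the test set while the remaining folds form the training set.

```python
import numpy as np

def k_fold_indices(n, k=3, seed=0):
    """Yield (train, test) index arrays for k-fold cross-validation."""
    idx = np.random.default_rng(seed).permutation(n)
    folds = np.array_split(idx, k)
    for i in range(k):
        test = folds[i]
        train = np.concatenate([f for j, f in enumerate(folds) if j != i])
        yield train, test

splits = list(k_fold_indices(9, k=3))
# every sample appears in exactly one test fold across the three splits
```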
(This article belongs to the Special Issue 3D City Modelling and Change Detection Using Remote Sensing Data)

29 pages, 24617 KiB  
Article
Super-Resolution-Based Snake Model—An Unsupervised Method for Large-Scale Building Extraction Using Airborne LiDAR Data and Optical Image
by Thanh Huy Nguyen, Sylvie Daniel, Didier Guériot, Christophe Sintès and Jean-Marc Le Caillec
Remote Sens. 2020, 12(11), 1702; https://doi.org/10.3390/rs12111702 - 26 May 2020
Cited by 20 | Viewed by 4651
Abstract
Automatic extraction of buildings in urban and residential scenes has become a subject of growing interest in the domain of photogrammetry and remote sensing, particularly since the mid-1990s. The active contour model, colloquially known as the snake model, has been studied to extract buildings from aerial and satellite imagery. However, this task is still very challenging due to the variability of building size and shape and the complexity of the surrounding environment. This complexity is a major obstacle to reliable large-scale building extraction, since the prior information and assumptions on buildings, such as shape, size, and color, cannot be generalized over large areas. This paper presents an efficient snake model to overcome this challenge, called the Super-Resolution-based Snake Model (SRSM). The SRSM operates on high-resolution Light Detection and Ranging (LiDAR)-based elevation images, called z-images, generated by a super-resolution process applied to LiDAR data. The involved balloon force model is also improved to shrink or inflate adaptively, instead of inflating continuously. The method is applicable at large scales, such as city scale and even larger, while having a high level of automation and requiring neither prior knowledge nor training data from the urban scenes (hence unsupervised). It achieves high overall accuracy when tested on various datasets. For instance, the proposed SRSM yields an average area-based Quality of 86.57% and object-based Quality of 81.60% on the ISPRS Vaihingen benchmark datasets. Compared with other methods using this benchmark dataset, this level of accuracy is highly desirable even for a supervised method. Similarly desirable outcomes are obtained when carrying out the proposed SRSM on the whole City of Quebec (total area of 656 km²), yielding an area-based Quality of 62.37% and an object-based Quality of 63.21%. Full article
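A minimal sketch of the z-image idea (the SRSM's super-resolution step is not reproduced; the gridding function, names, and cell size are illustrative): rasterize LiDAR points into an elevation grid by keeping the highest z per cell.

```python
import numpy as np

def z_image(points, cell=1.0):
    """Build an elevation grid from an (n, 3) point array, max z per cell."""
    ix = (points[:, 0] // cell).astype(int)
    iy = (points[:, 1] // cell).astype(int)
    img = np.full((ix.max() + 1, iy.max() + 1), np.nan)
    for x, y, z in zip(ix, iy, points[:, 2]):
        if np.isnan(img[x, y]) or z > img[x, y]:
            img[x, y] = z                      # highest return wins
    return img

pts = np.array([[0.2, 0.3, 1.0],
                [0.7, 0.1, 4.0],               # same cell as above, higher point
                [1.5, 0.5, 2.0]])
img = z_image(pts)                             # 2 x 1 grid of elevations
```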
(This article belongs to the Special Issue 3D City Modelling and Change Detection Using Remote Sensing Data)
