Data Fusion for Urban Applications

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Urban Remote Sensing".

Deadline for manuscript submissions: closed (28 February 2023) | Viewed by 51122

Special Issue Editors


Dr. Stefan Auer
Guest Editor
German Aerospace Center (DLR), Remote Sensing Technology Institute, Muenchener Strasse 20, 82234 Wessling-Oberpfaffenhofen, Germany
Interests: remote sensing; image analysis; interpretation; data fusion; simulation

PD Dr. Michael Schmitt
Guest Editor
Signal Processing in Earth Observation, Technical University of Munich, 80333 Munich, Germany
Interests: remote sensing; data fusion; machine learning; geospatial data science

Special Issue Information

Dear Colleagues,

Remote sensing applications for urban areas are of major importance, as the majority of the human population is concentrated in these regions. In the reported literature, data from different sensors have been acquired, e.g., to characterize the nature of city areas, monitor urban development over time, or detect changes after unexpected events. In this context, the unification of information from multimodal sensors for urban applications has always been desirable, but it remains hard to achieve. On the one hand, finding appropriate strategies for combining the complementary information is difficult. On the other hand, the assignment of multimodal information to entities of urban scenes is often ambiguous. Increasing the spatial resolution of image data does not necessarily help; it may even tighten the conditions for useful solutions.

This Special Issue is devoted to strategies and methods for fusing multimodal data in the context of urban remote sensing. As a general guideline, complementary sources should be combined to gain improved information about urban areas.

Submitting authors are encouraged to address one or more of the following topics in the context of remote sensing data (the list is not exhaustive):

  • Enhancement of urban applications through exploitation of complementary information provided by data from multiple sensors, multiple sources and multi-temporal acquisitions;
  • Integration of external prior knowledge into urban remote sensing;
  • Fusion of information from remote sensing and non-typical Earth observation data sources (terrestrial data, data from social media, etc.) for improved understanding of urban problems;
  • 2-D, 3-D and multi-dimensional data fusion for urban analysis;
  • Multi-view fusion for exploiting different perspectives on urban elements;
  • Data fusion for urban tasks conducted at the data, feature, or decision level (see the illustrative sketch after this list);
  • Urban applications on different resolution levels (spatial, spectral, temporal).
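
The distinction between data-, feature-, and decision-level fusion can be made concrete with a small illustrative sketch. This is not part of the original call; the toy arrays, feature choices, and classifier are assumptions made purely for illustration:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
optical = rng.random((100, 4))       # toy per-pixel optical bands (hypothetical)
sar = rng.random((100, 2))           # toy per-pixel SAR channels (hypothetical)
labels = rng.integers(0, 2, 100)     # toy binary land cover labels

# Data-level fusion: stack raw measurements before any processing.
data_level = np.hstack([optical, sar])

# Feature-level fusion: derive features per modality, then concatenate them.
ndvi_like = (optical[:, 3] - optical[:, 2]) / (optical[:, 3] + optical[:, 2] + 1e-6)
sar_ratio = sar[:, 0] / (sar[:, 1] + 1e-6)
feature_level = np.column_stack([ndvi_like, sar_ratio])

# Decision-level fusion: one classifier per modality, outputs combined afterwards.
clf_opt = RandomForestClassifier(random_state=0).fit(optical, labels)
clf_sar = RandomForestClassifier(random_state=0).fit(sar, labels)
decision_level = (clf_opt.predict_proba(optical) + clf_sar.predict_proba(sar)) / 2.0
```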

Dr. Stefan Auer
PD Dr. Michael Schmitt
Dr. Naoto Yokoya
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Data fusion
  • Image fusion
  • Multi-sensor fusion
  • Multi-source fusion
  • Urban applications
  • City monitoring
  • Change detection
  • Multi-resolution data
  • Multi-temporal data
  • Multi-spectral data
  • Accuracy assessment

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (11 papers)


Research

28 pages, 1754 KiB  
Article
Multilevel Data and Decision Fusion Using Heterogeneous Sensory Data for Autonomous Vehicles
by Henry Alexander Ignatious, Hesham El-Sayed and Parag Kulkarni
Remote Sens. 2023, 15(9), 2256; https://doi.org/10.3390/rs15092256 - 24 Apr 2023
Cited by 8 | Viewed by 2655
Abstract
Autonomous vehicles (AVs) are predicted to change transportation; however, it is still difficult to maintain robust situation awareness in a variety of driving situations. To enhance AV perception, methods to integrate sensor data from camera, radar, and LiDAR sensors have been proposed. However, due to rigidity in their fusion implementations, current techniques are not sufficiently robust in challenging driving scenarios (such as inclement weather, poor light, and sensor obstruction). These techniques can be divided into two main groups: (i) early fusion, which is ineffective when sensor data are distorted or noisy, and (ii) late fusion, which cannot take advantage of characteristics from numerous sensors and hence yields sub-optimal estimates. To overcome these limitations, we propose a flexible selective sensor fusion framework that learns to recognize the current driving environment and fuses the optimal sensor combinations, enhancing robustness without sacrificing efficiency. The proposed framework dynamically simulates early fusion, late fusion, and mixtures of both, allowing for a quick decision on the best fusion approach. The framework includes versatile modules for pre-processing heterogeneous data such as numeric, alphanumeric, image, and audio data, selecting appropriate features, and efficiently fusing the selected features. Furthermore, versatile object detection and classification models are proposed to detect and categorize objects accurately. Advanced ensembling, gating, and filtering techniques are introduced to select the optimal object detection and classification model. In addition, innovative methodologies are proposed to create an accurate context and decision rules. Widely used datasets such as KITTI, nuScenes, and RADIATE are used in the experimental analysis to evaluate the proposed models. The proposed model performed well in both data-level and decision-level fusion activities and outperformed other fusion models in terms of accuracy and efficiency.
(This article belongs to the Special Issue Data Fusion for Urban Applications)
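
As a rough illustration of the selective-fusion idea the abstract describes (a learned gate weighting an early-fusion branch against a late-fusion branch), here is a hypothetical PyTorch sketch. Module names, dimensions, and the gating design are invented for this example and do not reproduce the authors' framework:

```python
import torch
import torch.nn as nn

class SelectiveFusion(nn.Module):
    """Toy gate that mixes early (feature-level) and late (decision-level) fusion."""
    def __init__(self, feat_dim: int, num_classes: int):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(2 * feat_dim, 2), nn.Softmax(dim=-1))
        self.early_head = nn.Linear(2 * feat_dim, num_classes)  # fuse features, then classify
        self.cam_head = nn.Linear(feat_dim, num_classes)        # per-sensor heads for late fusion
        self.lidar_head = nn.Linear(feat_dim, num_classes)

    def forward(self, cam_feat, lidar_feat):
        joint = torch.cat([cam_feat, lidar_feat], dim=-1)
        w = self.gate(joint)                    # scene-dependent weights for the two strategies
        early = self.early_head(joint)          # early (feature-level) fusion branch
        late = self.cam_head(cam_feat) + self.lidar_head(lidar_feat)  # late fusion branch
        return w[..., :1] * early + w[..., 1:] * late

logits = SelectiveFusion(256, 10)(torch.randn(4, 256), torch.randn(4, 256))
```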

18 pages, 12133 KiB  
Article
HAFNet: Hierarchical Attentive Fusion Network for Multispectral Pedestrian Detection
by Peiran Peng, Tingfa Xu, Bo Huang and Jianan Li
Remote Sens. 2023, 15(8), 2041; https://doi.org/10.3390/rs15082041 - 12 Apr 2023
Cited by 6 | Viewed by 2496
Abstract
Multispectral pedestrian detection via visible and thermal image pairs has received widespread attention in recent years. It provides a promising multi-modality solution to the challenges of pedestrian detection in low-light environments and occlusion situations. Most existing methods directly blend the results of the two modalities or combine the visible and thermal features via linear interpolation. However, such fusion strategies tend to extract coarse features corresponding to the positions of the different modalities, which may degrade detection performance. To mitigate this, this paper proposes a novel adaptive cross-modality fusion framework, named Hierarchical Attentive Fusion Network (HAFNet), which fully exploits multispectral attention knowledge to guide pedestrian detection in the decision-making process. Concretely, we introduce a Hierarchical Content-dependent Attentive Fusion (HCAF) module, which extracts top-level features as a guide for pixel-wise blending of the features of the two modalities to enhance the quality of the feature representation, and a plug-in Multi-modality Feature Alignment (MFA) block, which fine-tunes the feature alignment of the two modalities. Experiments on the challenging KAIST and CVC-14 datasets demonstrate the superior performance of our method at satisfactory speed.
(This article belongs to the Special Issue Data Fusion for Urban Applications)
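
The following is a minimal, hypothetical sketch of pixel-wise attentive blending of visible and thermal feature maps in the spirit of the abstract; HAFNet's actual HCAF and MFA modules are considerably more elaborate, and all names and sizes here are placeholders:

```python
import torch
import torch.nn as nn

class PixelwiseAttentiveFusion(nn.Module):
    """Predict a per-pixel weight from both modalities and blend them convexly."""
    def __init__(self, channels: int):
        super().__init__()
        self.attn = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, vis, thermal):
        a = self.attn(torch.cat([vis, thermal], dim=1))  # (B, 1, H, W) weights in [0, 1]
        return a * vis + (1.0 - a) * thermal             # per-pixel convex blend

fused = PixelwiseAttentiveFusion(64)(torch.randn(2, 64, 32, 32), torch.randn(2, 64, 32, 32))
```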

20 pages, 8850 KiB  
Article
FusionRCNN: LiDAR-Camera Fusion for Two-Stage 3D Object Detection
by Xinli Xu, Shaocong Dong, Tingfa Xu, Lihe Ding, Jie Wang, Peng Jiang, Liqiang Song and Jianan Li
Remote Sens. 2023, 15(7), 1839; https://doi.org/10.3390/rs15071839 - 30 Mar 2023
Cited by 20 | Viewed by 5900
Abstract
Accurate and reliable perception systems are essential for autonomous driving and robotics. To achieve this, 3D object detection with multiple sensors is necessary. Existing 3D detectors have significantly improved accuracy by adopting a two-stage paradigm that relies solely on LiDAR point clouds for 3D proposal refinement. However, the sparsity of point clouds, particularly for faraway points, makes it difficult for a LiDAR-only refinement module to recognize and locate objects accurately. To address this issue, we propose a novel multi-modality two-stage approach called FusionRCNN, which effectively and efficiently fuses point clouds and camera images within Regions of Interest (RoI). FusionRCNN adaptively integrates both sparse geometry information from LiDAR and dense texture information from the camera in a unified attention mechanism. Specifically, in the RoI extraction step, FusionRCNN first utilizes RoIPooling to obtain an image set of unified size and obtains the point set by sampling raw points within proposals. It then applies intra-modality self-attention to enhance the domain-specific features, followed by a well-designed cross-attention to fuse the information from the two modalities. FusionRCNN is fundamentally plug-and-play and supports different one-stage methods with almost no architectural changes. Extensive experiments on the KITTI and Waymo benchmarks demonstrate that our method significantly boosts the performance of popular detectors. Remarkably, FusionRCNN improves the strong SECOND baseline by 6.14% mAP on Waymo and outperforms competing two-stage approaches.
(This article belongs to the Special Issue Data Fusion for Urban Applications)
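
A toy sketch of the attention pattern described in the abstract (intra-modality self-attention followed by cross-attention from point tokens to image tokens within an RoI) is given below. It is an assumption-laden stand-in, not FusionRCNN's implementation; token counts and dimensions are invented:

```python
import torch
import torch.nn as nn

class RoICrossModalFusion(nn.Module):
    """Self-attention per modality, then point tokens attend to image tokens."""
    def __init__(self, dim: int = 128, heads: int = 4):
        super().__init__()
        self.point_self = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.image_self = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, point_tokens, image_tokens):
        p, _ = self.point_self(point_tokens, point_tokens, point_tokens)  # intra-modality
        i, _ = self.image_self(image_tokens, image_tokens, image_tokens)
        fused, _ = self.cross(query=p, key=i, value=i)  # points attend to image texture
        return fused + p                                # residual connection

# One RoI: 64 sampled LiDAR points, 49 image patch tokens, 128-d features.
out = RoICrossModalFusion()(torch.randn(1, 64, 128), torch.randn(1, 49, 128))
```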

22 pages, 7516 KiB  
Article
Multitemporal Feature-Level Fusion on Hyperspectral and LiDAR Data in the Urban Environment
by Agnieszka Kuras, Maximilian Brell, Kristian Hovde Liland and Ingunn Burud
Remote Sens. 2023, 15(3), 632; https://doi.org/10.3390/rs15030632 - 20 Jan 2023
Cited by 13 | Viewed by 2480
Abstract
Technological innovations and advanced multidisciplinary research increase the demand for multisensor data fusion in Earth observation. Such fusion has great potential, especially in the remote sensing field, where a single sensor is often insufficient to analyze urban environments comprehensively. Inspired by the capabilities of hyperspectral and Light Detection and Ranging (LiDAR) data in multisensor data fusion at the feature level, we present a novel approach to the multitemporal analysis of urban land cover in a case study in Høvik, Norway. Our generic workflow is based on bitemporal datasets; however, it is designed to include datasets from other years. Our framework extracts representative endmembers in an unsupervised way, retrieves abundance maps that are fed into segmentation algorithms, and detects the main urban land cover classes by implementing a 2D ResU-Net for segmentation without parameter regularization and with effective optimization. This segmentation optimization is based on updating the initial features and providing them to a second iteration of segmentation. We compared segmentation optimization models with and without data augmentation, achieving up to 11% better accuracy after segmentation optimization. In addition, a stable spectral library is automatically generated for each land cover class, allowing local database extension. The main product of the multitemporal analysis is a map update that effectively detects detailed changes in the land cover classes.
(This article belongs to the Special Issue Data Fusion for Urban Applications)
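
As a minimal illustration of feature-level fusion of this kind, the sketch below stacks abundance maps and LiDAR-derived rasters along the channel axis to form a segmentation input. The array names, shapes, and channel choices are invented for the example and are not taken from the paper:

```python
import numpy as np

abundances = np.random.rand(512, 512, 5)   # 5 endmember abundance maps (unmixing output)
ndsm = np.random.rand(512, 512, 1)         # normalized digital surface model from LiDAR
intensity = np.random.rand(512, 512, 1)    # LiDAR intensity raster

# Feature-level fusion: co-registered rasters are stacked along the channel axis,
# giving an (H, W, 7) cube that a 2D ResU-Net-style model could segment.
fused_input = np.concatenate([abundances, ndsm, intensity], axis=-1)
print(fused_input.shape)  # (512, 512, 7)
```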

19 pages, 3482 KiB  
Article
AMFuse: Add–Multiply-Based Cross-Modal Fusion Network for Multi-Spectral Semantic Segmentation
by Haijun Liu, Fenglei Chen, Zhihong Zeng and Xiaoheng Tan
Remote Sens. 2022, 14(14), 3368; https://doi.org/10.3390/rs14143368 - 13 Jul 2022
Cited by 6 | Viewed by 2039
Abstract
Multi-spectral semantic segmentation has shown great advantages under poor illumination conditions, especially for remote scene understanding in autonomous vehicles, since the thermal image can provide information complementary to the RGB image. However, methods to fuse the information from RGB and thermal images are still under-explored. In this paper, we propose a simple but effective module, add–multiply fusion (AMFuse), for RGB and thermal information fusion, consisting of two simple mathematical operations: addition and multiplication. The addition operation focuses on extracting cross-modal complementary features, while the multiplication operation concentrates on the cross-modal common features. Moreover, attention and atrous spatial pyramid pooling (ASPP) modules are incorporated into our proposed AMFuse modules to enhance the multi-scale context information. Finally, in the UNet-style encoder–decoder framework, the ResNet model is adopted as the encoder. In the decoder, the multi-scale information obtained from our proposed AMFuse modules is hierarchically merged layer by layer to restore the feature-map resolution for semantic segmentation. Experiments on RGBT multi-spectral semantic segmentation and salient object detection demonstrate the effectiveness of our proposed AMFuse module for fusing RGB and thermal information.
(This article belongs to the Special Issue Data Fusion for Urban Applications)
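
The add–multiply idea lends itself to a compact sketch. The hypothetical module below follows the abstract's description (addition for cross-modal complementary features, multiplication for common ones) but omits the attention and ASPP blocks of the full AMFuse module; the projection layer and sizes are assumptions:

```python
import torch
import torch.nn as nn

class AddMultiplyFusion(nn.Module):
    """Fuse RGB and thermal feature maps via elementwise add and multiply."""
    def __init__(self, channels: int):
        super().__init__()
        self.proj = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, rgb_feat, thermal_feat):
        added = rgb_feat + thermal_feat         # cross-modal complementary information
        multiplied = rgb_feat * thermal_feat    # cross-modal common information
        return self.proj(torch.cat([added, multiplied], dim=1))

fused = AddMultiplyFusion(64)(torch.randn(2, 64, 60, 80), torch.randn(2, 64, 60, 80))
```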

19 pages, 10834 KiB  
Article
Urban Sprawl and COVID-19 Impact Analysis by Integrating Deep Learning with Google Earth Engine
by Chiara Zarro, Daniele Cerra, Stefan Auer, Silvia Liberata Ullo and Peter Reinartz
Remote Sens. 2022, 14(9), 2038; https://doi.org/10.3390/rs14092038 - 24 Apr 2022
Cited by 5 | Viewed by 4294
Abstract
Timely information on land use, vegetation coverage, and air and water quality is crucial for monitoring and managing territories, especially areas undergoing dynamic urban expansion. However, obtaining accessible, accurate, and reliable information is not an easy task, since the significant increase in remote sensing data volume poses challenges for timely processing and analysis. From this perspective, classical methods for urban monitoring present some limitations, and more innovative technologies, such as artificial-intelligence-based algorithms, must be exploited, together with high-performance cloud platforms and ad hoc pre-processing steps. To this end, this paper presents an approach to the use of cloud-enabled deep-learning technology for urban sprawl detection and monitoring through the fusion of optical and synthetic aperture radar data, integrating the Google Earth Engine cloud platform with deep-learning techniques through the open-source TensorFlow library. The model, based on a U-Net architecture, was applied to evaluate urban changes in Phoenix, the second fastest-growing metropolitan area in the United States. The available ancillary information on newly built areas showed good agreement with the produced change detection maps. Moreover, the results were temporally related to the outbreak of the SARS-CoV-2 (commonly known as COVID-19) pandemic, showing a decrease in urban expansion during the event. The proposed solution may be employed for the efficient management of dynamic urban areas, providing a decision support system to help policy makers measure changes in territories and monitor their impact on phenomena related to urbanization growth and density. The reference data were manually derived by the authors over an area of approximately 216 km2, referring to 2019, based on the visual interpretation of high-resolution images, and are openly available.
(This article belongs to the Special Issue Data Fusion for Urban Applications)
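
A hedged sketch of the optical-SAR stacking step on Google Earth Engine is shown below. The collection IDs are the standard Copernicus assets, but the region, dates, filters, and band choices are placeholders, and the paper's actual preprocessing is more involved:

```python
import ee

ee.Initialize()
aoi = ee.Geometry.Rectangle([-112.3, 33.3, -111.9, 33.7])  # Phoenix area (approximate)

s2 = (ee.ImageCollection('COPERNICUS/S2_SR')
      .filterBounds(aoi).filterDate('2019-01-01', '2019-12-31')
      .filter(ee.Filter.lt('CLOUDY_PIXEL_PERCENTAGE', 10))
      .median().select(['B2', 'B3', 'B4', 'B8']))          # optical composite

s1 = (ee.ImageCollection('COPERNICUS/S1_GRD')
      .filterBounds(aoi).filterDate('2019-01-01', '2019-12-31')
      .filter(ee.Filter.eq('instrumentMode', 'IW'))
      .median().select(['VV', 'VH']))                      # SAR composite

# Fused optical + SAR stack; patches exported from here would feed the U-Net.
stack = s2.addBands(s1).clip(aoi)
```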

23 pages, 11267 KiB  
Article
Impervious Surfaces Mapping at City Scale by Fusion of Radar and Optical Data through a Random Forest Classifier
by Binita Shrestha, Haroon Stephen and Sajjad Ahmad
Remote Sens. 2021, 13(15), 3040; https://doi.org/10.3390/rs13153040 - 3 Aug 2021
Cited by 25 | Viewed by 4065
Abstract
Urbanization increases the amount of impervious surfaces, making accurate information on spatial and temporal expansion trends essential; the challenge is to develop a cost- and labor-effective technique that is compatible with the assessment of multiple geographical locations in developing countries. Several studies have identified the potential of remote sensing and multiple-source information for impervious surface quantification. Therefore, this study fuses datasets from the Sentinel-1 and Sentinel-2 satellites to map the impervious surfaces of nine Pakistani cities and estimate their growth rates from 2016 to 2020 using the random forest algorithm. All bands in the optical and radar images were resampled to 10 m resolution, projected to the same coordinate system, and geometrically aligned and stacked into a single product. The models were then trained, and the classifications were validated with land cover samples from Google Earth's high-resolution images. The overall accuracies of the classified maps ranged from 85% to 98%, with the resultant quantities showing a strong linear relationship (R-squared value of 0.998) with the Copernicus Global Land Service data. There was up to a 9% increase in accuracy and up to a 12% increase in the kappa coefficient for the fused data with respect to optical data alone. A McNemar test confirmed the superiority of the fused data. Finally, the cities had growth rates ranging from 0.5% to 2.5%, with an average of 1.8%. The information obtained can alert urban planners and environmentalists to assess impervious surface impacts in these cities.
(This article belongs to the Special Issue Data Fusion for Urban Applications)
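
The classification step maps naturally onto scikit-learn. Below is a minimal sketch with synthetic arrays standing in for the resampled, co-registered Sentinel-1/2 stack; band counts, split, and forest size are assumptions, not the paper's settings:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, cohen_kappa_score
from sklearn.model_selection import train_test_split

n_pixels = 5000
X = np.random.rand(n_pixels, 12)        # e.g., 10 Sentinel-2 bands + VV + VH per pixel
y = np.random.randint(0, 2, n_pixels)   # 1 = impervious, 0 = pervious (toy labels)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)
rf = RandomForestClassifier(n_estimators=200, random_state=42).fit(X_tr, y_tr)
pred = rf.predict(X_te)
print('OA:', accuracy_score(y_te, pred), 'kappa:', cohen_kappa_score(y_te, pred))
```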

19 pages, 7378 KiB  
Article
Remote Sensing-Based Analysis of Urban Landscape Change in the City of Bucharest, Romania
by Constantin Nistor, Marina Vîrghileanu, Irina Cârlan, Bogdan-Andrei Mihai, Liviu Toma and Bogdan Olariu
Remote Sens. 2021, 13(12), 2323; https://doi.org/10.3390/rs13122323 - 13 Jun 2021
Cited by 9 | Viewed by 4370
Abstract
The paper investigates urban landscape changes over the last 50 years in Bucharest, the capital city of Romania. Bucharest shows a complex structural transformation driven by socialist urban policy, followed by intensive real-estate market development. Our analysis is based on a diachronic set of high-resolution satellite imagery: declassified CORONA KH-4B from 1968, SPOT-1 from 1989, and multisensor stacked layers from Sentinel-1 SAR together with Sentinel-2 MSI from 2018. Three different land cover/use datasets are extracted for the reference years, each revealing its own urban structure pattern. The first illustrates a radiography of the city in the second half of the 20th century, where rural patterns meet modern ones; the second reveals the frame of a city in a full process of transformation, with multiple construction sites, based on the socialist model; the third presents an image of a cosmopolitan city in a process of expansion, with a high degree of landscape heterogeneity. All the datasets are included in a built-up change analysis in order to map and assess the spatial transformations of the city pattern over five decades. To quantify and map the changes, the Built-up Change Index (BCI) is introduced. The results highlight situations linked to the policy development visions of each decade, with major changes of about 50% for different built-up classes. The GIS analysis illustrates two major landscape transformations: from the old semirural structures with houses surrounded by gardens in 1968, to a compact pattern with large districts of blocks of flats in 1989, and to a contemporary city defined by an uncontrolled urban sprawl process in 2018.
(This article belongs to the Special Issue Data Fusion for Urban Applications)

23 pages, 5017 KiB  
Article
An Improved Index for Urban Population Distribution Mapping Based on Nighttime Lights (DMSP-OLS) Data: An Experiment in Riyadh Province, Saudi Arabia
by Mohammed Alahmadi, Shawky Mansour, David Martin and Peter M. Atkinson
Remote Sens. 2021, 13(6), 1171; https://doi.org/10.3390/rs13061171 - 19 Mar 2021
Cited by 22 | Viewed by 4875
Abstract
Knowledge of the spatial pattern of the population is important. Census population data provide insufficient spatial information because they are released only for large geographic areas. Nighttime light (NTL) data have been utilized widely as an effective proxy for population mapping. However, the well-reported challenges of pixel overglow and saturation limit the applicability of the Defense Meteorological Satellite Program Operational Linescan System (DMSP-OLS) for accurate population mapping. This paper integrates three remotely sensed information sources, DMSP-OLS, vegetation, and bare land areas, to develop a novel index called the Vegetation-Bare Adjusted NTL Index (VBANTLI) that overcomes the uncertainties in the DMSP-OLS data. The VBANTLI was applied to Riyadh province to downscale governorate-level census populations for 2004 and 2010 to a gridded surface of 1 km resolution. The experimental results confirmed that the VBANTLI significantly reduces the overglow and saturation effects compared to widely applied indices such as the Human Settlement Index (HSI), Vegetation Adjusted NTL Urban Index (VANUI), and radiance-calibrated NTL (RCNTL). The correlation coefficient between the census population and the RCNTL (R = 0.99) and VBANTLI (R = 0.98) was larger than that for the HSI (R = 0.14) and VANUI (R = 0.81) products. In addition, Model 5 (VBANTLI) was the most accurate model, with R2 and mean relative error (MRE) values of 0.95 and 37%, respectively.
(This article belongs to the Special Issue Data Fusion for Urban Applications)
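
The abstract does not give the VBANTLI formula, so no attempt is made to reproduce it here. For context, the sketch below computes VANUI, one of the comparison indices, whose published form is VANUI = (1 - NDVI) x NTL (Zhang et al., 2013); the arrays are toy data:

```python
import numpy as np

ntl = np.random.rand(100, 100) * 63   # toy DMSP-OLS digital numbers (0-63 range)
ndvi = np.random.rand(100, 100)       # toy annual mean NDVI in [0, 1]

# VANUI damps NTL saturation in vegetated pixels by scaling with (1 - NDVI).
vanui = (1.0 - ndvi) * ntl
```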

26 pages, 12170 KiB  
Article
An Optimized Filtering Method of Massive Interferometric SAR Data for Urban Areas by Online Tensor Decomposition
by Yanan You, Rui Wang and Wenli Zhou
Remote Sens. 2020, 12(16), 2582; https://doi.org/10.3390/rs12162582 - 11 Aug 2020
Cited by 5 | Viewed by 2911
Abstract
The filtering of multi-pass synthetic aperture radar interferometry (InSAR) stack data is a necessary preprocessing step used to improve the accuracy of object-based three-dimensional information inversion in urban areas. InSAR stack data are composed of multi-temporal homogeneous data, which can be regarded as a third-order tensor. The InSAR tensor can be filtered by data fusion, i.e., tensor decomposition, and such filters must balance noise elimination against the preservation of fringe details, especially where fringes change abruptly, e.g., at the edges of urban structures. However, tensor decomposition based on batch processing cannot directly filter a few newly acquired interferograms. Filtering a dynamic InSAR tensor, whose size increases continuously as new interferograms are acquired, is thus an inevitable challenge when processing InSAR stack data. Therefore, based on online CANDECOMP/PARAFAC (CP) decomposition, we propose an online filter, named OLCP-InSAR, to fuse data and process the dynamic InSAR tensor, which performs especially well for urban areas. In this method, the CP rank is utilized to measure tensor sparsity, which maintains the structural features of the InSAR tensor, and CP rank estimation is applied as an important step to improve the robustness of OLCP-InSAR. Importing the CP rank and outlier positions as prior information, the filter fuses the noisy interferograms and decomposes the InSAR tensor to acquire the low-rank information, i.e., the filtered result. Moreover, this method can not only operate on the tensor model but also efficiently filter a newly acquired interferogram as a matrix model with the assistance of the chosen low-rank information. Compared with other tensor-based filters, e.g., high-order robust principal component analysis (HoRPCA) and Kronecker-basis-representation multi-pass SAR interferometry (KBR-InSAR), and with widespread traditional filters operating on a single interferometric pair, e.g., Goldstein, non-local SAR (NL-SAR), non-local InSAR (NL-InSAR), and InSAR nonlocal block-matching 3D (InSAR-BM3D), the effectiveness and robustness of OLCP-InSAR are demonstrated on simulated and real InSAR stack data. In particular, OLCP-InSAR maintains the fringe details at regular building tops under high noise intensity and high outlier ratios.
(This article belongs to the Special Issue Data Fusion for Urban Applications)
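
As a rough, batch-mode illustration of low-rank filtering via CP decomposition, the sketch below uses TensorLy on a simulated real-valued stack. OLCP-InSAR itself is an online method with rank estimation and outlier handling that this example does not reproduce; the stack size and rank are arbitrary choices:

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

# Simulated stack: 20 interferograms of 64x64 pixels (real-valued for simplicity).
stack = np.random.rand(64, 64, 20)
noisy = stack + 0.1 * np.random.randn(64, 64, 20)

cp = parafac(tl.tensor(noisy), rank=5)   # low CP rank captures the dominant structure
filtered = tl.cp_to_tensor(cp)           # low-rank reconstruction = filtered stack
```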

40 pages, 11784 KiB  
Article
A Survey of Change Detection Methods Based on Remote Sensing Images for Multi-Source and Multi-Objective Scenarios
by Yanan You, Jingyi Cao and Wenli Zhou
Remote Sens. 2020, 12(15), 2460; https://doi.org/10.3390/rs12152460 - 31 Jul 2020
Cited by 89 | Viewed by 12011
Abstract
Quantities of multi-temporal remote sensing (RS) images create favorable conditions for exploring long-term urban change. However, diverse multi-source features and change patterns bring challenges to change detection in urban cases. In order to trace the development of urban change detection, we survey the literature on change detection from the last five years, focusing on disparate multi-source RS images and multi-objective scenarios determined according to scene category. Based on this survey, a general change detection framework is summarized, comprising modules for change information extraction, data fusion, and the analysis of multi-objective scenarios. Because the attributes of the input RS images affect the technical choices within each module, the data characteristics and application domains of the different categories of RS images are discussed first. On this basis, we elaborate the evolution and relationships of the representative solutions in each module description and, by emphasizing the feasibility of fusing diverse data and the manifold application scenarios, advocate a complete change detection pipeline. At the end of the paper, we summarize the current state of development and suggest possible research directions for urban change detection, in the hope of providing insights for future research.
(This article belongs to the Special Issue Data Fusion for Urban Applications)
