Article

Urban Flood Detection Using TerraSAR-X and SAR Simulated Reflectivity Maps

by Shadi Sadat Baghermanesh 1, Shabnam Jabari 1,* and Heather McGrath 2

1 Department of Geodesy and Geomatics Engineering, University of New Brunswick, Fredericton, NB E3B5A3, Canada
2 Natural Resources Canada, Ottawa, ON K1A0E4, Canada
* Author to whom correspondence should be addressed.
Remote Sens. 2022, 14(23), 6154; https://doi.org/10.3390/rs14236154
Submission received: 18 September 2022 / Revised: 9 November 2022 / Accepted: 23 November 2022 / Published: 5 December 2022

Abstract: Synthetic Aperture Radar (SAR) imagery is a vital tool for flood mapping thanks to its capability to acquire images day and night in almost any weather and to penetrate cloud cover. In rural areas, SAR backscatter intensity can be used to detect flooded areas accurately; however, the complexity of urban structures makes flood mapping in urban areas a challenging task. In this study, we examine the synergistic use of SAR simulated reflectivity maps and Polarimetric and Interferometric SAR (PolInSAR) features to improve flood mapping in urban environments. We propose a machine learning model employing simulated and PolInSAR features derived from TerraSAR-X images along with five auxiliary features, namely elevation, slope, aspect, distance from the river, and land-use/land-cover, that are well known to contribute to flood mapping. A total of 2450 data points were used to build and evaluate the model over four areas with different vegetation and urban density. The results indicate that using PolInSAR and SAR simulated reflectivity maps together with the five auxiliary features yields a classification overall accuracy of 93.1% in urban areas, a 9.6% improvement over using the five auxiliary features alone.

Graphical Abstract

1. Introduction

1.1. General

Flooding is among the most devastating natural disasters on earth and is the most frequent hazard in Canada, driven mainly by heavy rainfall or rapid snowmelt [1]. With climate change, flooding is expected to increase worldwide, particularly in high-latitude regions such as Canada [2,3,4]. The level of destruction caused by floods is therefore high, especially in urban areas, where damage to infrastructure leads to irreversible losses.
Remote sensing has shown great potential for detecting flooded areas using optical and Synthetic Aperture Radar (SAR) satellite images [2,5,6,7,8,9,10]. However, flood events are very likely to coincide with cloud cover, a condition in which passive optical remote sensing cannot be used. SAR sensors, on the other hand, can image through clouds in nearly all weather conditions, regardless of the time of day or night, which makes them especially useful in flood situations.
In recent years, SAR data have been recognized as a powerful tool for flood mapping [1,6,11,12,13]. SAR satellite images such as Sentinel-1A [5,6,14], TerraSAR-X [7,11,12,15], COSMO-SkyMed [13,16], RADARSAT-2 [17], Radarsat Constellation Mission [1], and ALOS-2/PALSAR-2 [18] have been employed to improve flood detection. Furthermore, sensors with a finer spatial resolution (3 m or better), such as TerraSAR-X, RADARSAT-2, and COSMO-SkyMed, are well suited for urban areas [7,13,17,19].
Several studies have indicated that SAR imagery can perform better than optical imagery in flood mapping [20,21]. In rural flood mapping, the SAR backscatter intensity is a key factor in distinguishing flooded and non-flooded areas, and SAR-based flood detection in rural areas has therefore been extensively explored [12,15,22,23]. In contrast, SAR-based flood detection in urban areas is challenging due to complex urban backscatter patterns, including double bounce, shadow, and layover, which cause misclassification and increase false alarms [5,7,11,13,16,17,18]. Because of this, the majority of SAR-based studies in the literature have focused on rural flood detection and left urban flood detection largely unexplored.
A limited number of studies have used SAR images for urban flood detection. These studies demonstrated that multi-temporal Interferometric SAR (InSAR) coherence together with SAR intensity works efficiently in urban flood mapping [6,7,14,16,24,25,26]. Multi-temporal InSAR coherence has been found to be effective for flood detection even with medium-resolution SAR images such as Sentinel-1A [27]. InSAR coherence is the normalized cross-correlation between two interferometric SAR images; because it is sensitive to any variation in the geometric properties of the scatterers, it responds to flood-induced changes, and it has been suggested that it be used as complementary information alongside other data for flood detection [7]. Additionally, the authors of this paper previously showed that incorporating the multi-temporal InSAR phase improves flood detection in urban areas [28]. The InSAR phase is assumed to remain unchanged as long as the spatial arrangement of objects does not change [29]. We therefore believe that the InSAR phase can be used as a feature in flood detection.
Polarimetric SAR (PolSAR) decompositions are also a powerful tool for detecting complicated urban backscatter patterns, as they present the average scattering mechanism associated with the physical properties of an object [30]. However, polarimetric decompositions cannot be constructed for single polarized images, as they require fully polarized or at least dual polarized data. PolSAR filtering techniques, on the other hand, preserve the polarimetric properties, which have been demonstrated to be affected by any changes to the geophysical properties of the objects (e.g., surface roughness) [31]. Thus, specific scattering mechanisms caused by complex urban structures, such as double bounce, can still be detected with single polarized PolSAR data [30,32,33].
SAR simulated reflectivity maps have been used to identify complex backscatter patterns in urban areas [34]. SAR simulation produces a reliable approximation of real SAR data in the azimuth-range plane based on object geometry, surface characteristics, or certain weather conditions. SAR simulators, in conjunction with Light Detection and Ranging (LiDAR) data, have been proven to resolve layover and shadow, which are sources of geometric distortion in SAR images [6]. Shadowed areas can be misclassified as flooded due to their low backscatter, while layover results in the misclassification of flooded areas as non-flooded due to its strong return. However, simulated reflectivity maps have not been thoroughly investigated in SAR-based flood detection [35].
The purpose of using reflectivity maps is to help reduce the effect of geometric distortions such as shadow and layover in the SAR image, which can be confused with the backscatter patterns of flooded areas. We therefore believe that incorporating simulated reflectivity maps can reduce false alarms in a predictive flood mapping model. The RaySAR simulator is used along with LiDAR data to extract simulated SAR reflectivity maps, including all-reflections, double bounce, shadow, and layover. In summary, urban flood mapping using SAR images is challenging mainly due to: (1) complex infrastructure, which causes different backscatter patterns such as double bounce and triple bounce; and (2) large shadow and layover areas in the SAR images, especially in urban areas with steep terrain, which make double-bounce detection more difficult and may also reduce flooded-area classification accuracy.
The contribution of this study is to examine the synergistic use of multi-temporal Polarimetric and Interferometric SAR (PolInSAR) and SAR simulated reflectivity maps for flood detection in urban areas. High-resolution TerraSAR-X images are used to generate multi-temporal PolInSAR features for detecting floods in urban areas, including SAR intensity, InSAR coherence, InSAR phase, and PolSAR Boxcar filter images. Moreover, the RaySAR simulator is used along with LiDAR data to extract SAR simulated reflectivity maps, including all-reflections, double bounce, shadow, and layover. A set of auxiliary features namely elevation, slope, aspect, distance from the river, and land use/land cover (LULC) are used in the flood detection model. These features have been shown to effectively contribute to flood mapping [36,37]. We used a Random Forest (RF) algorithm to distinguish the flooded and non-flooded areas. The proposed method is tested on the 2017 Ottawa River flood in four different areas consisting of urban and sub-urban textures.
One of the main limitations of this study is the number of control points for model training. Based on the available medium-resolution optical and drone images, only a limited number of labeled flooded data points (i.e., Ground Control Points (GCPs)) could be produced. An equal number of labeled non-flooded data points was generated to ensure a balanced data set, resulting in limited data points for model training. Moreover, the GCP distribution is not uniform, which may affect the universality of the results; thus, data point shuffling is used in this study. Esfandiari et al. (2020) showed that the Random Sample Consensus (RANSAC) method can be used in conjunction with an RF classifier to generate pseudo training data points and address this limitation [33]. Since this issue has already been addressed in the literature, we did not incorporate it in this paper.

1.2. Study Area

This study uses the 2017 Ottawa River flood across the city of Ottawa, the capital of Canada, and Gatineau, Quebec, as shown in Figure 1A, as a case study. The Areas of Interest (AOIs) comprise four areas with different LULC, namely Area1, Area2, Area3, and Area4, as shown in Figure 1B. The 2017 spring flood of Ottawa was primarily caused by exceptionally heavy precipitation and melting snow [38].

1.3. Datasets

This study uses five single polarized SSC StripMap TerraSAR-X images, covering the pre-flood to post-flood periods together with a high-resolution Digital Terrain Model (DTM) for SAR simulation and PolInSAR processing. Historical National Aerial Surveillance Program (NASP) images, captured by a non-metric camera mounted on a drone, are then used to generate the GCPs and evaluate the results. The dataset is shown in Table 1.
PolInSAR data used in this study include multi-temporal phase, multi-temporal coherence, and multi-temporal Boxcar filter images as complementary information to the multi-temporal intensities. For SAR simulation, we utilized the all-reflections, double bounce, shadow, and layover features that we anticipate to be effective for flood mapping in urban areas; the five auxiliary features are elevation, slope, aspect, distance from the river, and LULC. To prepare the elevation layer, we used a high-resolution 1 m LiDAR DTM and resampled it to 3 m to match the TerraSAR-X images. The resampled elevation layer was then used to produce the slope and aspect features in ArcGIS Pro. The river network map was used to generate the distance-from-river layer using the Euclidean distance tool in ArcGIS Pro.
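The slope, aspect, and distance-from-river layers above were produced in ArcGIS Pro; as a rough illustration of the same computations, a NumPy/SciPy sketch might look like the following (the array names, the 3 m cell size default, and the binary river mask input are assumptions for illustration, not the paper's actual inputs):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def slope_aspect(dem, cellsize=3.0):
    """Slope (degrees) and aspect (degrees clockwise from north) from a DEM grid."""
    dzdy, dzdx = np.gradient(dem, cellsize)          # elevation change per metre
    slope = np.degrees(np.arctan(np.hypot(dzdx, dzdy)))
    aspect = np.degrees(np.arctan2(-dzdx, dzdy)) % 360.0
    return slope, aspect

def distance_from_river(river_mask, cellsize=3.0):
    """Euclidean distance (metres) from each pixel to the nearest river pixel."""
    return distance_transform_edt(~river_mask) * cellsize
```

For example, a plane rising 3 m per 3 m cell yields a 45° slope everywhere.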
GCPs were generated by comparing historical NASP images with 2017 historical Google imagery. A total of 2450 GCPs were generated and distributed as follows: 750 points in Area1, 700 in Area2, 300 in Area3, and 700 in Area4, with an equal number of flooded and non-flooded points in each area. Area3 has fewer points than the other areas due to the limited number of flooded points visible when comparing NASP and historical Google images. The flood peak occurred on 7 May 2017, for which no TerraSAR-X image was available; the closest acquisition, captured on 3 May 2017 with the most similar water level to that of 7 May 2017, is therefore the best available candidate for the co-flood image. Moreover, NASP images were not available on 3 May 2017, so images taken on 12 May 2017 were chosen, since the water level on 12 May 2017 was similar to that of 3 May 2017, as shown in Figure 2.

2. Methodology

The overall flowchart of the work is shown in Figure 3. To simplify the explanation of the methodology, we divide the work into the following sections: InSAR, PolSAR, SAR simulation, and Random Forest model, as highlighted in the flowchart. We tested different scenarios combining features from multi-temporal InSAR, PolSAR, and SAR simulation along with auxiliary flood mapping features to find the optimal set that maximizes flood mapping accuracy.

2.1. Interferometric SAR

SAR interferometry utilizes the phase difference between two complex SAR images obtained from slightly different positions and/or at different times. Figure 4 shows the geometry of an along-track satellite InSAR system, where $S_p$ and $S_s$ indicate the primary and secondary locations of the SAR satellites; $H$ is the altitude of the satellite, while $h$ is the elevation of an arbitrary point. The distance between the two satellites in the plane perpendicular to the orbit is called the interferometer baseline ($B$); its projection onto the slant range is the parallel baseline ($B_\parallel$), and its projection perpendicular to the slant range is the perpendicular baseline ($B_\perp$). The incidence angle ($\theta$) is the angle between the SAR beam and the axis perpendicular to the local topography. The tilt angle ($\alpha$) refers to the angle between the horizontal plane and the interferometer baseline.
SAR interferograms are generated by cross-multiplying the primary SAR image with the complex conjugate of the secondary image [13]. As a result, the interferometric phase is the phase difference between the two images, while the interferometric amplitude is the product of the amplitudes of the primary and secondary images. Assuming $R_p$ and $R_s$ are the range distances of the primary and secondary images, the primary phase ($\varphi_p$), secondary phase ($\varphi_s$), and interferogram phase ($\Delta\varphi = \varphi_{int}$) are as follows, where $\lambda$ is the wavelength:
$$\varphi_p = \frac{4\pi}{\lambda} R_p$$
$$\varphi_s = \frac{4\pi}{\lambda} R_s$$
$$\Delta\varphi = \varphi_p - \varphi_s = \frac{4\pi}{\lambda} \Delta R$$
The residual interferogram phase can then be written as follows, where $\varphi_{def}$, the deformation phase, is the phase change due to displacement in the line-of-sight direction, $\varphi_{topo}$ is the phase change due to topography, $\varphi_{atm}$ is the phase change due to atmospheric retardation, $\Delta\varphi_{orb}$ is the residual phase due to orbit errors, and $\varphi_{noise}$ is the remaining phase noise due to other factors such as thermal noise and coregistration errors [39]:
$$\Delta\varphi = \varphi_{int} = \varphi_{def} + \varphi_{topo} + \varphi_{atm} + \Delta\varphi_{orb} + \varphi_{noise}$$
In order to remove the topographic phase, interferograms are terrain corrected using a DEM [39]. As a result, Equation (4) can be written as follows, where $\Delta\varphi_{topo}$ is the residual phase due to DEM errors:
$$\Delta\varphi = \varphi_{int} = \varphi_{def} + \varphi_{atm} + \Delta\varphi_{topo} + \Delta\varphi_{orb} + \varphi_{noise}$$
This absolute interferogram phase is ambiguous and needs to be unwrapped [40]. Unwrapping is the process of removing ambiguity from the wrapped phase. After phase unwrapping, four terms in Equation (5) still need to be corrected to obtain the deformation phase. These four terms, i.e., $\varphi_{atm}$, $\Delta\varphi_{topo}$, $\Delta\varphi_{orb}$, and $\varphi_{noise}$, include spatially correlated and spatially uncorrelated error terms. The spatially correlated parts are assumed to be temporally uncorrelated; therefore, they can be estimated and removed by first applying high-pass filtering in time and then low-pass filtering in space [41]. This yields the deformation phase, from which the InSAR phase features were generated; the spatially uncorrelated error can be modeled as noise [39]. InSAR coherence represents the similarity of radar reflection between two SAR images on a pixel-by-pixel basis and provides a quantitative measure of interferogram noise. Using $N$ independent samples, an estimate of coherence can be calculated as shown in Equation (6), where $u$ represents the complex value of the SAR images and $u^*$ its complex conjugate; this equation was used to generate the InSAR coherence ($\hat{\gamma}$) features used in this study:
$$\hat{\gamma} = \frac{\left| \sum_{i=1}^{N} u_{1i}\, u_{2i}^{*} \right|}{\sqrt{\sum_{i=1}^{N} |u_{1i}|^{2} \sum_{i=1}^{N} |u_{2i}|^{2}}}$$
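As a sketch of Equation (6), the following Python function estimates coherence from two co-registered complex images, drawing the $N$ samples from a small moving window (the 3×3 boxcar multilooking window here is an illustrative choice, not the paper's actual processing setting):

```python
import numpy as np

def insar_coherence(u1, u2, win=3):
    """Estimate InSAR coherence (Eq. 6) pixel-by-pixel from two complex images.

    The N independent samples are drawn from a local win x win window
    around each pixel (simple boxcar multilooking).
    """
    pad = win // 2

    def window_sum(a):
        # Sum each pixel's win x win neighbourhood (edge-padded to keep shape).
        ap = np.pad(a, pad, mode="edge")
        out = np.zeros(a.shape, dtype=a.dtype)
        for i in range(a.shape[0]):
            for j in range(a.shape[1]):
                out[i, j] = ap[i:i + win, j:j + win].sum()
        return out

    num = window_sum(u1 * np.conj(u2))
    den = np.sqrt(window_sum(np.abs(u1) ** 2) * window_sum(np.abs(u2) ** 2))
    return np.abs(num) / den
```

Identical images give coherence 1 everywhere, while independent noise drives the estimate toward zero.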

2.2. Polarimetric SAR

SAR polarimetry involves analyzing the polarization state of an electromagnetic (EM) wave. Different SAR systems transmit and receive EM waves in different polarizations, including horizontal and vertical. Different objects generate different combinations of coherent speckle noise and random scattering effects. As a result, for the classification of SAR data, it is important to calculate the average or dominant scattering mechanism, and speckle filtering is therefore an indispensable initial step. The multiplicative speckle noise model of a SAR image can be expressed as follows, where $x$ is the noise-free pixel value and $s$ represents the speckle:
$$y(i,j) = x(i,j)\, s(i,j)$$
Assuming an infinite homogeneous sample, the speckle-free pixel values can be estimated from Equation (7) as below, where $\bar{y}$ is the mean of the sample:
$$x(i,j) = \bar{y}(i,j)$$
Although the number of homogeneous pixels is not infinite in reality, the optimal approach in classical speckle filters is to average over $N$ finite pixels. PolSAR Boxcar filtering is an effective method that preserves the phase information while reducing speckle [42]. The Boxcar filter reduces the speckle inherent to SAR images through local averaging, similar to Equation (8). Boxcar filtering was chosen over coherent decompositions because coherent decompositions cannot be calculated with the single polarized TerraSAR-X data in this study, as such decompositions require fully polarized data [31]. Therefore, Equation (8) was used to produce the Boxcar PolSAR features used in this study.
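As a minimal sketch of the local averaging in Equation (8), assuming a single detected intensity channel (the paper applies a 5×5 window to TerraSAR-X data):

```python
import numpy as np

def boxcar_filter(img, win=5):
    """Reduce speckle by averaging each pixel's win x win neighbourhood (Eq. 8)."""
    pad = win // 2
    padded = np.pad(img, pad, mode="edge")   # replicate edges to keep the shape
    out = np.empty(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = padded[i:i + win, j:j + win].mean()
    return out
```

A homogeneous area is left unchanged, while the speckle variance shrinks roughly in proportion to the number of averaged pixels.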

2.3. SAR Simulation

There are three steps to SAR simulation: (1) modeling, (2) sampling by a simulator, and (3) scatterer analysis (S. J. Auer 2011). Modeling provides the relevant input information about the objects being simulated. After modeling, each detected signal in the scene is sampled using a ray tracer, which involves deriving its 3D position and amplitude, its bounce level, flags indicating its specular direction, and intersection points identifying the reflecting surface [43,44,45,46]. Finally, a scatterer analysis is performed to produce SAR simulated reflectivity maps representing multiple bounce returns, including single bounce, double bounce, and all-reflections.
One of the main problems in SAR simulation, especially in dense urban areas, is addressing the multipath effect, in which multiple signal rays are summed into a single pixel of the received radar signal. A common solution is to recover the signal propagation path. To simulate radar signal propagation, information such as sensor parameters, object geometry, and physical scene characteristics must be specified [45,46]. The scene geometry parameters are used to define the position of the virtual SAR sensor in the modeled scene.
The RaySAR simulator used in this study was developed for analyzing local urban scenes under the assumption that the local incidence angle of the radar signal is constant [13]. Therefore, a parallel light source and a virtual orthographic camera represent the radar transmitter and receiver, respectively. The far-field coordinates of signals can thus be simulated directly from the position of the scene center, light source, and virtual orthographic camera, as well as the size of the simulated image [34,44]. The scene center corresponds to the center of the 3D object model, and the light source and the virtual orthographic camera (i.e., the virtual SAR sensor) are co-located.
In this study, we used simplified diffuse and specular reflection models to simulate the radiometry of SAR images, with a focus on geometrical correctness. We followed [31] and developed a simulation processing chain that uses digital elevation models as input data, as illustrated in Figure 5. A high-resolution 1 m Digital Terrain Model (DTM) was downloaded from the Natural Resources Canada (NRCan) website. The modified Digital Surface Model (DSM) and normalized Digital Surface Model (nDSM) were generated from the input DTM. The modified DSM represents the geodetic height with UTM horizontal coordinates, with only building height information embedded; height information related to vegetation was removed using building footprints provided by OpenStreetMap (OSM) (https://www.openstreetmap.org). The nDSM is derived by subtracting the DTM from the modified DSM.
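The relationship between the three elevation models can be written in a few lines; the array names and inputs below are illustrative (in the paper the footprints come from OSM and the heights from LiDAR):

```python
import numpy as np

def build_models(dtm, building_height, footprint_mask):
    """Modified DSM = terrain plus building heights inside footprints only
    (vegetation excluded); nDSM = modified DSM minus DTM."""
    modified_dsm = dtm + np.where(footprint_mask, building_height, 0.0)
    ndsm = modified_dsm - dtm
    return modified_dsm, ndsm
```

By construction, the nDSM equals the building height inside footprints and zero elsewhere.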
In this context, three sets of 3D object models including DTM, modified DSM, and nDSM were employed to produce reflectivity maps as indicated in Figure 5a. These simulated reflectivity maps were then geocoded using the geoinformation of the DSM, along with the orbit and projection parameters of the real SAR image. Geocoded simulated reflectivity maps include all-reflections and double bounce features derived from the modified DSM and one all-reflections layer from each DTM and nDSM, as shown in Figure 5b. These simulated reflectivity maps are then post-processed as described in [31] to produce four SAR simulation features representing all-reflections, double bounce, shadow, and layover as shown in Figure 5c.

2.4. Random Forest Model

RF is a supervised machine learning model and a powerful tree-based classifier commonly used for general-purpose classification and regression, especially when the number of variables is large relative to the number of observations [47]. Several studies have shown that RF has outperformed other algorithms such as Maximum Likelihood, Artificial Neural Networks, and Support Vector Machines for flood detection and other applications [48,49]. Thus, we used RF to identify flooded areas from non-flooded ones in this study.
The RF classifier was trained and evaluated with different combinations of features, including the SAR simulation and PolInSAR features, along with the auxiliary features found to be significant for flood mapping in [36]: elevation, slope, aspect, distance from the river, and LULC. Bayesian optimization was used to tune the RF hyperparameters. Feature importance is computed during feature selection from the weighted mean squared error at the node splits, which use a random subset of features for each tree [47]. Moreover, to prepare the data points, an equal number of flooded and non-flooded GCPs was distributed across the floodplain to minimize data imbalance.
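A hedged scikit-learn sketch of this setup (the hyperparameter values are placeholders; the paper tuned them with Bayesian optimization, which is omitted here):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def train_flood_rf(X, y, n_estimators=500, random_state=0):
    """Fit an RF flood/non-flood classifier and rank features by importance."""
    rf = RandomForestClassifier(
        n_estimators=n_estimators,
        oob_score=True,            # out-of-bag estimate of generalization
        random_state=random_state,
    )
    rf.fit(X, y)
    # Indices of features, most important first.
    ranking = np.argsort(rf.feature_importances_)[::-1]
    return rf, ranking
```

The `ranking` array plays the role of the importance ordering used in the scenario search described later.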

3. Results

In this section, the PolInSAR and SAR simulation analyses are explained. For each combination of features derived from these analyses, a single model is trained using the data points from all four AOIs, i.e., Area1–4. These RF models were then evaluated separately using the data sample from each specific area. Accordingly, 75% of the 2450 ground truth points were used to train the model over all AOIs, and the remaining 25% of the data in each specific area was used for validation. The training dataset was randomly selected to avoid autocorrelation.
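The split described above can be sketched as follows (the per-area shuffled hold-out is our reading of the text; the area labels and random seed are illustrative):

```python
import numpy as np

def split_by_area(areas, train_frac=0.75, seed=0):
    """Shuffle data points, keep train_frac of each area for training
    and hold out the rest of that area for validation."""
    rng = np.random.default_rng(seed)
    train_idx, val_by_area = [], {}
    for a in np.unique(areas):
        idx = np.flatnonzero(areas == a)
        rng.shuffle(idx)
        cut = int(len(idx) * train_frac)
        train_idx.extend(idx[:cut])
        val_by_area[a] = idx[cut:]
    return np.array(train_idx), val_by_area
```

With the paper's 750/700/300/700 GCP counts, this sketch yields 1837 training points and 613 validation points spread over the four areas.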
For the PolInSAR analysis, a consecutive set of SSC single polarized TerraSAR-X images spanning the pre-flood, co-flood, and post-flood periods, listed in Table 1, was used to create both PolSAR and InSAR features. The PolSAR and InSAR analyses were conducted separately using Geomatica Banff. For the PolSAR analysis, we first radiometrically calibrated the TerraSAR-X images using sigma naught. Sigma naught, or the backscatter coefficient, refers to the normalized return of the radar signal from an object, measured per unit area on the ground. Next, the calibrated complex images were converted into detected data. A 5×5 Boxcar filter was then applied to reduce speckle noise through local averaging while preserving polarimetric information. This produced five locally averaged TerraSAR-X images for each predefined area as PolSAR features, as listed in Table 2. The PolSAR Boxcar filter images derived from the co-flood TerraSAR-X image in all four areas are shown in Figure 6.
For InSAR analysis, we used the TerraSAR-X image captured on 3 May 2017 as the primary image, and the other four images as secondary, which resulted in four interferometry pairs. The minimum and maximum temporal baseline of the interferometry pairs used in this study are 12 days and 56 days, respectively. During InSAR processing, the topography-related phase is estimated and removed from the interferograms using a DEM. InSAR coherence and InSAR phase associated with each interferogram are then extracted. As a result, we had four InSAR coherence and four InSAR phase features for each study area as listed in Table 2. Figure 7 illustrates sample InSAR coherence images over four areas.
For the SAR simulation analysis, we used the open-source ray tracer RaySAR, a modified version of POV-Ray implemented in MATLAB [50]. We used a TerraSAR-X StripMap SSC image captured on 3 May 2017 (flood peak), presented in a geographic coordinate system with ellipsoidal correction, as our real SAR image. Ellipsoidal correction is a process in which the SAR image is resampled and projected from radar coordinates to map coordinates [44,50]. Next, the following parameters were derived from the metadata of the TerraSAR-X image to define the geometry of the simulated scene: (1) the azimuth angle; (2) the frame mean height; (3) the incidence angles of the scene center and its four corners, used to interpolate the incidence angle at the scene center, which is assumed to be locally constant over the entire scene; and (4) the East and North pixel spacing, which equals the azimuth and range pixel spacing in the SAR coordinate system.
As the second input, three sets of 3D object models based on LiDAR data were used: DTMs, modified DSMs, and nDSMs. The modified DSMs, which represent only terrain and constructed objects (not vegetation), were generated from the DTM, building footprints, and building elevations; the nDSM was then derived by subtracting the DTM from the modified DSM. The sampling of these object models must match the SAR image sampling, so the object models were resampled to 3 m using bilinear interpolation. We used the same coordinate system for both data sources, i.e., WGS 84 UTM with ellipsoidal heights. To perform ray tracing, the object models were converted to the POV-Ray format using AccuTrans. The discrete positions of the simulated signals, as well as their strength and reflection levels, are then generated as explained in [43]. These outputs are used to create the SAR simulated reflectivity maps, which are in the radar coordinate system (i.e., azimuth-range geometry) and are then geocoded to create the SAR simulation features listed in Table 3. Details of the geocoding process can be found in [34].
The geocoded simulation outputs are post-processed to generate four features, namely all-reflections, double bounce, layover, and shadow, as described in [34] for each AOI. Figure 8 illustrates these four features generated over the AOIs. Moreover, the generated flood maps over Area1–4 are shown in Figure 9.
For each AOI, 1731 scenarios were examined, as listed in Table 4 and Table 5, each with a different number of features. To identify the most important features for flood mapping, different feature combinations were tested as scenarios in a two-step procedure: A-category scenarios and B-category scenarios. In the A category, features are divided into: (A1) SAR intensity, (A2) InSAR, (A3) PolSAR, (A4) SAR simulation, and (A5) the five auxiliary features as the baseline, as shown in Table 4. In the B category, selected features from the A category were integrated: (B1) PolInSAR, (B2) All SAR, (B3) without SAR simulation, and (B4) all categories. Table 5 shows the features included in each B category.
For the A categories, we explored a total of 363 scenarios covering all possible k-combinations of features in each category. First, an RF algorithm was trained using all possible k-combinations of features within each category A1–5, as shown in Table 4. Then, feature importance was estimated by permuting out-of-bag observations among the trees. The features contributing to the top three overall accuracies within each category A1–5 were marked as "selected features", shown in Table 6. These selected features from categories A1–5 were then combined and tested as the B1–4 categories, as listed in Table 5.
For the B categories, we combined the selected features from the A categories that led to the highest overall accuracies. Since including correlated features can degrade the RF model and reduce overall accuracy, we started training the RF with a single feature within each B1–4 category and then added the remaining features one by one until all features within that category were used. The features were then removed one by one until only the very first feature remained. Feature importance was again estimated by permuting out-of-bag observations among the trees, as in the A category, so that features are prioritized by their importance according to the RF algorithm. The most important features associated with the top three accuracies achieved in the B categories are integrated and listed in Table 7 as the selected features of the B categories.
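The add-then-remove search over each B category can be sketched as a simple wrapper; the `score` callable stands in for training an RF and reading its overall accuracy, and the removal order shown (dropping features in addition order while keeping the first) is one plausible reading of the procedure:

```python
def forward_backward_search(features, score):
    """Greedy subset search: add features one by one, then remove them
    one by one until only the first feature remains, keeping the
    best-scoring subset seen. `score` maps a feature tuple to accuracy."""
    best, best_acc = None, -1.0
    current = []
    for f in features:                 # forward pass: grow the subset
        current.append(f)
        acc = score(tuple(current))
        if acc > best_acc:
            best, best_acc = list(current), acc
    for f in features[1:]:             # backward pass: shrink back to the first
        current.remove(f)
        acc = score(tuple(current))
        if acc > best_acc:
            best, best_acc = list(current), acc
    return best, best_acc
```

Unlike an exhaustive search over all subsets, this wrapper evaluates only a linear number of candidates, which is what makes it practical when an RF must be retrained for each one.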

4. Discussion

As stated in the previous section, feature selection was performed in a two-step procedure in which we tested different scenarios in two categories (the A and B categories) to find the most effective features. Since RF is used here as a binary classifier, performance is evaluated by comparing the classification overall accuracy. The highest overall accuracies achieved in the A and B categories are reported in Table 8 and Table 9. Overall accuracy is computed by dividing the number of data points correctly labeled as flooded or non-flooded by the total number of labeled data points:
$$\mathrm{Overall\;Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}$$
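The overall-accuracy formula above in code form, taking the four confusion-matrix counts directly:

```python
def overall_accuracy(tp, tn, fp, fn):
    """Fraction of data points correctly labelled flooded or non-flooded."""
    return (tp + tn) / (tp + tn + fp + fn)
```

For instance, 93 correct labels out of 100 points gives 0.93.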
Comparing the B-category scenarios over the different AOIs, as listed in Table 9, shows that Area1, Area2, and Area4 exhibit a similar trend, while Area3 has a lower accuracy, owing to the fewer data points available in Area3 and its different, more vegetated LULC. Overall, the most important features for flood mapping, which produced the highest accuracies in each area, were derived from the incorporation of all A categories, i.e., the integration of SAR intensities, InSAR, PolSAR, SAR simulation, and auxiliary features, as listed under the B4 category in Table 9. The confusion matrices associated with the highest accuracies achieved in the A and B categories over all four AOIs are listed in Table 10 and Table 11. Investigating all scenarios in the different categories reveals that the False Positive (FP) rate stands above the False Negative (FN) rate, as shown in Figure 10, meaning the proposed model overestimated the flooded areas. Although the ideal is to remove all false results, we believe that overestimating flooded areas is preferable to underestimating them, as a high FN rate can result in missing damaged areas in post-disaster recovery efforts.
In the A5 category, we used the five auxiliary features (elevation, slope, aspect, distance from the river, and LULC) that are reported in the literature to be effective in flood mapping as our baseline. The A5 category yielded accuracies ranging from 82.5% to 83.6% across the AOIs. Comparing A5 to B1, where PolInSAR features were involved, showed that using PolInSAR features improved the model by 4.8%, 5.5%, and 7.3% in Area1, Area2, and Area4, respectively. Comparing B4 to B3, where PolInSAR features are considered together with the five auxiliary features (i.e., all features except the simulation ones), revealed that adding the SAR simulated features improved accuracy by 1.3%, 1.3%, and 0.4% in Area1, Area2, and Area4, respectively. Table 12 compares the classification overall accuracy improvement of the proposed models against four existing models commonly used for flood detection in rural and urban areas. Comparing A5 to B2, where all SAR features are used, showed that the synergistic use of simulated images and PolInSAR features, without any auxiliary features, outperformed the baseline by 5%, 5.7%, and 7.5% in Area1, Area2, and Area4, respectively; the average improvement over those areas, shown in Table 12, is 6.1%. When we used the PolInSAR and simulation features in combination with the auxiliary features, as in the B4 category, we reached even higher accuracy, with increases of 9%, 9.9%, and 10% over the baseline in Area1, Area2, and Area4, respectively; the average improvement, shown in Table 12, is 9.7%. Additionally, comparing B4 to A1, where multi-temporal SAR intensities were used alone (a common practice for flood mapping in the literature), revealed that the proposed method achieved accuracy improvements of 7.3%, 7.4%, and 6.7% in Area1, Area2, and Area4, respectively; the average improvement, shown in Table 12, is 7.1%.
Exploring all scenarios involving simulation features (the A4, B2, and B4 categories), we noticed that including the shadow feature consistently reduced false negatives and helped the classifier detect floods more effectively. In this regard, the B4 category showed higher accuracy than B3, demonstrating that the simulation features, especially the shadow, layover, and double bounce features, which were the most important, improved flood detection by 1.3%, 1.3%, and 0.4% in Area1, Area2, and Area4, respectively. Comparing B3 to B4 thus shows that the synergistic use of SAR simulation and PolInSAR features outperformed PolInSAR features alone by an average improvement of 1%.
The results of this study are consistent with the PolInSAR outcome of our previous study [28] and support the assumption that investigating the trend in the time series of PolInSAR features with respect to their LULC class can improve the identification of complex urban backscatter patterns.

5. Conclusions

In this work, we investigated the effect of employing InSAR, PolSAR, and SAR-simulated reflectivity maps on the improvement of urban flood detection using an RF classifier. Examining the synergistic use of multi-temporal PolInSAR features and SAR-simulated reflectivity maps for urban flood detection is the main contribution of this study. These SAR features were employed along with a set of auxiliary features reported in the literature to contribute to flood mapping: elevation, slope, aspect, distance from the river, and LULC. Multi-temporal SAR intensity, InSAR coherence, and InSAR phase features were generated using InSAR analysis. Additionally, PolSAR analysis was employed to generate PolSAR Boxcar filter features. The simulated reflectivity map features used in this study included all-reflections, double-bounce, shadow, and layover maps. Different combinations of these features were examined to select the most effective features for flood detection.
In general, the results showed that employing PolInSAR and SAR simulated features along with auxiliary features in an RF model can improve flood detection in urban areas. The highest accuracy was achieved from a subset of features derived from the performed SAR analyses, i.e., InSAR, PolSAR, and SAR simulation. The results showed that the incorporation of single polarized high-resolution TerraSAR-X images with auxiliary features outperforms the models that merely use the auxiliary features.
The 2017 Ottawa River flood was explored using a set of high-resolution TerraSAR-X images. Four study areas with different LULCs were examined, for which GCPs were sampled by comparing NASP images with historical Google imagery from 2017. We employed multi-temporal SAR intensities; multi-temporal PolInSAR features including InSAR phase, InSAR coherence, and the PolSAR Boxcar filter; and SAR simulated features including all-reflections, double bounce, shadow, and layover.
Feature selection was performed in a two-step procedure in which we tested different scenarios in two categories to find the most effective features. Among all investigated features, the best accuracy of 93.1% was achieved using the co-flood, nearest pre-flood, and nearest post-flood intensities; all four InSAR coherences; the nearest post-flood InSAR phase; the co-flood PolSAR Boxcar filtered image; the double bounce, shadow, and all-reflections maps; elevation; slope; distance from the river; and LULC. The results revealed that the synergistic use of SAR simulated reflectivity maps and PolInSAR features together with auxiliary features can improve urban flood detection by an average of 9.6% compared to the baseline. Moreover, without any auxiliary features, the exclusive use of PolInSAR and SAR simulated features outperformed the baseline by 7%, which indicates the effectiveness of SAR features in flood mapping. This study indicates that SAR simulated features are effective in decreasing false positives, which helped the proposed method exceed the flood detection accuracy obtained by using PolInSAR features alone. Furthermore, the proposed method outperformed the exclusive use of SAR intensity, the basis for most existing SAR-based flood mapping methods, by an average accuracy improvement of 7.1%. Given the promising results of using TerraSAR-X in flood detection, our future work will focus on using fully polarized TerraSAR-X images and on examining other study areas with different terrains and LULC.
It should be acknowledged that the proposed model uses a greater number of input features than some other flood mapping techniques; however, this is compensated for by the improved accuracy. Since accurate flood mapping can contribute to saving lives, reducing damage, and providing accurate damage estimates, the proposed method, with its improved accuracy, is a good candidate for flood detection in urban areas. In particular, because the proposed model is independent of optical imagery and operates fully on SAR data and a few static features (e.g., elevation models and slope), it can be used in all weather conditions and at any time of day, which speeds up flood mapping and consequently reduces response time. However, it should be noted that although the proposed model worked well in urban and semi-urban areas (i.e., Area1, Area2, and Area4 of the second study area over Ottawa), it did not work as well in a densely vegetated area (i.e., Area3 of the second study area); thus, we recommend extrapolating the proposed model only to areas with similar LULC.

Author Contributions

Conceptualization, S.S.B., S.J. and H.M.; methodology, S.S.B., S.J. and H.M.; software, S.S.B.; validation, S.S.B., S.J. and H.M.; resources, S.S.B., S.J. and H.M.; writing—original draft preparation, S.S.B.; writing—review and editing, S.S.B., S.J. and H.M.; visualization, S.S.B.; supervision, S.J. and H.M.; funding acquisition, S.J. and H.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the New Brunswick Innovation Foundation, grant number RAI2019-026.

Acknowledgments

The authors wish to acknowledge the European Space Agency (ESA) for providing the TerraSAR-X dataset, Natural Resources Canada (NRCan) for their supervision and contribution [NRCan Contribution number: 20220170] and providing DEMs, Government of Canada for providing the public data and water level information, National Aerial Surveillance Program (NASP) for providing historical drone images used for validation, Agriculture and Agri-Food Canada (AAFC) for the land use/land cover maps, Open Street Map (OSM) for providing building footprints, and the Ottawa River Regulation Planning Board for publicly releasing the data.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Olthof, I.; Svacina, N. Testing Urban Flood Mapping Approaches from Satellite and In-Situ Data Collected during 2017 and 2019 Events in Eastern Canada. Remote Sens. 2020, 12, 3141. [Google Scholar] [CrossRef]
  2. Willner, S.N.; Otto, C.; Levermann, A. Global Economic Response to River Floods. Nat. Clim. Chang. 2018, 8, 594–598. [Google Scholar] [CrossRef]
  3. Kundzewicz, Z.W.; Kanae, S.; Seneviratne, S.I.; Handmer, J.; Nicholls, N.; Peduzzi, P.; Mechler, R.; Bouwer, L.M.; Arnell, N.; Mach, K. Flood Risk and Climate Change: Global and Regional Perspectives. Hydrol. Sci. J. 2014, 59, 1–28. [Google Scholar] [CrossRef] [Green Version]
  4. Hirabayashi, Y.; Mahendran, R.; Koirala, S.; Konoshima, L.; Yamazaki, D.; Watanabe, S.; Kim, H.; Kanae, S. Global Flood Risk under Climate Change. Nat. Clim. Chang. 2013, 3, 816–821. [Google Scholar] [CrossRef]
  5. Lin, Y.N.; Yun, S.-H.; Bhardwaj, A.; Hill, E.M. Urban Flood Detection with Sentinel-1 Multi-Temporal Synthetic Aperture Radar (SAR) Observations in a Bayesian Framework: A Case Study for Hurricane Matthew. Remote Sens. 2019, 11, 1778. [Google Scholar] [CrossRef]
  6. Chini, M.; Pelich, R.; Pulvirenti, L.; Pierdicca, N.; Hostache, R.; Matgen, P. Sentinel-1 InSAR Coherence to Detect Floodwater in Urban Areas: Houston and Hurricane Harvey as a Test Case. Remote Sens. 2019, 11, 107. [Google Scholar] [CrossRef] [Green Version]
  7. Li, Y.; Martinis, S.; Wieland, M. Urban Flood Mapping with an Active Self-Learning Convolutional Neural Network Based on TerraSAR-X Intensity and Interferometric Coherence. ISPRS J. Photogramm. Remote Sens. 2019, 152, 178–191. [Google Scholar] [CrossRef]
  8. Amitrano, D.; Di Martino, G.; Iodice, A.; Riccio, D.; Ruello, G. Unsupervised Rapid Flood Mapping Using Sentinel-1 GRD SAR Images. IEEE Trans. Geosci. Remote Sens. 2018, 56, 3290–3299. [Google Scholar] [CrossRef]
  9. Wang, Y. Using Landsat 7 TM Data Acquired Days after a Flood Event to Delineate the Maximum Flood Extent on a Coastal Floodplain. Int. J. Remote Sens. 2004, 25, 959–974. [Google Scholar] [CrossRef]
  10. Reinartz, P.; Müller, R.; Suri, S.; Schwind, P.; Schneider, M. Terrasar-x Data for Improving Geometric Accuracy of Optical High and Very High Resolution Satellite Data. Available online: https://www.isprs.org/proceedings/XXXVIII/part1/11/11_01_Paper_17.pdf (accessed on 8 November 2022).
  11. Giustarini, L.; Hostache, R.; Matgen, P.; Schumann, G.J.-P.; Bates, P.D.; Mason, D.C. A Change Detection Approach to Flood Mapping in Urban Areas Using TerraSAR-X. IEEE Trans. Geosci. Remote Sens. 2012, 51, 2417–2430. [Google Scholar] [CrossRef]
  12. Mason, D.C.; Speck, R.; Devereux, B.; Schumann, G.J.-P.; Neal, J.C.; Bates, P.D. Flood Detection in Urban Areas Using TerraSAR-X. IEEE Trans. Geosci. Remote Sens. 2009, 48, 882–894. [Google Scholar] [CrossRef] [Green Version]
  13. Refice, A.; Capolongo, D.; Pasquariello, G.; D’Addabbo, A.; Bovenga, F.; Nutricato, R.; Lovergine, F.P.; Pietranera, L. SAR and InSAR for Flood Monitoring: Examples with COSMO-SkyMed Data. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 2711–2722. [Google Scholar] [CrossRef]
  14. Uddin, K.; Matin, M.A.; Meyer, F.J. Operational Flood Mapping Using Multi-Temporal Sentinel-1 SAR Images: A Case Study from Bangladesh. Remote Sens. 2019, 11, 1581. [Google Scholar] [CrossRef] [Green Version]
  15. Martinis, S.; Twele, A.; Voigt, S. Towards Operational near Real-Time Flood Detection Using a Split-Based Automatic Thresholding Procedure on High Resolution TerraSAR-X Data. Nat. Hazards Earth Syst. Sci. 2009, 9, 303–314. [Google Scholar] [CrossRef]
  16. Pulvirenti, L.; Chini, M.; Pierdicca, N.; Guerriero, L.; Ferrazzoli, P. Flood Monitoring Using Multi-Temporal COSMO-SkyMed Data: Image Segmentation and Signature Interpretation. Remote Sens. Environ. 2011, 115, 990–1002. [Google Scholar] [CrossRef]
  17. Tanguy, M.; Chokmani, K.; Bernier, M.; Poulin, J.; Raymond, S. River Flood Mapping in Urban Areas Combining Radarsat-2 Data and Flood Return Period Data. Remote Sens. Environ. 2017, 198, 442–459. [Google Scholar] [CrossRef] [Green Version]
  18. Kwak, Y.; Yun, S.; Iwami, Y. A New Approach for Rapid Urban Flood Mapping Using ALOS-2/PALSAR-2 in 2015 Kinu River Flood, Japan. In Proceedings of the 2017 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Fort Worth, TX, USA, 23–28 July 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 1880–1883. [Google Scholar]
  19. Mason, D.C.; Davenport, I.J.; Neal, J.C.; Schumann, G.J.-P.; Bates, P.D. Near Real-Time Flood Detection in Urban and Rural Areas Using High-Resolution Synthetic Aperture Radar Images. IEEE Trans. Geosci. Remote Sens. 2012, 50, 3041–3052. [Google Scholar] [CrossRef] [Green Version]
  20. Hess, L.L.; Melack, J.M.; Simonett, D.S. Radar Detection of Flooding beneath the Forest Canopy: A Review. Int. J. Remote Sens. 1990, 11, 1313–1325. [Google Scholar] [CrossRef]
  21. Horritt, M.S.; Mason, D.C.; Luckman, A.J. Flood Boundary Delineation from Synthetic Aperture Radar Imagery Using a Statistical Active Contour Model. Int. J. Remote Sens. 2001, 22, 2489–2507. [Google Scholar] [CrossRef]
  22. Shen, X.; Wang, D.; Mao, K.; Anagnostou, E.; Hong, Y. Inundation Extent Mapping by Synthetic Aperture Radar: A Review. Remote Sens. 2019, 11, 879. [Google Scholar] [CrossRef]
  23. Matgen, P.; Hostache, R.; Schumann, G.; Pfister, L.; Hoffmann, L.; Savenije, H.H.G. Towards an Automated SAR-Based Flood Monitoring System: Lessons Learned from Two Case Studies. Phys. Chem. Earth Parts A/B/C 2011, 36, 241–252. [Google Scholar] [CrossRef]
  24. Ohki, M.; Tadono, T.; Itoh, T.; Ishii, K.; Yamanokuchi, T.; Watanabe, M.; Shimada, M. Flood Area Detection Using PALSAR-2 Amplitude and Coherence Data: The Case of the 2015 Heavy Rainfall in Japan. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2019, 12, 2288–2298. [Google Scholar] [CrossRef]
  25. Chaabani, C.; Chini, M.; Abdelfattah, R.; Hostache, R.; Chokmani, K. Flood Mapping in a Complex Environment Using Bistatic TanDEM-X/TerraSAR-X InSAR Coherence. Remote Sens. 2018, 10, 1873. [Google Scholar] [CrossRef] [Green Version]
  26. Pelich, R.; Chini, M.; Hostache, R.; Matgen, P.; Pulvirenti, L.; Pierdicca, N. Mapping Floods in Urban Areas from Dual-Polarization InSAR Coherence Data. IEEE Geosci. Remote Sens. Lett. 2021, 19, 4018405. [Google Scholar] [CrossRef]
  27. Zhao, J.; Li, Y.; Matgen, P.; Pelich, R.; Hostache, R.; Wagner, W.; Chini, M. Urban-Aware U-Net for Large-Scale Urban Flood Mapping Using Multitemporal Sentinel-1 Intensity and Interferometric Coherence. IEEE Trans. Geosci. Remote Sens. 2022, 60, 4209121. [Google Scholar] [CrossRef]
  28. Baghermanesh, S.S.; Jabari, S.; McGrath, H. Urban Flood Detection Using Sentinel1-A Images. In Proceedings of the 2021 IEEE International Geoscience and Remote Sensing Symposium IGARSS, Brussels, Belgium, 11–16 July 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 527–530. [Google Scholar]
  29. Ferretti, A.; Monti-Guarnieri, A.; Prati, C.; Rocca, F. InSAR Principles: Guidelines for SAR Interferometry Processing and Interpretation; ESA: Paris, France, 2007.
  30. Ustuner, M.; Balik Sanli, F. Polarimetric Target Decompositions and Light Gradient Boosting Machine for Crop Classification: A Comparative Evaluation. ISPRS Int. J. Geo-Inf. 2019, 8, 97. [Google Scholar] [CrossRef] [Green Version]
  31. Han, Y.; Shao, Y. Full Polarimetric SAR Classification Based on Yamaguchi Decomposition Model and Scattering Parameters. In Proceedings of the 2010 IEEE International Conference on Progress in Informatics and Computing, Shanghai, China, 10–12 December 2010; IEEE: Piscataway, NJ, USA, 2010; Volume 2, pp. 1104–1108. [Google Scholar]
  32. Charbonneau, F.J.; Brisco, B.; Raney, R.K.; McNairn, H.; Liu, C.; Vachon, P.W.; Shang, J.; DeAbreu, R.; Champagne, C.; Merzouki, A. Compact Polarimetry Overview and Applications Assessment. Can. J. Remote Sens. 2010, 36, S298–S315. [Google Scholar] [CrossRef]
  33. Mohammadimanesh, F.; Salehi, B.; Mahdianpari, M.; Brisco, B.; Gill, E. Full and Simulated Compact Polarimetry Sar Responses to Canadian Wetlands: Separability Analysis and Classification. Remote Sens. 2019, 11, 516. [Google Scholar] [CrossRef] [Green Version]
  34. Tao, J.; Auer, S.; Palubinskas, G.; Reinartz, P.; Bamler, R. Automatic SAR Simulation Technique for Object Identification in Complex Urban Scenarios. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2013, 7, 994–1003. [Google Scholar] [CrossRef]
  35. Mason, D.C.; Giustarini, L.; Garcia-Pintado, J.; Cloke, H.L. Detection of Flooded Urban Areas in High Resolution Synthetic Aperture Radar Images Using Double Scattering. Int. J. Appl. Earth Obs. Geoinf. 2014, 28, 150–159. [Google Scholar] [CrossRef]
  36. Esfandiari, M.; Jabari, S.; McGrath, H.; Coleman, D. Flood mapping using random forest and identifying the essential conditioning factors; a case study in fredericton, new brunswick, canada. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2020, 5, 609–615. [Google Scholar] [CrossRef]
  37. Kia, M.B.; Pirasteh, S.; Pradhan, B.; Mahmud, A.R.; Sulaiman, W.N.A.; Moradi, A. An Artificial Neural Network Model for Flood Simulation Using GIS: Johor River Basin, Malaysia. Environ. Earth Sci. 2012, 67, 251–264. [Google Scholar] [CrossRef]
  38. Ottawa River Regulation Planning Board. Summary of the 2017 Spring Flood; Ottawa River Regulation Planning Board: Ottawa, ON, Canada, 2018.
  39. Hooper, A.J. Persistent Scatter Radar Interferometry for Crustal Deformation Studies and Modeling of Volcanic Deformation; Stanford University: Stanford, CA, USA, 2006. [Google Scholar]
  40. Yu, H.; Lan, Y.; Yuan, Z.; Xu, J.; Lee, H. Phase Unwrapping in InSAR: A Review. IEEE Geosci. Remote Sens. Mag. 2019, 7, 40–58. [Google Scholar] [CrossRef]
  41. Ferretti, A.; Prati, C.; Rocca, F. Multibaseline Phase Unwrapping for InSAR Topography Estimation. Nuovo Cim. C 2001, 24, 159–176. [Google Scholar]
  42. Bouchemakh, L.; Smara, Y.; Boutarfa, S.; Hamadache, Z. A Comparative Study of Speckle Filtering in Polarimetric Radar SAR Images. In Proceedings of the 2008 3rd International Conference on Information and Communication Technologies: From Theory to Applications, Damascus, Syria, 7–11 April 2008; IEEE: Piscataway, NJ, USA, 2008; pp. 1–6. [Google Scholar]
  43. Auer, S. 3D Synthetic Aperture Radar Simulation for Interpreting Complex Urban Reflection Scenarios. Doctoral Dissertation, Technische Universität München, Munich, Germany, 2011. [Google Scholar]
  44. Auer, S.; Hinz, S.; Bamler, R. Ray-Tracing Simulation Techniques for Understanding High-Resolution SAR Images. IEEE Trans. Geosci. Remote Sens. 2009, 48, 1445–1456. [Google Scholar] [CrossRef] [Green Version]
  45. Whitted, T. An Improved Illumination Model for Shaded Display. In ACM SIGGRAPH 2005 Courses; 2005; 4-es. Available online: https://dl.acm.org/doi/10.1145/1198555.1198743 (accessed on 8 November 2022).
  46. Glassner, A.S. An Introduction to Ray Tracing; Morgan Kaufmann: Burlington, MA, USA, 1989. [Google Scholar]
  47. Breiman, L. Random Forests. Mach. Learn. 2001, 45, 5–32. [Google Scholar] [CrossRef] [Green Version]
  48. Huo, J.; Shi, T.; Chang, J. Comparison of Random Forest and SVM for Electrical Short-Term Load Forecast with Different Data Sources. In Proceedings of the 2016 7th IEEE International conference on software engineering and service science (ICSESS), Beijing, China, 26–28 August 2016; IEEE: Piscataway, NJ, USA, 2016; pp. 1077–1080. [Google Scholar]
  49. Rodriguez-Galiano, V.; Sanchez-Castillo, M.; Chica-Olmo, M.; Chica-Rivas, M. Machine Learning Predictive Models for Mineral Prospectivity: An Evaluation of Neural Networks, Random Forest, Regression Trees and Support Vector Machines. Ore Geol. Rev. 2015, 71, 804–818. [Google Scholar] [CrossRef]
  50. Auer, S.; Bamler, R.; Reinartz, P. RaySAR-3D SAR Simulator: Now Open Source. In Proceedings of the 2016 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Beijing, China, 10–15 July 2016; IEEE: Piscataway, NJ, USA, 2016; pp. 6730–6733. [Google Scholar]
Figure 1. (A) An overview map of AOIs; (B) GCP distribution over four selected AOIs, including (a) Area1, (b) Area2, (c) Area3, and (d) Area4. Green circles show non-flooded areas, while red ones indicate flooded areas.
Figure 2. Water level comparison on 3 May 2017, 7 May 2017, and 12 May 2017; data extracted from the Water Survey of Canada Hydrometric Station 02KF005.
Figure 3. Research method framework.
Figure 4. The geometry of an InSAR system.
Figure 5. Processing chain of SAR simulation over four selected areas. Part (a) shows how to generate simulated reflectivity maps, and part (b) lists the geocoded results derived from part (a). The modified DSM in each area results in two geocoded features, double bounce and all-reflections, as shown in part (b); the DTM and nDSM each result in an all-reflections feature, as shown in part (b). Part (c) represents the final SAR simulation features after post-processing.
Figure 6. PolSAR Boxcar filtered images derived from the co-flood TerraSAR-X image over (a) Area1, (b) Area2, (c) Area3, and (d) Area4.
Figure 7. InSAR coherence images derived from the co-flood TerraSAR-X image over (a) Area1, (b) Area2, (c) Area3, and (d) Area4.
Figure 8. Simulated reflectivity maps over Area4: (a) All-reflections, (b) Double bounce, (c) Shadow, and (d) Layover.
Figure 9. Generated flood maps over (a) Area1, (b) Area2, (c) Area3, and (d) Area4. Blue areas indicate water bodies and the river, while black and white areas show non-flooded and flooded areas, respectively.
Figure 10. False positive rates and false negative rates over Area1–4: triangle markers indicate false positives, while rectangle markers indicate false negatives.
Table 1. Dataset.

| Data | Ground Sampling Distance | Date(s) |
|---|---|---|
| TerraSAR-X (data granted upon proposal approval from the European Space Agency; accessed on 7 August 2020) | 3 m | Pre-flood: 31 March 2017, 22 April 2017 (nearest); co-flood: 3 May 2017; post-flood: 25 May 2017 (nearest), 27 June 2017 |
| DEM (https://open.canada.ca/data/en/dataset/0fe65119-e96e-4a57-8bfe-9d9245fba06b) | 1 m | 2020 |
| NASP UAV images (https://pscanada.maps.arcgis.com/apps/MapSeries/index.html?appid=fd5c6a7e5e5f4fb7909f67e40e781e06) | - | 12 May 2017 |
| River Network (downloaded from https://open.ottawa.ca/) | - | - |
| Water level (https://wateroffice.ec.gc.ca/report/historical_e.html?stn=02KF005&dataType=Daily&parameterType=Level&year=2017&mode=Graph) | - | - |
| LULC (https://www.openstreetmap.org/#map=2/71.3/-96.8) | - | - |
Table 2. Generated features from PolInSAR processing over each area.

| Analysis (Total Number of Features) | Features | Number of Features |
|---|---|---|
| InSAR (8) | InSAR coherence | 4 |
| | InSAR phase | 4 |
| PolSAR (5) | Boxcar filter | 5 |
| PolInSAR (18) | InSAR coherence | 4 |
| | InSAR phase | 4 |
| | Boxcar filter | 5 |
| | SAR intensities | 5 |
Table 3. SAR simulation outputs.

| Input Object Model | SAR Simulated Reflectivity Maps from RaySAR |
|---|---|
| DTM | All-reflections |
| Modified DSM | All-reflections + Double bounce |
| nDSM | All-reflections |

Geocoded simulation features based on the reflectivity maps: All-reflections, Double bounce, Layover, Shadow.
Table 4. Features used in different scenarios of the A categories.

| A Category (Number of Features) | Feature Types | Name of Features | All Possible k-Combinations |
|---|---|---|---|
| A1- SAR Intensity (5) | SAR intensities | Pre-, nearest pre-, co-, nearest post-, and post-flood intensities | Σ_{k=1..5} C(5, k) = 31 |
| A2- InSAR (8) | InSAR phases, InSAR coherences | Pre-, nearest pre-, nearest post-, and post-flood InSAR coherences/phases | Σ_{k=1..8} C(8, k) = 255 |
| A3- PolSAR (5) | Boxcar filtered images | Pre-, nearest pre-, co-, nearest post-, and post-flood Boxcar filtering | Σ_{k=1..5} C(5, k) = 31 |
| A4- Simulation (4) | Reflectivity maps | All-reflections, double bounce, layover, shadow | Σ_{k=1..4} C(4, k) = 15 |
| A5- Baseline (5) | Auxiliary features | Elevation, slope, aspect, distance from the river, LULC | Σ_{k=1..5} C(5, k) = 31 |

Total of 363 scenarios.
Table 5. Features used in different scenarios of the B categories.

| B Category (Number of Features k) | Included Selected Features from A Categories | Total Examined Combinations = (2k − 1) × k |
|---|---|---|
| B1- PolInSAR (9) | A1 + A2 + A3 | (2 × 9 − 1) × 9 = 17 × 9 = 153 |
| B2- All SAR (12) | A1 + A2 + A3 + A4 | (2 × 12 − 1) × 12 = 23 × 12 = 276 |
| B3- Without Simulation (14) | A1 + A2 + A3 + A5 | (2 × 14 − 1) × 14 = 27 × 14 = 378 |
| B4- All Categories (17) | A1 + A2 + A3 + A4 + A5 | (2 × 17 − 1) × 17 = 33 × 17 = 561 |

Total of 1368 scenarios.
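The scenario counts reported in Tables 4 and 5 can be checked directly; this sketch reproduces the A-category subset counts (all 2^n − 1 non-empty subsets of n features) and the (2k − 1) × k sequential-search count used for the B categories.

```python
# Verify the scenario counts in Tables 4 and 5.
from math import comb

# A categories: all non-empty k-combinations of n features -> 2**n - 1.
a_sizes = [5, 8, 5, 4, 5]  # feature counts for A1..A5
a_counts = [sum(comb(n, k) for k in range(1, n + 1)) for n in a_sizes]
print(a_counts, sum(a_counts))  # [31, 255, 31, 15, 31] 363

# B categories: the paper's (2k - 1) * k count for the sequential search.
b_sizes = [9, 12, 14, 17]  # feature counts for B1..B4
b_counts = [(2 * k - 1) * k for k in b_sizes]
print(b_counts, sum(b_counts))  # [153, 276, 378, 561] 1368
```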
Table 6. Selected features from the A categories.

| A Category | Selected Features (Number of Features) |
|---|---|
| A1- SAR Intensity | Co-flood, nearest pre-flood, and post-flood intensities (3) |
| A2- InSAR | All four InSAR coherences, nearest post-flood InSAR phase (5) |
| A3- PolSAR | Co-flood PolSAR Boxcar filtered image (1) |
| A4- Simulation | Double bounce, shadow, all-reflections (3) |
| A5- Auxiliary | Elevation, slope, aspect, distance from the river, LULC (5) |
Table 7. Selected features in the B categories.

| B Category | Intensities | InSAR | PolSAR | Simulation | Auxiliary |
|---|---|---|---|---|---|
| B1- PolInSAR | Co-, nearest pre-, and post-flood intensities | All four InSAR coherences, nearest post-flood InSAR phase | Co-flood Boxcar filtered image | - | - |
| B2- All SAR | Co-, nearest pre-, and post-flood intensities | All four InSAR coherences, nearest post-flood InSAR phase | Co-flood Boxcar filtered image | Shadow, double bounce, all-reflections | - |
| B3- All without Simulation | Co-, nearest pre-, and post-flood intensities | All four InSAR coherences, nearest post-flood InSAR phase | Co-flood Boxcar filtered image | - | All auxiliary features |
| B4- All Categories | Co-, nearest pre-, and post-flood intensities | All four InSAR coherences, nearest post-flood InSAR phase | Co-flood Boxcar filtered image | Shadow, double bounce, all-reflections | All auxiliary features |
Table 8. Acquired classification accuracies of the A categories: "All Features" columns indicate the accuracies where all features were included, while "Selected Features" columns show the best accuracies achieved in each category using the selected features.

| A Category | Area1 All | Area1 Selected | Area2 All | Area2 Selected | Area3 All | Area3 Selected | Area4 All | Area4 Selected |
|---|---|---|---|---|---|---|---|---|
| A1- Intensity | 84.8 | 85.3 | 85.85 | 86.1 | 60.3 | 61 | 86.1 | 86.5 |
| A2- InSAR | 82.2 | 82.5 | 82.7 | 82.9 | 60.3 | 60.6 | 83.6 | 83.7 |
| A3- PolSAR | 86.9 | 86.9 | 86.2 | 86.2 | 63.2 | 63.2 | 87.1 | 87.1 |
| A4- Simulation | 53.4 | 53.4 | 53.8 | 53.8 | 52.4 | 52.4 | 54.5 | 54.5 |
| A5- Baseline (auxiliary) | 83.1 | 83.6 | 83.5 | 83.6 | 82.5 | 82.5 | 82.7 | 82.8 |
Table 9. Highest classification accuracies achieved in the B categories.

| B Category | Area1 | Area2 | Area3 | Area4 |
|---|---|---|---|---|
| B1- PolInSAR | 88.4 | 89.1 | 61.7 | 90.1 |
| B2- All SAR | 88.6 | 89.3 | 61.7 | 90.3 |
| B3- All without Simulation | 91.3 | 92.2 | 82.1 | 92.8 |
| B4- All Features | 92.6 | 93.5 | 83.8 | 93.2 |
Table 10. The performance measurements True Positive (TP), True Negative (TN), False Positive (FP), and False Negative (FN) associated with the highest accuracies achieved using selected features in the A categories for all four AOIs, as highlighted in Table 8.

| Category | Area1 TP (%) | TN | FP | FN | Area2 TP (%) | TN | FP | FN |
|---|---|---|---|---|---|---|---|---|
| A1 | 92.4 | 78.2 | 21.8 | 7.6 | 92.8 | 79.4 | 20.6 | 7.2 |
| A2 | 93.4 | 71.6 | 28.4 | 6.6 | 93.5 | 72.3 | 27.7 | 6.5 |
| A3 | 97.8 | 76 | 24 | 2.2 | 99.4 | 73 | 27 | 0.6 |
| A4 | 9.8 | 97 | 3 | 90.2 | 9.8 | 97.8 | 2.2 | 90.2 |
| A5 | 92.8 | 74.4 | 25.6 | 7.2 | 87.9 | 79.3 | 20.7 | 12.1 |

| Category | Area3 TP (%) | TN | FP | FN | Area4 TP (%) | TN | FP | FN |
|---|---|---|---|---|---|---|---|---|
| A1 | 43.1 | 78.9 | 21.1 | 56.9 | 92.2 | 80.8 | 19.2 | 7.8 |
| A2 | 61.6 | 59.6 | 40.4 | 38.4 | 93.9 | 73.5 | 26.5 | 6.1 |
| A3 | 75.3 | 51.1 | 48.9 | 24.7 | 89.9 | 84.3 | 15.7 | 10.1 |
| A4 | 4.8 | 100 | 0 | 95.2 | 10.8 | 98.2 | 1.8 | 89.2 |
| A5 | 91.6 | 73.4 | 26.6 | 8.4 | 87.7 | 77.9 | 22.1 | 12.3 |
Table 11. The performance measurements True Positive (TP), True Negative (TN), False Positive (FP), and False Negative (FN) associated with the highest accuracies achieved using selected features in the B categories for all four AOIs, as listed in Table 9.

| Category | Area1 TP (%) | TN | FP | FN | Area2 TP (%) | TN | FP | FN |
|---|---|---|---|---|---|---|---|---|
| B1 | 95.4 | 81.4 | 18.6 | 4.6 | 96.9 | 81.3 | 18.7 | 3.1 |
| B2 | 95 | 82.2 | 17.8 | 5 | 97.1 | 81.5 | 18.5 | 2.9 |
| B3 | 93.9 | 88.7 | 11.3 | 6.1 | 92.9 | 91.5 | 8.5 | 7.1 |
| B4 | 92.9 | 92.3 | 7.7 | 7.1 | 95.4 | 91.6 | 8.4 | 4.6 |

| Category | Area3 TP (%) | TN | FP | FN | Area4 TP (%) | TN | FP | FN |
|---|---|---|---|---|---|---|---|---|
| B1 | 64.8 | 58.5 | 41.5 | 35.2 | 97.1 | 83.1 | 16.9 | 2.9 |
| B2 | 63.8 | 59.6 | 40.4 | 36.2 | 97.1 | 83.5 | 16.5 | 2.9 |
| B3 | 92.6 | 71.6 | 28.4 | 7.4 | 94.7 | 90.9 | 9.1 | 5.3 |
| B4 | 87.9 | 79.7 | 20.3 | 12.1 | 93.3 | 93.1 | 6.9 | 6.7 |
Table 12. Average classification overall accuracy improvement over Area1, Area2, and Area4. This table compares the proposed models' accuracy to that of existing models: e.g., it shows that the proposed B4 model improved flood detection by an average of 7.1%, 10%, and 6.3% compared to the SAR-based models used in existing studies, and by an average of 9.7% compared to the non-SAR-based model, i.e., the one using auxiliary features.

| Proposed \ Existing | A1: Intensity | A2: InSAR | A3: PolSAR | A5: Auxiliary (non-SAR-based) |
|---|---|---|---|---|
| B4: Incorporated PolInSAR, SAR Simulation, and Auxiliary Features | 7.1% | 10% | 6.3% | 9.7% |
| B2: PolInSAR and SAR Simulation | 3.4% | 6.5% | 2.7% | 6.1% |
