Article

A Novel Fully Automated Mapping of the Flood Extent on SAR Images Using a Supervised Classifier

by Abdelhakim Benoudjit * and Raffaella Guida
Surrey Space Centre, University of Surrey, Guildford GU2 7XH, UK
* Author to whom correspondence should be addressed.
Remote Sens. 2019, 11(7), 779; https://doi.org/10.3390/rs11070779
Submission received: 28 February 2019 / Revised: 21 March 2019 / Accepted: 27 March 2019 / Published: 1 April 2019
(This article belongs to the Special Issue Imaging Floods and Glacier Geohazards with Remote Sensing)

Abstract
When a populated area is inundated, the availability of a flood extent map becomes vital to assist the local authorities in planning rescue operations and evacuating the premises promptly. This paper proposes a novel automatic way to rapidly map the flood extent using a supervised classifier. The methodology described in this paper is fully automated, since the training of the supervised classifier starts from water and land masks derived from the Normalized Difference Water Index (NDWI), without any intervention from a human operator. Both a pre-event Synthetic Aperture Radar (SAR) image and an optical Sentinel-2 image are needed to train the supervised classifier to identify the inundation on the flooded SAR image. The entire flood mapping process, which consists of preprocessing the images, extracting the training dataset, and finally performing the classification, was assessed on flood events which occurred in Tewkesbury (England) in 2007 and in Myanmar in 2015, captured by TerraSAR-X and Sentinel-1, respectively. The algorithm was found to offer a good compromise between computation time and precision of the classification, making it suitable for emergency situations. In fact, the inundation maps produced for the two flood events were in agreement with the ground truths for over 90% of the pixels in the SAR images. Moreover, the process took less than 5 min to map the flood from a SAR image of more than 41 million pixels for the dataset capturing the flood in Tewkesbury, and around 2 min and 40 s for an image of 19 million pixels of the flood in Myanmar.

1. Introduction

According to a recent report from a major insurance company, flooding was at once the costliest and the deadliest natural disaster in 2016, with considerable human fatalities [1]. In fact, more than half of the natural hazards in 2016 were hydrological, and these were financially the most devastating type of disaster, with 59 billion USD worth of damages. This latter category of disasters is dominated by floods, with 164 flood occurrences against 13 landslides. During the same year, floods caused the greatest loss of life among all the disasters, with 4731 deaths [2]. Therefore, there is a growing need for a quantification of the impact of a flood, to help response authorities mitigate the damages and prioritize at the time of the emergency, as well as to support insurance companies in assessing the losses sustained by each property. A thorough understanding of the potential flood risk can also assist development agencies in building resilient communities.
In the event of flooding, a clear cloud-free image acquired promptly is necessary to have a synoptic view of the affected area. In this context, remotely sensed images are suitable to map inundations, particularly when harsh climatic conditions are encountered and access to the affected site is impractical [3]. Moreover, satellite-borne Synthetic Aperture Radar (SAR) sensors have been extensively used in the last decade to monitor flooding events, taking advantage of their ability to operate independently of sunlight and in the cloudy conditions which are common during inundations. Thanks to the large number of SAR satellites in orbit, the user has a wide choice of datasets in different wavelengths and resolutions. For the time being, X-band sensors like TerraSAR-X and COSMO-SkyMed provide the highest spatial resolution among all SAR sensors [4]. With this metric-resolution configuration, floods can be detected even in complex scenarios, such as urban settlements where streets are relatively narrow [5]. Furthermore, the systematic acquisition plan of the Sentinel-1 C-band satellite increases the likelihood of finding a reference image in the Sentinel Hub archive. A deeper understanding of the flood hazard is achieved by extracting the flood extent and depth as well as assessing the velocity of the floodwater, which helps to efficiently manage the inundation risk.
Operational flood mapping aims to reduce the delay between the acquisition of the satellite image and the delivery of the flood extent map produced from it to the civil protection authorities for immediate relief efforts. This objective is achieved with a fully automated flood detection service [6]. The flood mapping service developed in [6] is an improvement of [7], where the workflow was adapted to operational situations covered by TerraSAR-X. Briefly, the service is triggered when the TerraSAR-X product is downloaded to the FTP server, and at the end of the process the flood extent map is available to visualize online via a Web interface. It should be mentioned that the radar pulse from COSMO-SkyMed, which operates in the X-band like TerraSAR-X, was found to be attenuated by precipitation due to its relatively short wavelength [8]. Besides, due to the time delay between the tasking of TerraSAR-X during an emergency flood situation and the actual acquisition of the image, the peak of the inundation might be missed, since this satellite needs at least 2.5 days to access the requested site [9]. In this case, the systematic acquisition mode of the Sentinel-1 constellation would be of great help. In [10], the flood mapping service proposed in [6] was modified to process Sentinel-1 SAR images. In particular, [6] was improved by adding a post-processing step which consists of eliminating from the flood map areas situated higher than the nearest drainage network, using a thresholding strategy. The process in [10] is in principle completely unsupervised, yet the latter threshold was determined empirically. The Sentinel-1 images were automatically downloaded and preprocessed using the Sentinel Application Platform (SNAP), and then the classification was carried out in a similar way to [7]. It was found in [10] that, among the polarizations offered by the Sentinel-1 sensor, VV-polarized SAR images result in more accurate flood maps than cross-polarized (VH) products. In the same context, [11] suggested that HH polarization achieves the highest accuracy in terms of flood mapping. With the same prospect of an operational mapping of the flood, [12] proposed to detect the flood in vegetated, forested and built-up areas, besides the normal low-backscatter flood (open water), using a fuzzy logic approach which permits the combination of data stemming from different sources (a DEM, a land cover map). The threshold values are retrieved from three selected backscattering models (for agricultural, forested, and urban areas) applied with varying radar parameters to a number of land covers. To keep the process simple, the threshold values are calculated by considering only a few specific flood scenarios. As a result, it will be challenging to map the flood when, for instance, the plant characteristics change and the backscattering model's preconditions become unsatisfied. Although this issue has been addressed by allowing the user to manually adjust the values of the default thresholds, this leads to a lack of automation in the process. An automated flood mapping method based on the approximation of the Probability Density Function (PDF) of the backscatter of the water was proposed by [13]. The threshold value is then defined as the point where the PDF of the backscatter and the gamma distribution modelling the water (which has backscatter values lower than this threshold) start to diverge. The thresholding is followed by a region growing and a pixel-based change detection.
The authors in [14] addressed the challenging task of urban flood mapping in an unsupervised way, by improving the process introduced in [13] with a more objective estimation of the region growing's tolerance criterion. In the context of image segmentation, region growing cannot be generalized, as its cost function is not defined a priori but is instead set empirically according to the specific application. Furthermore, it fails in practice when the edges of the object to detect are too smooth [15]. The automation of flood mapping was also addressed in the literature with a service running regularly, as in [16]. In this project, the multiyear ENVISAT ASAR dataset is tiled into splits of 1° longitude by 1° latitude, and the training step is carried out by making use of the SRTM-derived water mask (SWBD) and the features extracted from the tile, which consist mainly of the backscatter and the incidence angle. Subsequently, pixels from nonlabeled images are classified using Bayes' theorem to get a probability map of water and land, after estimating the probability distributions for each class from trained histograms. Nevertheless, the low spatial resolution of the water mask, which is crucial to the training phase, could negatively impact the precision of the classification, especially for smaller rivers.
Supervised and unsupervised learning methods have already been applied, in a few studies, to tackle SAR flood extent mapping problems. The work in [17] used a self-organizing map (SOM), which is essentially an unsupervised artificial neural network, to segment and then classify a flooded SAR image. SOM being originally a dimensionality reduction technique, a moving window centered around each SAR image pixel forms a vector of neighbouring pixels that is passed as input to the neural network to train it. At the end of the learning process, the central pixel of each sliding window is mapped onto one of the neurons of a 2D grid, with multiple image pixels possibly being assigned to the same neuron. This results in the flooded SAR image being segmented, with each neuron representing a cluster. However, the eventual classification of the neurons on the grid into water and non-water was performed with the help of ground truth pixels extracted manually. The authors in [18] presented several semi-automatic and manual methods to map inundations, by exploiting free satellite multispectral and SAR data. Similarly to the current paper, the authors took advantage of water and vegetation indices like the Normalized Difference Vegetation Index (NDVI) and the Modified Normalized Difference Water Index (MNDWI), although it was the variation in these indices that was expected to reveal the presence of flooding. In another experiment, a supervised classifier was also investigated for the same purpose, by choosing samples manually from different types of land cover. However, the latter two methods were applied separately and on multispectral optical images that can suffer from cloud cover. When the flood mapping was carried out on SAR images, the threshold was manually adjusted either on a single flooded image or on the log-ratio between a pair of images captured before and after the flood. With the aim of mapping urban flooding in [19], an active contour model (snake) is first employed to detect the flood in rural areas. Then, a supervised Bayesian classification is carried out on adjacent flooded urban areas, where the training data for the flood and non-flood classes are chosen from the previously obtained rural flood map and from urban areas situated higher than the rural water level on the LiDAR DSM (Digital Surface Model), respectively. This method is semi-automated, since the initialization of the snake and the selection of the training dataset are both done manually.
This study focuses on the mapping of the flood extent by proposing a fully automated classifier trained on a dataset retrieved from a pre-flood SAR image with the help of an optical Sentinel-2 image. The availability of an optical image allows a water mask to be derived, without any human intervention, from the Normalized Difference Water Index (NDWI); when multiplied by the pre-flood SAR image on a pixel basis, this mask permits the building of a training dataset of backscatter values for the water and non-water classes. The labelled training dataset is thereby extracted from the optical and the pre-flood SAR images in an automated fashion. The preprocessing of the dataset and the classification are invoked from an online web application to map the extent of the inundation present on a post-flood SAR image. This application is intended mainly for emergency situations. As a consequence, it is of extreme importance to extract the flood map as quickly as possible and in an unsupervised way. The current paper is structured as follows. In the next section (Section 2), two datasets acquired with X-band TerraSAR-X and C-band Sentinel-1 of the inundations in Tewkesbury in 2007 and in Myanmar in 2015, respectively, are presented. The SAR images depicting these flood events will serve later on to assess the introduced algorithm against a validation dataset. Afterwards, the theory behind the supervised classifier used to cluster the flooded SAR image into two classes and the post-processing utilized for refining the classified flood map, as well as the entire automated flood mapping process, are explained in detail in Section 3. The results of the flood mapping using the proposed method are validated and discussed in the following section (Section 4). Finally, this paper closes with a conclusion about the strengths and the constraints of this algorithm.

2. Case Studies and Datasets

2.1. Tewkesbury 2007

This case study concerns the town of Tewkesbury (South-West England), which was flooded in July 2007. The town of Tewkesbury is situated at the junction of the Severn and the Avon rivers, and consequently the damage caused by the inundation there was considerable, with the water propagating to the town center. Both a pre-flood TerraSAR-X image (Figure 1a) and a Sentinel-2 optical image of the studied area, shown in Figure 1c with the ground truth superimposed on it, are required to train the supervised algorithm. The trained classifier will subsequently be used to classify a post-flood TerraSAR-X image (Figure 1b) of the flooded town of Tewkesbury into inundated and non-inundated pixels. This SAR image captured the flood in Tewkesbury on 25 July 2007, while TerraSAR-X was still in its commissioning phase. The pre-flood SAR image in this dataset was actually taken a year after the flooded image, on 22 July 2008, in dry conditions. The availability of a pair of SAR images acquired in the same configuration (3 m-resolution Stripmap and HH-polarized, as reported in Table 1) over the same area and processed as Single Look Slant Range Complex (SSC) products makes this dataset suitable for change detection analysis. The cloud-free Sentinel-2 optical image was acquired in the same season and the same month as the pair of SAR images, but eight years later than the pre-flood SAR image (19 July 2016).
To confirm that no change in the land cover occurred between the time the pre-flood SAR image was taken and the acquisition of the Sentinel-2 optical image, an optical image captured by the Thematic Mapper (TM) sensor of Landsat-5 one month before the pre-flood SAR image and in the same season (8 June 2008) was compared visually with the latter Sentinel-2 image. Overall, the Landsat-5 and the Sentinel-2 images showed consistency in terms of water bodies, at least for the subset studied. For future flood events, thanks to the 5-day revisit time of the Sentinel-2 constellation currently in orbit, it should be easier to find a cloud-free optical image acquired in the same season and the same year as the reference SAR image, to avoid any potential change in the land cover. In the current case study, the previous Landsat-5 product could not be considered for the subsequent flood mapping methodology due to its coarse 30 m spatial resolution.

2.2. Myanmar 2015

Flooding hit Myanmar during the monsoon season between July and September 2015. The area studied in this paper is situated in the South-East of the country, where the Salween River burst its banks in August of the same year. Sentinel-1 took a SAR image during the flooding on 6 August 2015 (Figure 2b). Moreover, thanks to the systematic acquisition plan of Sentinel-1, a pre-flood SAR image taken on 19 March 2015 with the same acquisition parameters as the flooded SAR image (Figure 2a) was also available, capturing the same area in normal conditions. Both 20 m-resolution SAR images were acquired in VV polarization and processed as Ground Range Detected (GRD) products (Table 2). A cloud-free optical image was acquired a few years after the flooding by Sentinel-2 (Figure 2c), in the same season as the dry SAR image, to avoid inconsistencies between this pair in terms of presence or dryness of water bodies. Thanks to the open data policy of the Copernicus programme, the Sentinel-1 and Sentinel-2 images in this dataset are distributed online for free. Conversely, the commercial images from TerraSAR-X used in the previous case study can only be made freely available, in a limited number, for scientific purposes after a research proposal has been accepted. This last point introduces a constraint on the access to SAR data following a flood disaster, and shows the advantage satellites belonging to the Copernicus programme have over commercial ones in the same context.

3. Methodology

3.1. Stochastic Gradient Descent

The Gradient Descent (GD) is an iterative optimization algorithm which aims to find the minimum of a loss (cost) function [22]. It is based on the rationale that the cost function is minimized by moving in the direction opposite to its gradient. The learning rate $\eta$ in Equation (1) serves to regulate the size of the steps taken down the slope (i.e., along the negative of the gradient):
$\omega_{i+1} = \omega_i - \eta \nabla_{\omega_i} L(\omega_i)$  (1)
where:
  • $\omega_{i+1}$: The model parameters to estimate,
  • $\omega_i$: The model parameters estimated in the previous iteration,
  • $\eta$: The learning rate,
  • $L$: The loss (cost) function.
The learning phase in the standard Gradient Descent (also called the Batch Gradient Descent, BGD) requires the derivative to be calculated for all the samples in the training dataset in every iteration. The Batch Gradient Descent is consequently computationally intensive, especially when the training set is very large [22]. The Stochastic Gradient Descent (SGD) used in this paper is another variant of the Gradient Descent, which is instead trained on a single randomly chosen training sample at a time. This online learning faculty of SGD makes it more scalable and quicker to train, since its execution time is not constrained by the size of the training dataset [23]. SGD was in fact adopted in this study because, in an operational context, the objective is to produce the flood map as quickly as possible; satellite images being generally quite large, the chosen classifier needs to fit the training dataset rapidly regardless of the number of samples in it. The cost function used to train the classifier in this paper is the Hinge loss function given by:
$L(x_j, y_j) = \max(0, 1 - y_j \cdot (\omega x_j + b))$  (2)
where:
  • $\omega$ and $b$: The predicted model parameters,
  • $x_j$: The input sample,
  • $y_j$: The target class.
This function acts as a classification metric which appraises the linear model predicted with SGD in every iteration of the learning phase, and modifies its two parameters ($\omega$, $b$) accordingly using Equation (1). In the case of the SGD classifier, $\omega$ corresponds to the weight assigned to the backscatter feature in the decision function, and $b$ is its intercept. An interesting property of the Hinge loss function is that it punishes both misclassified samples and those that were correctly classified but with a low confidence, in order to maximize the margin between the classes.
A regularization term is also added to the loss function $L$ in Equation (1) to help the predicted model generalize to unlabeled data. The idea is to penalize complex models prone to overfitting, which are characterized by larger values of the parameters $\omega_i$. The commonly used regularization functions are:
$L_1 = \sum_{i=1}^{m} |\omega_i|$  (3)

$L_2 = \sum_{i=1}^{m} \omega_i^2$  (4)
The optimal values of the hyperparameters in Table 3 (the settings of the learning algorithm), which are the number of iterations of the SGD, the loss function, and the regularization term and its coefficient $\alpha$, can be determined by cross-validation, where the classifier is trained on one part of the training dataset with different values of these hyperparameters and then validated on the rest of the same dataset. The set of hyperparameter values producing the best accuracy scores during the cross-validation can then be used for the subsequent training on the whole learning dataset. Because the training of the classifier and the test are performed in this case on two distinct datasets (the pre-flood and the post-flood SAR images, respectively), no improvement in terms of accuracy was observed with the cross-validation. Furthermore, for the sake of a quicker computation time, the default hyperparameter values of the SGD were kept and are reported in Table 3, except for the number of iterations, which was increased to 1000.
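For illustration, the following minimal sketch shows how a classifier matching this description could be configured with Scikit-learn's SGDClassifier (hinge loss, L2 regularization, 1000 iterations, as in Table 3). The training arrays are illustrative placeholders; in the actual process they come from the automated extraction described in Section 3.3.2.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

# Illustrative training data: a single feature (backscatter in dB) per
# sample, labeled 1 (water) or 0 (land). In the real chain, these samples
# are extracted automatically from the pre-flood SAR image (Section 3.3.2).
X_train = np.array([[-22.1], [-20.5], [-8.3], [-6.9]])
y_train = np.array([1, 1, 0, 0])

# Hinge loss and L2 penalty are SGDClassifier's defaults; the number of
# iterations is raised to 1000 as reported in Table 3.
clf = SGDClassifier(loss="hinge", penalty="l2", alpha=1e-4,
                    max_iter=1000, shuffle=True)
clf.fit(X_train, y_train)

# The fitted linear model exposes the weight (omega) and the intercept (b)
# of the decision function omega * x + b.
print(clf.coef_, clf.intercept_)
```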

3.2. Graph Cuts

A graph $G = (V, E)$ is defined by its vertices (nodes) $V$ linked by directed edges $E$, where each edge connecting two nodes has a weight. Two terminal nodes, a source $s$ and a sink $t$, are added to the two extremities of the previous directed and weighted graph $G$ to form a flow network. In the context of a binary image segmentation, the image pixels are represented by the non-terminal graph nodes, while the terminal nodes ($s$ and $t$) are the labels to assign to each pixel [24]. With this in mind, an s-t cut divides the flow network into two subsets $S$ and $T$, in such a way that the source $s$ belongs to $S$ and the sink $t$ to $T$. This kind of cut can be seen as a binary classification or a labeling task, where each non-terminal node (pixel) is assigned either to $S$ or $T$, depending on whether the pixel belongs to the foreground or the background, for instance. Two types of edges can also be distinguished in flow networks. N-links join together pixel nodes located in the same neighborhood on the image, whereas t-links connect each pixel node to both terminals ($s$ and $t$). The weight of a t-link penalizes a node that is mislabeled compared to a prior knowledge, and that of an n-link ensures a smooth and consistent labeling by encouraging pixels in the same neighborhood to fall in the same class after the graph is split. The weights of the n-links and t-links can be defined mathematically by the energy terms $E_{smooth}(L)$ and $E_{data}(L)$, respectively, which appear in the energy function to minimize [24]:
$E(L) = E_{data}(L) + E_{smooth}(L)$  (5)
Graph cut methods intend to find the labeling L that minimizes the energy function E in the previous equation. When each energy term is replaced with its respective expression, the energy function becomes [24]:
$E(L) = \sum_{p \in P} D_p(L_p) + \sum_{(p,q) \in N} V_{p,q}(L_p, L_q)$  (6)
where:
  • $D_p$: The data penalty term for a pixel $p$,
  • $V_{p,q}$: The smoothness or regularization term between neighboring pixels $p$ and $q$,
  • $P$: The set of image pixels $p$,
  • $N$: The set of pairs of adjacent pixels $(p, q)$,
  • $L_p$: The labeling to assign to a pixel $p$.
The flow network in Figure 3 shows two pixel nodes $p_0$ and $p_1$, besides the two terminal nodes $s$ and $t$, and illustrates how each edge connecting a pair of nodes is assigned the appropriate energy term.
Graph-cut-based energy minimization techniques were already used in the past to restore noisy binary images in [25], which demonstrated that simulated annealing could get stuck in a local minimum during the minimization of the energy. The Boykov–Kolmogorov min-cut/max-flow algorithm (BKA), used in the current study, has proved effective in many image processing applications like image segmentation, stereo matching, and image restoration [24]. It extends the idea in [25] by proposing a graph cut algorithm that performs quickly and even generalizes to N-dimensional image segmentation problems [26]. Analogously to when they were first proposed in [25], graph cuts are integrated into this paper's methodology as a post-processing step to enforce the spatial continuity between pixels assigned to the same class. In other words, the objective of graph cuts as a post-processing step is to take into account the spatial proximity between the pixels in the binary flood map, by reclassifying the noisy pixels resulting from the previous pixel-based classification.
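To make the construction concrete, the sketch below builds the two-pixel flow network of Figure 3 with the PyMaxflow library used later in this paper (Section 3.3.5). The edge capacities are illustrative placeholders for the $D_p$ and $V_{p,q}$ terms, not values from the actual flood maps.

```python
import maxflow

# Two non-terminal nodes (pixels p0 and p1); the source s and sink t are
# implicit in PyMaxflow.
g = maxflow.Graph[float]()
p0, p1 = g.add_nodes(2)

# t-links: the capacities towards the source and the sink play the role of
# the data terms D_p(L_p). The values here are purely illustrative.
g.add_tedge(p0, 0.0, 1.0)
g.add_tedge(p1, 1.0, 0.0)

# n-link: the smoothness term V between the two neighboring pixels, with
# the same capacity in both directions.
g.add_edge(p0, p1, 1.0, 1.0)

flow = g.maxflow()  # value of the minimum cut
# get_segment returns 0 for nodes on the source side and 1 for the sink side.
print(flow, g.get_segment(p0), g.get_segment(p1))
```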

3.3. Flood Extent Mapping

The process chain shown in the flowchart in Figure 4 consists of a supervised classifier trained automatically to build the model that maps the floodwater on the post-flood SAR image. Briefly, a labelled training dataset of water and land classes is gathered in an automated way from a Sentinel-2 optical image and a pre-flood SAR image, and is then fed into the classification algorithm during the learning phase. Although the classifier is nominally supervised, in the sense that a labelled training dataset needs to be provided beforehand to train it, the selection and the labelling of the training samples are carried out automatically in this paper. Ultimately, the trained classifier will be ready to classify the pixels of the flooded SAR image individually into flood and non-flood pixels. Each phase of the process is explained in more detail below.

3.3.1. Preprocessing

The SAR images taken prior to and after the flooding were radiometrically calibrated and speckle-filtered with a 5 × 5 Gamma Map filter to avoid producing a noisy flood map later. Afterwards, masks of shadow and layover were extracted automatically from the SAR images with the help of the free SRTM DEM. Pixels where these geometric distortions appear are systematically disregarded during the training and classified as unflooded thereafter. The shadow and layover pixels detected on the pair of SAR images showing the studied area in Myanmar appear in a homogeneous black color on the mountains located to the right of the river (Figure 2a,b). The SAR images with masked shadow and layover were eventually converted to dB and projected to the ground range with a terrain correction (TC) using the same DEM. The subsetting operation was left until the very end of the preprocessing, to get perfectly rectangular images after the terrain correction.
The Sentinel-2 optical image used to calculate the NDWI (Normalized Difference Water Index) has to be atmospherically corrected by processing it to a Level-2A product on the user side. This preprocessing step aims to remove the effect of the atmosphere (i.e., clouds, aerosols, gases…) from optical satellite images, and is necessary before computing the NDWI. Recently, the Sentinel Hub started distributing online products already processed as Level-2A Bottom-Of-Atmosphere (BOA) Sentinel-2 products.
The pre-flood SAR image also needed to be collocated with the Sentinel-2 image using the Sentinel Application Platform (SNAP) Python API, prior to the extraction of the SAR training samples from it.
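As an indication of how this preprocessing can be scripted, the condensed sketch below chains the main SNAP operators through the snappy Python API. The operator and parameter names follow SNAP's Graph Processing Framework conventions but should be checked against the installed SNAP version; the collocation with the Sentinel-2 product is performed in a similar fashion with SNAP's Collocate operator.

```python
# A condensed sketch of the SAR preprocessing with SNAP's Python API
# (snappy). Operator and parameter names follow SNAP's GPF conventions,
# but should be verified against the installed version.
from snappy import GPF, HashMap, ProductIO

def preprocess_sar(path):
    product = ProductIO.readProduct(path)

    # Radiometric calibration.
    product = GPF.createProduct('Calibration', HashMap(), product)

    # 5 x 5 Gamma Map speckle filter (as described above).
    params = HashMap()
    params.put('filter', 'Gamma Map')
    params.put('filterSizeX', 5)
    params.put('filterSizeY', 5)
    product = GPF.createProduct('Speckle-Filter', params, product)

    # Conversion to dB, then Range-Doppler terrain correction with the
    # SRTM DEM.
    product = GPF.createProduct('LinearToFromdB', HashMap(), product)
    params = HashMap()
    params.put('demName', 'SRTM 3Sec')
    product = GPF.createProduct('Terrain-Correction', params, product)
    return product
```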

3.3.2. Extraction of the Training Dataset

The Normalized Difference Water Index (NDWI) was initially presented in [27] to detect water bodies from multispectral optical images. The NDWI mathematical expression is given by:
$NDWI = \frac{Green - NIR}{Green + NIR}$  (7)
where, similarly to [28]:
  • $Green$: Band 3 in the Sentinel-2 product,
  • $NIR$: Band 8 in the Sentinel-2 product.
It is based on the idea that water has at the same time a high reflectance in the green band and a low one in the near-infrared (NIR) band, while the other types of land cover to disregard (soil and vegetation) appear brighter in the latter band [29]. The NDWI was originally calculated from multispectral images captured by Landsat's Multispectral Scanner (MSS) [27], but was estimated in later works from Landsat's ETM+ bands [29], high-resolution Quickbird ones [30], and even from Sentinel-2 images [28]. In rural areas, pixels with a positive NDWI are expected to correspond to water. However, this can lead to false alarms in urban areas, where house rooftops, for instance, were found to have positive NDWI values, although lower than that of water. Consequently, a higher threshold was determined in [30] (Equation (8)) by the same author who proposed the NDWI, in order to identify the water surfaces of swimming pools, which constitute a common place for mosquitoes to lay their eggs. The NDWI threshold could nevertheless fail to detect water surfaces that are concealed by protruding vegetation or shadowed by trees or buildings.
$Class = \begin{cases} Water, & \text{if } NDWI \geq 0.3 \\ Land, & \text{otherwise} \end{cases}$  (8)
Other water indices have also been proposed in the literature, like the Modified Normalized Difference Water Index (MNDWI) [29], which is calculated by replacing the near-infrared (NIR) band in Equation (7) with the shortwave infrared (SWIR) one. The MNDWI was proposed to prevent the obtained water mask from including false positives from urban areas, based on the observation that the spectral response of built-up land is higher in the SWIR band than in the green band. Therefore, built-up areas would be expected to result in negative MNDWI values, unlike their NDWI values. However, the SWIR band used to calculate the MNDWI has a 20 m spatial resolution in Sentinel-2 products, as opposed to the 10 m spatial resolution of the NIR and visible bands (Table 4). As a result, the calculation of the MNDWI from Sentinel-2 bands should be preceded either by a downsampling of the green band to 20 m resolution, or by a sharpening of the SWIR band to 10 m resolution [28]. Besides, according to a few experiments carried out on the Sentinel-2 datasets used in this paper, the MNDWI still required a threshold greater than zero to identify the water, although this should be lower than the NDWI one according to [29]. Lastly, most high-resolution optical satellite sensors currently in orbit have a NIR band but lack a SWIR band.
In the current study, the NDWI index is calculated using Equation (7) from the green and near-infrared (NIR) bands of the Sentinel-2 Bottom-Of-Atmosphere (BOA) Level-2A optical image, which was collocated with the preprocessed pre-flood SAR image in the previous step. Then, by applying the threshold in Equation (8) to the NDWI, a water mask is produced. The land mask is simply the logical negation of the water mask. The NDWI-derived water and land masks were separately multiplied with the pre-flood SAR image present in the same stacked product, to extract from the latter image the pixels belonging to the water and land classes, respectively. This step depends on an accurate collocation between the optical and the pre-flood SAR images, so that the location of one class (water or land) in the former product matches its location in the latter. Afterwards, an equal number of samples (1000 samples/class) was taken randomly from the extracted water and land pixels and used as a training dataset. Having training classes with the same number of samples helps to avoid favoring the majority class in the subsequent step. One prerequisite is that the training samples are randomly shuffled prior to the learning phase [23], to prevent the classifier from recognizing, in the post-flood SAR image, the last class it was trained on better than the first one.
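The following numpy sketch summarizes this extraction step, assuming the collocated bands have already been loaded as arrays: green and nir from the Sentinel-2 Level-2A product, and sar_db holding the pre-flood SAR backscatter in dB (the variable names are illustrative).

```python
import numpy as np

# NDWI from the collocated Sentinel-2 bands (Equation (7)).
ndwi = (green - nir) / (green + nir)

# Water mask from the threshold in Equation (8); the land mask is its
# logical negation.
water_mask = ndwi >= 0.3
land_mask = ~water_mask

# Draw an equal number of backscatter samples (1000 per class) from the
# pre-flood SAR image, as described above.
rng = np.random.default_rng()
water = rng.choice(sar_db[water_mask], 1000, replace=False)
land = rng.choice(sar_db[land_mask], 1000, replace=False)

X_train = np.concatenate([water, land]).reshape(-1, 1)  # single dB feature
y_train = np.concatenate([np.ones(1000), np.zeros(1000)])

# Shuffle the samples prior to the learning phase [23].
idx = rng.permutation(len(y_train))
X_train, y_train = X_train[idx], y_train[idx]
```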

3.3.3. Classification of the Post-Flood SAR Image

The learning dataset serves to train the supervised classifier to recognize water and land in the post-flood SAR image, using algorithms such as the Stochastic Gradient Descent (SGD) included in the Scikit-learn library. Thanks to the online learning ability of the SGD, recalled in Section 3.1, the classification is performed very quickly even when the training dataset is quite large. It is not necessary in this case to standardize the training and test datasets to zero mean and unit variance, since there is only one feature (the backscatter in dB). Eventually, the trained classifier was employed to segment the post-flood SAR image into water and non-water (land) classes. In summary, the classifier is trained on water and land pixels from the pre-flood SAR image, and used to categorize the post-flood SAR image pixels into the same two classes. The classification is thereby based on the assumption that the flood has the same low-backscatter signature on the SAR image as permanent water bodies and rivers. This assumption can be confirmed visually, as their similar low radar return makes them easily distinguishable from the rest of the land cover. Therefore, the trained classifier does not differentiate floodwater from permanent water bodies on the flooded SAR image, since both of them are classified as water. The water map produced is specific to the post-flood SAR image, in the sense that potential changes in river morphologies, relative to the date of acquisition of the pre-flood SAR image, are captured by the latter thematic map.
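Continuing the earlier sketches, the pixel-based classification of the post-flood image then reduces to a prediction over the flattened backscatter array; clf is the classifier from the earlier SGD sketch and flood_db is a hypothetical array holding the post-flood backscatter in dB.

```python
# Pixel-by-pixel classification of the post-flood SAR image with the
# trained classifier, then reshaping back to the image grid.
pixels = flood_db.reshape(-1, 1)            # one dB feature per pixel
labels = clf.predict(pixels)                # 1 = water, 0 = land
flood_map = labels.reshape(flood_db.shape)  # binary flood map
```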

3.3.4. Post-Processing of the Flood Map

The Boykov–Kolmogorov min-cut/max-flow algorithm (BKA) [24], used during the post-processing of the flood map, requires the values of both the penalty and the regularization energy terms to be set. Knowing that the input flood map image has binary values ($p_i$), the penalty (data) term is null when the pixel is assigned the same label it received from the previous classification, and equal to one otherwise:
$D = \begin{cases} p_i, & \text{if } x_i = 0 \\ 1 - p_i, & \text{if } x_i = 1 \end{cases}$  (9)
where:
  • $p_i$: The value of the pixel $i$ in the input image,
  • $x_i$: The label assigned to the pixel $i$.
For the smoothness term, an 8-connectivity neighborhood was considered, so that each pixel is connected with an n-link to the 8 pixels around it. The weights of the n-links were set empirically to a constant ($V = 1$) that matches the weights of the t-links (Equation (9)) in terms of order of magnitude ($D \in \{0, 1\}$):

$V = 1$  (10)
The idea behind this regularization term is that a strong edge with a high weight connecting a pair of adjacent nodes will not be broken during the cut, and these adjacent pixels will therefore be encouraged to take the same label [31]. In a way, the smoothness term balances out the penalty term during the minimization of the total energy. This post-processing of the flood map takes into account the spatial contiguity between the image pixels in order to remove the noise created by the previous pixel-based classification. In fact, the higher the value of the regularization term, the smoother the segmentation.
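A possible implementation of this post-processing with PyMaxflow [32], following the library's binary image restoration example, is sketched below; flood_map is the binary output of the pixel-based classification, and the choice of which terminal carries $p_i$ or $1 - p_i$ follows that example.

```python
import numpy as np
import maxflow

# Build one graph node per pixel of the binary flood map.
g = maxflow.Graph[float]()
nodeids = g.add_grid_nodes(flood_map.shape)

# n-links: constant smoothness term V = 1 (Equation (10)) over an
# 8-connectivity (Moore) neighborhood.
structure = np.array([[1, 1, 1],
                      [1, 0, 1],
                      [1, 1, 1]])
g.add_grid_edges(nodeids, weights=1, structure=structure, symmetric=True)

# t-links: data term D of Equation (9); for a binary pixel value p_i, the
# capacities towards the two terminals are p_i and 1 - p_i.
g.add_grid_tedges(nodeids, flood_map, 1 - flood_map)

g.maxflow()
# get_grid_segments returns True for nodes on the sink side of the cut;
# negating it recovers the smoothed binary flood map.
smoothed_map = np.int_(np.logical_not(g.get_grid_segments(nodeids)))
```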

3.3.5. Implementation

The flood extent mapping algorithm can be called from a web application built with the Django web framework. The preprocessing of the SAR images calls the requested SNAP operators using its Python API. As for the SGD classification algorithm, it is implemented in Scikit-learn, a machine learning library written in Python. The implementation of the max-flow algorithm (BKA) utilized for the post-processing of the flood map is the one provided by a Python wrapper (PyMaxflow) available in [32], which is itself based on [24]. The flood mapping is followed by the visualization of the produced flood map raster, served by GeoServer on an Open Street Map rendered using Mapbox. The developed application is cost-effective, since all its dependencies are open-source libraries (SNAP, Python, Scikit-learn, Django, PyMaxflow). It is also cross-platform and can be deployed on a server and queried remotely, thanks to its ability to be called from a web browser.

4. Results and Discussion

4.1. Accuracy Metrics

The overall accuracy of the classification was calculated for the resulting flood maps relative to the ground truth associated with them, according to the equation given in [33]. Besides the overall accuracy, two other accuracy metrics are commonly used in remote sensing to assess the classification of each class separately. These are the producer's accuracy and the user's accuracy, which are the complements of the omission error and the commission error, respectively [34]. In addition to these accuracy metrics, the flooded area was also easily calculated by multiplying the number of pixels classified as flooded by the pixel area of the SAR image (see the sketch after the definitions below):
$area = n \cdot s$  (11)
where:
  • $area$: The flooded area in km²,
  • $n$: The number of flooded pixels,
  • $s$: The pixel area in m², which is equal to the squared pixel spacing.
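As a sketch of how these quantities can be derived from the binary prediction and ground-truth maps (illustratively named predict and valid), assuming Sentinel-1's 10 m pixel spacing for the area conversion:

```python
import numpy as np

# Confusion-matrix counts from the binary prediction and ground truth.
tp = np.sum((predict == 1) & (valid == 1))   # flooded, correctly mapped
tn = np.sum((predict == 0) & (valid == 0))   # dry, correctly mapped
fp = np.sum((predict == 1) & (valid == 0))   # false alarms
fn = np.sum((predict == 0) & (valid == 1))   # missed flood

overall_accuracy = (tp + tn) / (tp + tn + fp + fn)
producers_accuracy = tp / (tp + fn)   # complement of the omission error
users_accuracy = tp / (tp + fp)       # complement of the commission error

# Flooded area (Equation (11)): with a 10 m pixel spacing, the pixel area
# is s = 100 m^2; dividing by 10^6 converts m^2 to km^2.
area_km2 = np.sum(predict == 1) * 100 / 1e6
```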

4.2. Tewkesbury 2007

The extent of the flooding mapped with the SGD in the town of Tewkesbury is shown in Figure 1d. The flood map obtained was validated against a ground truth vector (the red mask in Figure 1c), which was collected by the Environment Agency, using in situ surveys and aerial photography [20], in the same town of Tewkesbury at least a day before the acquisition of the post-flood SAR image, between the 20th and the 24th of July 2007. The ground truth vector first had to be rasterized and confined to the area of interest; then the flood map produced in this paper was compared with it on a pixel level. After an assessment of the results obtained with the SGD classifier (Table 5), the accuracy of the classification was found to be around 77%. Moreover, from the producer's accuracy of the flood class (61.18%) in the same table, it can be inferred that there is an underestimation of the inundation, and consequently an overestimation of the non-flooded areas. The underestimation of the floodwater is also clear from the flooded area presented in km² in Table 6. One type of land cover possibly responsible for the missed classifications is flooded urban areas (clearly flooded in the aerial images in [19]), which are characterized by an increase in the backscatter, while the classifier was not trained to recognize their signatures. However, it should be noted that the flooded SAR image was acquired on a different day (25 July 2007) than the ground truth (20–24 July 2007), and therefore a flood recession between the two dates should not be ruled out.

4.3. Myanmar 2015

For the Myanmar 2015 dataset, the validation was performed against flood maps obtained from the same Sentinel-1 dataset by the United Nations Institute for Training and Research (UNITAR) [21]. The preprocessing step, consisting of masking out the geometric distortions SAR images suffer from, allowed parts of the topographic shadows, which otherwise could be misclassified as water due to their similarly dark appearance on the SAR image, to be removed. The results in Table 7 were superior to those achieved on the previous dataset, reaching an accuracy of over 90%. Due to the larger pixel spacing of the Sentinel-1 SAR images (10 m in Table 2), the missed classifications translate into a larger difference between the area inundated in km² on the flood map and on the ground truth in Table 8.

4.4. Classification of Urban and Non-Urban Areas in Tewkesbury 2007

The validation of the results for Tewkesbury 2007 went one step further and the flood maps were assessed separately in the urban and non-urban areas. With this in mind, the urban mask was obtained by thresholding the 25 m-resolution land cover map for the UK which was downloaded from [35], while the non-urban mask was simply taken as the logical negation of the latter mask. The same process described in the previous sections was applied on the SAR image as before, but the validation was carried out on the two types of land use separately.
Table 9 and Table 10 give an assessment of the results obtained by the SGD classifier in the urban and the non-urban regions of the SAR image in Figure 1b, respectively. As expected, the classification is slightly more accurate in non-urban areas than in urban ones (77.68% against 74.7%, respectively). Nevertheless, in terms of False Negatives, the classification in urban settlements performed much worse than in non-urban ones (a producer's accuracy close to 5% against 68%, respectively). As stated in the previous section, this might be explained by the fact that the classifier missed flooded urban settlements characterized by an increase in the backscatter, while the training dataset identifies only dark open water on the pre-flood SAR image. Alternatively, the reason referred to in the previous section regarding the different acquisition dates of the flooded SAR image and the ground truth could also be responsible for the high missed detection in urban areas.

4.5. Maps of False Negatives and False Positives

The maps of False Negatives (FNs) and False Positives (FPs) for the Tewkesbury and Myanmar datasets (Figure 5 and Figure 6, respectively) were produced in QGIS from the flood and ground truth binary maps using the following equation (a short sketch of its computation is given after the definitions below):
$Error = \begin{cases} FN, & \text{if } (predict - valid) \cdot mask = -1 \\ FP, & \text{if } (predict - valid) \cdot mask = 1 \end{cases}$  (12)
where:
  • $Error$: The type of misclassification in the masked area,
  • $FN$: A false negative,
  • $FP$: A false positive,
  • $predict$: The predictions,
  • $valid$: The ground truth,
  • $mask$: The urban or non-urban mask, if applicable.
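In numpy terms, Equation (12) amounts to a masked difference of the two binary maps (predict, valid, and mask are the illustrative names used earlier):

```python
import numpy as np

# Equation (12): the difference between the prediction and the ground
# truth, restricted to the (urban or non-urban) mask, separates the two
# types of misclassification.
diff = (predict.astype(int) - valid.astype(int)) * mask
fn_map = diff == -1   # flooded in the ground truth, missed by the classifier
fp_map = diff == 1    # dry in the ground truth, mapped as flooded
```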
For the Tewkesbury dataset, the FNs and FPs were extracted separately for urban and non-urban areas, thanks to the availability of the UK land cover map [35]. The yellow field inside the red triangle in Figure 5d was clearly not flooded on the post-flood SAR image of Tewkesbury (Figure 1b), which means that its inclusion among the missed classifications was probably due to a flood recession in this area. There were also some False Negatives in the urban regions of the same image (Figure 5d), appearing in yellow, where the resolution of the SAR image was not high enough to detect the flood in narrow roads. As for the False Positives appearing in magenta inside the red circle in Figure 5c, this rural area is clearly flooded, since it appears dark due to the specular reflection from the water. One possible explanation for it being considered a False Positive is that, because the ground truth and the SAR image were not acquired on the same date, different areas were flooded on the two dates of acquisition. Other false alarms appearing in magenta in the same image (Figure 5c) might come from low-growing grass or bare fields, which exhibit the same low backscatter that characterizes water on radar imagery, whereas in the lower-left corner of the image the shadow from the trees could have resulted in some false positives.
Regarding the Myanmar 2015 flood map, the misclassification caused by False Negatives (yellow pixels in Figure 6) is generally located on the boundaries of the flooded zones. It is suspected that the speckle filter smoothed out the boundaries of the flooding on the post-event SAR image, which led the pixels constituting them to be wrongly classified as dry. The persistent dark topographic shadow, which remains even after masking out the geometric distortions, creates false positives visible in magenta inside the two red circles in Figure 6. This might be explained by the relatively coarse resolution (around 30 m) of the DEM employed during the extraction of the shadow and layover from the SAR images.

4.6. Computation Time

Each operation performed during the flood mapping process was profiled individually, by measuring the time it takes to run. The time taken to upload the optical and SAR images to the Web server, as well as the time taken to load the web page, are included in the execution time. The upload time is not considerable when working locally; however, it depends very much on the internet speed and the size of the images if the flood mapping Web application is deployed remotely. The computation times for the preprocessing of the SAR images, the generation of the training dataset, and the classification plus the post-processing are shown in Table 11 and Table 12, for these operations executed on the Myanmar 2015 and the Tewkesbury 2007 datasets, respectively. Overall, the processing time of the whole flood mapping chain for the Myanmar 2015 dataset is around 2 min and 40 s, considering that the input SAR subsets have over 19 million pixels each prior to the preprocessing. As for the Tewkesbury 2007 dataset, the total processing time is nearly 1 min for SAR subsets of 5 million pixels (Table 12).

4.7. Comparison with the Literature

The results in [7,10] and [36] were chosen to benchmark the flood mapping chain presented in this paper, since these methods were also tested on the Tewkesbury 2007 dataset. However, the performance of this paper's method in terms of computation time cannot be objectively compared with these methodologies, since they were run on machines with different characteristics and on image subsets of different dimensions. In absolute terms, this paper's method finished the whole processing in under 5 min (Table 13) on a large subset of the Tewkesbury 2007 SAR dataset of more than 41 million pixels (Figure 7), which is more than 8 times larger than the subset used in the previous section (Section 4.6). Most of the execution time (more than 3 min) was in fact dedicated to the preprocessing of the pair of SAR images, while the post-processing with graph cuts also required a considerable amount of time to improve the flood map (over 1 min), because the image was quite large. The current approach is thus suitable for emergency situations, when it is necessary to have a quick overview of the flood situation. In fact, the time taken to produce the flood map is of the same order of magnitude as the targeted response time for critical calls in the UK. According to the legislation, the Ambulance and the Fire & Rescue Services are expected to arrive at the location of the reported incident in 8 and 10 min, respectively, 75% of the time [37]. By way of comparison, a method with the similar purpose of mapping the inundation in near real-time in [7] took around 2.67 min on a smaller subset of 5.4 million pixels. Furthermore, for [10], which adapted the process in [7] to Sentinel-1 SAR images, the whole chain took 45 min to map the flood on the entire Ground Range Detected (GRD) image, but this included the downloading of the SAR images, which normally depends on the internet speed. Accordingly, the process described in the current study extends the literature on operational flood mapping from SAR images with a novel method suitable mainly for emergency situations.
The precision of the classification was assessed by running this algorithm on a workstation equipped with a 6-core 2.67 GHz Intel Xeon X5650 CPU and 24.0 GB of RAM. The overall accuracy (90.66% in Table 13) was found to be comparable to other studies in the literature of operational flood mapping (95.44% in [7]). The accuracy results were also compared by land use (urban and rural) to [36], who performed the flood mapping on a subset of the Tewkesbury 2007 dataset of roughly the same size. The accuracy obtained in rural areas with the algorithm described in this paper (90.81% in Table 13) is quite close to what [36] achieved on a similar subset (89%). As for the urban areas, the accuracy in [36] dropped from 75% to 57%, depending on whether the shadow and layover areas, where the flood cannot be detected, were masked or not. The accuracies for rural and urban areas achieved in this paper are not too far from one another, as can be seen in Table 13 (90.81% for non-urban areas and 88.97% for urban ones), considering that the DEM is too coarse to detect the geometric distortions in Tewkesbury's town center and that the surface is quite flat.

4.8. Training Dataset

Besides the water and non-water classes, the automated process proposed in this paper offers the possibility of training the classifier on other classes, if additional training subsets can be extracted in an unsupervised way. However, it may prove difficult to collect training datasets, for example, for flooded urban or vegetated areas at the time of the disaster. In a separate experiment, the flood mapping classifier was effectively trained with a dry vegetation class retrieved similarly to the water class, by thresholding the Normalized Difference Vegetation Index (NDVI) according to [38]. The vegetation threshold ($NDVI \geq 0.2$) normally corresponds to the NDVI values exhibited by shrubs and grassland. The previously heterogeneous non-water subset was thus split again into vegetation and non-vegetation. However, the accuracy results, for the large subset of Tewkesbury in particular, decreased by almost 10% after the vegetation was considered as a separate class. For this reason, only the water and non-water classes were maintained in the previous tests.
Finally, it was found that when the water is not well represented in the pre-flood SAR image and the optical image, the classification using the method in this paper might later fail to efficiently recognize the flooding on the post-event SAR image. Another detail to point out concerns the muddy water present at the mouth of the River Severn near the city of Bristol, which was not completely detected by the NDWI threshold. The river mouth is located on the same Sentinel-2 optical image but to the southwest of the subset taken around Tewkesbury.

5. Conclusions

The novel method proposed in this paper proved capable of a quick and automatic mapping of the extent of the inundation, which can assist response authorities to prioritize during rescue operations. The rapidity of execution is a very important factor, especially when the main purpose of the flood maps is to support relief efforts at the time of the disaster.
This approach was evaluated on two flood events captured by SAR sensors at different wavelengths. The first SAR pair was taken in the X-band by TerraSAR-X, while the other was captured by the Sentinel-1 C-band sensor. Thanks to the availability of a ground truth in both cases, it was possible to assess the proposed algorithm and discuss the results obtained. The flood classifier showed satisfactory results; however, it might fail if the water bodies and rivers are too small to appear clearly in the pre-flood SAR image and the Sentinel-2 product. In this particular case, the classifier cannot be trained efficiently to recognize the water class. Moreover, when the optical and the pre-event SAR images are acquired a few years apart, inconsistencies in terms of the presence and absence of water bodies might exist between these two images, even if they were both taken during the same season. The dates of acquisition of this pair of images are therefore a crucial parameter worth considering, to avoid pixels belonging to one class being labeled as the other in the training dataset. A change in the land cover between the pre-flood SAR image and the Sentinel-2 optical image is responsible for such mislabeling when, for instance, the date of acquisition of the latter image is almost a decade after the pre-flood SAR image (e.g., the Tewkesbury 2007 dataset). Cloud coverage could also lead to a similarly wrong labeling of the learning dataset, particularly when parts of the water bodies are hidden by clouds. Nevertheless, the NDWI threshold utilized for extracting the water mask from the optical bands appeared generally to provide a good compromise between over- and under-estimations, although a thorough validation of this threshold on other datasets, preferably from different climates, may be required in the future.
This process chain assumes the availability of a reference non-flooded SAR image and a dry optical image, besides the flooded SAR image. However, this does not constitute a serious constraint, since areas at risk of flooding can ensure that these reference images are at their disposal beforehand and regularly updated. Furthermore, the systematic acquisition mode of the satellites launched for the Copernicus Programme (Sentinel-1 and Sentinel-2) increases the chances of finding suitable reference images (SAR and optical products) in the archive. Like the DEM utilized to mask out the geometric distortions (shadow and layover), Sentinel-1 and Sentinel-2 images are distributed for free, in contrast to the images acquired by commercial satellite missions such as TerraSAR-X and COSMO-SkyMed, and cover most of the Earth's surface.
In the future, the proposed method could benefit from recent commercial optical sensors, capable of delivering metric or even sub-metric resolution images, to address the abovementioned issue concerning narrow river channels that cannot be distinguished with the 10 m-resolution optical bands of Sentinel-2. Moreover, it is expected that SAR images of a higher spatial resolution could improve the accuracy of the classification in flooded urban areas. With this in mind, both TerraSAR-X with its staring spotlight mode and its future successor TerraSAR-X NG [39] can take SAR images with up to 25 cm resolution. Finally, centimetric-resolution airborne LiDAR DEMs with finer horizontal and vertical accuracies compared to the SRTM DEM used, should result in a more precise terrain-correction and a more efficient masking out of the shadowed pixels from the SAR images.

Author Contributions

The first author, A.B., was responsible for developing the methodology, implementing the software, validating the results, and writing the original draft. The research conducted was supervised by the second author, R.G., who provided comments throughout the project on the process and the draft.

Funding

This research was funded by Surrey Satellite Technology Ltd (SSTL). The APC was funded by Surrey Space Centre, University of Surrey.

Acknowledgments

The authors would like to thank the UK Environment Agency and UNITAR (United Nations Institute for Training and Research) for producing the validation data for Tewkesbury and Myanmar flood events respectively, which are both available online free of any charge. The authors would also like to thank the German Aerospace Centre for providing the TerraSAR-X datasets within the LAN3308 proposal.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. AON. Global Climate and Catastrophe. In 2016 Annual Report; AON: London, UK, 2017. [Google Scholar]
  2. Guha-Sapir, D.; Hoyois, P.; Wallemacq, P.; Below, R. Annual Disaster Statistical Review 2016: The Numbers and Trends; CRED: Brussels, Belgium, 2017. [Google Scholar]
  3. Refice, A.; Capolongo, D.; Lepera, A.; Pasquariello, G.; Pietranera, L.; Volpe, F.; D’Addabbo, A.; Bovenga, F. SAR and InSAR for flood monitoring: Examples with COSMO-SkyMed data. IEEE Int. Geosci. Remote Sens. Symp. 2013, 7, 703–706. [Google Scholar]
  4. Voormansik, K.; Praks, J.; Antropov, O.; Jagomagi, J.; Zalite, K. Flood mapping with TerraSAR-X in forested regions in Estonia. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2014, 7, 562–577. [Google Scholar] [CrossRef]
  5. Mason, D.; Giustarini, L.; Garcia-Pintado, J.; Cloke, H. Detection of flooded urban areas in high resolution Synthetic Aperture Radar images using double scattering. Int. J. Appl. Earth Observ. Geoinf. 2014, 28, 150–159. [Google Scholar] [CrossRef] [Green Version]
  6. Martinis, S.; Kersten, J.; Twele, A. A fully automated TerraSAR-X based flood service. ISPRS J. Photogramm. Remote Sens. 2015, 104, 203–212. [Google Scholar] [CrossRef]
  7. Martinis, S.; Twele, A.; Voigt, S. Towards operational near real-time flood detection using a split-based automatic thresholding procedure on high resolution TerraSAR-X data. Nat. Hazards Earth Syst. Sci. 2009, 9, 303–314. [Google Scholar] [CrossRef] [Green Version]
  8. Pulvirenti, L.; Marzano, F.S.; Pierdicca, N.; Mori, S.; Chini, M. Discrimination of water surfaces, heavy rainfall, and wet snow using COSMO-SkyMed observations of severe weather events. IEEE Trans. Geosci. Remote Sens. 2014, 52, 858–869. [Google Scholar] [CrossRef]
  9. Herrera-Cruz, V.; Koudogbo, F. TerraSAR-X rapid mapping for flood events. In Proceedings of the International Society for Photogrammetry and Remote Sensing (Earth Imaging for Geospatial Information), Hannover, Germany, 2–5 June 2009; pp. 170–175. [Google Scholar]
  10. Twele, A.; Cao, W.; Plank, S.; Martinis, S. Sentinel-1-based flood mapping: A fully automated processing chain. Int. J. Remote Sens. 2016, 37, 2990–3004. [Google Scholar] [CrossRef]
  11. Henry, J.; Chastanet, P.; Fellah, K.; Desnos, Y. Envisat multi-polarized ASAR data for flood mapping. Int. J. Remote Sens. 2006, 1921–1929. [Google Scholar] [CrossRef]
  12. Pulvirenti, L.; Pierdicca, N.; Chini, M.; Guerriero, L. An algorithm for operational flood mapping from Synthetic Aperture Radar (SAR) data using fuzzy logic. Nat. Hazards Earth Syst. Sci. 2011, 11, 529–540. [Google Scholar] [CrossRef] [Green Version]
  13. Matgen, P.; Hostache, R.; Schumann, G.; Pfister, L.; Hoffmann, L.; Savenije, H. Towards an automated SAR-based flood monitoring system: Lessons learned from two case studies. Phys. Chem. Earth Parts A/B/C 2011, 36, 241–252. [Google Scholar] [CrossRef]
  14. Giustarini, L.; Hostache, R.; Matgen, P.; Schumann, G.J.; Bates, P.D.; Mason, D.C. A Change Detection Approach to Flood Mapping in Urban Areas Using TerraSAR-X. IEEE Trans. Geosci. Remote Sens. 2013, 51, 2417–2430. [Google Scholar] [CrossRef]
  15. Boykov, Y.; Funka-Lea, G. Graph cuts and efficient ND image segmentation. Int. J. Comput. Vis. 2006, 70, 109–131. [Google Scholar] [CrossRef]
  16. Westerhoff, R.S.; Kleuskens, M.P.H.; Winsemius, H.C.; Huizinga, H.J.; Brakenridge, G.R.; Bishop, C. Automated global water mapping based on wide-swath orbital synthetic-aperture radar. Hydrol. Earth Syst. Sci. 2013, 17, 651–663. [Google Scholar] [CrossRef] [Green Version]
  17. Skakun, S. A neural network approach to flood mapping using satellite imagery. Comput. Inform. 2012, 29, 1013–1024. [Google Scholar]
  18. Notti, D.; Giordan, D.; Caló, F.; Pepe, A.; Zucca, F.; Galve, J. Potential and Limitations of Open Satellite Data for Flood Mapping. Remote Sens. 2018, 10, 1673. [Google Scholar] [CrossRef]
  19. Mason, D.; Speck, R.; Devereux, B.; Schumann, G.P.; Neal, J.; Bates, P. Flood Detection in Urban Areas Using TerraSAR-X. IEEE Trans. Geosci. Remote Sens. 2010, 48, 882–894. [Google Scholar] [CrossRef]
  20. UK Environment Agency. Recorded Flood Outlines for Past Inundation Events. 2017. Available online: http://environment.data.gov.uk/ds/catalogue/#/catalogue (accessed on 1 November 2017).
  21. UNITAR. Flood Waters Over Kayin and Mon State, Myanmar. 2015. Available online: www.unitar.org/unosat/node/44/2268 (accessed on 23 April 2018).
  22. Géron, A. Hands-On Machine Learning with Scikit-Learn and TensorFlow, 1st ed.; O’Reilly Media: Sebastopol, CA, USA, 2017. [Google Scholar]
  23. Bottou, L. Stochastic Gradient Descent Tricks; Springer: Brelin, Germany, 2012; Volume 7700, pp. 430–445. [Google Scholar]
  24. Boykov, Y.; Kolmogorov, V. An experimental comparison of min-cut/max-flow algorithms for energy minimization in vision. IEEE Trans. Pattern Anal. Mach. Intell. 2004, 26, 1124–1137. [Google Scholar] [CrossRef] [PubMed]
  25. Greig, D.M.; Porteous, B.T.; Seheult, A.H. Exact maximum a posteriori estimation for binary images. J. R. Stat. Soci. Ser. B (Methodological) 1989, 51, 271–279. [Google Scholar] [CrossRef]
  26. Boykov, Y.Y.; Jolly, M.P. Interactive graph cuts for optimal boundary & region segmentation of objects in ND images. In Proceedings of the ICCV 2001, Eighth IEEE International Conference on Computer Vision, Vancouver, BC, Canada, 7–14 July 2001. [Google Scholar]
  27. McFeeters, S.K. The use of the Normalized Difference Water Index (NDWI) in the delineation of open water features. Int. J. Remote Sens. 1996, 17, 1425–1432. [Google Scholar] [CrossRef]
  28. Du, Y.; Zhang, Y.; Ling, F.; Wang, Q.; Li, W.; Li, X. Water bodies’ mapping from Sentinel-2 imagery with modified normalized difference water index at 10-m spatial resolution produced by sharpening the SWIR band. Remote Sens. 2016, 8, 354. [Google Scholar] [CrossRef]
  29. Xu, H. Modification of normalised difference water index (NDWI) to enhance open water features in remotely sensed imagery. Int. J. Remote Sens. 2006, 27, 3025–3033. [Google Scholar] [CrossRef]
  30. McFeeters, S.K. Using the Normalized Difference Water Index (NDWI) within a Geographic Information System to Detect Swimming Pools for Mosquito Abatement: A Practical Approach. Remote Sens. 2013, 5, 3544–3561. [Google Scholar] [CrossRef] [Green Version]
  31. Boykov, Y.; Veksler, O. Graph Cuts in Vision and Graphics: Theories and Applications; Springer: Boston, MA, USA, 2006; pp. 79–96. [Google Scholar]
  32. PyMaxflow. Python Library for Graph Construction and Maxflow Computation. 2019. Available online: https://github.com/pmneila/PyMaxflow (accessed on 6 February 2019).
  33. Jensen, J.R. Introductory Digital Image Processing: A Remote Sensing Perspective, 3rd ed.; Pearson: London, UK, 2005; p. 507. [Google Scholar]
  34. Banko, G. A Review of Assessing the Accuracy of Classifications of Remotely Sensed Data and of Methods Including Remote Sensing Data in Forest Inventory; International Institute for Applied Systems Analysis: Laxenburg, Austria, 1998. [Google Scholar]
  35. Rowland, C.; Morton, R.; Carrasco, L.; McShane, G.; O’Neil, A.; Wood, C. Land Cover Map 2015 (25 m raster, GB). 2017. Available online: https://doi.org/10.5285/bb15e200-9349-403c-bda9-b430093807c7 (accessed on 1 April 2019).
  36. Mason, D.C.; Davenport, I.J.; Neal, J.C.; Schumann, G.J.P.; Bates, P.D. Near Real-Time Flood Detection in Urban and Rural Areas Using High-Resolution Synthetic Aperture Radar Images. IEEE Trans. Geosci. Remote Sens. 2012, 50, 3041–3052. [Google Scholar] [CrossRef]
  37. Coles, D.; Yu, D.; Wilby, R.L.; Green, D.; Herring, Z. Beyond ‘flood hotspots’: Modelling emergency service accessibility during flooding in York, UK. J. Hydrol. 2017, 546, 419–436. [Google Scholar] [CrossRef] [Green Version]
  38. Bhandari, A.; Kumar, A.; Singh, G. Feature extraction using Normalized Difference Vegetation Index (NDVI): A case study of Jabalpur city. Procedia Technol. 2012, 6, 612–621. [Google Scholar] [CrossRef]
  39. Janoth, J.; Gantert, S.; Schrage, T.; Kaptein, A. From TerraSAR-X towards TerraSAR next generation. In Proceedings of the 10th European Conference on Synthetic Aperture Radar, Berlin, Germany, 3–5 June 2014; pp. 1–4. [Google Scholar]
Figure 1. Subset of the town of Tewkesbury (south-west England) on (a) the pre-flood TerraSAR-X image (22 July 2008), © DLR (2008); (b) the post-flood TerraSAR-X image (25 July 2007), © DLR (2007); (c) the Environment Agency’s ground truth in red (20–24 July 2007) [20] superimposed on the Sentinel-2 optical image (19 July 2016), © Copernicus data (2016); (d) the resulting flood map (white: flood, black: no-flood).
Figure 2. Subset of the city of Mawlamyine (south-eastern Myanmar) on (a) the pre-flood Sentinel-1 Synthetic Aperture Radar (SAR) image (15 March 2015), © Copernicus data (2015); (b) the post-flood Sentinel-1 SAR image (6 August 2015), © Copernicus data (2015); (c) the United Nations’ ground truth vector in red (6 August 2015) [21] superimposed on the Sentinel-2 optical image (4 March 2018), © Copernicus data (2018); (d) the resulting flood map (white: flood, black: no-flood).
Figure 3. A flow network with a source node s, a sink node t, two non-terminal pixel nodes p0 and p1, and the energies used as the edges’ weights.
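For concreteness, the small network of Figure 3 can be reproduced with the PyMaxflow library [32] used in this work. The sketch below is illustrative only: the unary energies on the t-links and the pairwise energy on the n-link are placeholder values, not those derived from the classifier.

```python
# A minimal sketch of the Figure 3 flow network built with PyMaxflow [32].
# All edge weights are illustrative placeholders.
import maxflow

g = maxflow.Graph[float]()
p0, p1 = g.add_nodes(2)          # two non-terminal pixel nodes

# t-links: capacity towards the source s is the energy of one label,
# capacity towards the sink t is the energy of the other label
g.add_tedge(p0, 0.8, 0.2)
g.add_tedge(p1, 0.3, 0.7)

# n-link between the neighbouring pixels, penalising a label discontinuity
g.add_edge(p0, p1, 0.5, 0.5)

flow = g.maxflow()               # value of the minimum cut
labels = [g.get_segment(p) for p in (p0, p1)]  # 0: source side, 1: sink side
print(flow, labels)
```

After the max-flow computation, each pixel node falls on either the source or the sink side of the minimum cut, which yields its binary flood/no-flood label.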
Figure 4. Flowchart of the automatic flood mapping process, including the preprocessing, the extraction of the training dataset, and the classification.
Figure 5. Superimposed on the post-flood SAR image are: (a) the flood map produced from the Tewkesbury 2007 TerraSAR-X image in cyan, with the ground truth in red; (b) the urban mask extracted from the UK Land Cover Map [35] in blue and the non-urban mask (its negation) in green; (c) the false negatives (FN) in yellow and the false positives (FP) in magenta, in non-urban areas; (d) the false negatives (FN) in yellow and the false positives (FP) in magenta, in urban areas.
Figure 6. Superimposed on the post-flood SAR image are: (a) the flood map produced from the Myanmar 2015 Sentinel-1 image in cyan, with the ground truth in red; (b) the false negatives (FN) in yellow and the false positives (FP) in magenta.
Figure 7. The flood map produced from a large subset of the Tewkesbury 2007 dataset. The subset assessed in Section 4.2 is inside the red rectangle.
Table 1. Radar parameters of the Tewkesbury 2007 Synthetic Aperture Radar (SAR) images.

| Sensor | Polarization | Pixel Spacing after Terrain-Correction | Type of Product |
|---|---|---|---|
| TerraSAR-X | HH | 2.2 m | SSC (complex) |
Table 2. Radar parameters of the Myanmar 2015 SAR images.

| Sensor | Polarization | Pixel Spacing | Type of Product |
|---|---|---|---|
| Sentinel-1 | VV | 10.0 m | GRD (detected) |
Table 3. The values of the hyperparameters used during the training of the Stochastic Gradient Descent (SGD) classifier.

| Number of Iterations | Loss Function | Regularization Term | Alpha |
|---|---|---|---|
| 1000 | Hinge loss | L2 | 0.0001 |
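As an illustration of Table 3, the sketch below shows how these hyperparameters would map onto scikit-learn’s SGDClassifier; whether the authors’ implementation uses this exact class is an assumption (the paper cites the scikit-learn literature [22]), and the training arrays here are synthetic stand-ins for the SAR backscatter features and the NDWI-derived water/land labels.

```python
# A sketch mapping the Table 3 hyperparameters onto scikit-learn's
# SGDClassifier. X_train and y_train are toy stand-ins, not real SAR data.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
X_train = rng.normal(size=(1000, 1))          # e.g., backscatter of the dry SAR image
y_train = (X_train[:, 0] < -0.5).astype(int)  # 1: water, 0: land (toy labels)

clf = SGDClassifier(loss="hinge",   # hinge loss (linear SVM)
                    penalty="l2",   # L2 regularization term
                    alpha=0.0001,   # regularization strength (Alpha)
                    max_iter=1000)  # number of iterations
clf.fit(X_train, y_train)
y_pred = clf.predict(X_train)
```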
Table 4. The Sentinel-2 bands used to calculate the Normalized Difference Water Index (NDWI) and their resolutions.

| Band Name | Band Number | Resolution [m] |
|---|---|---|
| Green | Band 3 | 10 |
| NIR | Band 8 | 10 |
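As a reminder of how the bands of Table 4 enter the index, the NDWI of McFeeters [27] is (Green − NIR)/(Green + NIR). The minimal sketch below computes it per pixel on toy arrays; the zero threshold for water is the conventional choice for this index, not a value taken from this paper.

```python
# A minimal sketch of the NDWI computation from Sentinel-2 bands 3 and 8.
# The reflectance patches and the zero water threshold are illustrative.
import numpy as np

def ndwi(green: np.ndarray, nir: np.ndarray) -> np.ndarray:
    """NDWI = (Green - NIR) / (Green + NIR), computed per pixel."""
    green = green.astype(np.float64)
    nir = nir.astype(np.float64)
    return (green - nir) / (green + nir + 1e-12)  # epsilon avoids division by zero

green_b3 = np.array([[0.10, 0.08], [0.02, 0.03]])  # toy band 3 (green) patch
nir_b8 = np.array([[0.30, 0.25], [0.01, 0.02]])    # toy band 8 (NIR) patch

water_mask = ndwi(green_b3, nir_b8) > 0  # positive NDWI taken as water
```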
Table 5. The producer’s and user’s accuracies for the flood and non-flood classes and the overall accuracy of the classification for the Tewkesbury 2007 dataset.

| Class | Producer’s Accuracy | User’s Accuracy | Overall Accuracy |
|---|---|---|---|
| Non-flood | 94.15% | 69.19% | 77.03% |
| Flood | 61.18% | 91.87% | |
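For readers unfamiliar with these metrics, the producer’s accuracy is the fraction of ground-truth pixels of a class that the map labels correctly, the user’s accuracy is the fraction of mapped pixels of a class that are correct, and the overall accuracy is the proportion of all correctly labelled pixels. A minimal sketch, with an illustrative confusion matrix rather than the paper’s counts:

```python
# Accuracy metrics of Tables 5-10 from a 2x2 confusion matrix.
# Rows: reference (ground truth), columns: map, ordered [non-flood, flood].
# The counts are illustrative, not the paper's.
import numpy as np

confusion = np.array([[850, 150],
                      [100, 900]])

producers = confusion.diagonal() / confusion.sum(axis=1)  # per reference class
users = confusion.diagonal() / confusion.sum(axis=0)      # per mapped class
overall = confusion.trace() / confusion.sum()

print(producers, users, overall)
```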
Table 6. Area affected by the flooding in km² on the obtained flood map and on the ground truth for the Tewkesbury 2007 dataset.

| Product | Flooded Area |
|---|---|
| Flood map | 7.58 km² |
| Ground truth | 11.39 km² |
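The flooded areas in Tables 6 and 8 are presumably obtained by counting the flood pixels of the binary map and scaling by the pixel footprint; a minimal sketch under that assumption, using the 2.2 m pixel spacing of Table 1:

```python
# A sketch of deriving a flooded area (km^2) from a binary flood map,
# assuming area = number of flood pixels * pixel footprint. Toy mask only.
import numpy as np

flood_mask = np.zeros((1000, 1000), dtype=bool)
flood_mask[200:600, 300:700] = True          # toy flood region

pixel_spacing_m = 2.2                         # Table 1 (Tewkesbury, terrain-corrected)
area_km2 = flood_mask.sum() * pixel_spacing_m**2 / 1e6
print(f"{area_km2:.2f} km^2")
```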
Table 7. The producer’s and user’s accuracies for the flood and non-flood classes and the overall accuracy of the classification for the Myanmar 2015 dataset.

| Class | Producer’s Accuracy | User’s Accuracy | Overall Accuracy |
|---|---|---|---|
| Non-flood | 98.96% | 91.97% | 93.47% |
| Flood | 82.06% | 97.43% | |
Table 8. Area affected by the flooding in km² on the obtained flood map and on the ground truth for the Myanmar 2015 dataset.

| Product | Flooded Area |
|---|---|
| Flood map | 407.43 km² |
| Ground truth | 483.74 km² |
Table 9. The producer’s and user’s accuracies for the flood and non-flood classes and the overall accuracy of the classification for the urban areas of Tewkesbury 2007.

| Class | Producer’s Accuracy | User’s Accuracy | Overall Accuracy |
|---|---|---|---|
| Non-flood | 99.1% | 74.87% | 74.7% |
| Flood | 4.88% | 65.6% | |
Table 10. The producer’s and user’s accuracies for the flood and non-flood classes and the overall accuracy of the classification for the non-urban areas of Tewkesbury 2007.

| Class | Producer’s Accuracy | User’s Accuracy | Overall Accuracy |
|---|---|---|---|
| Non-flood | 91.67% | 66.46% | 77.68% |
| Flood | 68.0% | 92.19% | |
Table 11. The execution time for each operation of the flood mapping process for the Myanmar 2015 dataset.

| Operation | Execution Time [seconds] |
|---|---|
| Uploading SAR pair & optical image | 0.87 |
| Preprocessing SAR pair | 96.82 |
| Building training dataset | 27.61 |
| Training, classification, and post-processing | 35.59 |
| Total | 160.89 |
Table 12. The execution time for each operation of the flood mapping process for the Tewkesbury 2007 dataset.

| Operation | Execution Time [seconds] |
|---|---|
| Uploading SAR pair & optical image | 1.58 |
| Preprocessing SAR pair | 36.91 |
| Building training dataset | 7.97 |
| Training, classification, and post-processing | 12.1 |
| Total | 58.56 |
Table 13. Comparison between the results obtained with this paper’s method on the large Tewkesbury 2007 subset in Figure 7 and those reported in [36] on the same dataset with a similar subset size.

| Paper | Overall Accuracy | Urban/Non-urban Accuracy | Execution Time [minutes] |
|---|---|---|---|
| This paper | 90.66% | Non-urban areas: 90.81%; urban areas: 88.97% | 4.74 |
| [36] | – | Non-urban areas: 89%; urban areas: 75% if the shadow and layover are ignored, 57% otherwise | 19.1 |
