Article

Sentinel-1 SAR Images and Deep Learning for Water Body Mapping

by Fernando Pech-May 1, Raúl Aquino-Santos 2,* and Jorge Delgadillo-Partida 2
1 Department of Computer Science, TecNM: Instituto Tecnológico Superior de los Ríos, Balancán 86930, Mexico
2 Universidad Tecnológica de Manzanillo, Las Humedades s/n Col. Salagua, Manzanillo 28869, Mexico
* Author to whom correspondence should be addressed.
Remote Sens. 2023, 15(12), 3009; https://doi.org/10.3390/rs15123009
Submission received: 26 April 2023 / Revised: 26 May 2023 / Accepted: 5 June 2023 / Published: 8 June 2023

Abstract:
Floods occur throughout the world and are becoming more frequent and dangerous, owing to factors among which climate change and land-use change stand out. In Mexico, they occur every year in different areas. Tabasco is a periodically flooded region, and the floods cause losses and negative consequences for the rural, urban, livestock, agricultural, and service sectors. Consequently, it is necessary to create strategies to intervene effectively in the affected areas. Different strategies and techniques have been developed to mitigate the damage caused by this phenomenon. Satellite programs provide a large amount of data on the Earth’s surface, together with geospatial information processing tools useful for environmental and forest monitoring, climate change impact assessment, risk analysis, and natural disasters. This paper presents a strategy for the classification of flooded areas using satellite images obtained from synthetic aperture radar, the U-Net neural network, and the ArcGIS platform. The study area is located in Los Ríos, a region of Tabasco, Mexico. The results show that U-Net performs well despite the limited number of training samples, and its precision increases as the training data and epochs increase.

1. Introduction

Natural disasters are becoming more frequent and of greater intensity and severity. They occur throughout the world, causing severe harm to the population. According to the Centre for Research on the Epidemiology of Disasters (CRED), floods are the most common and destructive phenomenon [1]. In 2021, 432 catastrophes were registered, of which 223 were floods causing more than 4000 deaths (see Figure 1) [2].
In recent years, floods have caused human losses and severe damage to the world’s economies. They damage infrastructure in sectors such as agriculture and livestock and deepen poverty among vulnerable populations. They can also damage vital infrastructure and the transportation system, complicating rescue operations: sending aid to those affected, identifying the affected areas and their severity, and assessing the damage. In this sense, it is essential to analyze, segment, and map floods to calculate the extent of water in the flooded area and identify its spatial distribution.
According to the United Nations Office for Disaster Risk Reduction (UNDRR) [3], more than 45% of the world’s population has been affected by floods, with countries and regions such as India, China, Afghanistan, Germany, and Western Europe among the most affected [1].
In Mexico, floods are constant in different areas, and the southern region has been particularly affected [4]. These events originate in the rainy season, which begins in May and ends in November, raising river levels and spilling their flows onto tracts of land dedicated to productive activities or occupied by urban settlements. The two most severe floods were those of 2007 (https://www.gob.mx/cenapred/articulos/domingo-28-de-octubre-2007-mega-inundacion-en-tabasco?idiom=es, accessed on 26 December 2022) and 2020 (https://elpais.com/mexico/2020-11-23/tabasco-una-tragedia-bajo-el-agua.html, accessed on 12 April 2021). According to official data from CEPAL [5], the damages caused by the 2007 floods amounted to USD 3,000,000.00: 31.77% for the productive sector, 26.9% for agriculture, and 0.5% for the environment. The 2020 floods caused damage to 800,000 people, 200,400 houses, and 2000 km of land, with losses of almost USD 1,000,000.00.
The leading causes of flooding are the abundance of water in the area and the impact of dams on the hydrology of the region: they alter the natural flow of rivers, which can cause flash floods and inundations, affecting the drinking water, health, and livelihoods of hundreds of thousands of people each year. Some causes of severe floods are [6]: (1) the difficulty of the soil in infiltrating water quickly, inducing surface runoff of most of the water volume in the plain; (2) changes in land use and in the morphological conditions of the land, and deforestation of the rainforest for livestock, industrial use, and urban expansion; (3) geological instability; and (4) poor land planning and mismanagement of natural resources.
It is important to note that dams play an important role in diverting and regulating the water flow. However, if they are not correctly managed during the accumulation of water due to extreme rainfall, dams can overflow and cause catastrophic flooding in the surrounding area. The poor design and delays of the hydraulic works initiated in 2003 in Southern Mexico impeded the passage of river channels and contributed to further flood damage.
On the other hand, the development of increasingly advanced satellite platforms, tools, and sensors provides a wealth of terrestrial information (images). This has made it possible to gradually improve flood prediction and evacuation services through flood mapping. Despite this, such services are still unreliable or not fast enough to handle real situations during floods. However, new strategies continue to be proposed to improve prediction.
Remote sensing can collect a large amount of data from the ground and capture beneficial information [7]. Earth observation satellite programs have enabled numerous investigations focused on detecting and mapping floods, soil analysis, and monitoring natural damage. The data acquired by satellites have different properties, such as (1) spatial resolution, which determines the area of the Earth’s surface covered by each pixel of the image; (2) spectral resolution, which represents the portion of the electromagnetic spectrum captured by the remote sensor and the number and width of its regions; and (3) temporal resolution, which determines how often satellite information can be obtained from the same location with the same satellite and radiometric resolution [8]. Remote sensing provides optical or synthetic aperture radar (SAR) images.
Optical images are high-resolution and multispectral, and they correlate well with open water surfaces. However, they can be affected by the presence of clouds during precipitation, which can make it impossible to acquire cloud-free images.
SAR sensors can penetrate clouds and obtain images in any weather condition because they operate at longer wavelengths and are independent of solar radiation. This makes them ideal for monitoring and mapping floods and estimating the damage caused.
Among the Earth observation satellite programs is Copernicus. It has an excellent capacity to acquire remote data with a high temporal and spatial resolution, which are valuable for the mapping of floods. Copernicus comprises six satellites developed for different purposes: Sentinel-1, which provides SAR images helpful in observing land and oceans; Sentinel-2, which provides multispectral terrestrial optical images; Sentinel-3 and -6, for marine observation; and Sentinel-4 and -5 for air quality monitoring [9,10,11].
On the other hand, deep learning (DL) techniques have emerged in recent years. Semantic segmentation algorithms based on convolutional neural networks (CNN) have gained wide acceptance due to their excellent results and ease of training [12]. Consequently, they are the most widely used for the analysis of satellite images. This has allowed the development of strategies using satellite images for various purposes, such as land cover classification, the extraction of water bodies, and flood mapping [10,13,14].
CNNs comprise multiple processing layers that perform spatial convolutions, usually followed by nonlinear activation units. Recurrent neural networks (RNN) [15] are also being used in remote sensing since they can handle data sequences (for example, time series), feeding the output of the previous time step as input to the current step.
This research presents a strategy for the mapping of flooded areas using the U-Net semantic segmentation architecture, Sentinel-1 SAR satellite images, and the ArcGIS geographic information system. The study area belongs to the state of Tabasco, Mexico. The document is structured as follows: Section 2 describes the related works; Section 3 describes the materials and methods used in the research; Section 4 presents the results of the experiments; and, finally, Section 5 contains the conclusions derived from the study.

2. Related Works

The literature contains different approaches to analyzing and mapping floods and water bodies. Many approaches use optical (multispectral) images, others use SAR images, and still others combine SAR and optical data. Optical sensors measure radiation from the visible spectrum through the short-wave infrared spectrum, which makes them suitable for distinguishing water bodies from dry surfaces. However, each data type has different characteristics, capabilities, and precision.
SAR sensors, on the other hand, are based on energy reflectance, which makes them capable of acquiring images in all weather conditions, by day or night. However, they are unable to differentiate between water and water-like surfaces. It should be noted that deep neural networks such as CNNs and RNNs are the most used for flood monitoring and mapping [13,16,17,18]. Likewise, supervised, unsupervised, and contrastive algorithms have been used.
Threshold approaches using SAR imaging perform well in mapping flooding and water bodies [19,20,21,22] because open water produces low backscatter in SAR images. However, the results may be less effective when disturbances occur in the images. Another drawback is that the characteristic bimodal distribution does not appear if water occupies only a small fraction of the image. This makes it necessary to select an appropriate representative backscattering coefficient threshold from the radiometric histogram [23,24] to discriminate between water and land pixels. In [25], the authors used the pixel-based thresholding method and combined SAR and optical images to generate time series. They used images from 2016 and 2018: 777 from Sentinel-1, 515 from Sentinel-2, and 57 from Landsat-8. The results showed the flooding patterns and the damage caused in the study area.
Liang et al. [26] analyzed the delineation of water bodies using SAR images. Their thresholding method applied a global threshold to delineate the water pixels and group non-water pixels. Then, they applied local thresholds to subsets of land pixels and adjusted the Gamma distribution. Twele et al. [9] developed a processing chain to map flooded areas combining the threshold, HAND index, and classification based on fuzzy logic.
Traditional machine learning approaches to water body analysis generally use optical imaging [27,28,29,30]. Spectral indices, originally developed to monitor vegetation, are applied to the images. The indices are generally based on the interactions between vegetation and electromagnetic energy in the short-wave infrared (SWIR) and near-infrared (NIR) spectral bands [31,32]. These indices apply to images with different resolutions, such as Landsat, SPOT, and Sentinel [33]. However, to map water bodies and soil vegetation, the Normalized Difference Vegetation Index (NDVI) [34] and the Normalized Difference Water Index (NDWI) [35] are mainly used. Optical sensors are highly correlated with open water surfaces; despite this, they cannot penetrate clouds, which limits their use in rainy or cloudy weather and makes it impossible to acquire high-resolution, cloud-free multispectral images under such conditions. Deroliya et al. [36] present an approach for flood risk mapping considering geomorphic descriptors. They use three algorithms: decision tree (DT), random forest (RF), and gradient-boosted decision trees (GBDT). Zhou et al. [37] use the support vector machine (SVM). Merela [38] and Schmitt [23] use random forest (RF) for the analysis of water bodies. Pech-May et al. [29] analyze the behavior and land cover of water bodies during floods in the rainy season using multispectral images and RF, SVM, and classification and regression tree (CART) algorithms.
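As a brief illustration of how these indices are computed, the following NumPy sketch implements NDVI [34] and Gao's NDWI [35] from generic band arrays. The band arrays are placeholders, and the Sentinel-2 band mapping in the comments is an illustrative assumption about the sensor, not part of the original study.

```python
# Sketch of NDVI and Gao's NDWI computed with NumPy. Band arrays are
# placeholders; a typical Sentinel-2 mapping would be red = B4, NIR = B8,
# SWIR = B11 (an assumption for illustration).
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Normalized Difference Vegetation Index [34]."""
    return (nir - red) / (nir + red + 1e-10)

def ndwi(nir: np.ndarray, swir: np.ndarray) -> np.ndarray:
    """Gao's Normalized Difference Water Index [35] (NIR vs. SWIR)."""
    return (nir - swir) / (nir + swir + 1e-10)

# Pixels with high NDWI and low NDVI are candidate open-water surfaces.
```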
Deep learning (DL) for terrestrial observation has emerged in recent years. Due to its good results, it is being used to analyze the land surface, climate change, changes in water bodies, and crop flooding, among others. DL algorithms can learn from appropriate feature representations for classification tasks through spatial learning (CNN) and sequential learning (RNN). These approaches have presented better results compared to other techniques. However, they suffer from some problems. The CNNs suffer inductive biases, while RNNs are affected by the disappearance of the gradient [39]. For supervised DL algorithms to obtain satisfactory results, they require extensive datasets for their training [40,41]. Due to this need, different datasets of labeled satellite images have been created.
Some datasets of images related to floods are Sen1Floods [42], which contains Sentinel-1 Sentinel-2 images of 11 manually and weakly labeled flood events; UNOSAT [43], with Sentinel-1 SAR labeled images of 15 flood events; OMBRIA [44], with labeled images from Sentinel-1 and Sentinel-2 of 23 floods; and SEN12-FLOOD [45], with labeled images from Sentinel-1 and Sentinel-2. These datasets are used in different flood analysis proposals [19,40,44,46,47]. Despite this, we do not have a large dataset of labeled images, and acquiring one would take a long time. Some approaches propose to use loosely labeled datasets [42,48]. Other approaches use contrastive learning to avoid reliance on labeled data [41,49,50,51].
Zhao et al. [52] used convolutional networks to classify buildings, vegetation, roads, and water bodies in TerraSAR images [53]. Other approaches, such as those of Katiyar et al. [54] and Xing et al. [55], use U-Net [56]. U-Net uses skip connections between the blocks of each stage to preserve the acquired feature maps, while SegNet [57] reuses the encoder pooling indices for nonlinear upsampling, thus improving the results in flood detection.
On the other hand, Scepanovic et al. [58] created a land cover mapping system with five classes. They applied several semantic segmentation models, such as U-Net, DeepLabV3+ [59], PSPNet [60], BiSeNet [61], SegNet, FCDenseNet [62], and FRRN-B [63].
Konapala et al. [64] presented a strategy for flood identification from SAR images. In [65], Sentinel-1 and Sentinel-2 images were used to identify flooded areas. Likewise, Li et al. [66] conducted a study analyzing hurricanes. The works in [67,68,69] proposed approaches incorporating recursive and convolutional operations to treat spatiotemporal data. Finally, some RNN approaches have been proposed for the analysis of water bodies and land cover using Sentinel images [70,71]. Table 1 lists works using different approaches (models) and images in different study areas.

3. Materials and Methods

The proposed strategy for flood detection and mapping consists of four main phases. Figure 2 shows the methodology with each of the activities of each phase. Each stage is explained below.

3.1. Study Area

Tabasco is in the southeast of Mexico, on the coast of the Gulf of Mexico. Its territorial extension is 24,661 km², representing 1.3% of the country. Two regions are recognized in the area, Grijalva and Usumacinta, which contain two subregions (swamps and rivers). Together, they form one of the largest river systems in the world in terms of volume. In addition, the state’s average rainfall is three times the national average and represents almost 40% of the country’s freshwater. The abundance of water and the impact of the dams on the hydrology of the region alter the natural flow of the rivers, causing flash floods and inundations, which affect the drinking water, health, and livelihoods of thousands of Tabasqueños [83]. Flooding is therefore expected in the region. However, in the fall of 2020, several cold fronts and hurricanes caused the worst flooding in decades, with widespread human and economic losses. The study area is located in the Ríos subregion (see Figure 3), in the easternmost part of the state, on the border with Campeche and the Republic of Guatemala. It is named for the many rivers that cross it, including the Usumacinta River, the largest in the country, and the San Pedro Mártir River. The municipalities that make up this subregion are Tenosique, Emiliano Zapata, and Balancán. Its surface covers approximately 6000 km², representing 24.67% of the state.

3.1.1. Study Period

A period was established for the training sample collection. According to the National Water Commission (CONAGUA, Mexico) [84], the two maximum rainfall periods of the year are separated by a heatwave: the first maximum occurs in June and the second between September and November, and 72% of the state’s total rainfall is concentrated in this period. The rains throughout the year can also be classified into different seasons. Table 2 shows the annual distribution of rainfall in the state of Tabasco. Table 3 shows the available Sentinel-1 SAR images; those with a gray background were selected for model training. The images correspond to the months of September–November 2020.

3.1.2. Image Acquisition

The study used SAR images with identical polarization in the transmitted and return waves (HH) obtained from the Sentinel-1 satellite through the Copernicus Open Access Hub platform (https://scihub.copernicus.eu/, accessed on 10 January 2023). These images were found within a tile that included the states of Campeche, Chiapas, and Tabasco (see Figure 4). Sentinel-1 SAR sensors can acquire images at any time, day or night, and in any weather conditions. This feature makes SAR imaging the best remote sensing system for tropical regions with high cloud cover. Moreover, the high sensitivity of the microwave bands (roughly 1–30 cm wavelength) to humidity makes it possible to differentiate water from other types of land cover in SAR data. Another critical factor is the angle of incidence: as it increases, the backscatter decreases, which means that the same surface observed at different angles yields different backscatter. Finally, one polarization may be more important than another depending on the flooding site, conditions, and soil morphology. Since the study area had a large amount of vegetation, HH polarization was used because of its greater penetration through the canopy.
The images used corresponded to the periods of November 2020 and September 2022; this was because, during these periods, there were medium-scale floods in the study area. Therefore, the availability of the scenes in Table 4 was considered. In addition, the estimated flood map generated by the National Civil Protection System (SINAPROC) was used for the training samples.
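As an illustration of this acquisition step, the sketch below queries Sentinel-1 GRD scenes from the Copernicus Open Access Hub with the open-source sentinelsat package. The credentials, GeoJSON footprint file, and date window are placeholders; this is an assumed workflow for illustration, not the exact procedure used in the study.

```python
# Sketch: querying and downloading Sentinel-1 GRD scenes from the Copernicus
# Open Access Hub with the open-source sentinelsat package. Credentials,
# footprint file, and dates are placeholders.
from sentinelsat import SentinelAPI, geojson_to_wkt, read_geojson

api = SentinelAPI("user", "password", "https://scihub.copernicus.eu/dhus")

# Area of interest: a GeoJSON polygon over the Rios subregion (hypothetical file)
footprint = geojson_to_wkt(read_geojson("rios_subregion.geojson"))

products = api.query(
    footprint,
    date=("20201101", "20201130"),     # November 2020 flood window
    platformname="Sentinel-1",
    producttype="GRD",                 # Ground Range Detected scenes
    sensoroperationalmode="IW",        # Interferometric Wide swath mode
)
api.download_all(products)
```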

3.1.3. Preprocessing of SAR Images

Among the challenges of SAR imaging is processing. The acquisition geometry generates geometric and radiometric deformation effects such as slant-range distortion, layover, and foreshortening [85], and these warping effects can affect the backscatter values of the images. To correct them, we used the Sentinel Application Platform (SNAP) [86], a modular ESA application for the treatment of satellite images. The preprocessing steps applied were as follows (a scripted sketch of the chain is given after the list):
  • Radiometric correction. To correct the distortions of the radar signals caused by alterations in the movement of the sensor or instrument onboard the satellite. It should be noted that the intensity of the image pixels can be directly related to the backscatter signal captured by the sensor. Uncalibrated SAR images are helpful for qualitative use but must be calibrated for quantitative use. Figure 5a,b show an example of an image before and after correction.
  • Speckle filter application. SAR images have inherent textures and speckle that degrade image quality and make it challenging to interpret features. These points are caused by constructive and destructive random interference from coherent but out-of-phase return waves scattered by elements within each resolution cell. Speckle is non-Gaussian multiplicative noise, meaning that the pixel values do not follow a normal distribution. Consequently, the 7 × 7 Lee filter [87] was applied to smooth the image and reduce this problem (see Figure 5c).
  • Geometric calibration. SAR images may be distorted due to topographic changes in the scene and the inclination of the satellite sensor, which makes it necessary to reposition it. Data that do not point directly to the nadir position of the sensor will have some distortion. The digital elevation model (DEM) of the Shuttle Radar Topography Mission (SRTM) (http://www2.jpl.nasa.gov/srtm/, accessed on 5 April 2022) was used for the geometric correction. Figure 5d shows the rearrangement of the SAR image of the study area.
  • RGB layer generation. An RGB mask of the SAR image was created to detect pixels where water bodies, vegetation, and flooded areas occurred. This method is based on the differences between the images before and after the event. It results in a multitemporal image in which a band is assigned to each primary color to form an RGB composite image. The RGB layer allows the highlighting of relevant features and facilitates visual interpretation, while binary layers allow precise segmentation and accurate evaluation of the results. For example, for the RGB layer, the HV/VV/VH combination can be used to highlight the texture and intensity of the backscatter signal at different polarizations. The maps obtained reflect flooded areas in blue, permanent water in black, and other soil types in yellow. Figure 5e shows the result of the image with the RGB layer.
  • Binary layer. A threshold was used to separate water pixels from other soil types. For this, the histogram of the filtered backscattering coefficient of the previously treated images was analyzed, and the minimum backscattering values were extracted, since these corresponded to pixels with the presence of water. In this way, a more accurate threshold value can be obtained between flooded and non-flooded areas. This layer is helpful in evaluating and validating results, as it allows a direct comparison with reference data. RGB and binary layers can be used in different approaches, such as land cover change analysis and monitoring changes in water bodies. Figure 5f shows the binary layer obtained from thresholding. Areas with shades of red indicate the presence of water, while other cover objects are ignored. The purpose of this layer is to obtain the training samples used in the deep learning model. Comparing and analyzing the binary layer against the SINAPROC 2020 flood map brought benefits such as validation and verification, since the SINAPROC map is a reliable data source for verifying the accuracy of the generated binary layer. It also allowed us to understand the temporal and spatial context, as it provided information on the specific period in which the 2020 floods occurred, allowing us to contextualize the generated binary layer in terms of time and geographic location.
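The following sketch illustrates this preprocessing chain through SNAP's Python bindings (snappy), ending with a simple histogram threshold for the binary layer. The operator names are standard SNAP GPF operators, while the input file name and the threshold value are illustrative assumptions, not values from the study.

```python
# Sketch of the SNAP preprocessing chain via its Python bindings (snappy).
# The input scene name and the dB threshold are placeholders.
import numpy as np
from snappy import GPF, HashMap, ProductIO

product = ProductIO.readProduct("S1A_IW_GRDH_scene.zip")  # placeholder scene

# 1. Radiometric calibration to backscatter (sigma0)
p = GPF.createProduct("Calibration", HashMap(), product)

# 2. Speckle filtering with a 7x7 Lee filter
params = HashMap()
params.put("filter", "Lee")
params.put("filterSizeX", "7")
params.put("filterSizeY", "7")
p = GPF.createProduct("Speckle-Filter", params, p)

# 3. Geometric (terrain) correction using the SRTM DEM
params = HashMap()
params.put("demName", "SRTM 3Sec")
p = GPF.createProduct("Terrain-Correction", params, p)

# 4. Convert to dB and threshold the low-backscatter mode of the histogram
p = GPF.createProduct("LinearToFromdB", HashMap(), p)
band = p.getBand(p.getBandNames()[0])
w, h = band.getRasterWidth(), band.getRasterHeight()
sigma0_db = np.zeros(w * h, dtype=np.float32)
band.readPixels(0, 0, w, h, sigma0_db)
threshold_db = -18.0  # illustrative; in practice read from the bimodal histogram
water_mask = (sigma0_db.reshape(h, w) < threshold_db).astype(np.uint8)
```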

3.2. Training

Training deep learning models requires converting geographic information system (GIS) data into a format that can be used to classify images. Creating good training samples is essential when training a deep learning model or any image classification model. SAR images from September to November 2020, when floods occurred in Tabasco, were used for the training; these images provided the visual information needed to train the deep learning model. The ArcGIS Pro platform [88] was used to train the model.

3.2.1. Training Sample Collection

Images preprocessed with the binary layer were used to create training samples. Figure 6 shows some of the training samples captured in the SAR images.
It is important to mention a significant characteristic of water bodies and floods in SAR satellite images: radar signals are sensitive to the structure of objects on the Earth’s surface. Several main scattering mechanisms exist: specular (mirror), double-bounce, and volume scattering. On smooth ground, such as a calm water body, specular reflection from the surface dominates. On the contrary, volume backscatter dominates in heterogeneous terrain or on rough surfaces, such as areas of dense vegetation. In urban areas and flooded vegetation, double-bounce backscatter dominates; structures form right angles relative to the radar look direction, and the signal bounces twice, reflecting most of the energy back to the radar. The homogeneity and heterogeneity of surface structures manifest in images as smooth or rough surfaces, so that image areas appear bright or dark. When a water body is calm, its behavior with respect to the radar signal is that of a specular reflector: the radar signal that hits the water is reflected in the opposite direction, away from the sensor. For this reason, when the antenna does not perceive a strong return signal, water bodies appear dark in the radar image. The dark tone contrast makes possible a separation between land and water cover [89].
Considering the above, a set of 1036 samples was collected, distributed across the different scenarios seen in the satellite images. Once the samples were established, the training data were exported in ArcGIS (Export Training Data). The output of this process comprised sets of small images of the sample sites (image chips), labels in XML format, metadata files, and the parameters and statistics of the captured samples.
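A hedged sketch of this export step with arcpy's Export Training Data For Deep Learning geoprocessing tool is shown below. The raster and feature class names are placeholders, and the Classified_Tiles metadata format is an assumption consistent with training a pixel classifier such as U-Net.

```python
# Sketch: exporting image chips and labels with arcpy. Paths and layer names
# are placeholders for the preprocessed SAR mosaic and the digitized samples.
import arcpy

arcpy.env.workspace = r"C:\projects\tabasco_floods"

arcpy.ia.ExportTrainingDataForDeepLearning(
    in_raster="sar_2020_preprocessed.tif",      # preprocessed SAR mosaic
    out_folder="training_chips",
    in_class_data="flood_training_samples",     # digitized sample polygons
    image_chip_format="TIFF",
    tile_size_x=256,
    tile_size_y=256,
    stride_x=128,                               # 50% overlap between chips
    stride_y=128,
    metadata_format="Classified_Tiles",         # labels for pixel classification
)
```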

3.2.2. Classification Model Training

The U-Net algorithm can learn specific features in images by combining low- and high-level features, making it highly suitable for segmenting and classifying objects in satellite images. Convolutional neural networks of the U-Net type were used; despite being one of the simplest models, U-Net offers results as accurate as, or better than, those of other models. This accuracy is due to its ability to handle small datasets, and it has been used in various image processing approaches in remote sensing. Furthermore, the segmentation and classification of objects in satellite imagery are essential for various applications, such as urban planning, natural resource management, and the detection of changes in the environment. The model performs a downsampling process to reduce the input image to a small feature matrix. Then, the decoder constructs the output using the input features and recombines spatial information from the input image.
Figure 7 shows the U-Net structure for SAR image segmentation. It consists of two paths: encoder and decoder. The encoder is a pre-trained classification network (ResNet) in which convolution blocks followed by max-pool downsampling are applied to encode the input image into feature representations at several levels. Each block is a convolution operation followed by a ReLU activation function. The red arrows indicate a 2 × 2 max-pooling layer.
The decoder reconstructs the feature maps learned by the encoder over the pixel space (higher resolution) to obtain a dense classification. The green arrows indicate upsampling, which uses the upsampling layer at each step to obtain a high-resolution feature map. Finally, the gray arrows indicate the concatenation connections, which merge the attention feature map and the corresponding top feature map.
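To make the encoder-decoder pattern concrete, the following minimal PyTorch sketch shows one level of downsampling, upsampling, and a skip (concatenation) connection. It is an illustration of the architecture described above, not the ArcGIS implementation used in the study.

```python
# Minimal PyTorch sketch of the U-Net pattern: convolution + ReLU blocks,
# 2x2 max-pooling on the way down (red arrows), upsampling on the way up
# (green arrows), and a concatenation skip connection (gray arrows).
import torch
import torch.nn as nn

def conv_block(in_ch: int, out_ch: int) -> nn.Sequential:
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.enc1 = conv_block(1, 32)           # single-band SAR input
        self.enc2 = conv_block(32, 64)
        self.pool = nn.MaxPool2d(2)             # 2x2 max-pooling
        self.up = nn.Upsample(scale_factor=2)   # upsampling layer
        self.dec1 = conv_block(64 + 32, 32)     # after skip concatenation
        self.head = nn.Conv2d(32, n_classes, kernel_size=1)

    def forward(self, x):
        f1 = self.enc1(x)                       # high-resolution features
        f2 = self.enc2(self.pool(f1))           # encoded, lower resolution
        up = self.up(f2)                        # back to f1's resolution
        out = self.dec1(torch.cat([up, f1], dim=1))
        return self.head(out)                   # per-pixel class scores

x = torch.randn(1, 1, 256, 256)                 # one 256 x 256 chip
print(TinyUNet()(x).shape)                      # torch.Size([1, 2, 256, 256])
```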
The Train Deep Learning Model geoprocessing tool is used for model training. This tool allows the generation of a model based on deep learning using the collection of samples (image chips and labels) captured in the training process as input data. For the capture of image chips and their labeling, a relevant region of interest in the study area was chosen for flood detection. Using spatial analysis tools, Sentinel-1 image chips were cropped and extracted from the region of interest. These chips could be overlapped to ensure complete coverage of the study area. A suitable resolution for the image chips was also selected to ensure the representation of relevant features for flood detection. Finally, inclusion and exclusion criteria were applied during labeling to ensure the quality and accuracy of the labels. These criteria were based on agreement with reference data and consistency with historical information (RGB and binary layer images).
To carry out the training of the model, a series of parameters were adjusted in the ArcGIS platform.
  • Epochs. The maximum number of cycles or iterations back and forth of all training samples through the neural network. Different values were taken: 25, 50, 75, and 100 epochs (see Table 5).
  • Batch size. The number of samples to be processed at the same time. It depends on the hardware and the number of processors or GPUs available. A value of 8 was taken.
  • Chip size. A value equal to the size of the sample site images or image chips. The larger the chip size, the more information can be displayed and processed. In our case, the value corresponded to 256 pixels.
The parameters were selected and adjusted according to the needs of the approach and the available hardware. Some potential effects of this selection and adjustment are as follows. (1) Epochs: an insufficient number may result in a model that learns little of the patterns and features relevant in the data, while an excessive number of epochs may lead to overfitting of the model. (2) Batch size: a small batch may lead to more frequent weight updates but with higher variability, which would affect the stability of the training; a large batch may lead to less frequent but more stable updates. In this sense, the optimal batch size depends on the amount of training data, the computational resources available, and the complexity of the model. (3) Chip size: a suitable size should consider the spatial resolution of the Sentinel-1 imagery and the scale of the features relevant to flood detection. A small size can lead to the loss of essential details and a lack of spatial context, while an excessively large chip size can reduce the model’s ability to capture fine details and affect the accuracy of pixel-level flood detection.
ResNet-34 [90] was used as the backbone or residual network; it consists of 34 layers pre-trained with more than 1 million images from the ImageNet dataset [91]. Of the dataset, 10% was used to validate the model during learning. ArcGIS provides a checkbox (which was disabled) to stop the training process when the learning curve starts to flatten; disabling it avoids the premature or incomplete termination of the training process. Finally, a model definition output was generated with the trained model and aspects such as (1) the learning rate, which is automatically adjusted to an optimal value together with the weights of the model during the backpropagation of the data through the neural network [92]; (2) the training and validation loss functions, which indicate how well the model fits the training and validation data [93]; and (3) the average accuracy score, the percentage of correct model detections from the results obtained with the internal validation samples.
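For readers who prefer scripting over the ArcGIS Pro interface, the sketch below mirrors this configuration with the arcgis.learn Python API (batch size 8, 256-pixel chips, ResNet-34 backbone, 10% validation split). The chip folder path and model name are placeholders, and this is an assumed scripted equivalent of the geoprocessing tool, not the exact procedure used.

```python
# Sketch of an equivalent scripted training workflow with arcgis.learn.
from arcgis.learn import prepare_data, UnetClassifier

data = prepare_data(
    r"C:\projects\tabasco_floods\training_chips",  # exported chips and labels
    batch_size=8,
    chip_size=256,
    val_split_pct=0.1,   # 10% of the samples held out for validation
)

model = UnetClassifier(data, backbone="resnet34")
lr = model.lr_find()                          # suggests an initial learning rate
model.fit(epochs=25, lr=lr, early_stopping=False)
model.save("unet_flood_25ep")                 # writes the trained model definition
```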

3.2.3. Model Validation and Optimization

Image classification is a GPU-intensive process that can take time, depending on the computer’s hardware. Once the deep learning model is created, it can be used repeatedly to determine the presence of flooding in an area of interest. For this, the created model was used to classify the flooded areas in the same geographical area as in the training data. Figure 8 shows an output example: each pixel of the classified satellite image corresponds to one of the classes created earlier, together with the parameters used and loaded to classify the SAR image for 2022.
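A sketch of this inference step with arcpy's Classify Pixels Using Deep Learning tool follows; the 2022 scene name, model definition path, and tiling arguments are placeholders.

```python
# Sketch: applying the trained model to a new SAR scene with arcpy.
import arcpy

out = arcpy.ia.ClassifyPixelsUsingDeepLearning(
    in_raster="sar_2022_preprocessed.tif",
    in_model_definition=r"models\unet_flood_100ep\unet_flood_100ep.emd",
    arguments="padding 64;batch_size 4",   # inference tiling parameters
)
out.save(r"C:\projects\tabasco_floods\flood_classified_2022.tif")
```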

4. Results Obtained

The results of the training of the final model are described below in terms of the learning rate, training and validation loss, and the estimated precision of the model in the task for which it was trained.
  • Learning rate. A number that controls the rate at which model weights are updated during training. It determines the speed at which the model learns. Table 6 shows each training period’s initial and final values (default value 0.01).
  • Training and validation loss. Training loss measures the model error on the training data, i.e., how well the model fits the data; the lower the value, the better the performance. Validation loss measures the model error on the validation data, i.e., how well the model generalizes to new data it has not seen before; the smaller the value, the better the model will perform on the validation data. According to the established training parameters, the validation loss was calculated for 10% of the total samples used. Figure 9 shows the training and validation loss curves of the deep learning models trained with different numbers of epochs and training samples.
  • Precision. The percentage of times that the model makes a correct prediction with respect to the total number of predictions made. Thus, it measures the proportion of times that the model correctly labels an instance of a data sample. Figure 10 compares the image chips taken as samples with the classifications made by the models, in order to compare the results and precision of the models. Table 7 presents the average precision of each model, as well as other quality validation parameters.
Figure 9. Training and validation losses in the trained model: (a) 25 epochs, (b) 50 epochs, (c) 75 epochs, and (d) 100 epochs.
Figure 10. Automatic comparison of the training samples and the classifications generated by the trained model: (a) 25 epochs, (b) 50 epochs, (c) 75 epochs, and (d) 100 epochs.
Table 6. Initial and final values of the learning rate for the different numbers of epochs with which the model was trained.

Epochs    Initial learning rate    Final learning rate
25        0.000005248              0.000052480
50        0.000015848              0.000158489
75        0.000022909              0.000229099
100       0.000006309              0.000063096

Model Evaluation

The result of the analysis of the images to detect flooded regions can be classified by the pixels determined as water or non-water regions. In this sense, it is necessary to calculate the accuracy of the pixel classification. For this, we use (1) precision (see Equation (1)), to determine how many of the predicted water pixels match the labeled water pixels; (2) recall (see Equation (2)), which takes into account false negatives to penalize the model; and (3) F1, the harmonic mean of precision and recall. Table 7 shows the validation parameters mentioned above.
$$\mathrm{Precision} = \frac{TP}{TP + FP} \tag{1}$$

$$\mathrm{Recall} = \frac{TP}{TP + FN} \tag{2}$$
where TP (true positive) is the set of pixels that are correctly classified as water; FP (false positive) is the set of pixels from non-flooded areas classified as flooded; FN (false negative) is the set of pixels from flooded areas classified as non-flooded.
Since precision and recall must both be high, F1 is a compensation metric for over- and under-segmentation. The formula for F1 is shown in Equation (3).

$$F_1 = \frac{2 \times \mathrm{Precision} \times \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}} \tag{3}$$
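As a minimal illustration, the following Python sketch computes these three metrics from binary prediction and reference masks, following Equations (1)-(3); the toy arrays are illustrative, not study data.

```python
# Sketch: precision, recall, and F1 for a binary flood mask against reference
# labels. 1 = water/flooded pixel, 0 = non-flooded.
import numpy as np

def evaluate_flood_mask(pred: np.ndarray, truth: np.ndarray):
    tp = np.sum((pred == 1) & (truth == 1))  # water pixels correctly detected
    fp = np.sum((pred == 1) & (truth == 0))  # non-flooded pixels flagged as water
    fn = np.sum((pred == 0) & (truth == 1))  # flooded pixels missed
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

pred = np.array([[1, 1], [0, 0]])            # toy prediction
truth = np.array([[1, 0], [1, 0]])           # toy reference
print(evaluate_flood_mask(pred, truth))      # (0.5, 0.5, 0.5)
```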
Table 7. Quality evaluations of the trained neural models (class: flood).

Chips    Epochs    Precision    Recall    F1
256      25        82%          40%       53%
566      50        81%          78%       79%
716      75        74%          72%       73%
1036     100       94%          92%       93%
It should be noted that the model was trained using a dataset of SAR images and flood labels, and the model parameters were optimized to achieve the best possible performance. The latest version of the model was executed over a large part of the Tabasco territory (mainly the Ríos de Tabasco area), applying the previously optimized parameters to evaluate its ability to identify the pixels of the SAR image with the presence of floods. Although the results were slightly affected by regional and climatic characteristics, the model showed great potential for the detection, classification, and mapping of areas with flood presence (see Figure 11).

5. Conclusions

Deep learning methods require a rigorous training process to obtain high precision in pixel classification. This is achieved by integrating many samples and iterations in the neural network to ensure an adequate representation of the patterns present in the image. As shown in Table 7, training the model with 256 samples (chips) and 25 epochs resulted in a precision of 82%, a recall of 40%, and an F1 of 53%. However, when training the model with 1036 samples and 100 epochs, the precision was 94%, the recall 92%, and the F1 93%.
It is important to note that the training process is not limited to integrating samples and iterations. It also requires many tests and evaluations to determine the effects of the different training and classification parameters on model performance. This allows the generated models to be adjusted and optimized, obtaining better results in pixel classification. It is also essential to have powerful hardware resources to implement and train deep learning models that automate different cartographic tasks, because DL models require a large volume of data to train, and processing these data involves many computations and complex mathematical operations. With powerful hardware resources, such as a high-capacity GPU or many processing cores, the training time can be reduced and a larger amount of data can be processed, which improves the accuracy of the modeled tasks. In other words, greater processing power accelerates training, enabling better performance of DL models and higher accuracy in water body mapping tasks.
On the other hand, repeatedly using the same DL model in the same study area may be associated with limitations and errors. Floods can alter the topography and terrain characteristics; therefore, if the model is trained with pre-flood images and used to classify post-flood images, it may not capture terrain changes and new features that may emerge. Another issue is that floods can vary in magnitude and extent over time. If the model is trained on historical data and applied to more recent imagery, there may be significant differences in flood conditions, which can lead to a lack of model adaptability and decreased classification accuracy. Furthermore, if the training data used for the model have biases or limitations, such as limited coverage of flood events or a lack of diversity in lighting conditions and scale, the model may not be able to generalize, producing incorrect or biased results.
The results presented in this manuscript show that the information obtained from SAR images (Sentinel-1) is of great importance in monitoring emergencies and natural disasters. These images can be obtained under adverse weather or atmospheric conditions, such as rain, drizzle, and cloud cover. However, SAR images may contain some errors that can influence flood detection, such as (1) image artifacts, such as false edges or discontinuities, caused by the acquisition process and image processing; and (2) terrain topography, which can affect the backscattering of SAR waves and generate false positives or false negatives. Likewise, optical images (Sentinel-2) help to obtain information about terrestrial dynamics, but they are only somewhat effective under adverse conditions because the presence of clouds or noise affects the results. The combination of these technologies and tools made it possible to determine flooded areas and obtain an estimate of the territorial extension affected by floods. They can also be used by institutions dedicated to disaster prevention, risk mapping, and relief and resilience in vulnerable communities.

Author Contributions

Conceptualization, J.D.-P. and R.A.-S.; methodology, F.P.-M.; writing—original draft preparation, F.P.-M.; writing—review and editing, R.A.-S. All authors have read and agreed to the published version of the manuscript.

Funding

Thanks to National Technology of Mexico (TecNM). Council reference: ITESLRIO/PGP-01.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Centre for Research on the Epidemiology of Disasters (CRED). 2021 Disasters in Numbers; Technical Report; CRED: Bangalore, India, 2021.
  2. Guha-Sapir, D.; Below, R.; Hoyois, P. EM-DAT: The CRED/OFDA International Disaster Database. 2023. Available online: https://www.emdat.be/ (accessed on 4 March 2023).
  3. Wallemacq, P.; House, R. Economic Losses, Poverty and Disasters (1998–2017); Technical Report; Centre for Research on the Epidemiology of Disasters, United Nations Office for Disaster Risk Reduction: Brussels, Belgium, 2018.
  4. Paz, J.; Jiménez, F.; Sánchez, B. Urge Manejo del Agua en Tabasco; Technical Report; Universidad Nacional Autónoma de México y Asociación Mexicana de Ciencias para el Desarrollo Regional A.C.: Ciudad de México, Mexico, 2018.
  5. CEPAL. Tabasco: Características e Impacto Socioeconómico de las Inundaciones Provocadas a Finales de Octubre y a Comienzos de Noviembre de 2007 por el Frente Frío Número 4; Technical Report; CEPAL: Ciudad de México, Mexico, 2008.
  6. Perevochtchikova, M.; Torre, J. Causas de un desastre: Inundaciones del 2007 en Tabasco, México. J. Lat. Am. Geogr. 2010, 9, 73–98.
  7. Schumann, G.J.P.; Moller, D.K. Microwave remote sensing of flood inundation. Phys. Chem. Earth Parts A/B/C 2015, 83–84, 84–95.
  8. Lalitha, V.; Latha, B. A review on remote sensing imagery augmentation using deep learning. Mater. Today Proc. 2022, 62, 4772–4778.
  9. Twele, A.; Cao, W.; Plank, S.; Martinis, S. Sentinel-1-based flood mapping: A fully automated processing chain. Int. J. Remote Sens. 2016, 37, 2990–3004.
  10. Chini, M.; Pelich, R.; Pulvirenti, L.; Pierdicca, N.; Hostache, R.; Matgen, P. Sentinel-1 InSAR Coherence to Detect Floodwater in Urban Areas: Houston and Hurricane Harvey as a Test Case. Remote Sens. 2019, 11, 107.
  11. Singh, K.K.; Singh, A. Identification of flooded area from satellite images using Hybrid Kohonen Fuzzy C-Means sigma classifier. Egypt. J. Remote Sens. Space Sci. 2017, 20, 147–155.
  12. Alzubaidi, L.; Zhang, J.; Humaidi, A.J.; Al-Dujaili, A. Review of deep learning: Concepts, CNN architectures, challenges, applications, future directions. J. Big Data 2021, 8, 53.
  13. Rosentreter, J.; Hagensieker, R.; Waske, B. Towards large-scale mapping of local climate zones using multitemporal Sentinel 2 data and convolutional neural networks. Remote Sens. Environ. 2020, 237, 111472.
  14. Martinis, S.; Groth, S.; Wieland, M.; Knopp, L.; Rättich, M. Towards a global seasonal and permanent reference water product from Sentinel-1/2 data for improved flood mapping. Remote Sens. Environ. 2022, 278, 113077.
  15. Ndikumana, E.; Ho Tong Minh, D.; Baghdadi, N.; Courault, D.; Hossard, L. Deep Recurrent Neural Network for Agricultural Classification using multitemporal SAR Sentinel-1 for Camargue, France. Remote Sens. 2018, 10, 1217.
  16. Yapıcı, M.M.; Tekerek, A.; Topaloğlu, N. Literature Review of Deep Learning Research Areas. Gazi Mühendislik Bilimleri Dergisi 2019, 5, 188–215.
  17. Bourenane, H.; Bouhadad, Y.; Tas, M. Liquefaction hazard mapping in the city of Boumerdès, Northern Algeria. Bull. Eng. Geol. Environ. 2017, 77, 1473–1489.
  18. Yariyan, P.; Janizadeh, S.; Phong, T.; Nguyen, H.D.; Costache, R.; Le, H.; Pham, B.T.; Pradhan, B.; Tiefenbacher, J.P. Improvement of Best First Decision Trees Using Bagging and Dagging Ensembles for Flood Probability Mapping. Water Resour. Manag. Int. J. Publ. Eur. Water Resour. Assoc. (EWRA) 2020, 34, 3037–3053.
  19. Tsyganskaya, V.; Martinis, S.; Marzahn, P.; Ludwig, R. SAR-based detection of flooded vegetation—A review of characteristics and approaches. Int. J. Remote Sens. 2018, 39, 2255–2293.
  20. Shen, X.; Wang, D.; Mao, K.; Anagnostou, E.; Hong, Y. Inundation Extent Mapping by Synthetic Aperture Radar: A Review. Remote Sens. 2019, 11, 879.
  21. Martinis, S.; Kersten, J.; Twele, A. A fully automated TerraSAR-X based flood service. ISPRS J. Photogramm. Remote Sens. 2015, 104, 203–212.
  22. Tarpanelli, A.; Brocca, L.; Melone, F.; Moramarco, T. Hydraulic modelling calibration in small rivers by using coarse resolution synthetic aperture radar imagery. Hydrol. Process. 2013, 27, 1321–1330.
  23. Schumann, G.; Henry, J.; Hoffmann, L.; Pfister, L.; Pappenberger, F.; Matgen, P. Demonstrating the high potential of remote sensing in hydraulic modelling and flood risk management. In Proceedings of the Annual Conference of the Remote Sensing and Photogrammetry Society with the NERC Earth Observation Conference, Portsmouth, UK, 6–9 September 2005; pp. 6–9.
  24. Schumann, G.; Di Baldassarre, G.; Alsdorf, D.; Bates, P. Near real-time flood wave approximation on large rivers from space: Application to the River Po, Italy. Water Resour. Res. 2010, 46, 7672.
  25. Dinh, D.A.; Elmahrad, B.; Leinenkugel, P.; Newton, A. Time series of flood mapping in the Mekong Delta using high resolution satellite images. IOP Conf. Ser. Earth Environ. Sci. 2019, 266, 012011.
  26. Jiang, X.; Liang, S.; He, X.; Ziegler, A.D.; Lin, P.; Pan, M.; Wang, D.; Zou, J.; Hao, D.; Mao, G.; et al. Rapid and large-scale mapping of flood inundation via integrating spaceborne synthetic aperture radar imagery with unsupervised deep learning. ISPRS J. Photogramm. Remote Sens. 2021, 178, 36–50.
  27. Gou, Z. Urban Road Flooding Detection System based on SVM Algorithm. In Proceedings of the ICMLCA 2021: 2nd International Conference on Machine Learning and Computer Application, Shenyang, China, 17–19 December 2021; pp. 1–8.
  28. Tanim, A.H.; McRae, C.B.; Tavakol-Davani, H.; Goharian, E. Flood Detection in Urban Areas Using Satellite Imagery and Machine Learning. Water 2022, 14, 1140.
  29. Pech-May, F.; Aquino-Santos, R.; Rios-Toledo, G.; Posadas-Durán, J.P.F. Mapping of Land Cover with Optical Images, Supervised Algorithms, and Google Earth Engine. Sensors 2022, 22, 4729.
  30. Kunverji, K.; Shah, K.; Shah, N. A Flood Prediction System Developed Using Various Machine Learning Algorithms. In Proceedings of the 4th International Conference on Advances in Science & Technology (ICAST2021), Virtual, 5–8 October 2021.
  31. Alexander, C. Normalised difference spectral indices and urban land cover as indicators of land surface temperature (LST). Int. J. Appl. Earth Obs. Geoinf. 2020, 86, 102013.
  32. Kumar, V.; Sharma, A.; Bhardwaj, R.; Thukral, A.K. Comparison of different reflectance indices for vegetation analysis using Landsat-TM data. Remote Sens. Appl. Soc. Environ. 2018, 12, 70–77.
  33. Campbell, J.; Wynne, R. Introduction to Remote Sensing, 5th ed.; Guilford Publications: New York, NY, USA, 2011.
  34. Rouse, J.W., Jr.; Haas, R.H.; Schell, J.A.; Deering, D.W. Monitoring Vegetation Systems in the Great Plains with ERTS. In Proceedings of the Third ERTS Symposium, NASA, Washington, DC, USA, 10–14 December 1974; Volume 351, pp. 309–317.
  35. Gao, B.-C. NDWI—A normalized difference water index for remote sensing of vegetation liquid water from space. Remote Sens. Environ. 1996, 58, 257–266.
  36. Deroliya, P.; Ghosh, M.; Mohanty, M.P.; Ghosh, S.; Rao, K.D.; Karmakar, S. A novel flood risk mapping approach with machine learning considering geomorphic and socio-economic vulnerability dimensions. Sci. Total Environ. 2022, 851, 158002.
  37. Zhou, Y.; Luo, J.; Shen, Z.; Hu, X.; Yang, H. Multiscale Water Body Extraction in Urban Environments From Satellite Images. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 4301–4312.
  38. Tulbure, M.G.; Broich, M.; Stehman, S.V.; Kommareddy, A. Surface water extent dynamics from three decades of seasonally continuous Landsat time series at subcontinental scale in a semi-arid region. Remote Sens. Environ. 2016, 178, 142–157.
  39. Goodfellow, I.; Bengio, Y.; Courville, A. Deep Learning; MIT Press: Cambridge, MA, USA, 2016. Available online: http://www.deeplearningbook.org (accessed on 12 February 2021).
  40. Bentivoglio, R.; Isufi, E.; Jonkman, S.N.; Taormina, R. Deep learning methods for flood mapping: A review of existing applications and future research directions. Hydrol. Earth Syst. Sci. 2022, 26, 4345–4378.
  41. Patel, C.P.; Sharma, S.; Gulshan, V. Evaluating Self and Semi-Supervised Methods for Remote Sensing Segmentation Tasks. arXiv 2021, arXiv:2111.10079.
  42. Bonafilia, D.; Tellman, B.; Anderson, T.; Issenberg, E. Sen1Floods11: A georeferenced dataset to train and test deep learning flood algorithms for Sentinel-1. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Seattle, WA, USA, 14–19 June 2020; pp. 835–845.
  43. UNOSAT. UNOSAT Flood Dataset. 2019. Available online: http://floods.unosat.org/geoportal/catalog/main/home.page (accessed on 18 June 2022).
  44. Drakonakis, G.I.; Tsagkatakis, G.; Fotiadou, K.; Tsakalides, P. OmbriaNet-Supervised Flood Mapping via Convolutional Neural Networks Using Multitemporal Sentinel-1 and Sentinel-2 Data Fusion. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2022, 15, 2341–2356.
  45. Rambour, C.; Audebert, N.; Koeniguer, E.; Le Saux, B.; Crucianu, M.; Datcu, M. SEN12-FLOOD: A SAR and Multispectral Dataset for Flood Detection; IEEE: Piscataway, NJ, USA, 2020.
  46. Rambour, C.; Audebert, N.; Koeniguer, E.; Le Saux, B.; Crucianu, M.; Datcu, M. Flood detection in time series of optical and SAR images. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2020, XLIII-B2-2020, 1343–1346.
  47. Mateo-Garcia, G.; Veitch-Michaelis, J.; Smith, L.; Oprea, S.; Schumann, G.; Gal, Y.; Baydin, A.; Backes, D. Towards global flood mapping onboard low cost satellites with machine learning. Sci. Rep. 2021, 11, e7249.
  48. Bai, Y.; Wu, W.; Yang, Z.; Yu, J.; Zhao, B.; Liu, X.; Yang, H.; Mas, E.; Koshimura, S. Enhancement of Detecting Permanent Water and Temporary Water in Flood Disasters by Fusing Sentinel-1 and Sentinel-2 Imagery Using Deep Learning Algorithms: Demonstration of Sen1Floods11 Benchmark Datasets. Remote Sens. 2021, 13, 2220.
  49. Zhong, H.; Chen, C.; Jin, Z.; Hua, X. Deep Robust Clustering by Contrastive Learning. arXiv 2020, arXiv:2008.03030.
  50. Huang, M.; Jin, S. Rapid Flood Mapping and Evaluation with a Supervised Classifier and Change Detection in Shouguang Using Sentinel-1 SAR and Sentinel-2 Optical Data. Remote Sens. 2020, 12, 2073.
  51. Jung, H.; Oh, Y.; Jeong, S.; Lee, C.; Jeon, T. Contrastive Self-Supervised Learning With Smoothed Representation for Remote Sensing. IEEE Geosci. Remote Sens. Lett. 2022, 19, 1–5.
  52. Zhao, J.; Guo, W.; Cui, S.; Zhang, Z.; Yu, W. Convolutional Neural Network for SAR image classification at patch level. In Proceedings of the 2016 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Beijing, China, 10–15 July 2016; pp. 945–948.
  53. Betbeder, J.; Rapinel, S.; Corpetti, T.; Pottier, E.; Corgne, S.; Hubert-Moy, L. Multitemporal classification of TerraSAR-X data for wetland vegetation mapping. J. Appl. Remote Sens. 2014, 8, 083648.
  54. Katiyar, V.; Tamkuan, N.; Nagai, M. Near-Real-Time Flood Mapping Using Off-the-Shelf Models with SAR Imagery and Deep Learning. Remote Sens. 2021, 13, 2334.
  55. Xing, Z.; Yang, S.; Zan, X.; Dong, X.; Yao, Y.; Liu, Z.; Zhang, X. Flood vulnerability assessment of urban buildings based on integrating high-resolution remote sensing and street view images. Sustain. Cities Soc. 2023, 92, 104467.
  56. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. In Medical Image Computing and Computer-Assisted Intervention—MICCAI 2015; Navab, N., Hornegger, J., Wells, W.M., Frangi, A.F., Eds.; Springer International Publishing: Cham, Switzerland, 2015; pp. 234–241.
  57. Moradi Sizkouhi, A.; Aghaei, M.; Esmailifar, S.M. A deep convolutional encoder-decoder architecture for autonomous fault detection of PV plants using multi-copters. Sol. Energy 2021, 223, 217–228.
  58. Scepanovic, S.; Antropov, O.; Laurila, P.; Rauste, Y.; Ignatenko, V.; Praks, J. Wide-Area Land Cover Mapping With Sentinel-1 Imagery Using Deep Learning Semantic Segmentation Models. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 10357–10374.
  59. Chen, L.C.; Zhu, Y.; Papandreou, G.; Schroff, F.; Adam, H. Encoder-decoder with atrous separable convolution for semantic image segmentation. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 801–818.
  60. Zhao, H.; Shi, J.; Qi, X.; Wang, X.; Jia, J. Pyramid Scene Parsing Network. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; IEEE Computer Society: Los Alamitos, CA, USA, 2017; pp. 6230–6239.
  61. Yu, C.; Wang, J.; Peng, C.; Gao, C.; Yu, G.; Sang, N. BiSeNet: Bilateral Segmentation Network for Real-Time Semantic Segmentation. In Computer Vision—ECCV 2018; Ferrari, V., Hebert, M., Sminchisescu, C., Weiss, Y., Eds.; Springer International Publishing: Cham, Switzerland, 2018; pp. 334–349.
  62. Jégou, S.; Drozdzal, M.; Vazquez, D.; Romero, A.; Bengio, Y. The one hundred layers tiramisu: Fully convolutional densenets for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Honolulu, HI, USA, 21–26 July 2017; pp. 11–19.
  63. Pohlen, T.; Hermans, A.; Mathias, M.; Leibe, B. Full-resolution residual networks for semantic segmentation in street scenes. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 4151–4160.
  64. Konapala, G.; Kumar, S.V.; Khalique Ahmad, S. Exploring Sentinel-1 and Sentinel-2 diversity for flood inundation mapping using deep learning. ISPRS J. Photogramm. Remote Sens. 2021, 180, 163–173.
  65. Rudner, T.G.J.; Rußwurm, M.; Fil, J.; Pelich, R.; Bischke, B.; Kopačková, V.; Biliński, P. Multi3Net: Segmenting Flooded Buildings via Fusion of Multiresolution, Multisensor, and Multitemporal Satellite Imagery. In Proceedings of the AAAI Conference on Artificial Intelligence, Honolulu, HI, USA, 29–31 January 2019; Volume 33, pp. 702–709.
  66. Li, Y.; Martinis, S.; Wieland, M. Urban flood mapping with an active self-learning convolutional neural network based on TerraSAR-X intensity and interferometric coherence. ISPRS J. Photogramm. Remote Sens. 2019, 152, 178–191.
  67. Shi, X.; Chen, Z.; Wang, H.; Yeung, D.; Wong, W.; Woo, W. Convolutional LSTM Network: A Machine Learning Approach for Precipitation Nowcasting. In Proceedings of the Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015, Montreal, QC, Canada, 7–12 December 2015; pp. 802–810.
  68. Marc, R.; Marco, K. Multi-Temporal Land Cover Classification with Sequential Recurrent Encoders. ISPRS Int. J. Geo-Inf. 2018, 7, 129.
  69. Volpi, M.; Tuia, D. Dense Semantic Labeling of Subdecimeter Resolution Images With Convolutional Neural Networks. IEEE Trans. Geosci. Remote Sens. 2017, 55, 881–893.
  70. Bengio, Y.; Courville, A.C.; Vincent, P. Representation Learning: A Review and New Perspectives. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 1798–1828.
  71. Ienco, D.; Gaetano, R.; Interdonato, R.; Ose, K.; Ho Tong Minh, D. Combining Sentinel-1 and Sentinel-2 Time Series via RNN for Object-Based Land Cover Classification. In Proceedings of the IGARSS 2019—2019 IEEE International Geoscience and Remote Sensing Symposium, Yokohama, Japan, 28 July–2 August 2019; pp. 4881–4884.
  72. Billah, M.; Islam, A.S.; Mamoon, W.B.; Rahman, M.R. Random forest classifications for landuse mapping to assess rapid flood damage using Sentinel-1 and Sentinel-2 data. Remote Sens. Appl. Soc. Environ. 2023, 30, 100947.
  73. Cazals, C.; Rapinel, S.; Frison, P.L.; Bonis, A.; Mercier, G.; Mallet, C.; Corgne, S.; Rudant, J.P. Mapping and Characterization of Hydrological Dynamics in a Coastal Marsh Using High Temporal Resolution Sentinel-1A Images. Remote Sens. 2016, 8, 570.
  74. Badrinarayanan, V.; Kendall, A.; Cipolla, R. SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 2481–2495.
  75. Nemni, E.; Bullock, J.; Belabbes, S.; Bromley, L. Fully Convolutional Neural Network for Rapid Flood Segmentation in Synthetic Aperture Radar Imagery. Remote Sens. 2020, 12, 2532.
  76. Bullock, J.; Cuesta-Lázaro, C.; Quera-Bofarull, A. XNet: A convolutional neural network (CNN) implementation for medical x-ray image segmentation suitable for small datasets. In Proceedings of the Medical Imaging 2019: Biomedical Applications in Molecular, Structural, and Functional Imaging, San Diego, CA, USA, 16–21 February 2019; Volume 10953, pp. 453–463.
  77. Ngo, P.T.T.; Hoang, N.D.; Pradhan, B.; Nguyen, Q.K.; Tran, X.T.; Nguyen, Q.M.; Nguyen, V.N.; Samui, P.; Tien Bui, D. A Novel Hybrid Swarm Optimized Multilayer Neural Network for Spatial Prediction of Flash Floods in Tropical Areas Using Sentinel-1 SAR Imagery and Geospatial Data. Sensors 2018, 18, 3704. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  78. Sarker, C.; Mejias, L.; Maire, F.; Woodley, A. Flood Mapping with Convolutional Neural Networks Using Spatio-Contextual Pixel Information. Remote Sens. 2019, 11, 2331. [Google Scholar] [CrossRef] [Green Version]
  79. Xu, C.; Zhang, S.; Zhao, B.; Liu, C.; Sui, H.; Yang, W.; Mei, L. SAR image water extraction using the attention U-net and multi-scale level set method: Flood monitoring in South China in 2020 as a test case. Geo-Spat. Inf. Sci. 2022, 25, 155–168. [Google Scholar] [CrossRef]
  80. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar] [CrossRef] [Green Version]
  81. Katiyar, V.; Tamkuan, N.; Nagai, M. Flood area detection using SAR images with deep neural. In Proceedings of the 41st Asian Conference of Remote Sensing, Deqing, China, 9–11 November 2020; Volume 1. [Google Scholar]
  82. Zhao, B.; Sui, H.; Xu, C.; Liu, J. Deep Learning Approach for Flood Detection Using SAR Image: A Case Study in Xinxiang. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2022, XLIII-B3-2022, 1197–1202. [Google Scholar] [CrossRef]
  83. Enriquez, M.F.; Norton, R.; Cueva, J. Inundaciones de 2020 en Tabasco: Aprender del Pasado para Preparar el Futuro; Technical Report; Centro Nacional de Prevención de Desastres: Ciudad de México, Mexico, 2022. [Google Scholar]
  84. CONAGUA. Situación de los Recursos Hídricos. 2019. Available online: https://www.gob.mx/conagua/acciones-y-programas/situacion-de-los-recursos-hidricos (accessed on 27 November 2021).
  85. Tzouvaras, M.; Danezis, C.; Hadjimitsis, D.G. Differential SAR Interferometry Using Sentinel-1 Imagery-Limitations in Monitoring Fast Moving Landslides: The Case Study of Cyprus. Geosciences 2020, 10, 236. [Google Scholar] [CrossRef]
  86. ESA. SNAP (Sentinel Application Platform). 2020. Available online: https://www.eoportal.org/other-space-activities/snap-sentinel-application-platform#snap-sentinel-application-platform-toolbox (accessed on 12 March 2021).
  87. Ponmani, E.; Palani, S. Image denoising and despeckling methods for SAR images to improve image enhancement performance: A survey. Multim. Tools Appl. 2021, 80, 26547–26569. [Google Scholar] [CrossRef]
  88. Yoshihara, N. ArcGIS-based protocol to calculate the area fraction of landslide for multiple catchments. MethodsX 2023, 10, 102064. [Google Scholar] [CrossRef]
  89. Brisco, B. Mapping and Monitoring Surface Water and Wetlands with Synthetic Aperture Radar. In Remote Sensing of Wetlands: Applications and Advances; CRC Press: Boca Raton, FL, USA, 2015; pp. 119–136. [Google Scholar] [CrossRef]
  90. Yi, L.; Yang, G.; Wan, Y. Research on Garbage Image Classification and Recognition Method Based on Improved ResNet Network Model. In Proceedings of the 2022 5th International Conference on Big Data and Internet of Things (BDIOT’22), Beijing, China, 11–13 August 2023; Association for Computing Machinery: New York, NY, USA, 2022; pp. 57–63. [Google Scholar] [CrossRef]
  91. Mishkin, D.; Sergievskiy, N.; Matas, J. Systematic evaluation of convolution neural network advances on the Imagenet. Comput. Vis. Image Underst. 2017, 161, 11–19. [Google Scholar] [CrossRef] [Green Version]
  92. Katherine, L. How to Choose a Learning Rate Scheduler for Neural Networks. Available online: https://neptune.ai/blog/how-to-choose-a-learning-rate-scheduler (accessed on 2 December 2022).
  93. Baeldung. What Is a Learning Curve in Machine Learning? Available online: https://www.baeldung.com/cs/learning-curve-ml#:~:text=A%20learning%20curve%20is%20just,representation%20of%20the%20learning%20process (accessed on 2 December 2022).
Figure 1. Floods around the world from 2000 to 2022. Dataset obtained from EM-DAT.
Figure 2. The methodology used for flood mapping with Sentinel-1 SAR images and U-Net.
Figure 3. Geographical location of the study area: the Ríos subregion, Tabasco, Mexico.
Figure 4. Location of the tile containing the SAR images used in the study.
Figure 5. SAR image processing: (a) unprocessed; (b) after radiometric correction; (c) after speckle filtering; (d) after geometric correction; (e) with RGB layer; and (f) with binary layer.
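For readers who want to reproduce the Figure 5 chain outside the SNAP desktop application, the same steps can be scripted with SNAP's Python bindings (snappy). The following is a minimal sketch: the operator names ('Calibration', 'Speckle-Filter', 'Terrain-Correction') are standard SNAP operators, but the input file name, the Lee filter choice, and the output format are illustrative assumptions, not the exact configuration used in this work.

```python
# Minimal sketch of the Figure 5 preprocessing chain with ESA SNAP's snappy.
# The input file name and parameter choices below are illustrative only.
from snappy import ProductIO, GPF, HashMap

def apply_op(product, op_name, **params):
    """Wrap GPF.createProduct so operator parameters read like keyword args."""
    java_params = HashMap()
    for key, value in params.items():
        java_params.put(key, value)
    return GPF.createProduct(op_name, java_params, product)

raw = ProductIO.readProduct('S1A_IW_GRDH_example.zip')             # hypothetical scene
calibrated = apply_op(raw, 'Calibration')                          # (b) radiometric correction
despeckled = apply_op(calibrated, 'Speckle-Filter', filter='Lee')  # (c) speckle filter
corrected = apply_op(despeckled, 'Terrain-Correction')             # (d) geometric correction
ProductIO.writeProduct(corrected, 'preprocessed_scene', 'GeoTIFF')
```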
Figure 6. Creation of training samples: (a) image used to create the training samples; (b) one of the sample regions shown alongside its generated mask.
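As a rough illustration of how such sample/mask pairs can be produced programmatically, the sketch below tiles a scene and its binary water mask into fixed-size chips. The array names, the 256-pixel tile size, and the keep-only-tiles-with-water rule are assumptions for illustration; the paper's workflow relies on the ArcGIS platform rather than this code.

```python
# Sketch: cut a SAR scene and its binary water mask into training chips.
# Tile size and the water-only filter are illustrative assumptions.
import numpy as np

def make_chips(image: np.ndarray, mask: np.ndarray, size: int = 256):
    chips = []
    rows, cols = image.shape
    for r in range(0, rows - size + 1, size):
        for c in range(0, cols - size + 1, size):
            img_tile = image[r:r + size, c:c + size]
            msk_tile = mask[r:r + size, c:c + size]
            if msk_tile.any():  # keep only tiles containing some water pixels
                chips.append((img_tile, msk_tile))
    return chips

scene = np.random.rand(1024, 1024).astype(np.float32)  # stand-in backscatter image
water = (scene > 0.9).astype(np.uint8)                 # stand-in binary water mask
samples = make_chips(scene, water)
print(f"{len(samples)} training chips generated")
```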
Figure 7. U-Net architecture for flood mapping.
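Since the U-Net in this work is trained through the ArcGIS platform, the sketch below is not its implementation; it is a minimal PyTorch rendering of the encoder-decoder-with-skip-connections idea that Figure 7 depicts, with illustrative depth and channel sizes.

```python
# Minimal U-Net-style encoder-decoder sketch (illustrative, not the paper's model).
import torch
import torch.nn as nn

def double_conv(in_ch: int, out_ch: int) -> nn.Sequential:
    # Two 3x3 convolutions with ReLU: the basic U-Net building block.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
    )

class MiniUNet(nn.Module):
    def __init__(self, in_ch: int = 1, n_classes: int = 2):
        super().__init__()
        self.enc1 = double_conv(in_ch, 64)
        self.enc2 = double_conv(64, 128)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = double_conv(128, 256)
        self.up2 = nn.ConvTranspose2d(256, 128, kernel_size=2, stride=2)
        self.dec2 = double_conv(256, 128)
        self.up1 = nn.ConvTranspose2d(128, 64, kernel_size=2, stride=2)
        self.dec1 = double_conv(128, 64)
        self.head = nn.Conv2d(64, n_classes, kernel_size=1)  # per-pixel class logits

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        e1 = self.enc1(x)                                     # contracting path
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))   # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection
        return self.head(d1)                                  # expanding path output

logits = MiniUNet()(torch.randn(1, 1, 256, 256))  # one single-band 256x256 SAR chip
print(logits.shape)  # torch.Size([1, 2, 256, 256])
```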
Figure 8. Example output from the model validation and optimization process on SAR images of the study area.
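Validating a binary water map like the one in Figure 8 reduces to per-pixel counts of true and false positives and negatives. The sketch below shows the standard precision and intersection-over-union (IoU) computations; the random arrays stand in for the real prediction and reference masks and are an assumption for illustration.

```python
# Sketch: per-pixel precision and IoU for a binary water map (illustrative data).
import numpy as np

def precision_iou(pred: np.ndarray, truth: np.ndarray):
    tp = np.logical_and(pred, truth).sum()    # water predicted and present
    fp = np.logical_and(pred, ~truth).sum()   # water predicted but absent
    fn = np.logical_and(~pred, truth).sum()   # water present but missed
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    iou = tp / (tp + fp + fn) if (tp + fp + fn) else 0.0
    return precision, iou

pred = np.random.rand(256, 256) > 0.5   # stand-in predicted water mask
truth = np.random.rand(256, 256) > 0.5  # stand-in reference mask
print(precision_iou(pred, truth))
```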
Figure 11. Result of applying the deep learning model. Blue pixels indicate water bodies and flooded areas.
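Once a classified map like Figure 11 is available, the flooded extent follows directly from the water-pixel count and the ground pixel size. The sketch below assumes pixels resampled to roughly 10 m × 10 m, which is typical for Sentinel-1 IW GRD products; the binary map is a random stand-in.

```python
# Sketch: estimate flooded area from a binary water map (illustrative data).
import numpy as np

water = np.random.rand(5000, 5000) > 0.8  # stand-in for the classified water map
pixel_area_km2 = (10 * 10) / 1e6          # assumed 10 m x 10 m ground pixels
area = water.sum() * pixel_area_km2
print(f"flooded extent ~ {area:.1f} km^2")
```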
Table 1. Proposals that have used different approaches and image datasets.
Proposal | Dataset | Model * | Study Area
Tanim et al. [28], 2022 | Sentinel-1 SAR | SVM, RF, MLC | CA, USA
Pech-May et al. [29], 2022 | Sentinel-2 | SVM, RF, CART | Tabasco, Mexico
Kunverji et al. [30], 2021 | Optical | DT, RF, and GB | Bihar and Orissa, India
Billah et al. [72], 2023 | SAR Sentinel-1 and Optical Sentinel-2 | RF | Gowainghat, Bangladesh
Cazals et al. [73], 2016 | Sentinel-1 | Hysteresis Threshold | France, Europe
Bai et al. [48], 2021 | Sen1Floods11: Sentinel-1 and Sentinel-2 | CNN, BasNet | Bolivia
Katiyar et al. [54], 2021 | Sen1Floods11: Sentinel-1 and Sentinel-2 | SegNet-like [74] and U-Net [56] | Kerala, India
Nemni et al. [75], 2020 | UNOSAT: Sentinel-1 | U-Net [56] and XNet [76] | Sagaing Region, Myanmar
Drakonakis et al. [44], 2022 | Sentinel-1 and Sentinel-2 | OmbriaNet [44] | Global
Ngo et al. [77], 2018 | Sentinel-1 | FA-LM-ANN [77] | Lao Cai, Vietnam
Mateo-Garcia et al. [47], 2021 | WorldFloods ¹ | FCNN [59] | Global
Xing et al. [55], 2023 | Optical | FSA-UNet | Anhui Province, China
Sarker et al. [78], 2019 | Optical Landsat-5 | F-CNNs | Australia
Xu et al. [79], 2022 | SAR Sentinel-1 | U-Net | South China
Rambour et al. [46], 2020 | SEN12-FLOOD: Sentinel-1 and Sentinel-2 | ResNet-50 [80] | Global
Katiyar et al. [81], 2020 | ALOS-2 ²: SAR | U-Net | Saga, Kurashiki, Japan
Zhao et al. [82], 2022 | Gaofen-3 ³: SAR | U-Net | Xinxiang, China
¹ https://tinyurl.com/worldfloods, accessed on 12 May 2022. ² https://www.eorc.jaxa.jp/ALOS-2/en/about/palsar2.htm, accessed on 12 July 2021. ³ https://www.geospatial.com.co/imagenes-de-satelite/gaofen-3.html, accessed on 19 June 2022. * SVM: Support Vector Machine. RF: Random Forest. DT: Decision Tree. MLC: Maximum Likelihood Classifier. CART: Classification and Regression Tree. GB: Gradient Boosting.
Table 2. Annual distribution of rainfall in the state of Tabasco.
Season | Months
North (rainy season) | January, February
Dry | March, April, May
Temporal (rainy season) | June, July, August, September
North | October, November
Table 3. SAR imagery availability during the 2020 temporal and north seasons. IW: Interferometric Wide Swath. Gray background: selected images.
Date | Identifier | Sensor Mode
1 September 2020 | S1A_IW_GRDH_1SDV_20200901T001523_20200901T001548_034158_03F7BC_B5A0 | IW
1 September 2020 | S1A_IW_GRDH_1SDV_20200901T001458_20200901T001523_034158_03F7BC_87AA | IW
5 September 2020 | S1A_IW_GRDH_1SDV_20200905T115354_20200905T115419_034223_03FA02_ABF4 | IW
5 September 2020 | S1A_IW_GRDH_1SDV_20200905T115325_20200905T115354_034223_03FA02_9B37 | IW
10 September 2020 | S1A_IW_GRDH_1SDV_20200910T120208_20200910T120233_034296_03FC89_0B73 | IW
10 September 2020 | S1A_IW_GRDH_1SDV_20200910T120139_20200910T120208_034296_03FC89_B49B | IW
13 September 2020 | S1A_IW_GRDH_1SDV_20200913T001524_20200913T001549_034333_03FDDC_678C | IW
13 September 2020 | S1A_IW_GRDH_1SDV_20200913T001459_20200913T001524_034333_03FDDC_D146 | IW
17 September 2020 | S1A_IW_GRDH_1SDV_20200917T115325_20200917T115354_034398_040021_40C7 | IW
17 September 2020 | S1A_IW_GRDH_1SDV_20200917T115354_20200917T115419_034398_040021_7488 | IW
22 September 2020 | S1A_IW_GRDH_1SDV_20200922T120140_20200922T120209_034471_0402CF_5DE6 | IW
22 September 2020 | S1A_IW_GRDH_1SDV_20200922T120209_20200922T120234_034471_0402CF_92F4 | IW
25 September 2020 | S1A_IW_GRDH_1SDV_20200925T001459_20200925T001524_034508_040406_3B3E | IW
25 September 2020 | S1A_IW_GRDH_1SDV_20200925T001524_20200925T001549_034508_040406_D222 | IW
29 September 2020 | S1A_IW_GRDH_1SDV_20200929T115354_20200929T115419_034573_040658_E574 | IW
29 September 2020 | S1A_IW_GRDH_1SDV_20200929T115325_20200929T115354_034573_040658_955A | IW
4 October 2020 | S1A_IW_GRDH_1SDV_20201004T120209_20201004T120234_034646_0408F2_0529 | IW
4 October 2020 | S1A_IW_GRDH_1SDV_20201004T120140_20201004T120209_034646_0408F2_3612 | IW
7 October 2020 | S1A_IW_GRDH_1SDV_20201007T001525_20201007T001550_034683_040A31_8F93 | IW
7 October 2020 | S1A_IW_GRDH_1SDV_20201007T001500_20201007T001525_034683_040A31_288F | IW
11 October 2020 | S1A_IW_GRDH_1SDV_20201011T115326_20201011T115355_034748_040C78_6262 | IW
11 October 2020 | S1A_IW_GRDH_1SDV_20201011T115355_20201011T115420_034748_040C78_1100 | IW
16 October 2020 | S1A_IW_GRDH_1SDV_20201016T120140_20201016T120209_034821_040F05_580F | IW
16 October 2020 | S1A_IW_GRDH_1SDV_20201016T120209_20201016T120234_034821_040F05_1A61 | IW
19 October 2020 | S1A_IW_GRDH_1SDV_20201019T001500_20201019T001525_034858_041057_468E | IW
19 October 2020 | S1A_IW_GRDH_1SDV_20201019T001525_20201019T001550_034858_041057_88CE | IW
23 October 2020 | S1A_IW_GRDH_1SDV_20201023T115355_20201023T115420_034923_04128B_5C7B | IW
23 October 2020 | S1A_IW_GRDH_1SDV_20201023T115326_20201023T115355_034923_04128B_9366 | IW
31 October 2020 | S1A_IW_GRDH_1SDV_20201031T001500_20201031T001525_035033_041641_26BD | IW
31 October 2020 | S1A_IW_GRDH_1SDV_20201031T001525_20201031T001550_035033_041641_7536 | IW
4 November 2020 | S1A_IW_GRDH_1SDV_20201104T115352_20201104T115417_035098_041890_34DB | IW
4 November 2020 | S1A_IW_GRDH_1SDV_20201104T115327_20201104T115352_035098_041890_BF00 | IW
9 November 2020 | S1A_IW_GRDH_1SDV_20201109T120140_20201109T120209_035171_041B17_8D95 | IW
9 November 2020 | S1A_IW_GRDH_1SDV_20201109T120209_20201109T120234_035171_041B17_9212 | IW
12 November 2020 | S1A_IW_GRDH_1SDV_20201112T001524_20201112T001549_035208_041C65_0D7B | IW
12 November 2020 | S1A_IW_GRDH_1SDV_20201112T001459_20201112T001524_035208_041C65_72F8 | IW
16 November 2020 | S1A_IW_GRDH_1SDV_20201116T115354_20201116T115419_035273_041EA6_B142 | IW
16 November 2020 | S1A_IW_GRDH_1SDV_20201116T115325_20201116T115354_035273_041EA6_9581 | IW
21 November 2020 | S1A_IW_GRDH_1SDV_20201121T120140_20201121T120209_035346_042131_3140 | IW
21 November 2020 | S1A_IW_GRDH_1SDV_20201121T120209_20201121T120234_035346_042131_E41C | IW
24 November 2020 | S1A_IW_GRDH_1SDV_20201124T001459_20201124T001524_035383_04226B_D4FD | IW
24 November 2020 | S1A_IW_GRDH_1SDV_20201124T001524_20201124T001549_035383_04226B_E7EF | IW
28 November 2020 | S1A_IW_GRDH_1SDV_20201128T115354_20201128T115419_035448_0424BB_315E | IW
28 November 2020 | S1A_IW_GRDH_1SDV_20201128T115325_20201128T115354_035448_0424BB_CC44 | IW
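The identifiers in Table 3 follow ESA's fixed Sentinel-1 naming convention, in which the fifth underscore-separated field is the sensing start time. The small parser below (an illustrative sketch, not part of this work) recovers the acquisition date from an identifier, which is a convenient way to cross-check the Date column.

```python
# Sketch: parse the sensing start time out of a Sentinel-1 product identifier.
from datetime import datetime

def acquisition_start(identifier: str) -> datetime:
    # Field 5 (index 4) of the underscore-separated name holds the sensing
    # start time in compact ISO form, e.g. 20201104T115327.
    return datetime.strptime(identifier.split('_')[4], '%Y%m%dT%H%M%S')

name = 'S1A_IW_GRDH_1SDV_20201104T115327_20201104T115352_035098_041890_BF00'
print(acquisition_start(name))  # 2020-11-04 11:53:27
```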
Table 4. Meteorological phenomena that caused flooding in the study area (Tabasco).
Start Date | End Date | Meteorological Phenomenon
29 September 2020 | 5 October 2020 | Cold fronts No. 4 and No. 5 and Hurricane Gamma
29 October 2020 | 7 November 2020 | Cold fronts No. 9 and No. 11 and Hurricane Eta
15 November 2020 | 19 November 2020 | Cold front No. 13 and Hurricane Iota
Table 5. Number of training samples and epochs used in training the flood classification model.
Epochs | Samples
25 | 256
50 | 566
75 | 716
100 | 1036
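To make the Table 5 configurations concrete, a sweep over them might look like the sketch below; train_unet is a hypothetical stand-in for the actual ArcGIS/U-Net training step, named here only for illustration.

```python
# Sketch of sweeping the Table 5 (epochs, samples) configurations.
# train_unet is hypothetical; it stands in for the paper's training step.
CONFIGS = [(25, 256), (50, 566), (75, 716), (100, 1036)]

def train_unet(n_samples: int, epochs: int) -> None:
    # Placeholder: select n_samples chips and fit the model for the given epochs.
    print(f"training on {n_samples} chips for {epochs} epochs")

for epochs, n_samples in CONFIGS:
    train_unet(n_samples, epochs)
```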
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
