Article

AHSWFM: Automated and Hierarchical Surface Water Fraction Mapping for Small Water Bodies Using Sentinel-2 Images

1
Key Laboratory for Environment and Disaster Monitoring and Evaluation of Hubei, Innovation Academy for Precision Measurement Science and Technology, Chinese Academy of Sciences, Wuhan 430077, China
2
University of Chinese Academy of Sciences, Beijing 100049, China
3
Hubei Water Resources and Hydropower Science and Technology Promotion Center, Hubei Water Resources Research Institute, Wuhan 430070, China
*
Author to whom correspondence should be addressed.
Remote Sens. 2022, 14(7), 1615; https://doi.org/10.3390/rs14071615
Submission received: 18 February 2022 / Revised: 25 March 2022 / Accepted: 25 March 2022 / Published: 28 March 2022
(This article belongs to the Special Issue Remote Sensing in Land Use and Management)

Abstract

Accurately mapping surface water fractions is essential to understanding the distribution and area of small water bodies (SWBs), which are numerous and widespread. Traditional spectral unmixing methods based on the linear mixture model require high-quality prior endmember information and are not appropriate in situations involving multiple scattering effects. To overcome difficulties with unknown mixing mechanisms and parameters, a novel automated and hierarchical surface water fraction mapping (AHSWFM) method for mapping SWBs from Sentinel-2 images was proposed. AHSWFM is automated, requires no prior endmember knowledge and uses self-trained regression built with scalable aggregation algorithms and random forest to construct relationships between the multispectral data and water fractions. AHSWFM uses a hierarchical structure that divides pixels into pure water, pure land and mixed water-land pixels, and predicts their water fractions separately to avoid overestimating water fractions for pure land pixels and underestimating water fractions for pure water pixels. Results show that using the hierarchical strategy can increase the accuracy of SWB area estimation. AHSWFM predicted SWB areas with a root mean square error of approximately 0.045 ha in a region with more than 1200 SWB samples that were mostly smaller than 0.75 ha.

1. Introduction

Surface water is an important component of the Earth and a critical factor in global ecosystem changes [1]. Compared with the relatively small number of large inland lakes and reservoirs, there are millions of small lakes (~6% of farmlands worldwide) located throughout Asian, North American and European countries [2]. The size of small water bodies (SWBs) is usually defined as about <1 ha in [3], <10 ha in [4] and <50 ha in [5]. SWBs, including natural and artificial water impoundments such as fishponds, are vital for storing water for agriculture and irrigation [4], and are linked to local stream discharge [5]. Most SWBs, such as on-farm reservoirs, were built in the past ~40 years and exhibit large variations in area, especially during crop-growing seasons when they supply water to nearby farms [4,6]. Although SWBs are numerous and essential to farming and ecology, their study has lagged behind that of large lakes and reservoirs in recent years [7,8].
Accurately mapping SWBs is crucial for understanding their locations, areas and temporal variations. Compared with in-situ monitoring, satellite remote sensing enables the mapping of water bodies at a large scale with high spatial and temporal resolution [9,10]. For instance, very high-resolution remote sensing images such as IKONOS, QuickBird and SuperDoves facilitate the mapping of SWBs at meter or sub-meter resolution [11,12]. However, very high-resolution images usually have small scene sizes and relatively low temporal resolution, and are thus limited for monitoring large-scale SWBs and their variations [13,14,15]. Medium-resolution remote sensing images, whose spatial resolution ranges from 10 m to 39.9 m [16], including the Landsat series and Sentinel-2, have larger scene sizes and high revisit rates, and therefore have great potential for mapping SWBs. In general, most studies use pixel-based hard classification to map SWBs, but such methods are limited in handling mixed water-land pixels [17,18,19,20]. Research has shown that 30 m Landsat images are limited to mapping SWBs of 0.1–5 ha [4,5], while Sentinel-2 images with a resolution of 10–20 m have difficulty mapping water bodies smaller than 0.035 ha [21].
To address the inaccurate extraction caused by mixed pixels in pixel-based hard classification, spectral unmixing has been proposed, which decomposes mixed pixels into a set of constituent endmember spectra [22,23]. The result of unmixing represents the proportion of each endmember present within a pixel, which is called the fraction (i.e., areal abundance) of the endmember [24,25]. Various spectral unmixing algorithms have been proposed to map sub-pixel scale water bodies and reduce the impact of the mixed pixel problem. Linear spectral mixture analysis (LSMA), which assumes that a given pixel is a linear composition of the different endmembers within it, has been widely used for unmixing remote sensing images to extract water body fractions [26,27,28]. LSMA requires prior knowledge about the spectrum of each pure endmember before unmixing, but a spectral library with representative endmembers for different study areas is usually difficult to build [29]. In addition, LSMA is not appropriate in situations involving multiple scattering effects [30]. Lastly, SWBs may contain various chemical and biological components, and LSMA may fail to accurately map SWBs that exhibit high intra-class spectral variability in the image [29,31,32]. To deal with high intra-class variability and low inter-class variability, multiple endmember spectral mixture analysis (MESMA) has been applied to map sub-pixel surface water fractions [33], but MESMA also requires a prior endmember library containing the different endmembers. Compared with LSMA and MESMA, which impose strict constraints (such as non-negativity and sum-to-one constraints on endmember abundances), Jarchow et al. [34,35] used a partial unmixing model, matched filtering (MF), to map water fractions for SWBs. MF requires only the water endmember as input, which greatly simplifies the collection of endmembers; however, MF is still not fully automated because of this requirement.
Compared with the linear mixture models used for mapping sub-pixel surface water, regression-based unmixing models have been proposed to overcome difficulties with unknown parameters and mixing mechanisms. Li et al. [36,37] proposed surface water fraction mapping methods based on a regression model for moderate resolution imaging spectroradiometer (MODIS) images. Ancillary data, such as the 30 m global surface water (GSW) thematic water map provided by Pekel et al. [1], were used to construct the regression model. Liang et al. [38] used bi-temporal multispectral MODIS-Landsat images for surface water fraction mapping: the pre-dated Landsat image was classified into a binary water map, which was combined with the pre-dated MODIS image to train a regression model that was then used to predict surface water fractions from the post-dated MODIS image. The same land cover in the pre-dated training image and the post-dated prediction image may change in reflectance due to factors including phenology (such as different vegetation and crop growth stages) and different observation angles and satellite positions. Thus, the bi-temporal approach is appropriate for daily MODIS images, for which the training and prediction images are acquired at a similar time and under similar observation conditions. When predicting surface water fractions from Sentinel-2 images, which have a relatively longer revisit interval, using bi-temporal data to build the surface water fraction regression model may be unsuitable.
Compared with regression models that use an ancillary thematic map or pre-dated remote sensing images, Rover et al. [39] proposed a self-trained model that requires no prior information for the regression of surface water fractions. This self-trained model classifies a Landsat image into a binary water map and aggregates both the Landsat image and the water map to a coarser spatial resolution to obtain an "internal" reference dataset for building the regression model. It was demonstrated to be appropriate for mapping wetlands from mono-temporal Landsat images [39]. DeVries et al. [31] extended the self-trained model and assessed its ability to map a time series of inundation areas.
Although the self-trained regression model greatly simplifies the collection of ancillary data, challenges still exist. First, the regression model is used to predict the water fractions of all pixels, regardless of whether the pixel is pure or mixed. The method may therefore overestimate the water fraction for pure land pixels and underestimate the water fraction for pure water pixels [29,32]. Second, in producing the binary water map used to derive the aggregated water fraction maps, supervised methods such as the classification tree [39] and decision rule-based methods with pre-defined thresholds [31] have been used. Fully automated methods that require no prior information or pre-defined threshold to generate the water map for the self-trained model still need to be explored. Lastly, the accuracy of the water fraction regression model depends on the aggregated coarse pixels used as samples. Previous self-trained models used a fixed window size (3 in [39] and 5 in [31]) and a fixed window shift to aggregate the water map and remote sensing image into coarse pixels, which were then used as samples for the water fraction regression. Although aggregation can be performed with different window sizes (Figure 1a–c) and different window shifts (Figure 1d–f), the impact of different aggregation window sizes and shifts on the regression results has not been addressed, to the best of our knowledge.
In this study, an automated and hierarchical surface water fraction mapping (AHSWFM) method using the self-trained regression was proposed. Unlike other self-trained models that map large lakes or inundations from MODIS and Landsat images, the focus of this method is to automatically map surface water fractions for SWBs from Sentinel-2 images. AHSWFM has the following advantages:
  • AHSWFM uses a hierarchical structure that divides pixels into pure water, pure land and mixed water-land pixels, and predicts their water fractions separately to avoid overestimating the water fraction for pure land pixels and underestimating the water fraction for pure water pixels. Compared with hierarchical surface water mapping methods based on the linear mixture model [24,40], this study is the first to explore a hierarchical strategy based on a random forest (RF) regression model to address the complicated water-land decomposition.
  • AHSWFM is fully automated without any prior information or pre-defined thresholds. Compared with previous self-trained models that require training data or pre-defined thresholds to generate the binary water map used in the regression model, AHSWFM uses OTSU segmentation to automatically generate the water map in the regression model.
  • The impacts of different window sizes and window shifts on the aggregation to coarse pixel samples in the water fraction regression are assessed.
In this study, AHSWFM was assessed using two Sentinel-2 images in Wuhan, China, and was compared with the traditional partial and fully constrained linear mixture models, including MF, LSMA and MESMA.

2. Study Area and Data

2.1. Study Area

The study region is near the Yangtze River in Wuhan, China (114.8916° E, 30.9766° N), with an area of about 132.8 km2 and an elevation of 28–78 m. The study region is mainly composed of cropland and SWBs used to supply water for irrigation. The average annual rainfall is about 1257.9 mm in the study region.

2.2. Data

Two cloud-free Sentinel-2 multispectral images of the study region, acquired on 26 December 2017 and 29 January 2021, were downloaded from the European Space Agency Copernicus hub for water fraction mapping (Figure 2(a1,a2)). The Sentinel-2 images were Level-1C products of Top-of-Atmosphere (TOA) reflectance, which have been proven effective for computing water indices and mapping inland surface water [20,32,41,42,43,44,45]. In addition, the atmospheric effects were assumed to be uniform across the study region, which has a limited geographic extent [46]. Two Google Earth RGB images acquired on 22 December 2017 and 29 January 2021, with a spatial resolution of 1 m in the study region, were used to validate each water fraction map. The two pairs of Sentinel-2 and Google Earth images were selected because each pair was observed on the same or a similar date, reducing the effect of land cover change, and none of the images contained clouds.

3. Methodology

The proposed AHSWFM method comprises several steps and uses a hierarchical strategy to predict water fractions for pure and mixed pixels. First, the 20 m Sentinel-2 bands were pre-processed and downscaled to 10 m. Then, the normalized difference water index (NDWI) map was generated from the multispectral image, and an initial binary surface water map was derived from the NDWI using OTSU segmentation. A self-trained model was built using samples generated by aggregating the Sentinel-2 multispectral image and the binary water map. The initial water and land pixels in the binary water map were further divided into pure water, pure land and mixed pixels based on statistical features of the NDWI. A hierarchical strategy was then adopted to map the water fractions of pure and mixed pixels separately: pure land and pure water pixels were directly assigned surface water fractions of 0% and 100%, respectively, while the surface water fractions of mixed pixels were estimated by the self-trained RF regression. The flowchart of the proposed AHSWFM is shown in Figure 3, and the hierarchical strategy for mapping the surface water fractions of pure and mixed pixels is shown in Figure 4.

3.1. Data Preprocessing

Sentinel-2 multispectral images have 13 spectral bands (10–60 m resolution). Only the 10 bands with spatial resolutions of 10 m and 20 m were used in this study, namely the blue, green, red, three vegetation red-edge, near infrared (NIR), narrow NIR and two short-wave infrared (SWIR) bands. The six 20 m bands, comprising the three vegetation red-edge bands, the narrow NIR band and the two SWIR bands, were downscaled to a resolution of 10 m. The downscaling was performed with the principal component analysis (PCA) pan-sharpening algorithm because of its convenience and effectiveness [47,48].
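The paper does not give implementation details for this step, so the following is only a minimal sketch of component-substitution PCA sharpening, assuming the 20 m bands have already been resampled to the 10 m grid and that a 10 m band (here, hypothetically, the NIR band) is used as the high-resolution component.

```python
import numpy as np
from sklearn.decomposition import PCA

def pca_sharpen(ms_upsampled, hires_band):
    """PCA pan-sharpening sketch (illustrative only).

    ms_upsampled: (H, W, B) 20 m bands already resampled to the 10 m grid,
                  e.g., by bilinear interpolation (an assumption).
    hires_band:   (H, W) 10 m band used as the high-resolution component.
    """
    h, w, b = ms_upsampled.shape
    flat = ms_upsampled.reshape(-1, b).astype(np.float64)

    pca = PCA(n_components=b)
    pcs = pca.fit_transform(flat)                 # principal components, shape (H*W, B)

    # Match the mean/std of the high-resolution band to the first principal
    # component, then substitute it for PC1 before inverting the transform.
    hr = hires_band.reshape(-1).astype(np.float64)
    hr_matched = (hr - hr.mean()) / (hr.std() + 1e-12) * pcs[:, 0].std() + pcs[:, 0].mean()
    pcs[:, 0] = hr_matched

    return pca.inverse_transform(pcs).reshape(h, w, b)
```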

3.2. Producing the Initial 10 m Binary Surface Water Map Based on OTSU Segmentation

An NDWI image was computed from the Sentinel-2 image to enhance surface water information [49]:
$$\mathrm{NDWI} = \frac{B_{\mathrm{Green}} - B_{\mathrm{NIR}}}{B_{\mathrm{Green}} + B_{\mathrm{NIR}}} \tag{1}$$
where $B_{\mathrm{Green}}$ and $B_{\mathrm{NIR}}$ are the green and NIR bands of the Sentinel-2 image, respectively.
Then, OTSU segmentation [50], which automatically determines a threshold to separate surface water bodies from the background by maximizing the inter-class variance, was applied to the NDWI image to generate a binary initial 10 m surface water map.
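As an illustration only, these two steps can be sketched in a few lines of Python using scikit-image's Otsu threshold; the function name and the small epsilon added to the denominator are our own choices, not part of the original implementation.

```python
import numpy as np
from skimage.filters import threshold_otsu

def ndwi_otsu_water_map(green, nir):
    """Compute the NDWI (Equation (1)) and derive an initial binary water map
    with Otsu's automatic threshold. `green` and `nir` are 2-D reflectance
    arrays on the 10 m grid."""
    ndwi = (green - nir) / (green + nir + 1e-12)   # epsilon avoids division by zero
    t = threshold_otsu(ndwi)                       # maximizes inter-class variance
    water_map = ndwi > t                           # True = initial water pixel
    return ndwi, water_map
```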

3.3. Generating the Pure Water, Pure Land and Mixed Pixels at 10 m Based on the Statistical Values in the NDWI

The proposed AHSWFM uses a hierarchical strategy that divides pixels into pure water, pure land and mixed water-land pixels and predicts their water fractions separately. The initial water and land pixels in the OTSU segmentation map were further divided into pure water, pure land and mixed pixels (Figure 4) using NDWI statistical values. The means and standard deviations of the NDWI values of the initial water pixels (mean(NDWI_InitialWater) and std(NDWI_InitialWater), respectively) and of the initial land pixels (mean(NDWI_InitialLand) and std(NDWI_InitialLand), respectively) were first calculated. The thresholds for discriminating pure water, pure land and mixed pixels were then determined automatically from these statistical values. The threshold T_PureWater distinguishes pure water pixels from mixed pixels in Equation (2), and the threshold T_PureLand distinguishes pure land pixels from mixed pixels in Equation (3):
$$T_{\mathrm{PureWater}} = \mathrm{mean}(\mathrm{NDWI}_{\mathrm{InitialWater}}) - \mathrm{std}(\mathrm{NDWI}_{\mathrm{InitialWater}}) \tag{2}$$
$$T_{\mathrm{PureLand}} = \mathrm{mean}(\mathrm{NDWI}_{\mathrm{InitialLand}}) + \mathrm{std}(\mathrm{NDWI}_{\mathrm{InitialLand}}) \tag{3}$$
Based on T_PureWater and T_PureLand, the initial water and land pixels were then divided into pure water pixels (P_PureWater), pure land pixels (P_PureLand) and mixed pixels (P_Mixed) in Equation (4):
$$P = \begin{cases} P_{\mathrm{PureWater}} & \text{if } \mathrm{NDWI} > T_{\mathrm{PureWater}} \\ P_{\mathrm{PureLand}} & \text{if } \mathrm{NDWI} < T_{\mathrm{PureLand}} \\ P_{\mathrm{Mixed}} & \text{if } T_{\mathrm{PureLand}} \le \mathrm{NDWI} \le T_{\mathrm{PureWater}} \end{cases} \tag{4}$$
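A minimal sketch of this thresholding step, assuming NumPy arrays for the NDWI image and the initial binary water map (variable and function names are illustrative), could look as follows.

```python
import numpy as np

def split_pure_mixed(ndwi, water_map):
    """Divide pixels into pure water, pure land and mixed classes using the
    NDWI statistics of the initial water and land pixels (Equations (2)-(4))."""
    t_pure_water = ndwi[water_map].mean() - ndwi[water_map].std()
    t_pure_land = ndwi[~water_map].mean() + ndwi[~water_map].std()

    labels = np.full(ndwi.shape, "mixed", dtype=object)
    labels[ndwi > t_pure_water] = "pure_water"
    labels[ndwi < t_pure_land] = "pure_land"
    return labels, t_pure_water, t_pure_land
```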

3.4. Self-Trained Regression and Hierarchical Prediction for the Final 10 m Surface Water Fraction Mapping Using RF

In AHSWFM, the water fractions of mixed pixels were estimated by self-trained regression. First, the initial 10 m surface water map generated from the OTSU segmentation was encoded with the value "1" for initial water pixels and "0" for initial land pixels. Then, the binary map was spatially aggregated to a coarse resolution water fraction map according to the selected window size and the fixed window shift (the performance of different window sizes and window shifts is assessed in the discussion section). For instance, if the window size was set to 2 (so that the resolution of the aggregated map was 20 m) and two of the 10-m pixels had the value "1", the surface water fraction equaled 2/S² × 100% = 50%, where S is the aggregation window size. Simultaneously, the 10 m resolution Sentinel-2 multispectral image was aggregated with the same window size using the local mean.
After producing the aggregated multispectral image and surface water fraction map, random forest (RF) regression, an ensemble-learning algorithm that combines a large set of regression trees, was used to construct the regression model. In training the RF regression, the input was the multispectral band vector of each aggregated coarse pixel, and the output was the corresponding surface water fraction within that coarse pixel.
According to the AHSWFM hierarchical strategy in Figure 4, the final 10-m surface water fraction map was produced by assigning 100% and 0% to pure water and pure land pixels, respectively, and assigning the self-trained regression predictions to mixed pixels.
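To make the workflow concrete, the following minimal Python sketch illustrates the aggregation, self-trained RF regression and hierarchical assignment described above. The function names, the scikit-learn implementation and the reuse of the `labels` output from the earlier sketch are illustrative assumptions, not the original implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def aggregate(arr, s):
    """Block-average a (H, W) or (H, W, B) array with window size s
    (fixed shift, non-overlapping windows); edge rows/columns not divisible
    by s are cropped."""
    h, w = arr.shape[0] // s * s, arr.shape[1] // s * s
    a = arr[:h, :w]
    if a.ndim == 2:
        return a.reshape(h // s, s, w // s, s).mean(axis=(1, 3))
    return a.reshape(h // s, s, w // s, s, -1).mean(axis=(1, 3))

def ahswfm_predict(image, water_map, labels, s=10):
    """Self-trained RF regression plus hierarchical assignment (a sketch).
    image: (H, W, B) 10 m multispectral bands; water_map: binary Otsu map;
    labels: output of split_pure_mixed; s: aggregation window size."""
    coarse_bands = aggregate(image, s)                        # mean reflectance per coarse pixel
    coarse_frac = aggregate(water_map.astype(float), s)       # water fraction per coarse pixel

    rf = RandomForestRegressor(n_estimators=100, n_jobs=-1)
    rf.fit(coarse_bands.reshape(-1, image.shape[2]), coarse_frac.ravel())

    # For brevity the RF predicts all pixels and the hierarchical rule then
    # overwrites pure pixels; applying the RF only to mixed pixels, as in the
    # paper, yields the same final map.
    fractions = rf.predict(image.reshape(-1, image.shape[2])).reshape(image.shape[:2])
    fractions = np.clip(fractions, 0.0, 1.0)
    fractions[labels == "pure_water"] = 1.0
    fractions[labels == "pure_land"] = 0.0
    return fractions
```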

4. Experiment

4.1. Parameter Settings of AHSWFM

In AHSWFM, the impacts of different window sizes (ranging from 2 to 30, with an interval of 2) and window shifts (multiple shifts in the x- and y-directions) on the aggregation to coarse pixel samples in the water fraction regression were tested. Specifically, the aggregation window was shifted in the x-direction with a step of 0 × 10 m, 1 × 10 m, 2 × 10 m, …, (window size − 1) × 10 m, and shifted in the y-direction with the same steps, in order to produce different coarse pixels in the aggregation. A simple example of the different window shifts with window size = 2 is shown in Figure 5. The self-trained regression model with a fixed window shift (0 × 10 m in the x-direction and 0 × 10 m in the y-direction) is illustrated in Figure 5b, where 16 coarse pixels, such as those highlighted with red rectangles, were used for training the RF. AHSWFM with multiple (i.e., all possible) window shifts uses all the configurations in Figure 5b–e, which enlarges the number of samples used for training the RF regression to 49 (16 samples in Figure 5b, 12 in Figure 5c, 12 in Figure 5d and 9 in Figure 5e).
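A minimal sketch of generating training samples with all possible window shifts, reusing the hypothetical aggregate helper from the earlier sketch, is shown below; for an 8 × 8 map and a window size of 2 it yields the 49 samples described above.

```python
import numpy as np

def shifted_samples(image, water_map, s):
    """Generate coarse-pixel training samples using all possible window
    shifts (0..s-1 in the x- and y-directions), as in Figure 5b-e."""
    xs, ys = [], []
    for dy in range(s):
        for dx in range(s):
            bands = aggregate(image[dy:, dx:], s)                    # shifted, then block-averaged
            frac = aggregate(water_map[dy:, dx:].astype(float), s)
            xs.append(bands.reshape(-1, image.shape[2]))
            ys.append(frac.ravel())
    return np.vstack(xs), np.concatenate(ys)
```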
In AHSWFM, the number of trees in RF regression was set to 100 based on the results of trials and previous studies [31].

4.2. Methods for Comparison

Three popular unmixing algorithms for surface water, namely the partial unmixing method MF [34,35], fully constrained LSMA [26] and MESMA [33], were compared with the proposed AHSWFM. Unlike the automated AHSWFM, the three comparison methods are supervised and require prior endmember information about the representative land covers in the image. For each Sentinel-2 image to be unmixed, the image endmembers were collected directly from the Sentinel-2 image in Figure 2(a1,a2) with the help of the corresponding Google Earth image in Figure 2(b1,b2). Four types of endmembers were selected: vegetation, impervious surface, soil and water. For each endmember, about 20–40 sample pixels were collected from the Sentinel-2 image to construct an endmember library. For MF and LSMA, the mean spectral values of the samples of each endmember were used as input. For MESMA, the spectral values of each sample were considered.
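For reference, a minimal sketch of a fully constrained linear unmixing step in the spirit of LSMA is shown below. It follows the common practice of enforcing the sum-to-one constraint approximately by appending a heavily weighted row of ones and obtaining non-negativity via non-negative least squares; this is an illustrative solver, not necessarily the one used in the compared methods.

```python
import numpy as np
from scipy.optimize import nnls

def fcls_unmix(pixel, endmembers, delta=1e3):
    """Fully constrained linear unmixing for one pixel (a sketch).
    pixel:      (B,) reflectance vector.
    endmembers: (B, K) matrix of mean endmember spectra (e.g., water,
                vegetation, impervious surface, soil).
    delta:      weight of the appended sum-to-one row."""
    b, k = endmembers.shape
    a = np.vstack([endmembers, delta * np.ones((1, k))])   # append sum-to-one constraint
    y = np.concatenate([pixel, [delta]])
    fractions, _ = nnls(a, y)                              # non-negative least squares
    return fractions                                       # one entry is the water fraction
```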
Because AHSWFM uses a hierarchical strategy that divides pixels and predicts the surface water fractions of mixed and pure pixels separately, it was also compared with the self-trained RF regression (S_RF) method, which predicts the water fractions of all image pixels without considering whether they are mixed or pure, in order to assess whether the hierarchical strategy improves the surface water fraction accuracy.

4.3. Accuracy Assessment

The accuracy of the water fraction maps from each Sentinel-2 image in Figure 2(a1,a2) was assessed based on the SWBs identified from the corresponding 1-m resolution Google Earth images in Figure 2(b1,b2). The Sentinel-2 and Google Earth images were first co-registered to reduce the impact of misalignment. The phase correlation based on the Fourier shift was applied to each of the Sentinel-2 and Google Earth image pairs in this paper [51]. Then, all the water bodies in each 1-m Google Earth image were identified through visual interpretation. Finally, the 1-m surface water maps were aggregated to a resolution of 10 m, and the water fraction in each 10 m pixel was determined by computing the area proportions of interpreted surface water within the 10-m pixel.
A total of more than 1200 SWBs were selected for the accuracy assessment in the study region. The selected SWBs were spatially scattered (separated by more than about 50 m), and water bodies that were too close to each other were excluded. Similar to other studies [34,35,52,53], a buffer was created for each selected SWB by expanding the water outline outward by 20 m, and all the 10 m pixels within the 20-m buffer were used in the assessment of that SWB.
Because the SWBs may have changed between 2017 and 2021, the numbers of selected SWBs were not identical in the two years: 1267 SWBs were selected in 2017 and 1347 in 2021. The locations of these SWBs are shown in Figure 2(b1,b2). SWBs smaller than 0.75 ha accounted for about 97.5% and 98.5% of the selected SWBs in 2017 and 2021, respectively (Figure 6). The selected SWBs had an average area of 0.1810 ha in 2017 and 0.1487 ha in 2021; the largest had an area of 3.1276 ha and the smallest 0.0027 ha.
For each selected SWB, the area predicted by each method within the 20-m buffer was computed and compared with the reference [35,53]. The area of an SWB within the 20-m buffer was computed by summing the water fractions of the 10 m pixels within the buffer (each 10 m pixel covering 0.01 ha). The root mean square error (RMSE) of the SWB area (RMSE_area) was used to assess the difference between the predictions and the reference:
$$\mathrm{RMSE}_{\mathrm{area}} = \sqrt{\frac{\sum_{i=1}^{N}\left(P_{\mathrm{area},i} - R_{\mathrm{area},i}\right)^{2}}{N}} \tag{5}$$
where $P_{\mathrm{area},i}$ is the predicted SWB area of the ith SWB, $R_{\mathrm{area},i}$ is the reference SWB area of the ith SWB and N is the total number of selected SWBs in the year.
In addition to the SWB area, the per-pixel accuracy of the water fractions within the buffer and in the entire image (all image pixels were used and all SWBs were accounted for) was also assessed. RMSE_fraction was used to quantify the per-pixel accuracy of the water fractions:
$$\mathrm{RMSE}_{\mathrm{fraction}} = \sqrt{\frac{\sum_{i=1}^{M}\left(P_{\mathrm{fraction},i} - R_{\mathrm{fraction},i}\right)^{2}}{M}} \tag{6}$$
where $P_{\mathrm{fraction},i}$ is the predicted surface water fraction of the ith Sentinel-2 pixel, $R_{\mathrm{fraction},i}$ is the reference water fraction of the ith Sentinel-2 pixel and M is the number of Sentinel-2 pixels in the buffer or in the entire image.
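Both metrics reduce to the same root-mean-square formula; a short illustrative sketch is shown below (the 0.01 ha per 10 m pixel conversion follows directly from the pixel size; function names are our own).

```python
import numpy as np

def rmse(pred, ref):
    """RMSE used for both RMSE_area (Equation (5)) and RMSE_fraction
    (Equation (6)); `pred` and `ref` are 1-D arrays of predicted and
    reference values (SWB areas in ha, or per-pixel water fractions)."""
    pred, ref = np.asarray(pred, float), np.asarray(ref, float)
    return float(np.sqrt(np.mean((pred - ref) ** 2)))

def swb_area_ha(fraction_map, buffer_mask):
    """Area of one SWB inside its 20-m buffer, summed from 10 m pixel
    fractions; each 10 m pixel covers 100 m^2 = 0.01 ha."""
    return float(fraction_map[buffer_mask].sum() * 0.01)
```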

5. Results

5.1. Visual Comparison of Different Water Fraction Maps

The water fraction maps generated using five methods (MF, LSMA, MESMA, S_RF and AHSWFM) in 2017 and 2021 are shown in Figure 7. The results of S_RF and AHSWFM with the window size of 10 and the fixed window shift in the aggregation to coarse pixel samples in the water fraction regression were used in the visual comparison in this section. In general, LSMA overestimated the water fractions, as shown in Figure 7(c1,c2), while MF, MESMA, S_RF and AHSWFM generated maps that were similar to the reference.
Five zoomed-in regions, highlighted with red rectangles in Figure 7(a1,a2), are shown in Figure 8. The water body outlines and the 20-m buffers (used for validation), highlighted in red and green, respectively, are overlaid on the predicted water fraction maps. SWBs 1–3 changed in spatial extent between the selected years according to the Google Earth images in Figure 8a: the first two SWBs expanded from 2017 to 2021 (Figure 8(a1–a4)) and the third SWB varied in shape during 2017–2021 (Figure 8(a5,a6)). SWBs 4–5, which are relatively small (<0.1 ha), are shown in Figure 8(a7–a10).
The five methods predicted different water fractions for the five SWBs shown in Figure 8. For SWBs 1–3, whose area and shape changed, all methods captured the water fraction changes from 2017 to 2021. For the relatively small SWBs 4–5, MF predicted very low water fraction values in the areas occupied by the two SWBs in Figure 8(c7–c10), while the other four methods predicted relatively high water fraction values. In general, MF tended to underestimate the surface water extent, especially for the mixed water-land pixels intersecting the red water body outline (such as those highlighted with black ellipses in Figure 8(c3)). In contrast, LSMA tended to overestimate the surface water extent. For example, LSMA incorrectly predicted high water fractions for pure land pixels, such as those highlighted with black ellipses in Figure 8(d6,d8), indicating that LSMA failed to distinguish water and land pixels with similar spectral values [54]. Compared with LSMA, the MESMA method, which considers intra-class spectral variation, and the regression models of S_RF and AHSWFM better distinguished water and land in most regions of the Sentinel-2 image in Figure 8e–g. MESMA overestimated the water fractions of some pure land pixels, such as those highlighted with black ellipses in Figure 8(e1,e2), and underestimated the water fractions of some pure water pixels and mixed pixels intersecting the red water body outline, such as those highlighted with black ellipses in Figure 8(e4,e7,e9). Compared with MESMA, the regression methods used in S_RF and AHSWFM better predicted the water fractions of the pure land pixels in Figure 8f–g. Compared with S_RF, AHSWFM predicted higher surface water fractions for pure water pixels, such as those highlighted with black ellipses in Figure 8(f1,g1,f5,g5,f9,g9), and lower surface water fractions for pure land pixels, such as those highlighted with black ellipses in Figure 8(f4,g4,f6,g6), demonstrating the advantage of the hierarchical strategy in sub-pixel water mapping.

5.2. Accuracy Comparison of Different Water Fraction Maps

To fully demonstrate the differences among the five methods in predicting the SWB area within the buffer, Figure 9 shows the scatter plots between the predicted and reference areas for all SWBs. A simple linear regression is commonly used to evaluate the agreement between predicted and reference values, with the coefficient of determination (R²) and the slope of the fitted line describing the goodness of fit. The closer R² is to one, the better the fit. If the slope of the fitted line is larger than 1, the predictions tend to be larger than the reference; otherwise, they tend to be smaller. In this paper, a linear regression was fitted for each scatter plot.
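For clarity, a short sketch of how such a fit can be obtained is given below (illustrative code, not tied to the software used in the paper).

```python
import numpy as np
from scipy.stats import linregress

def fit_scatter(ref_areas, pred_areas):
    """Fit the simple linear regression used for the scatter plots
    (reference on the x-axis, prediction on the y-axis) and report the
    slope, intercept and R^2."""
    res = linregress(np.asarray(ref_areas, float), np.asarray(pred_areas, float))
    return res.slope, res.intercept, res.rvalue ** 2
```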
As shown in Figure 9, the proposed AHSWFM generated the highest R² among all methods in both years, showing that the SWB areas predicted by AHSWFM were the closest match to the reference. The AHSWFM predictions were also the closest to the 1:1 line, except for LSMA in 2017 (Figure 9(b1)). In 2017, LSMA had a slope closer to the 1:1 line and a smaller intercept (its slope of 0.99 was the closest to 1) than the proposed AHSWFM, but a lower R² and a higher RMSE (Table 1), showing that the LSMA predictions of SWB area were more dispersed than those of AHSWFM. In addition, the slopes of the fitted lines of MF, MESMA, S_RF and AHSWFM were lower than 1, showing that these methods underestimated the surface water area in both years. In contrast, the slope of the fitted line of LSMA in 2021 (Figure 9(b2)) was higher than 1, showing that LSMA overestimated the surface water area in 2021. This further indicates that LSMA predicted high water fractions for pure land pixels in the buffer, such as in Figure 8(d6).
The RMSE_area values of the SWB areas within the 20-m buffer for all methods are shown in Table 1. The proposed AHSWFM generated the lowest RMSE values in all metrics and in both years. S_RF generated lower RMSE_area values than MF, LSMA and MESMA in 2021, but a higher RMSE_area value than LSMA in 2017. Table 1 shows that AHSWFM can predict the area of SWBs (mainly smaller than 0.75 ha in the study region) with RMSE_area values of about 0.045 ha, demonstrating the advantage of the proposed method in SWB area estimation.
Table 1 also shows the accuracies of the different methods in terms of surface water fractions in the 20-m buffer and in the entire image. In general, all methods generated higher RMSE values in the 20-m buffer than in the entire image. Within the buffer, the proposed AHSWFM generated an RMSE_fraction of less than 0.18. In the entire image, MF generated a lower RMSE_fraction than LSMA in 2021; MESMA generated an RMSE_fraction higher than 0.10, while S_RF and AHSWFM generated RMSE_fraction values of less than 0.10 in both years. AHSWFM generated the lowest RMSE_fraction both within the buffer and in the entire image in both years.

6. Discussion

6.1. Impact of Different Window Sizes and Window Shifts in the Aggregation to Coarse Pixel Samples in the Water Fraction Regression on S_RF and AHSWFM

The proposed AHSWFM uses a scalable aggregation algorithm to aggregate the original multispectral image and the corresponding binary surface water map into coarse pixel samples for training the RF regression model. The window size and shift determine the sample number, the regression model accuracy and the running time of the algorithm. Window sizes ranging from 2 to 30, with an interval of 2, were assessed in this section. Windows with the fixed shift (Figure 5b) and multiple shifts (i.e., all possible shifts, Figure 5b–e) were also assessed.

6.1.1. Impact of Different Window Sizes (and the Fixed Window Shift) in the Aggregation to Coarse Pixel Samples in the Water Fraction Regression on S_RF and AHSWFM

The RMSE_area and RMSE_fraction values of S_RF and AHSWFM using the aggregation window with different window sizes and the fixed window shift are shown in Figure 10. AHSWFM (orange dotted line in Figure 10) produced lower RMSE values than S_RF (blue dotted line in Figure 10) in all metrics and at almost all window sizes. This result shows that the hierarchical strategy in AHSWFM improves the water mapping accuracy over S_RF across different window sizes, and the improvement is more obvious when the window size is larger than about 10. Compared with S_RF, AHSWFM decreased the lowest RMSE_area in the estimation of SWB areas by about 0.018 ha, and decreased the lowest RMSE_fraction in the estimation of SWB fractions in the buffer and in the entire image by about 0.008 and 0.002, respectively.
According to Figure 10, the accuracy of S_RF and AHSWFM varied greatly with the window size. Previous self-trained models used a window size of 3 to 5 to map water fractions from 30 m Landsat images [31,39]. In this study, when applied to the 10 m resolution Sentinel-2 images for mapping SWBs (mostly smaller than 0.75 ha), the optimal window size ranged from 6 to 10 for S_RF (blue dotted line in Figure 10). Window sizes that were too small or too large resulted in relatively higher RMSE values for the different accuracy metrics. For AHSWFM, the optimal window size ranged from 6 to 18 (orange dotted line in Figure 10).

6.1.2. Impact of Different Window Sizes and Window Shifts in the Aggregation to Coarse Pixel Samples in the Water Fraction Regression on S_RF

The RMSE_area and RMSE_fraction values of S_RF with fixed and multiple shifts in producing the regression samples are shown in Figure 11. S_RF using the fixed shift (blue dotted line) was more sensitive to the window size, and its RMSE values increased markedly when the window size was larger than about 12. S_RF using multiple shifts (blue solid line) did not necessarily produce lower RMSE values than S_RF using the fixed shift. The main reason may be that aggregation with multiple shifts generates similar and redundant samples for the RF regression model.

6.1.3. Impact of Different Window Sizes and Window Shifts in the Aggregation to Coarse Pixel Samples in the Water Fraction Regression on AHSWFM

The RMSE_area and RMSE_fraction values of AHSWFM with fixed and multiple shifts in producing the regression samples are shown in Figure 12. Similar to S_RF in Figure 11, AHSWFM using multiple shifts (orange solid line) did not necessarily produce lower RMSE values than AHSWFM using the fixed shift (orange dotted line).

6.1.4. Comparison of Training Sample Number and Running Times of S_RF and AHSWFM

The training sample numbers and running times of S_RF and AHSWFM are compared in Table 2. All methods were run on an Intel(R) Xeon(R) W-2133 CPU @ 3.60 GHz with 16 GB of RAM for water fraction mapping of a 1044 × 1272 pixel Sentinel-2 image. The number of training samples is the same for S_RF and AHSWFM when using the same aggregation window size and shift; the only difference is that AHSWFM introduces additional steps, such as computing the NDWI statistical values. AHSWFM took no more than about 1 s longer than S_RF in computing time, showing that the hierarchical strategy in the regression model adds little complexity compared with S_RF.
Compared with the hierarchical strategy, both the window size and the window shift have a large impact on the computing times of S_RF and AHSWFM. From Table 2, the following can be observed:
  • When using the fixed window shift in producing the regression samples, increasing the window size decreased the number of samples and the running time. When the window size is too small, the samples may not be representative of the complex water-land composition. For example, if the window size is 2, when aggregating 2 × 2 pixels from the binary water map to a coarse pixel water fraction, only five coarse water fraction values can be obtained: 0% (no pixel labeled as water), 25% (1 pixel labeled as water), 50% (2 pixels labeled as water), 75% (3 pixels labeled as water) and 100% (all pixels labeled as water). Thus, both S_RF and AHSWFM with a window size of 2 generated high RMSE values for the different metrics in Figure 10, Figure 11 and Figure 12. Considering the accuracies in Figure 10, Figure 11 and Figure 12 and the running times in Table 2, S_RF and AHSWFM with window sizes ranging from 6 to 10 can generate results with relatively high accuracy and low time cost.
  • When using multiple shifts in producing the regression samples, increasing the window size decreased the number of samples but did not obviously decrease the running time when the window size was larger than 2. For S_RF and AHSWFM, the running times were longer than 3100 s for a window size of 2, and shorter than 1200 s for the other window sizes.
  • The running times of S_RF and AHSWFM using multiple shifts were more than 10 times longer than those using the fixed shift.

6.2. Limitations and Future Research

Although the method proposed in this study has several advantages over the supervised methods, limitations still exist. The NDWI was used to automatically segment the Sentinel-2 image and generate the binary surface water map for aggregation and for training the RF regression model. Although the NDWI can suppress noise from soil and vegetation well, the water information extracted by the NDWI is often mixed with built-up noise, which may result in an overestimation of the water fraction, especially in urban areas [55,56]. In regions with a background dominated by low-albedo surfaces, such as asphalt roads and shadows from buildings and mountains, other water indices, such as the modified NDWI and the automated water extraction index, which help to reduce built-up land and shadow noise, could also be explored [55,57]. In addition, the NDWI used here is calculated from the green and NIR bands, which are sensitive to several factors that influence water information extraction. First, soil moisture and topographic wetness can affect the NDWI; for instance, the reflectance of the NIR band increases as soil moisture decreases [58]. Second, seasonal variation in the Sentinel-2 images also affects the NDWI. For instance, seasonal variation can change water quality, such as the dissolved chemical and physical components, and thus the reflectance of the green band [59]; it can also change vegetation growth conditions due to phenology, and thus the reflectance of the NIR band [60]. Future work should focus on using multiple water index images and information from synthetic aperture radar (SAR) and digital elevation models (DEMs) to improve the accuracy of the binary surface water maps generated for aggregation and regression [61,62].
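For reference, the alternative indices mentioned above can be computed as follows; this is a sketch using the published formulas for the MNDWI and the "no shadow" AWEI variant, and the band variable names are illustrative.

```python
import numpy as np

def mndwi(green, swir1):
    """Modified NDWI (Xu, 2006): replaces NIR with SWIR to suppress
    built-up noise."""
    return (green - swir1) / (green + swir1 + 1e-12)

def awei_nsh(green, nir, swir1, swir2):
    """Automated Water Extraction Index, 'no shadow' variant
    (Feyisa et al., 2014)."""
    return 4.0 * (green - swir1) - (0.25 * nir + 2.75 * swir2)
```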
In addition, AHSWFM used the hierarchical strategy to predict pure and mixed pixels, and the rules used to discriminate pure and mixed pixels were based on statistical values, namely the means and standard deviations of the NDWI values according to the OTSU segmentation map. This hierarchical strategy is suitable when the histogram of the NDWI image exhibits a bimodal distribution [63]. When the proportion of the target area (i.e., the SWB area in the present analysis) is relatively small, the NDWI histogram is close to a unimodal distribution, and the OTSU segmentation may not generate a binary water map with high accuracy. Other rules for distinguishing pure and mixed pixels, such as those using the slopes of the NDWI histogram curve [29], should be explored in complex regions such as urban areas.

7. Conclusions

Surface water is a crucial resource, and developing accurate satellite-based methods for studying SWBs is urgent. In this study, a novel automatic surface water fraction mapping method using Sentinel-2 multispectral images to predict the areas of SWBs smaller than 0.75 ha was proposed. The proposed AHSWFM is fully automatic and requires no prior knowledge, thus reducing the labor of collecting additional training samples. AHSWFM uses a self-trained regression model that builds the regression relation between the Sentinel-2 image and the corresponding water fractions directly from the input image, thus reducing the impacts of changes in vegetation phenology and satellite observation conditions between training and prediction dates. AHSWFM uses a hierarchical structure that divides pixels into pure water, pure land and mixed water-land pixels and predicts their water fractions separately. The findings showed that the hierarchical strategy increased the accuracy compared with S_RF, which predicts all pixels using RF, with a negligible increase in computation time (the hierarchical strategy took much less than 1% of the total computation time of AHSWFM). The impacts of different window sizes and shifts in the aggregation to coarse pixel samples in the water fraction regression were also assessed. AHSWFM using the aggregation window with the fixed shift took about one-tenth of the computation time of that using multiple shifts, and its accuracy varied with the window size: a fixed window shift and a window size ranging from 6 to 10 generated results with relatively high accuracy and low time cost. In the experiments, AHSWFM generated more accurate water fraction maps than the methods used for comparison. AHSWFM predicted SWB areas with RMSE values of about 0.045 ha for SWBs that were mostly smaller than 0.75 ha, and the R² values of the fitted linear regression models for the SWB areas were higher than 0.94. Considering its advantages, such as its automation and relatively low computational complexity, the proposed AHSWFM has great potential for monitoring SWBs from Sentinel-2 images.

Author Contributions

Conceptualization, X.L.; methodology, Y.W. and X.L.; software, Y.W. and P.Z.; validation, Y.W.; writing, Y.W. and X.L.; supervision, L.J. and Y.D.; and funding acquisition, X.L. and L.J. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Natural Science Foundation of China (62071457), Application Foundation Frontier project of Wuhan (2020020601012283), key scientific research projects of water conservancy in Hubei Province, China (HBSLKY202103), Key Research Program of Frontier Sciences, Chinese Academy of Sciences (ZDBS-LY-DQC034), Key Research and Development Project of Hubei Province, China (2020BCA074), Innovation Group Project of Hubei Natural Science Foundation (2019CFA019) and Young Top-notch Talent Cultivation Program of Hubei Province.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

We would like to express our sincere thanks to the anonymous reviewers, whose comments and suggestions greatly helped to improve the quality of this paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Pekel, J.-F.; Cottam, A.; Gorelick, N.; Belward, A.S. High-resolution mapping of global surface water and its long-term changes. Nature 2016, 540, 418–422. [Google Scholar] [CrossRef] [PubMed]
  2. Downing, J.A.; Prairie, Y.; Cole, J.; Duarte, C.; Tranvik, L.; Striegl, R.G.; McDowell, W.; Kortelainen, P.; Caraco, N.; Melack, J. The global abundance and size distribution of lakes, ponds, and impoundments. Limnol. Oceanogr. 2006, 51, 2388–2397. [Google Scholar] [CrossRef] [Green Version]
  3. El Bilali, A.; Taghi, Y.; Briouel, O.; Taleb, A.; Brouziyne, Y. A framework based on high-resolution imagery datasets and MCS for forecasting evaporation loss from small reservoirs in groundwater-based agriculture. Agric. Water Manag. 2022, 262, 107434. [Google Scholar] [CrossRef]
  4. Perin, V.; Tulbure, M.G.; Gaines, M.D.; Reba, M.L.; Yaeger, M.A. A multi-sensor satellite imagery approach to monitor on-farm reservoirs. Remote Sens. Environ. 2022, 270, 112796. [Google Scholar] [CrossRef]
  5. Perin, V.; Tulbure, M.G.; Gaines, M.D.; Reba, M.L.; Yaeger, M.A. On-farm reservoir monitoring using Landsat inundation datasets. Agric. Water Manag. 2021, 246, 106694. [Google Scholar] [CrossRef]
  6. Habets, F.; Molénat, J.; Carluer, N.; Douez, O.; Leenhardt, D. The cumulative impacts of small reservoirs on hydrology: A review. Sci. Total Environ. 2018, 643, 850–867. [Google Scholar] [CrossRef] [Green Version]
  7. Downing, J.A. Emerging global role of small lakes and ponds: Little things mean a lot. Limnetica 2010, 29, 9–24. [Google Scholar] [CrossRef]
  8. Berg, M.D.; Popescu, S.C.; Wilcox, B.P.; Angerer, J.P.; Rhodes, E.C.; McAlister, J.; Fox, W.E. Small farm ponds: Overlooked features with important impacts on watershed sediment transport. J. Am. Water Resour. Assoc. 2016, 52, 67–76. [Google Scholar] [CrossRef]
  9. Pickens, A.H.; Hansen, M.C.; Hancher, M.; Stehman, S.V.; Tyukavina, A.; Potapov, P.; Marroquin, B.; Sherani, Z. Mapping and sampling to characterize global inland water dynamics from 1999 to 2018 with full Landsat time-series. Remote Sens. Environ. 2020, 243, 111792. [Google Scholar] [CrossRef]
  10. Li, X.; Ling, F.; Foody, G.M.; Boyd, D.S.; Jiang, L.; Zhang, Y.; Zhou, P.; Wang, Y.; Chen, R.; Du, Y. Monitoring high spatiotemporal water dynamics by fusing MODIS, Landsat, water occurrence data and DEM. Remote Sens. Environ. 2021, 265, 112680. [Google Scholar] [CrossRef]
  11. Guo, H.; He, G.; Jiang, W.; Yin, R.; Yan, L.; Leng, W. A multi-scale water extraction convolutional neural network (MWEN) method for GaoFen-1 remote sensing images. ISPRS Int. J. Geo-Inf. 2020, 9, 189. [Google Scholar] [CrossRef] [Green Version]
  12. Duan, Y.; Zhang, W.; Huang, P.; He, G.; Guo, H. A New Lightweight Convolutional Neural Network for Multi-Scale Land Surface Water Extraction from GaoFen-1D Satellite Images. Remote Sens. 2021, 13, 4576. [Google Scholar] [CrossRef]
  13. Zhan, W.; Chen, Y.; Zhou, J.; Wang, J.; Liu, W.; Voogt, J.; Zhu, X.; Quan, J.; Li, J. Disaggregation of remotely sensed land surface temperature: Literature survey, taxonomy, issues, and caveats. Remote Sens. Environ. 2013, 131, 119–139. [Google Scholar] [CrossRef]
  14. Li, X.; Ling, F.; Foody, G.M.; Ge, Y.; Zhang, Y.; Du, Y. Generating a series of fine spatial and temporal resolution land cover maps by fusing coarse spatial resolution remotely sensed images and fine spatial resolution land cover maps. Remote Sens. Environ. 2017, 196, 293–311. [Google Scholar] [CrossRef]
  15. Li, X.; Foody, G.M.; Boyd, D.S.; Ge, Y.; Zhang, Y.; Du, Y.; Ling, F. SFSDAF: An enhanced FSDAF that incorporates sub-pixel class fraction change information for spatio-temporal image fusion. Remote Sens. Environ. 2020, 237, 111537. [Google Scholar] [CrossRef]
  16. Thenkabail, P. Remote Sensing Handbook—Three Volume Set; CRC Press: Boca Raton, FL, USA, 2018. [Google Scholar]
  17. Ottinger, M.; Bachofer, F.; Huth, J.; Kuenzer, C. Mapping Aquaculture Ponds for the Coastal Zone of Asia with Sentinel-1 and Sentinel-2 Time Series. Remote Sens. 2022, 14, 153. [Google Scholar] [CrossRef]
  18. Kaplan, G.; Avdan, U. Object-based water body extraction model using Sentinel-2 satellite imagery. Eur. J. Remote Sens. 2017, 50, 137–143. [Google Scholar] [CrossRef] [Green Version]
  19. Bie, W.; Fei, T.; Liu, X.; Liu, H.; Wu, G. Small water bodies mapped from Sentinel-2 MSI (MultiSpectral Imager) imagery with higher accuracy. Int. J. Remote Sens. 2020, 41, 7912–7930. [Google Scholar] [CrossRef]
  20. Du, Y.; Zhang, Y.; Ling, F.; Wang, Q.; Li, W.; Li, X. Water bodies’ mapping from Sentinel-2 imagery with modified normalized difference water index at 10-m spatial resolution produced by sharpening the SWIR band. Remote Sens. 2016, 8, 354. [Google Scholar] [CrossRef] [Green Version]
  21. Freitas, P.; Vieira, G.; Canário, J.; Folhas, D.; Vincent, W.F. Identification of a threshold minimum area for reflectance retrieval from thermokarst lakes and ponds using full-pixel data from Sentinel-2. Remote Sens. 2019, 11, 657. [Google Scholar] [CrossRef] [Green Version]
  22. Hu, Y.H.; Lee, H.; Scarpace, F. Optimal linear spectral unmixing. IEEE Trans. Geosci. Remote Sens. 1999, 37, 639–644. [Google Scholar] [CrossRef]
  23. Quintano, C.; Fernández-Manso, A.; Shimabukuro, Y.E.; Pereira, G. Spectral unmixing. Int. J. Remote Sens. 2012, 33, 5307–5340. [Google Scholar] [CrossRef]
  24. Ma, B.; Wu, L.; Zhang, X.; Li, X.; Liu, Y.; Wang, S. Locally adaptive unmixing method for lake-water area extraction based on MODIS 250 m bands. Int. J. Appl. Earth Obs. Geoinf. 2014, 33, 109–118. [Google Scholar] [CrossRef]
  25. Keshava, N.; Mustard, J.F. Spectral unmixing. IEEE Signal Process. Mag. 2002, 19, 44–57. [Google Scholar] [CrossRef]
  26. Heinz, D.C. Fully constrained least squares linear spectral mixture analysis method for material quantification in hyperspectral imagery. IEEE Trans. Geosci. Remote Sens. 2001, 39, 529–545. [Google Scholar] [CrossRef] [Green Version]
  27. Ling, F.; Li, X.; Foody, G.M.; Boyd, D.; Ge, Y.; Li, X.; Du, Y. Monitoring surface water area variations of reservoirs using daily MODIS images by exploring sub-pixel information. ISPRS J. Photogramm. Remote Sens. 2020, 168, 141–152. [Google Scholar] [CrossRef]
  28. Xiong, L.; Deng, R.; Li, J.; Liu, X.; Qin, Y.; Liang, Y.; Liu, Y. Subpixel surface water extraction (SSWE) using Landsat 8 OLI data. Water 2018, 10, 653. [Google Scholar] [CrossRef] [Green Version]
  29. Xie, H.; Luo, X.; Xu, X.; Pan, H.; Tong, X. Automated subpixel surface water mapping from heterogeneous urban environments using Landsat 8 OLI imagery. Remote Sens. 2016, 8, 584. [Google Scholar] [CrossRef] [Green Version]
  30. Ray, T.W.; Murray, B.C. Nonlinear spectral mixing in desert vegetation. Remote Sens. Environ. 1996, 55, 59–64. [Google Scholar] [CrossRef]
  31. De Vries, B.; Huang, C.; Lang, M.W.; Jones, J.W.; Huang, W.; Creed, I.F.; Carroll, M.L. Automated quantification of surface water inundation in wetlands using optical satellite imagery. Remote Sens. 2017, 9, 807. [Google Scholar] [CrossRef] [Green Version]
  32. Sun, W.; Du, B.; Xiong, S. Quantifying sub-pixel surface water coverage in urban environments using low-albedo fraction from Landsat imagery. Remote Sens. 2017, 9, 428. [Google Scholar] [CrossRef] [Green Version]
  33. Franke, J.; Roberts, D.A.; Halligan, K.; Menz, G. Hierarchical multiple endmember spectral mixture analysis (MESMA) of hyperspectral imagery for urban environments. Remote Sens. Environ. 2009, 113, 1712–1723. [Google Scholar] [CrossRef]
  34. Jarchow, C.J.; Sigafus, B.H.; Muths, E.; Hossack, B.R. Using full and partial unmixing algorithms to estimate the inundation extent of small, isolated stock ponds in an arid landscape. Wetlands 2020, 40, 563–575. [Google Scholar] [CrossRef]
  35. Sall, I.; Jarchow, C.J.; Sigafus, B.H.; Eby, L.A.; Forzley, M.J.; Hossack, B.R. Estimating inundation of small waterbodies with sub-pixel analysis of Landsat imagery: Long-term trends in surface water area and evaluation of common drought indices. Remote Sens. Ecol. Conserv. 2021, 7, 109–124. [Google Scholar] [CrossRef]
  36. Li, L.; Vrieling, A.; Skidmore, A.; Wang, T.; Turak, E. Monitoring the dynamics of surface water fraction from MODIS time series in a Mediterranean environment. Int. J. Appl. Earth Obs. Geoinf. 2018, 66, 135–145. [Google Scholar] [CrossRef]
  37. Li, L.; Skidmore, A.; Vrieling, A.; Wang, T. A new dense 18-year time series of surface water fraction estimates from MODIS for the Mediterranean region. Hydrol. Earth Syst. Sci. 2019, 23, 3037–3056. [Google Scholar] [CrossRef] [Green Version]
  38. Liang, J.; Liu, D. Automated estimation of daily surface water fraction from MODIS and Landsat images using Gaussian process regression. Int. J. Remote Sens. 2021, 42, 4261–4283. [Google Scholar] [CrossRef]
  39. Rover, J.; Wylie, B.K.; Ji, L. A self-trained classification technique for producing 30 m percent-water maps from Landsat data. Int. J. Remote Sens. 2010, 31, 2197–2203. [Google Scholar] [CrossRef]
  40. Luo, X.; Xie, H.; Xu, X.; Pan, H.; Tong, X. A hierarchical processing method for subpixel surface water mapping from highly heterogeneous urban environments using Landsat OLI data. In Proceedings of the 2016 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Beijing, China, 10–15 July 2016; pp. 6221–6224. [Google Scholar]
Figure 1. Pixel-level representation of the aggregation with different window sizes and window shifts to produce coarse pixel samples used for the surface water regression model. (a–c) Aggregation with different window sizes and a fixed window shift; (d–f) aggregation with different window shifts and a fixed window size.
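As an illustration of the aggregation in Figure 1, the following minimal sketch (an assumption about the workflow, not the authors' code) shows how a 10 m binary water map and the corresponding Sentinel-2 bands could be averaged over non-overlapping windows of size w with a fixed window shift to obtain coarse-pixel water fractions and reflectances; the array names water_map and bands are illustrative.

```python
import numpy as np

def aggregate_fixed_shift(water_map: np.ndarray, bands: np.ndarray, w: int):
    """Aggregate a binary water map (H, W) and reflectance bands (B, H, W)
    into coarse samples using non-overlapping w-by-w windows (fixed shift).
    Returns X (N, B): mean reflectance per coarse pixel, and
            y (N,):   water fraction per coarse pixel."""
    H, W = water_map.shape
    Hc, Wc = H // w, W // w                    # coarse grid dimensions
    wm = water_map[:Hc * w, :Wc * w]           # crop to a multiple of w
    bd = bands[:, :Hc * w, :Wc * w]
    # Water fraction = mean of the binary map within each window
    y = wm.reshape(Hc, w, Wc, w).mean(axis=(1, 3)).ravel()
    # Coarse reflectance = per-band mean within each window
    B = bd.shape[0]
    X = bd.reshape(B, Hc, w, Wc, w).mean(axis=(2, 4)).reshape(B, -1).T
    return X, y
```

The resulting (X, y) pairs could then train a regressor such as scikit-learn's RandomForestRegressor, which is the kind of self-trained reflectance-to-fraction relationship used by S_RF and AHSWFM.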
Figure 2. Data used in this study. (a1,a2) Sentinel-2 multispectral images used for surface water fraction mapping in 2017 and 2021 and (b1,b2) Google Earth RGB images used for validation in 2017 and 2021. Small water bodies (SWBs) sampled for validation are highlighted as yellow points.
Figure 3. Flowchart of the proposed automated and hierarchical surface water fraction mapping (AHSWFM) method.
Figure 4. The hierarchical strategy of the proposed automated and hierarchical surface water fraction mapping (AHSWFM) method in mapping surface water fractions for pure and mixed pixels.
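One plausible reading of the hierarchical strategy in Figure 4 is sketched below under stated assumptions: pure water and pure land pixels receive fractions of 1 and 0 directly, and only mixed water-land pixels are passed to the fraction regressor. The class labels, the pixel_class map and the fitted regressor rf are hypothetical placeholders; the paper's actual pure/mixed discrimination rules are not reproduced here.

```python
import numpy as np

# Hypothetical class labels for the hierarchical step (the paper's actual
# pure/mixed discrimination rules are not reproduced here).
PURE_LAND, MIXED, PURE_WATER = 0, 1, 2

def hierarchical_fractions(pixel_class: np.ndarray, bands: np.ndarray, rf):
    """Assign water fractions hierarchically: pure pixels get 0 or 1 directly,
    and only mixed water-land pixels are sent to the regression model `rf`.

    pixel_class : (H, W) int array with PURE_LAND / MIXED / PURE_WATER labels
    bands       : (B, H, W) reflectance array
    rf          : fitted regressor with a scikit-learn-style .predict()
    """
    fractions = np.zeros(pixel_class.shape, dtype=np.float32)
    fractions[pixel_class == PURE_WATER] = 1.0   # avoid underestimating pure water
    # pure land stays at 0.0                     # avoid overestimating pure land

    mixed = pixel_class == MIXED
    if mixed.any():
        X_mixed = bands[:, mixed].T              # (N_mixed, B) feature matrix
        fractions[mixed] = np.clip(rf.predict(X_mixed), 0.0, 1.0)
    return fractions
```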
Figure 5. Flowchart for producing aggregated surface water fractions from the initial surface water map obtained by Otsu segmentation, using different window shifts (window size = 2 as an example). (a) Initial binary surface water map; (b–e) aggregated surface water fractions generated from the initial surface water map with multiple (i.e., all possible) window shifts in the x- and y-directions. (b) contains 16 coarse pixels, and (c–e) contain 12, 12 and 9 coarse pixels, respectively, for training the random forest (RF) model.
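To mirror the sample generation in Figure 5, the sketch below first derives an initial binary water map by applying Otsu's threshold to a water index and then enumerates every possible window shift; the choice of NDWI as the index is an assumption made for illustration, and the function names are not from the paper.

```python
import numpy as np
from skimage.filters import threshold_otsu

def initial_water_map(green: np.ndarray, nir: np.ndarray) -> np.ndarray:
    """Initial binary surface water map from Otsu segmentation of a water
    index (NDWI is assumed here purely as an example)."""
    ndwi = (green - nir) / (green + nir + 1e-9)
    return (ndwi > threshold_otsu(ndwi)).astype(np.uint8)

def fractions_all_shifts(water_map: np.ndarray, w: int):
    """Yield coarse water-fraction grids for every window shift (dy, dx) in
    0..w-1, mirroring the 'multiple window shifts' option of Figure 5."""
    for dy in range(w):
        for dx in range(w):
            sub = water_map[dy:, dx:]
            Hc, Wc = sub.shape[0] // w, sub.shape[1] // w
            if Hc == 0 or Wc == 0:
                continue
            block = sub[:Hc * w, :Wc * w].reshape(Hc, w, Wc, w)
            yield (dy, dx), block.mean(axis=(1, 3))   # (Hc, Wc) fraction grid
```

For an 8 × 8 initial map (as implied by the 16 coarse pixels of panel (b)) and window size 2, the four shifts produce 16, 12, 12 and 9 coarse pixels, matching the counts in the caption.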
Figure 6. The count and area of the selected small water bodies (SWBs) in the study region in 2017 and 2021.
Figure 7. Comparison of water fraction maps generated using the five methods in the two years. (a1,a2) Reference water fraction maps, in which the red rectangles indicate the locations of the zoomed areas of five small water bodies (SWBs) analyzed in Figure 8, (b1,b2) matched filtering (MF), (c1,c2) linear spectral mixture analysis (LSMA), (d1,d2) multiple endmember spectral mixture analysis (MESMA), (e1,e2) self-trained random forest regression (S_RF) and (f1,f2) automated and hierarchical surface water fraction mapping (AHSWFM). For S_RF and AHSWFM, a window size of 10 and a fixed window shift were used when aggregating pixels into coarse samples for the water fraction regression.
Figure 8. Zoomed-in areas of Google Earth and Sentinel-2 images and predicted water fraction maps obtained from the five methods, for the five small water bodies highlighted by red rectangles in Figure 7. (a1–a10) Google Earth images, (b1–b10) Sentinel-2 images, (c1–c10) matched filtering (MF), (d1–d10) linear spectral mixture analysis (LSMA), (e1–e10) multiple endmember spectral mixture analysis (MESMA), (f1–f10) self-trained random forest regression (S_RF) and (g1–g10) automated and hierarchical surface water fraction mapping (AHSWFM). For S_RF and AHSWFM, a window size of 10 and a fixed window shift were used when aggregating pixels into coarse samples for the water fraction regression.
Figure 9. Comparison between actual surface water areas (from digitized water maps) and predicted surface water areas (from the five methods). (a1,a2) Matched filtering (MF) scatterplots, (b1,b2) linear spectral mixture analysis (LSMA) scatterplots, (c1,c2) multiple endmember spectral mixture analysis (MESMA) scatterplots, (d1,d2) self-trained random forest regression (S_RF) scatterplots and (e1,e2) automated and hierarchical surface water fraction mapping (AHSWFM) scatterplots. The 1:1 line is shown as a red dotted line. For S_RF and AHSWFM, a window size of 10 and a fixed window shift were used when aggregating pixels into coarse samples for the water fraction regression.
Figure 10. Accuracies of self-trained random forest regression (S_RF) and automated and hierarchical surface water fraction mapping (AHSWFM) with different window sizes and a fixed window shift when aggregating pixels into coarse samples for the water fraction regression. (a1,a2) RMSEarea values in the estimation of small water body (SWB) areas, (b1,b2) RMSEfraction values in the estimation of SWB fractions within the buffer and (c1,c2) RMSEfraction values in the estimation of SWB fractions in the entire image.
Figure 11. Accuracies of self-trained random forest regression (S_RF) with different window sizes and shifts when aggregating pixels into coarse samples for the water fraction regression. (a1,a2) RMSEarea values in the estimation of small water body (SWB) areas, (b1,b2) RMSEfraction values in the estimation of SWB fractions within the buffer and (c1,c2) RMSEfraction values in the estimation of SWB fractions in the entire image.
Figure 12. Accuracies of automated and hierarchical surface water fraction mapping (AHSWFM) with different window sizes and shifts when aggregating pixels into coarse samples for the water fraction regression. (a1,a2) RMSEarea values in the estimation of small water body (SWB) areas, (b1,b2) RMSEfraction values in the estimation of SWB fractions within the buffer and (c1,c2) RMSEfraction values in the estimation of SWB fractions in the entire image.
Table 1. Accuracy statistics of the five methods in the two years. The lowest root mean square error (RMSE) value in each row is marked with an asterisk (*). For S_RF and AHSWFM, a window size of 10 and a fixed window shift were used when aggregating pixels into coarse samples for the water fraction regression.

| Metric       | Scope                                        | Year | MF     | LSMA   | MESMA  | S_RF   | AHSWFM   |
| RMSEarea     | SWB area within the buffer (ha)              | 2017 | 0.0938 | 0.0517 | 0.0845 | 0.0608 | 0.0461 * |
| RMSEarea     | SWB area within the buffer (ha)              | 2021 | 0.1065 | 0.0873 | 0.0592 | 0.0526 | 0.0440 * |
| RMSEfraction | per-pixel water fraction, within the buffer  | 2017 | 0.2679 | 0.1785 | 0.2119 | 0.1772 | 0.1714 * |
| RMSEfraction | per-pixel water fraction, within the buffer  | 2021 | 0.2765 | 0.2188 | 0.1959 | 0.1826 | 0.1799 * |
| RMSEfraction | per-pixel water fraction, in the entire image | 2017 | 0.1336 | 0.1327 | 0.1055 | 0.0964 | 0.0926 * |
| RMSEfraction | per-pixel water fraction, in the entire image | 2021 | 0.1416 | 0.1618 | 0.1018 | 0.0965 | 0.0940 * |
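The two error measures in Table 1 follow the standard RMSE definition; below is a minimal sketch, assuming you already hold per-SWB reference and predicted areas (in ha) and per-pixel reference and predicted water fractions.

```python
import numpy as np

def rmse(reference, predicted) -> float:
    """Root mean square error between two equally sized arrays."""
    reference = np.asarray(reference, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return float(np.sqrt(np.mean((predicted - reference) ** 2)))

# RMSEarea: per-SWB areas (ha), e.g. rmse(ref_area_ha, pred_area_ha)
# RMSEfraction: per-pixel fractions within the buffer or over the whole image,
# e.g. rmse(ref_fraction.ravel(), pred_fraction.ravel())
```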
Table 2. Number of training samples and mean running time over the two years of self-trained random forest regression (S_RF) and automated and hierarchical surface water fraction mapping (AHSWFM) using different window sizes and shifts when aggregating pixels into coarse samples for the water fraction regression.

| Window Size | Fixed Shift: Training Samples | Fixed Shift: S_RF Time (s) | Fixed Shift: AHSWFM Time (s) | Multiple Shifts: Training Samples | Multiple Shifts: S_RF Time (s) | Multiple Shifts: AHSWFM Time (s) |
| 2  | 331,992 | 1419.7 | 1420.0 | 1,325,653 | 3102.0 | 3102.2 |
| 4  | 82,998  | 125.6  | 125.8  | 1,321,029 | 1203.3 | 1203.5 |
| 6  | 36,888  | 103.6  | 104.0  | 1,316,413 | 1075.8 | 1075.9 |
| 8  | 20,670  | 94.8   | 95.0   | 1,311,805 | 1036.0 | 1036.2 |
| 10 | 13,208  | 89.1   | 89.3   | 1,307,205 | 1018.9 | 1019.0 |
| 12 | 9222    | 84.6   | 84.8   | 1,302,613 | 1022.8 | 1022.9 |
| 14 | 6660    | 81.4   | 81.5   | 1,298,029 | 1047.0 | 1047.1 |
| 16 | 5135    | 79.1   | 79.3   | 1,293,453 | 1069.0 | 1069.2 |
| 18 | 4060    | 77.0   | 77.2   | 1,288,885 | 1092.2 | 1092.3 |
| 20 | 3276    | 75.4   | 75.6   | 1,284,325 | 1110.6 | 1111.1 |
| 22 | 2679    | 74.4   | 74.5   | 1,279,773 | 1129.6 | 1129.8 |
| 24 | 2279    | 72.6   | 72.8   | 1,275,229 | 1149.3 | 1149.5 |
| 26 | 1920    | 68.1   | 68.2   | 1,270,693 | 1178.9 | 1179.0 |
| 28 | 1665    | 59.8   | 60.0   | 1,266,165 | 1200.0 | 1200.2 |
| 30 | 1428    | 59.1   | 59.2   | 1,261,645 | 1228.4 | 1228.6 |
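The training sample counts in Table 2 are consistent with simple window-counting: roughly ⌊H/w⌋·⌊W/w⌋ samples for a fixed window shift and (H − w + 1)·(W − w + 1) when all possible shifts are used, where H × W is the scene size in pixels and w the window size. The short check below uses a scene size inferred from the table itself (an assumption, not a figure quoted by the authors) and reproduces the listed counts; it also suggests why the multiple-shift option yields roughly a hundred times more samples, and therefore much longer running times, at large window sizes.

```python
# Window-counting check against Table 2. H and W are *inferred* so that the
# formulas reproduce the table; they are not stated in the paper.
H, W = 1272, 1044

for w in (2, 10, 12, 30):
    n_fixed = (H // w) * (W // w)            # non-overlapping windows, fixed shift
    n_multi = (H - w + 1) * (W - w + 1)      # windows over all possible shifts
    print(f"w={w:2d}  fixed={n_fixed:>9,}  multiple={n_multi:>9,}")

# Expected output (matching Table 2):
# w= 2  fixed=  331,992  multiple=1,325,653
# w=10  fixed=   13,208  multiple=1,307,205
# w=12  fixed=    9,222  multiple=1,302,613
# w=30  fixed=    1,428  multiple=1,261,645
```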
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
