Article

Automatic Segmentation of Water Bodies Using RGB Data: A Physically Based Approach

Escuela de Ingeniería en Obras Civiles, Universidad Diego Portales, Av. Ejército 441, Santiago 8370109, Chile
* Author to whom correspondence should be addressed.
Remote Sens. 2023, 15(5), 1170; https://doi.org/10.3390/rs15051170
Submission received: 8 December 2022 / Revised: 14 February 2023 / Accepted: 16 February 2023 / Published: 21 February 2023
(This article belongs to the Special Issue Remote Sensing of Climate-Related Hazards)

Abstract
A novel method is proposed to automatically segment water extent using optical data. The key features of this approach are (i) the development of a simple physically based model that utilises only RGB data for water extent segmentation; (ii) the achievement of high accuracy in the results, particularly in the estimation of water surface area and perimeter; (iii) the avoidance of any data training process; (iv) the requirement of minimal computational resources; and (v) the release of an open-source software package that provides both command-line codes and a user-friendly graphical interface, making it accessible for various applications, research, and educational purposes. The physically based model integrates reflectance of the water surface with a spectral and quantum interpretation of light. The algorithm was tested on 27 rivers and compared to manually based delimitation, resulting in a robust segmentation procedure. Quantified errors were RMSE = 11.91 (m2) for surface area, RMSE = 12.25 (m) for perimeter, and RMSE in x: 52 (px), RMSE in y: 93 (px) for centroid location. Processing time was faster for automatic segmentation than manual delimitation, with a time reduction of 40% (case-by-case analysis) and 65% (using all case studies together in one run). Shadows, light spots, and natural and non-natural elements in the field of view may affect the accuracy of results.

1. Introduction

Water surface observations are crucial to understanding the ecological health status of water bodies and hydrological processes. Surface water extents are time-dependent, occasionally presenting extreme changes, as in floods (one of the most common natural disasters worldwide). The impacts of floods on society can be catastrophic, leading to damaged infrastructure (e.g., buildings and bridges), losses to cultural heritage, economic losses, and casualties. Flood risk modelling and water resource planning need a dense network of hydrological observations distributed in space and with acceptable temporal resolution. The need for efficient and easy-to-use methods to retrieve this information is globally recognised.
Remote sensing observations are often preferred for retrieving water body information due to their versatility and ability to reach difficult-to-access places. Different alternatives are available, such as satellite products (global coverage) and images from Unmanned Aerial Systems (UASs) at the local mapping scale. Regarding the use of satellite products for water extent delineation, different spectral indices have been introduced to distinguish water from other features. Some of them are the Normalised Difference Vegetation Index (NDVI, Rouse et al. [1]), the Normalised Difference Water Index (NDWI, McFeeters [2]), the Normalised Difference Moisture Index (NDMI, Gao [3]), the Modified Normalised Difference Water Index (MNDWI, Xu [4]), and the Water Ratio Index (WRI, Shen and Li [5]). The reader is referred to Albertini et al. [6] for an up-to-date review. Even though satellite products can be an excellent alternative for water extent segmentation at the global scale, the pixel size can be restrictive when small water bodies are under analysis. The use of UASs equipped with multispectral cameras might be a solution, but the cost of this equipment could limit its application in low- and middle-income countries. RGB (red, green, and blue) data might be a natural alternative to reduce costs.
Convolutional neural networks, deep learning, and machine learning frameworks have been applied with RGB data to isolate water extent. These approaches usually have high computing power needs and require considerable amounts of manually organised data. The accuracy of segmented surface water area is relatively high [7], but the long training time and high computing power demands (especially CPU [8]) limit adaptation and application to other environmental conditions. Consequently, there are difficulties in extrapolating and using these frameworks under other circumstances or case studies, limiting their use at operational levels [7,9,10,11,12,13,14,15,16,17,18,19]. In addition, physically based frameworks might be preferred due to their capacity to provide good results (by their nature) under different conditions and to reduce the database required for model training.
Following this premise, the main contributions of this manuscript are (i) the proposal of a physically based model for water extent segmentation relying on parsimony and using only RGB data; (ii) high accuracy in results in terms of water body surface area and perimeter; (iii) no need for data training; (iv) no need for high computational resources; and (v) the introduction of an open-source software package with command-line codes and a Graphical User Interface (GUI). The Water Automatic segmentation in Rivers (WATER) model is used to segment water surfaces in images automatically. This manuscript is organised as follows: Section 2 presents the basis of WATER, its different running options, and the 27 case studies used to test performance. Section 3 shows results regarding water extent area, perimeter, and centroid coordinates. Additionally, WATER's GUI is presented to facilitate its use. Section 4 presents the impacts, challenges, and future developments. Conclusions are provided at the end.

2. Materials and Methods

2.1. Case Studies

Twenty-seven different case studies located in six rivers in Denmark are used to test the performance of WATER. The dataset is the same used by Bandini et al. [20], available on Zenodo (https://zenodo.org/record/3594392#.YorI4e7MJD8, last access: 6 December 2022). Additionally, an extended list of properties for each of the 27 case studies is presented in Supplementary Material C, including a description of the riverbed bottom classified as high-density vegetation, patches, and clear bottom. Figure 1 shows the location of the case studies, whereas Figure 2 shows histograms of their main characteristics, such as river discharge and width.

2.2. WATER Overview

WATER is a physically based model developed in MATLAB to identify and segment water bodies without needing large amounts of data and/or previous calibration (see Section 3.3 for more information related to available codes). WATER is open source and free to download and use. It requires only four inputs: an RGB video (or a single frame) and the camera parameters, i.e., the focal aperture (default: 1.345 × 10⁷ nm, GoPro Hero5) and the cartesian coordinates (x, y) of the optical centre on the RGB sensor (default: (1928, 1094), GoPro Hero5). In addition, advanced users can personalise the fourteen optional inputs shown in Figure 3.
The theoretical framework involves the evaluation of RGB performance through the reconstruction of the visible spectrum from RGB data as a function of the Commission Internationale de l'Éclairage (CIE) standard. The CIE 1964 standard was chosen due to its validation within colorimetry and photogrammetry [21]. Additionally, WATER considers the simulation of single-slit diffraction (i.e., an intrinsic quantum phenomenon that affects RGB data interpretation) and a pseudo-genetic algorithm for auto-calibration and water body masking. Figure 4 shows WATER's fundamentals and methodology, which are covered in more detail below (additionally, the reader is referred to Supplementary Material B).

2.3. Performance of RGB Sensor

The visible light spectrum reconstruction is performed by extracting the first footage frame and calibrating the data correlation. The RGB bands are transformed into the XYZ normalised system to create the spectral frame [21]. Afterwards, the error is quantified in terms of the Kling–Gupta Efficiency (KGE_{R,G,B}, where R, G, and B denote the Red, Green, and Blue bands) and the Root Mean Square Error (RMSE_{R,G,B}), presented in Equations (1) and (5), respectively:
$$\mathrm{KGE} = 1 - \sqrt{(r - 1)^2 + (\beta - 1)^2 + (\gamma - 1)^2},\qquad (1)$$
$$r = \frac{\sum_{\lambda=1}^{\lambda_t} \left(E_{CIE}^{\lambda} - \bar{E}_{CIE}\right)\left(E_F^{\lambda} - \bar{E}_F\right)}{\lambda_t\, s_{E_{CIE}}\, s_{E_F}},\qquad (2)$$
$$\beta = \frac{\bar{E}_F}{\bar{E}_{CIE}},\qquad (3)$$
$$\gamma = \frac{s_{E_F}}{s_{E_{CIE}}},\qquad (4)$$
$$\mathrm{RMSE} = \sqrt{\frac{\sum_{\lambda=1}^{\lambda_t} \left(E_F^{\lambda} - E_{CIE}^{\lambda}\right)^2}{\lambda_t}},\qquad (5)$$
where r is the Pearson coefficient; β is the ratio between the mean value of the transformed in-XYZ-system data and that of the CIE standard; γ is the corresponding ratio of standard deviations; λ_t is the total number of wavelengths of the spectrum; and E_F^λ and E_CIE^λ are the energies at wavelength λ of the spectrum from the acquired data and the CIE standard, respectively.
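As a minimal illustration of Equations (1)–(5), the two error metrics can be computed from a reference CIE spectrum and a reconstructed one. This is a NumPy sketch rather than the original MATLAB code, and the array names are placeholders:

```python
import numpy as np

def kge(e_cie, e_f):
    """Kling-Gupta Efficiency between the CIE reference spectrum and the
    spectrum reconstructed from RGB data (Equations (1)-(4))."""
    r = np.corrcoef(e_cie, e_f)[0, 1]      # Pearson coefficient
    beta = e_f.mean() / e_cie.mean()       # ratio of mean energies
    gamma = e_f.std() / e_cie.std()        # ratio of standard deviations
    return 1.0 - np.sqrt((r - 1) ** 2 + (beta - 1) ** 2 + (gamma - 1) ** 2)

def rmse(e_cie, e_f):
    """Root Mean Square Error over all wavelengths (Equation (5))."""
    return np.sqrt(np.mean((e_f - e_cie) ** 2))
```

A perfect reconstruction gives KGE = 1 and RMSE = 0; both metrics degrade as the reconstructed spectrum drifts from the CIE reference.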

2.4. Single-Slit Diffraction Simulation

As visible light is part of the electromagnetic spectrum, WATER considers the behaviour of wave–particle duality [22] (the camera lens acquires energy packets as RGB data). According to Yangton and Yington Theory [23], photons and electrons produce zones of construction and destruction of electromagnetic waves [24]. The latter is known as quantum interference [25] (see Supplementary Material D for more details). Consequently, the visible spectrum (RGB data) is altered, with higher intensities in specific zones (Figure 5a–c for R, G, and B in the XYZ system, respectively).
Single-slit diffraction is an intrinsic physical phenomenon associated with the camera and depends on the equipment, i.e., lens, resolution, and location of the optical centre. These variables are summarised in Table 1 for the GoPro Hero 5 (camera used for case study acquisition). The quantum interference space is generated through the positive conical paraboloid θ , which depends on the resolution and optical centre, as shown in Equation (6):
$$\theta = \sqrt{\frac{(x - X_o)^2}{(W/2)^2} + \frac{(y - Y_o)^2}{(L/2)^2}},\qquad (6)$$
where x and y correspond to plane coordinates on the RGB sensor.
The layer of total phase angle δ is a function of the wavelength λ and the focal aperture D, as presented in Equation (7). Figure 6 shows the layer of δ for the camera used in the case studies. The normalised energy intensity (I_N) is then computed with the middle phase angle β, as shown in Equations (8) and (9). The RGB interpretation by single-slit diffraction (RGB_SSD) is determined with Equation (10):
$$\delta(\lambda) = \frac{2\pi D \sin(\theta)}{\lambda},\qquad (7)$$
$$\beta(\lambda) = \frac{\delta(\lambda)}{2},\qquad (8)$$
$$I_N(\lambda) = \frac{I(\lambda)}{I_{max}(\lambda)} = \left(\frac{\sin(\beta(\lambda))}{\beta(\lambda)}\right)^2,\qquad (9)$$
$$I(\lambda) = I_N(\lambda)\, E_{CIE}^{\lambda}.\qquad (10)$$
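The diffraction layer of Equations (6)–(9) can be sketched as follows; the grid is scaled down for illustration (the GoPro Hero5 defaults quoted in Section 2.2 are a 3840 × 2160 sensor, optical centre (1928, 1094), and focal aperture 1.345 × 10⁷ nm):

```python
import numpy as np

# Scaled-down sensor grid for illustration (1/10 of the 4K defaults).
W, L = 384, 216              # sensor resolution (px)
Xo, Yo = 192, 109            # optical centre (px)
D = 1.345e7                  # focal aperture (nm)

x, y = np.meshgrid(np.arange(W), np.arange(L))
# Positive conical paraboloid, Equation (6)
theta = np.sqrt((x - Xo) ** 2 / (W / 2) ** 2 + (y - Yo) ** 2 / (L / 2) ** 2)

def normalised_intensity(lam_nm):
    """Single-slit diffraction intensity I_N for one wavelength (Eqs. (7)-(9))."""
    delta = 2.0 * np.pi * D * np.sin(theta) / lam_nm   # total phase angle
    beta = delta / 2.0                                 # middle phase angle
    return np.sinc(beta / np.pi) ** 2                  # (sin(beta)/beta)^2

I_blue = normalised_intensity(450.0)   # e.g., a blue wavelength
```

`np.sinc(z)` computes sin(πz)/(πz), so `np.sinc(beta / np.pi)` is the (sin β/β)² kernel of Equation (9) and correctly handles β = 0 at the optical centre, where the normalised intensity is maximal.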

2.5. Reflectance Filter

The reflectance filter (RF) is based on the physical capacity of the water surface to reflect light, i.e., reflectance values higher than those of land and vegetation. Considering the single-slit diffraction simulation, the RF has six binary bands, as presented in Equation (11) (the usual RGB bands and the single-slit-diffraction-corrected ones). The duplicated bands increase data redundancy, allowing a better band combination to be chosen for water body segmentation analyses. Additionally, two indicators of the river extent are characterised by the centroid of the RF-identified water body: the type-centroid of gaps (TG) and the type-centroid of data (TD):
$$RF = RGB_{RMSE_{R,G,B}} \cup RGB_{SSD},\qquad (11)$$
where RGB_{RMSE_{R,G,B}} are the three RGB bands calibrated into a binary system by RMSE_{R,G,B} (i.e., the spectral interpretation), and RGB_SSD are the three RGB bands corrected by single-slit diffraction (the quantum interpretation).
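A sketch of the six-band stack of Equation (11); the binarisation thresholds here are hypothetical stand-ins for the RMSE_{R,G,B}-based calibration described above:

```python
import numpy as np

def reflectance_filter(rgb, rgb_ssd, thresholds):
    """Stack the six binary bands of Equation (11): the three RGB bands
    binarised with (assumed) RMSE-calibrated thresholds, i.e., the spectral
    interpretation, plus the three single-slit-diffraction-corrected bands
    binarised the same way, i.e., the quantum interpretation."""
    spectral = np.stack([rgb[..., k] > thresholds[k] for k in range(3)], axis=-1)
    quantum = np.stack([rgb_ssd[..., k] > thresholds[k] for k in range(3)], axis=-1)
    return np.concatenate([spectral, quantum], axis=-1)   # H x W x 6, boolean
```

The duplicated bands are what the pseudo-genetic algorithm of Section 2.6 activates and deactivates when searching for the best combination.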

2.6. Pseudo-Genetic Algorithm

A pseudo- (i.e., unconventional) genetic algorithm (PGA) is applied to select the best band combination, or Best Reflectance Filter (BRF). The six bands are the genes that constitute a chromosome. Mutated chromosomes correspond to any possible combination of genes obtained by activating and deactivating RF components. The PGA has two phases: (i) pre-selection and (ii) final selection. These phases are described in more detail below.
Pre-selection chooses three candidates. The first and second candidates are evaluated by a Morphological Fit (MF) test that optimises KGE_{R,G,B} performance and are chosen by the Blue Energy Removed Index (BERI) and the Equivalent Index of the Water Body (EIWB), respectively. The MF corresponds to scaling the image gaps by expanding their diameter. It ranges from 0% to 100%, where 0% is the initial state (i.e., the RF) and 100% means complete emptiness. BERI quantifies the amount of energy removed from the blue band (E_Z) when a given genetic combination is applied to the image, whereas EIWB is the direct relationship between the sum of the energy removed in red and green (E_X + E_Y) and the energy removed in blue (E_Z):
$$EIWB = \frac{E_X + E_Y}{E_Z}.$$
The first and second candidates correspond to the genes that maximise BERI and EIWB, respectively. The third is the spectrum–diffraction filter (SDF), constituted by three bands: the two bands from RGB_{RMSE_{R,G,B}} with the highest KGE_{R,G,B} values, while the band with the lowest KGE_{R,G,B} value is replaced by its RGB_SSD counterpart.
The final selection depends on EIWB and integrates three conceptual indices, presented in Equations (12) to (14): the Genes Similarity Degradation (Κ), the Concentrated Shadow Detection (Ψ), and the Water Surface Transparency (Φ) (see the Supplementary Material and codes for more information):
$$\mathit{K} = \sqrt{(\chi_x - \psi_x)^2 + (\chi_y - \psi_y)^2},\qquad (12)$$
$$\Psi = \frac{K \cdot EIWB}{\max(K \cdot EIWB)},\qquad (13)$$
$$\Phi = \frac{K \cdot EIWB}{\max(K \cdot EIWB)},\qquad (14)$$
where χ_x and χ_y are the x and y coordinates of the centroid of the gaps existing in the candidate chromosome, and ψ_x and ψ_y are the x and y coordinates of TG. It is important to recall that when EIWB is maximised, E_Z is minimum and, therefore, the loss of data over the river is minimum. The BRF decision considers the four indices mentioned above and can be computed following the processing chart shown in Figure 7.
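The centroid-based indices can be sketched as follows; since Equations (13) and (14) share the same normalised K·EIWB form as printed, a single helper is used here, and the scores are assumed to be collected over all candidate chromosomes:

```python
import numpy as np

def genes_similarity_degradation(cand_centroid, tg_centroid):
    """K, Equation (12): Euclidean distance between the gap centroid of a
    candidate chromosome and the type-centroid of gaps (TG)."""
    (cx, cy), (px, py) = cand_centroid, tg_centroid
    return float(np.hypot(cx - px, cy - py))

def normalised_k_eiwb(K, eiwb):
    """Normalised K*EIWB scores (the form shared by Equations (13)-(14)),
    evaluated over all candidate chromosomes."""
    prod = np.asarray(K, float) * np.asarray(eiwb, float)
    return prod / prod.max()
```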

2.7. Water Body Detection and Extracted Characteristics

River segmentation starts with the RF configured for the BRF. The MF is adaptive and iterates from the minimum to the maximum KGE_{R,G,B} with a 1% step. Each step changes the gap distribution, impacting the position of the river centroid. This modification is quantified in real time as the distance between the river centroid and TG. The model stops when the minimum distance is reached and this value is lower than the distance between TG and TD. Otherwise, the step with the minimum gradient of the curve is chosen.
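The stopping rule of the adaptive MF loop can be sketched as below, assuming the river-centroid position has been recorded at each 1% MF step (the helper name and the (n, 2) array layout are assumptions):

```python
import numpy as np

def select_mf_step(centroids, tg, td):
    """Sketch of the MF stopping rule: given the river-centroid position at
    each MF step, stop at the step whose distance to TG is minimal, provided
    that distance is lower than the TG-TD distance. `centroids` is an (n, 2)
    array of (x, y) positions, one per MF step."""
    tg, td = np.asarray(tg, float), np.asarray(td, float)
    dists = np.linalg.norm(np.asarray(centroids, float) - tg, axis=1)
    i_min = int(np.argmin(dists))
    if dists[i_min] < np.linalg.norm(tg - td):
        return i_min
    # Otherwise, choose the step with the minimum gradient of the curve.
    return int(np.argmin(np.abs(np.gradient(dists))))
```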
After the water extent is determined, the symmetry axis of the river can be computed. A polyshape is used to delimit the water boundary and determine the axis on which data predominate. Data are divided into two bands, and ad hoc equations are fitted (one for each riverbank). Finally, the symmetry axis of the river is located at the middle distance between the riverbanks. Other variables, such as river width, water extent area, and perimeter, are also computed by WATER.
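The symmetry-axis step can be sketched as follows; a polynomial fit stands in for the "ad hoc equations" fitted to each bank, and its degree is an assumption:

```python
import numpy as np

def symmetry_axis(left_bank, right_bank, degree=3):
    """Fit one polynomial per riverbank and place the symmetry axis midway
    between the two fitted curves. Bank points are (n, 2) arrays of (x, y)
    coordinates; the polynomial degree is an assumption."""
    x = np.linspace(min(left_bank[:, 0].min(), right_bank[:, 0].min()),
                    max(left_bank[:, 0].max(), right_bank[:, 0].max()), 200)
    left = np.polyval(np.polyfit(left_bank[:, 0], left_bank[:, 1], degree), x)
    right = np.polyval(np.polyfit(right_bank[:, 0], right_bank[:, 1], degree), x)
    return x, (left + right) / 2.0   # midline between the riverbanks
```

From the fitted banks, the river width at any station is simply the vertical distance between the two curves, which is how variables such as width follow directly from the segmented extent.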

3. Results

3.1. Automatic Versus Manually Based Water Segmentation

WATER was applied to 27 case studies whose properties differed in terms of aquatic and terrestrial vegetation, river geometry, infrastructure such as bridges, and seeding on the water surface. To illustrate WATER's capability to segment water extents, Figure 8 shows two case studies with different geometries and aquatic vegetation so that its performance can be assessed visually. The performance over the whole dataset can be visually inspected in Supplementary Material E.
In terms of water extent area, water extent perimeter, and segmented surface water centroid, the results can be catalogued as very good. Table 2 summarises these variables, contrasting the automatic (WATER) and manually based segmentations. Interestingly, the average error for water extent area was up to 3.6%, that for water extent perimeter was −6.2%, and that for the centroid coordinates in x and y was (0.1%, 0.0%).
Performance indices such as KGE and RMSE were used to compare the manual segmentation (considered the ideal measurement) with WATER for the whole dataset. Figure 9 shows this comparison in terms of water extent area and perimeter as well as centroid locations. Remarkably, KGE and RMSE have outstanding values for all the analysed variables. For instance, in terms of water extent area, KGE reached a value of 0.98, which is considered optimal. Additionally, for the same variable, 15 out of 27 case studies presented errors of less than 3 (m2). Overall, a good agreement between WATER results and manual segmentation was observed, with KGE values of 0.98, 0.91, 0.92, and 0.94 for water extent area, water extent perimeter, and segmented surface water centroid (in coordinates x and y), and RMSE values of 11.92 (m2), 12.25 (m), 92 (px), and 53 (px), respectively. The histograms in Figure 9(a.2,b.2,c.2,d.2) show the absolute error of the variables in question.

3.2. Processing Time

WATER was run on a Dell computer with an Intel(R) Core(TM) i5-1035G1 CPU @ 1.00 GHz (10th generation). The processing time for each case study was lower than 360 s, with an average value (over the 27 case studies) of 316 s and a standard deviation of 17 s. Worth mentioning is that all videos were acquired and analysed in 4K resolution. Manual segmentation took 897 s on average, with a standard deviation of 608 s; the longest manual segmentation exceeded 2000 s. Figure 10(a.2,a.3) shows the histograms of the processing time for WATER and manual segmentation, respectively. On average, the time reduction was 40% (case-by-case analysis) and 65% when all the case studies were analysed in one run.

3.3. WATER v0.01 Software

WATER v0.01 software is presented in two different formats, aiming to reach users with different computational skills. A stand-alone graphical user interface (GUI), developed in Matlab R2021a, was compiled as an executable and runs on the Windows operating system without Matlab installed. The GUI can also be run directly from Matlab, enabling further flexibility for the user's needs, meaning that the GUI and codes are modifiable. Finally, command-line codes are also available for download. WATER v0.01 can be downloaded from: http://doi.org/10.17605/OSF.IO/3JXFD (last access: 20 February 2023).
The GUI is split up into five different tabs: (i) Home; (ii) Water Body; (iii) River Properties; (iv) Results; and (v) Help. Figure 11 shows the WATER software GUI with the Grindsted case study (ID: 6 in Supplementary Material E). WATER software allows the user to load either a video or an image for analysis. Default values for analyses are those of the GoPro Hero5 because the case studies were acquired with this equipment. Default values can be modified according to user needs. A command window is also presented at the bottom of the software to show the user important information, for instance, whether WATER is running a process, or to guide the user through the workflow to reach results. WATER allows two different ways to perform the analysis: (i) running the whole workflow automatically (with default values); or (ii) running the workflow step by step (if the user chooses this option, the software guides the steps, which are in line with the tabs, i.e., from Home to Results).

4. Discussion

In recent years, automatic water segmentation has mainly been performed through two approaches: (i) image processing and (ii) machine intelligence models. The first focuses on image texture and thresholding [26,27]. The second uses clustering [9,10,11,12], deep learning [7,8,15,19], and machine learning [16,17,18]. Recent efforts have been dedicated to improving the accuracy of machine-intelligence-related models. However, these models are not simple to replicate and integrate into different computing resources and environmental conditions (the computer is inherently trained to detect the specific water body conditions contained in the selected dataset). As stated above, water body segmentation frameworks can be categorised into physically based and machine intelligence methods. Examples of the latter are the Semantic Segmentation Method (SegNet, [7]), DeepLab V3+ [28], and ATLANTIS [29], each with its respective dataset and computational requirements. These methods require a large number of images for training, validation, and testing, as well as high computational resources, which can be costly. For instance, SegNet required a dataset of 3407 images, adopting 60% for training, 20% for validation, and 20% for testing. Data processing was performed on a desktop computer (Intel(R) Xeon(R) CPU, 64 GB RAM, and an NVIDIA Titan V Graphics Processing Unit with 5120 cores). The model was trained, validated, and tested with one river to reach high accuracy (approximately 98%, with image resolutions of 256 × 256 and 512 × 512 pixels). DeepLabV3+ used a dataset of 10,405 images at 100 × 100 pixel resolution. A total of 8941 images was used for training and validation, while 1464 images were used for testing. The model runs on a server. DeepLabV3+ required 41,044,130 learnable/trainable parameters, with average Jaccard and Dice score accuracies of 0.7169 and 0.8412, respectively.
ATLANTIS considers 5195 images (3364 for training; 535 for validation; and 1296 for testing). The accuracy in the aquatic regions varied from 51.98% to 69.89%. No information on the computational resources used was found in the literature. On the other hand, WATER is a physically based model that needs only one image for river segmentation, with an average accuracy of 98% based on 27 case studies. It has limited computational requirements (it runs on a laptop with an Intel(R) Core(TM) i5-1035G1 CPU @ 1.00 GHz (10th generation), 12 GB RAM, and a standard graphics processing unit), making it affordable for research and operational use. Compared to machine-learning-based methods, WATER requires 99.9% less data, yet it has similar performance and can be used on a simple computer.
Humans can segment water bodies visually based on watercolour, brightness, reflectance, sound, and touch (among others), but computers only use part of this information to segment water bodies. Identifying river boundaries (i.e., the interface between water and land) at small mapping scales is particularly complex because that zone generates a gradient of reflected energy. Indeed, surface water produces a critical physical phenomenon that neither land nor plants produce significantly, i.e., reflectance.
This automatic segmentation of water bodies identifies the water extent through the superficial water reflectance, expressed as more complete data in the flooded river zone when transforming the RGB data into binary bands. Additionally, the visible spectrum in terms of RGB data is affected by quantum interference (single-slit diffraction) and lens diffraction [30]. The correction of quantum interference allows band duplication, increasing redundancy. As a result, it establishes higher accuracy and stability of the model in identifying water bodies under natural conditions. The latter will foster applications at small mapping scales, adopting parsimony in the analysis (see Supplementary Material D for the physics behind diffraction).
In addition to the above, the Global Navigation Satellite System Reflectometry (GNSS-R) deserves mention. GNSS-R and WATER differ in the type of input data used to segment land: the former relies on wave interference generated by ground reflection of GNSS electromagnetic signals [31,32], while the latter uses a single image for river segmentation. Despite this difference, a comparison of their accuracy is possible. GNSS-R achieved a 92% accuracy in detecting small water surfaces when tested in a lake, river, and artificial water catchment [33], while WATER reached a higher accuracy of 98% in 27 rivers. In terms of coverage, GNSS-R segments land by lines or patches (see [34] for a worldwide analysis), whereas WATER can detect every part of a river in a single measurement, and it was tested on coverages ranging from 100 to 1000 m2. Considering its coverage, accuracy, and cost-effectiveness, WATER shows promise as a river segmentation technique.
This study was developed retrospectively to understand the human perception system and interpret water segmentation. Using the same data provided by the footage to autocorrect the intrinsic quantum physical phenomenon associated with the camera and to auto-calibrate the model, the water extent in an RGB image can be distinguished successfully. WATER has high accuracy (in terms of surface river area), with KGE values close to 1 and RMSE values close to 0 (m2). WATER also has a high capacity to adapt to other environments without extra training (due to its physically based nature). Additionally, all variables used to quantify model performance yielded results that can be classified as very good.
WATER was written in Matlab R2021a. Command-line codes and a GUI were developed to reach users with different computational skills. WATER software is open source and can be downloaded from the Code Availability statement. In terms of computational benefits, WATER reduces the processing time (in comparison with manual segmentation) and provides more information automatically (such as surface river area and perimeter as well as water extent surface centroids, river width, and symmetry axis). Reducing processing time implies optimising computer system resources (RAM, CPU, and GPU usage), which can then be used for other purposes. Worth noting is that the first run of WATER requires generating and saving the quantum interpretation layer (single-slit diffraction simulation), which takes approximately 15 min. Each subsequent river segmentation requires only approximately 5 min. Segmentation difficulties were detected when the field of view near the river edge presents a high seeding density on the water surface, aquatic and non-aquatic vegetation, shadows, or human-built structures (e.g., bridges).
WATER aims to complement the ever-increasing technology for estimating fluvial variables remotely. In particular, WATER can be used to identify a region of interest (ROI) on which image velocimetry can run automatically. Furthermore, footage stabilisation can be another WATER application, focusing on quantifying fluvial geomorphological variables in any part of the river. If a Digital Elevation Model is available, WATER can be used to identify any cross-section with its bathymetry. As a result, and in combination with image-velocimetry algorithms, WATER can be used to estimate river discharge with cameras.

5. Conclusions

Water Automatic segmentation in Rivers (WATER) is a physically based model that relies on two phenomena: (i) the reflectance of the water surface, and (ii) a quantum interpretation (single-slit diffraction simulation). Its application allows the segmentation and identification of fluvial characteristics of rivers (e.g., river width, area, and perimeter) with high accuracy (KGE always higher than 0.91, tested on 27 case studies). WATER has the particularity of being self-calibrating using the same image information, with an average processing time of five minutes (from start to final result). WATER has several benefits (in comparison with manual segmentation or machine intelligence models) because it eliminates the training process and, consequently, considerably reduces the required database. As WATER is a physically based model, there is no restriction on its applicability under different environmental conditions. WATER software was also developed and can be downloaded from http://doi.org/10.17605/OSF.IO/3JXFD (last access: 20 February 2023). The main difficulties in river segmentation were detected at the river edge, caused by the concentration of seeding on the water surface, different degrees of shadows and brightness, and vegetation.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/rs15051170/s1, A (WATER modelling overview with required and optional inputs as well as outputs); B (WATER modelling framework); C (Case studies information and performance of WATER); D (Quantum interference for single-slit diffraction); and E (Contrasting human versus machine performance).

Author Contributions

Conceptualisation, M.G. and A.P.; methodology, M.G., A.P. and H.A.; software, M.G. and A.P.; writing—original draft preparation, M.G.; writing—review and editing, M.G., A.P. and H.A.; supervision, H.A. and A.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The dataset used in this manuscript is the same one used by Bandini et al. [20] and can be downloaded from: https://zenodo.org/record/3594392#.YorI4e7MJD8 (last access: 20 February 2023).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Freden, S.C.; Mercanti, E.P.; Becker, M.A. Third Earth Resources Technology Satellite-1 Symposium: Section A–B. Technical Presentations; Scientific and Technical Information Office, National Aeronautics and Space Administration: Washington, DC, USA, 1973.
  2. McFeeters, S.K. The use of the Normalized Difference Water Index (NDWI) in the delineation of open water features. Int. J. Remote Sens. 1996, 17, 1425–1432.
  3. Gao, B.-C. NDWI—A normalized difference water index for remote sensing of vegetation liquid water from space. Remote Sens. Environ. 1996, 58, 257–266.
  4. Xu, H. Modification of normalised difference water index (NDWI) to enhance open water features in remotely sensed imagery. Int. J. Remote Sens. 2006, 27, 3025–3033.
  5. Shen, L.; Li, C. Water body extraction from Landsat ETM+ imagery using AdaBoost algorithm. In Proceedings of the 2010 18th International Conference on Geoinformatics, Beijing, China, 18–20 June 2010; pp. 1–4.
  6. Albertini, C.; Gioia, A.; Iacobellis, V.; Manfreda, S. Detection of Surface Water and Floods with Multispectral Satellites. Remote Sens. 2022, 14, 6005.
  7. Akiyama, T.S.; Junior, J.M.; Gonçalves, W.N.; Bressan, P.O.; Eltner, A.; Binder, F.; Singer, T. Deep Learning Applied to Water Segmentation. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2020, XLIII-B2-2, 1189–1193.
  8. Xia, M.; Cui, Y.; Zhang, Y.; Xu, Y.; Liu, J.; Xu, Y. DAU-Net: A novel water areas segmentation structure for remote sensing image. Int. J. Remote Sens. 2021, 42, 2594–2621.
  9. Wan, H.L.; Jung, C.; Hou, B.; Wang, G.T.; Tang, Q.X. Novel Change Detection in SAR Imagery Using Local Connectivity. IEEE Geosci. Remote Sens. Lett. 2012, 10, 174–178.
  10. Li, N.; Wang, R.; Liu, Y.; Du, K.; Chen, J.; Deng, Y. Robust river boundaries extraction of dammed lakes in mountain areas after Wenchuan Earthquake from high resolution SAR images combining local connectivity and ACM. ISPRS J. Photogramm. Remote Sens. 2014, 94, 91–101.
  11. Yuan, X.; Sarma, V. Automatic Urban Water-Body Detection and Segmentation From Sparse ALSM Data via Spatially Constrained Model-Driven Clustering. IEEE Geosci. Remote Sens. Lett. 2010, 8, 73–77.
  12. Ansari, E.; Akhtar, M.N.; Abdullah, M.N.; Othman, W.A.F.W.; Abu Bakar, E.; Hawary, A.F.; Alhady, S.S.N. Image Processing of UAV Imagery for River Feature Recognition of Kerian River, Malaysia. Sustainability 2021, 13, 9568.
  13. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. In Proceedings of the Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015: 18th International Conference, Munich, Germany, 5–9 October 2015; Springer: Cham, Switzerland, 2015; pp. 234–241.
  14. Teichmann, M.T.T.; Cipolla, R. Convolutional CRFs for Semantic Segmentation. In Proceedings of the 30th British Machine Vision Conference 2019, Cardiff, UK, 9–12 September 2019.
  15. Rankin, A.; Matthies, L. Daytime water detection and localization for unmanned ground vehicle autonomous navigation. In Proceedings of the 25th Army Science Conference, Orlando, FL, USA, 27–30 November 2006.
  16. Li, K.; Wang, J.; Yao, J. Effectiveness of machine learning methods for water segmentation with ROI as the label: A case study of the Tuul River in Mongolia. Int. J. Appl. Earth Obs. Geoinf. 2021, 103, 102497.
  17. Sarwal, A.; Nett, J.; Simon, D. Detection of Small Water-Bodies; Perceptek Inc.: Littleton, CO, USA, 2004.
  18. Achar, S.; Sankaran, B.; Nuske, S.; Scherer, S.; Singh, S. Self-supervised segmentation of river scenes. In Proceedings of the 2011 IEEE International Conference on Robotics and Automation, Shanghai, China, 9–13 May 2011; pp. 6227–6232.
  19. Zhang, X.; Jin, J.; Lan, Z.; Li, C.; Fan, M.; Wang, Y.; Yu, X.; Zhang, Y. ICENET: A Semantic Segmentation Deep Network for River Ice by Fusing Positional and Channel-Wise Attentive Features. Remote Sens. 2020, 12, 221.
  20. Bandini, F.; Lüthi, B.; Peña-Haro, S.; Borst, C.; Liu, J.; Karagkiolidou, S.; Hu, X.; Lemaire, G.G.; Bjerg, P.L.; Bauer-Gottwein, P. A Drone-Borne Method to Jointly Estimate Discharge and Manning’s Roughness of Natural Streams. Water Resour. Res. 2021, 57, e2020WR028266.
  21. Trezona, P.W. Derivation of the 1964 CIE 10° XYZ colour-matching functions and their applicability in photometry. Color Res. Appl. 2000, 26, 67–75.
  22. Compton, A.H.; Heisenberg, W. The Physical Principles of the Quantum Theory; Springer: Berlin/Heidelberg, Germany, 1984.
  23. Wu, E.T.H. Yangton and Yington-A Hypothetical Theory of Everything. Sci. J. Phys. 2015, 2013.
  24. Wu, E.T.H. Single Slit Diffraction and Double Slit Interference Interpreted by Yangton and Yington Theory. IOSR J. Appl. Phys. 2020, 12.
  25. Knight, P.L.; Bužek, V. Squeezed States: Basic Principles. Quantum Squeezing 2004, 27, 3–32.
  26. Mancini, A.; Frontoni, E.; Zingaretti, P.; Longhi, S. High-resolution mapping of river and estuary areas by using unmanned aerial and surface platforms. In Proceedings of the 2015 International Conference on Unmanned Aircraft Systems (ICUAS), Denver, CO, USA, 9–12 June 2015; pp. 534–542.
  27. Muhadi, N.A.; Abdullah, A.F.; Bejo, S.K.; Mahadi, M.R.; Mijic, A. Image Segmentation Methods for Flood Monitoring System. Water 2020, 12, 1825.
  28. Harika, A.; Sivanpillai, R.; Variyar, V.V.S.; Sowmya, V. Extracting Water Bodies in RGB Images Using Deeplabv3+ Algorithm. In The International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences; ISPRS: Gottingen, Germany, 2022; Volume XLVI-M-2–2022, pp. 97–101.
  29. Erfani, S.M.H.; Wu, Z.; Wu, X.; Wang, S.; Goharian, E. ATLANTIS: A benchmark for semantic segmentation of waterbody images. Environ. Model. Softw. 2022, 149, 105333.
  30. Zhou, Y.; Ren, D.; Emerton, N.; Lim, S.; Large, T. Image Restoration for Under-Display Camera. In Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 20–25 June 2021; pp. 9175–9184.
  31. Gamba, M.T.; Marucco, G.; Pini, M.; Ugazio, S.; Falletti, E.; Presti, L.L. Prototyping a GNSS-Based Passive Radar for UAVs: An Instrument to Classify the Water Content Feature of Lands. Sensors 2015, 15, 28287–28313.
  32. Issa, H.; Stienne, G.; Reboul, S.; Raad, M.; Faour, G. Airborne GNSS Reflectometry for Water Body Detection. Remote Sens. 2021, 14, 163.
  33. Imam, R.; Pini, M.; Marucco, G.; Dominici, F.; Dovis, F. UAV-Based GNSS-R for Water Detection as a Support to Flood Monitoring Operations: A Feasibility Study. Appl. Sci. 2019, 10, 210.
  34. Perez-Portero, A.; Munoz-Martin, J.F.; Park, H.; Camps, A. Airborne GNSS-R: A Key Enabling Technology for Environmental Monitoring. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 6652–6661.
Figure 1. Location of the case studies used to test WATER (27 case studies in total, located in Denmark). (a–f) are zooms of the case studies’ locations.
Figure 2. Histograms of the main characteristics of the 27 case studies under consideration. (a) River discharge, with a mean value of 1477 (L/s); (b) river slope, with a mean value of 0.00105 (m/m); (c) river width, with a mean value of 8.15 (m); and (d) scalar factor from pixel to metre units, with a mean value of 0.0062 (m/px).
Figure 3. WATER overview with required and optional inputs as well as outputs. The reader is referred to Supplementary Materials A and B, as well as the codes, for more information regarding the algorithms involved.
Figure 4. Schematic diagram of the modelling process from the image acquisition to the output result (water body mask and geometrical characteristics such as water body area and water body perimeter). Supplementary Material B presents specific details of WATER.
Figure 5. Camera and single-slit diffraction for the 27 case studies under analysis. Normalised PDF for CIE standard 1964 in X (a), Y (b), and Z (c).
Figure 6. Quantum interference in the RGB sensor in GoPro Hero 5.
Figure 7. Processing chart of the best reflectance filter selection.
Figure 8. Illustration of automatic and manually based water segmentation. (a.1) River Vejle Å, section XS1 (ID: 23 in Supplementary Material E), presenting a high density of aquatic vegetation. Automatic (yellow) and manually based (cyan) water segmentation. Dots are the centroids of the segmented water extent. (a.2) Mask computed by WATER (automatic analysis). (a.3) Mask by manual segmentation. (b.1) River Grindsted Å, section ST6 (ID: 12 in Supplementary Material E), presenting patches of aquatic vegetation and concentrated seeding density. Automatic (yellow) and manually based (cyan) water segmentation. Dots are the centroids of the segmented water extent. (b.2) Mask computed by WATER (automatic analysis). (b.3) Mask by manual segmentation.
Figure 9. Comparison between automatic and manual water segmentation in terms of water extent area (a.1); water extent perimeter (b.1); and centroid location (c.1,d.1). The red line represents perfect agreement between automatic and manually based water segmentation. Histograms of the absolute errors are also presented at the lower right of each subplot (histograms of the errors in terms of surface area, surface perimeter, and centroid x- and y-coordinates are in (a.2,b.2,c.2,d.2), respectively).
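The area and centroid comparisons in Figure 9 can be reproduced for any binary water mask. The following is a minimal sketch, not the WATER implementation itself; it assumes a NumPy boolean mask and uses the mean pixel-to-metre factor of 0.0062 (m/px) reported in Figure 2:

```python
import numpy as np

def mask_metrics(mask: np.ndarray, m_per_px: float):
    """Return the area (m^2) and centroid (px) of a boolean water mask."""
    ys, xs = np.nonzero(mask)               # pixel coordinates of the water class
    area_m2 = mask.sum() * m_per_px ** 2    # pixel count scaled to square metres
    centroid = (xs.mean(), ys.mean())       # centroid in (x, y) pixel coordinates
    return area_m2, centroid

# Toy example: a 6 x 6 px water patch in a 10 x 10 px frame
mask = np.zeros((10, 10), dtype=bool)
mask[2:8, 3:9] = True
area, (cx, cy) = mask_metrics(mask, 0.0062)
```

The same per-case metrics, compared against their manually delineated counterparts, yield the error statistics summarised in Table 2.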
Figure 10. Comparison of the processing time of all case studies analysed using WATER (yellow) and manually based (cyan) segmentation. (a.1) Processing time for the 27 case studies. (a.2) Histogram of the processing time of the automatic segmentation using WATER. (a.3) Histogram of the processing time of manual segmentation. Note that the processing time is always lower for WATER than for manual segmentation.
Figure 11. WATER software GUI with Grindsted case study (only for illustrative purposes, ID 6 in Supplementary Material E). (a) Home tab; (b) Results (river mask); (c) Results (symmetry axis and river width).
Table 1. GoPro Hero 5 camera intrinsic characteristics.

| Characteristic   | Item | Value       | Unit |
|------------------|------|-------------|------|
| Focal aperture   | D    | 1.345 × 10⁷ | (nm) |
| Optical centre Y | Y_o  | 1094        | (px) |
| Optical centre X | X_o  | 1928        | (px) |
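For context on the single-slit treatment applied to the camera aperture (Figure 5), the first diffraction minimum satisfies sin θ = λ/D. A minimal illustrative sketch follows; the mid-visible wavelength of 550 nm is an assumed value for demonstration, not a parameter from the paper, while D is the focal aperture from Table 1:

```python
import math

D_nm = 1.345e7           # focal aperture D from Table 1, in nm
wavelength_nm = 550.0    # assumed mid-visible wavelength (illustrative only)

# First single-slit diffraction minimum: sin(theta) = m * lambda / D, with m = 1
sin_theta = wavelength_nm / D_nm
theta_rad = math.asin(sin_theta)
print(f"first diffraction minimum at {theta_rad:.2e} rad")
```

At this aperture-to-wavelength ratio the diffraction angle is on the order of tens of microradians, i.e., small compared with the per-pixel angular resolution of the sensor.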
Table 2. Summary of WATER performance as a function of water extent area, water extent perimeter, and centroid coordinates. Errors were computed as (V_WATER − V_Manual)/V_Manual, where V_WATER is the variable in question retrieved by WATER and V_Manual is the variable in question retrieved by manual segmentation.

| ID | WATER Area (m²) | Manual Area (m²) | Error | WATER Perimeter (m) | Manual Perimeter (m) | Error | WATER Centroid X (px) | Manual Centroid X (px) | Error | WATER Centroid Y (px) | Manual Centroid Y (px) | Error |
|----|-----------------|------------------|-------|---------------------|----------------------|-------|-----------------------|------------------------|-------|-----------------------|------------------------|-------|
| 1  | 49.37  | 49.86  | −1.0%  | 36  | 46  | −20.4% | 2009 | 2040 | −1.5%  | 1304 | 1285 | 1.5%   |
| 2  | 50.60  | 54.15  | −6.6%  | 43  | 54  | −20.0% | 2209 | 2244 | −1.5%  | 1056 | 1080 | −2.2%  |
| 3  | 40.34  | 38.61  | 4.5%   | 42  | 58  | −27.1% | 2585 | 2652 | −2.5%  | 1173 | 1214 | −3.4%  |
| 4  | 120.48 | 117.50 | 2.5%   | 70  | 78  | −9.6%  | 2041 | 2047 | −0.3%  | 1246 | 1237 | 0.7%   |
| 5  | 130.67 | 140.15 | −6.8%  | 70  | 74  | −4.2%  | 1875 | 1955 | −4.1%  | 862  | 870  | −1.0%  |
| 6  | 79.33  | 77.98  | 1.7%   | 63  | 61  | 4.1%   | 1868 | 1897 | −1.5%  | 1227 | 1224 | 0.3%   |
| 7  | 130.36 | 129.06 | 1.0%   | 68  | 86  | −20.2% | 1899 | 1907 | −0.4%  | 934  | 928  | 0.7%   |
| 8  | 96.69  | 82.71  | 16.9%  | 72  | 82  | −12.7% | 1900 | 1720 | 10.5%  | 909  | 935  | −2.8%  |
| 9  | 163.92 | 167.01 | −1.9%  | 64  | 74  | −13.6% | 2007 | 1925 | 4.3%   | 997  | 1015 | −1.8%  |
| 10 | 202.37 | 182.83 | 10.7%  | 108 | 111 | −3.1%  | 2565 | 2606 | −1.6%  | 1010 | 1072 | −5.8%  |
| 11 | 86.95  | 87.47  | −0.6%  | 83  | 71  | 16.7%  | 1846 | 1813 | 1.8%   | 1501 | 1486 | 1.0%   |
| 12 | 144.88 | 143.08 | 1.3%   | 75  | 96  | −21.6% | 1775 | 1867 | −4.9%  | 1205 | 1208 | −0.2%  |
| 13 | 128.86 | 135.52 | −4.9%  | 82  | 97  | −14.9% | 1993 | 1955 | 1.9%   | 1001 | 1002 | −0.1%  |
| 14 | 93.55  | 92.96  | 0.6%   | 63  | 61  | 3.0%   | 1931 | 1934 | −0.1%  | 1045 | 1045 | 0.0%   |
| 15 | 55.69  | 53.59  | 3.9%   | 39  | 45  | −13.4% | 2169 | 2203 | −1.5%  | 1093 | 1075 | 1.7%   |
| 16 | 45.96  | 48.09  | −4.4%  | 30  | 38  | −20.8% | 1870 | 1832 | 2.1%   | 1086 | 1066 | 1.9%   |
| 17 | 131.19 | 92.12  | 42.4%  | 115 | 86  | 34.8%  | 1733 | 1591 | 8.9%   | 1276 | 1499 | −14.9% |
| 18 | 47.48  | 55.89  | −15.0% | 54  | 53  | 2.8%   | 1883 | 2007 | −6.2%  | 717  | 715  | 0.4%   |
| 19 | 133.01 | 130.95 | 1.6%   | 77  | 76  | 1.4%   | 1576 | 1704 | −7.5%  | 1510 | 1489 | 1.4%   |
| 20 | 563.26 | 545.42 | 3.3%   | 157 | 166 | −5.8%  | 1935 | 2005 | −3.5%  | 1065 | 1067 | −0.1%  |
| 21 | 132.73 | 104.81 | 26.6%  | 86  | 89  | −2.6%  | 1503 | 1682 | −10.7% | 1042 | 920  | 13.3%  |
| 22 | 456.72 | 474.06 | −3.7%  | 175 | 183 | −4.3%  | 2020 | 1987 | 1.6%   | 1066 | 1058 | 0.7%   |
| 23 | 76.88  | 75.74  | 1.5%   | 57  | 77  | −25.6% | 1956 | 1947 | 0.4%   | 1275 | 1271 | 0.4%   |
| 24 | 236.94 | 250.94 | −5.6%  | 103 | 126 | −17.7% | 1838 | 1788 | 2.8%   | 1251 | 1204 | 3.9%   |
| 25 | 77.61  | 61.69  | 25.8%  | 72  | 68  | 6.4%   | 2146 | 1885 | 13.8%  | 1239 | 1208 | 2.6%   |
| 26 | 47.19  | 45.47  | 3.8%   | 43  | 44  | −2.5%  | 1922 | 1915 | 0.4%   | 1112 | 1102 | 1.0%   |
| 27 | 122.45 | 124.34 | −1.5%  | 79  | 64  | 23.3%  | 1920 | 1900 | 1.0%   | 1117 | 1120 | −0.3%  |
| AVERAGE | | | 3.6% | | | −6.2% | | | 0.1% | | | 0.0% |
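The error columns in Table 2 follow the relative-error definition given in its caption. A minimal sketch reproducing, for example, the area error of case ID 1 (49.37 m² by WATER vs. 49.86 m² by manual segmentation):

```python
def relative_error(v_water: float, v_manual: float) -> float:
    """Relative error (V_WATER - V_Manual) / V_Manual, expressed in percent."""
    return (v_water - v_manual) / v_manual * 100.0

# Table 2, ID 1, area columns: rounds to the reported -1.0%
err_area_id1 = relative_error(49.37, 49.86)

# Table 2, ID 8, area columns: rounds to the reported 16.9%
err_area_id8 = relative_error(96.69, 82.71)
```

Note that the reported percentages are rounded to one decimal place, so small discrepancies can appear when recomputing perimeter errors from the rounded metre values shown in the table.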
Share and Cite

García, M.; Alcayaga, H.; Pizarro, A. Automatic Segmentation of Water Bodies Using RGB Data: A Physically Based Approach. Remote Sens. 2023, 15, 1170. https://doi.org/10.3390/rs15051170