Article

A Simple Cloud-Native Spectral Transformation Method to Disentangle Optically Shallow and Deep Waters in Sentinel-2 Images

by Chengfa Benjamin Lee 1,*, Dimosthenis Traganos 1 and Peter Reinartz 2
1 German Aerospace Center (DLR), Remote Sensing Technology Institute (IMF), Rutherfordstr. 2, 12489 Berlin, Germany
2 German Aerospace Center (DLR), Remote Sensing Technology Institute (IMF), 82234 Wessling, Germany
* Author to whom correspondence should be addressed.
Remote Sens. 2022, 14(3), 590; https://doi.org/10.3390/rs14030590
Submission received: 14 December 2021 / Revised: 13 January 2022 / Accepted: 21 January 2022 / Published: 26 January 2022
(This article belongs to the Special Issue Remote Sensing for Shallow and Deep Waters Mapping and Monitoring)

Abstract

This study presents a novel method to identify optically deep water using purely spectral approaches. Optically deep waters, where the seabed is too deep for a bottom reflectance signal to be returned, are uninformative for seabed mapping. Furthermore, owing to the attenuation of light in the water column, submerged vegetation at deeper depths is easily confused with optically deep waters, thereby inducing misclassifications that reduce the accuracy of these seabed maps. While bathymetry data could mask out deeper areas, they are not always available or of sufficient spatial resolution for use. Without bathymetry data and based on the coastal aerosol, blue and green (1-2-3) bands of Sentinel-2 imagery, this study investigates the use of band ratios and a false colour HSV transformation of both L1C and L2A images to separate optically deep and shallow waters across varying water quality over four tropical and temperate submerged sites: Tanzania, the Bahamas, the Caspian Sea (Kazakhstan) and the Wadden Sea (Denmark and Germany). Two supervised thresholds based on annotated reference data and an unsupervised Otsu threshold were applied. The band ratio group usually featured the best overall accuracies (OA), F1 scores and Matthews correlation coefficients (MCC), although no individual band combination performed consistently across the different sites. Meanwhile, the saturation and hue bands yielded close-to-best performance for the L1C and L2A images, respectively, featuring OAs of up to 0.93 and 0.98 and more consistent behaviour than the individual band ratios. Nonetheless, all these spectral methods remain susceptible to sunglint, the Sentinel-2 parallax effect, turbidity and water colour. Both supervised approaches performed similarly and were superior to the unsupervised Otsu's method: the supervised OAs were usually above 0.70, while the unsupervised OAs were usually below 0.80. In the absence of bathymetry data, this method can effectively remove optically deep water pixels in Sentinel-2 imagery and reduce the issue of dark pixel misclassification, thereby improving the benthic mapping of optically shallow waters and their seascapes.

Graphical Abstract

1. Introduction

Coastal seascape ecosystems such as seagrasses and coral reefs provide a multitude of highly valued services such as coastal protection, biodiversity maintenance, blue carbon sequestration as well as nursery and feeding grounds for many marine animals [1]. However, these ecosystems are also highly threatened and thus require urgent conservation and monitoring efforts [1,2]. Coastal aquatic remote sensing can help monitor these ecosystems by providing information on their benthic composition, water quality and bathymetry. Unlike terrestrial remote sensing, aquatic applications require an additional consideration of the water column and its interactions with the light signal [3], which has implications for seabed signals.

1.1. Seabed Remote Sensing

When the object(s) of interest lies on the benthic floor rather than floating near the water surface, the return signal is attenuated by the water column and its constituents [4,5,6], and the spectral profile of the reflected signal is increasingly modified with depth [7]. This effect on the benthic spectral signature not only reduces the spectral difference between benthic classes [7,8,9] but also restricts the informative bands mostly to the visible spectrum [4,10,11]. With sufficient depth, all information on the benthic cover is lost, and only the backscattering of the water column remains in these so-called optically deep waters [4,5]. These non-informative pixels should be removed from a seabed image, especially since some classes, such as the darker seagrasses, are easily confused with these deep waters [4,9,12]. However, identifying these optically deep waters can be a challenge.
There are many approaches to manage this, such as through water column correction, using bathymetry data or satellite-derived bathymetry (SDB) [13,14,15], object-based image analysis [7] and machine learning techniques [16]. Bathymetric data are usually exploited, since pixels deeper than 10 m have very weak benthic signals [5] or are quasi-optically deep [4]. Nonetheless, the benthic floor is still detectable in optimal conditions up to about 43 m [17]. The optical limit is also highly influenced by water quality, bottom cover and optical conditions during image acquisition [4], thereby suggesting potential information loss with an arbitrary depth threshold. Alternatively, semi-analytical approaches [4,15,18] such as the substratum detectability index (SDI) [5] could identify the optically shallow, quasi-optically deep and optically deep pixels via radiative transfer models. However, these underlying models are highly technical and might put off non-specialist researchers. Naturally, manual digitisation via photointerpretation is possible but labour-intensive [19]. Thus, it would still be optimal to have a simple yet effective method for masking out optically deep waters.
Such an approach might be possible when there is a correlation between pixel values and depth [20], as with band ratios [20,21,22] or log-transformed band ratios [18,23,24]. While deriving bathymetry would require external data for depth calibration, Legleiter et al. [20] found a strong linear relationship between radiometric values and bathymetric depth in shallow river waters up to 1 m deep, which could be extended to identify optically deep water pixels without bathymetric information. Nonetheless, the choice of band ratio is important, as shorter wavelengths have greater depth penetration but are susceptible to turbidity, while longer wavelengths, such as the red band with its limit of about 5 m [25], do not provide sufficient penetration or range [8,17]. Existing optimisation approaches for depth estimation, such as the Optimised Band Ratio Analysis (OBRA), Multiple Optimal Depth Predictors Analysis (MODPA) and the Sample-specific Multiple Band Ratio Technique for SDB (SMART-SDB) [20,26,27], can naturally be translated to the detection of optically deep and shallow waters. Nonetheless, given the challenges of the band ratios as well as the limited number of informative bands, alternative spectral approaches might prove helpful.

1.2. Colour Spaces in Remote Sensing

Multispectral and hyperspectral remote sensing both operate in the red-green-blue (RGB) colour space, expanding from three primary hues or wavelengths to the number of bands detected [4]. While the RGB system is based on the physics of light, an RGB composite may not resemble the colour implied by its wavelength inputs [28,29]. Alternatively, the hue-saturation-luminosity colour space is based on human colour perception [28]. If the full colour spectrum is considered as a wheel with red at the origin, the hue, or human-perceived colour, is the angle on this colour wheel, while saturation is the richness of a colour with respect to its luminosity or brightness. Both hue and saturation describe the spectral shape of a signal [28]. Although several luminosity variants exist, "value" is the one used in the hue-saturation-value (HSV) colour space.
The use of the HSV colour space is not new to remote sensing. Some of its applications include cloud and cloud shadow detection [30,31], crop and farmland detection [32,33], algal bloom detection [34], surface water extraction and river extraction [35,36], oceanic water assessments [37] and water quality retrieval [29,38]. Most recently, Lee et al. [6] related the true colour hue angle to the vertical variation of light quality in marine waters, as well as the different optical water types, which represent different aquatic environments globally [39,40]. Many applications rely on a true colour RGB to HSV transformation, even when using a hyperspectral sensor [41]. However, the red wavelength, with its low penetrative property, is less useful at deeper depths [8]. An alternative is to transform a false colour RGB image instead, as there are no restrictions on the input bands [41]. The false colour RGB is most notably seen in band combinations for visualisation purposes and has been used to successfully detect global surface water using the high-resolution Landsat series [42]. In conjunction with the aforementioned difference in depth penetration between wavelengths as well as the general use of blue and green bands for oceanic remote sensing [43], this theoretically suggests that a coastal aerosol-blue-green wavelength (~430–560 nm central wavelength range) false-colour-derived HSV image could provide the required information to distinguish between the optically deep and shallow water pixels.
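To make the hue-saturation intuition concrete, the short sketch below (not from the original study) feeds two hypothetical 1-2-3 false colour reflectance triplets through Python's standard colorsys conversion: a brighter, green-shifted "shallow" pixel and a darker, blue-dominated "deep" pixel separate cleanly in hue, saturation and value. The reflectance numbers are invented for illustration only.

```python
import colorsys

# Hypothetical band 1-2-3 reflectances mapped onto the R, G, B slots of a
# false colour composite, scaled to [0, 1]; values are illustrative only.
pixels = {
    "shallow": (0.08, 0.11, 0.14),  # brighter, green-shifted seabed return
    "deep": (0.05, 0.03, 0.02),     # darker, blue-dominated water column
}

for label, rgb in pixels.items():
    h, s, v = colorsys.rgb_to_hsv(*rgb)
    print(f"{label}: hue={h:.3f} saturation={s:.3f} value={v:.3f}")
```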

1.3. Study Aim

We employ Sentinel-2 imagery to develop and test our method as it features the aforementioned wavelengths of interest, high spatial resolution of 10 m, high temporal resolution of 5 days and widespread use in coastal aquatic remote sensing. By focusing on the bands B1–B3, this study aims to investigate if the resultant band ratios and coastal aerosol-blue-green false colour HSV transformation of the Sentinel-2 L1C and L2A images can distinguish between optically deep and optically shallow waters in the absence of bathymetry data.

2. Materials and Methods

2.1. Study Area and Data

Four sites were selected to represent different environmental conditions in order to investigate the effectiveness of the different indices under each (Table 1 and Figure 1). The images were either cloud-free or contained as little cloud cover as possible. The Tanzanian and Bahamian sites were chosen as representatives of tropical coastal waters; while the selected Tanzanian image had issues with sunglint, the selected Bahamian image had highly optimal conditions. Meanwhile, the Caspian Sea and Wadden Sea feature highly turbid, temperate waters, inland freshwater and coastal marine, respectively.
The following datasets were used: Sentinel-2 MSI: MultiSpectral Instrument, Top-Of-Atmosphere Level-1C (L1C) and Sentinel-2 MSI: MultiSpectral Instrument, Bottom-Of-Atmosphere Level-2A (L2A).

2.2. Preprocessing

Most of the processing was performed in the Google Earth Engine (GEE) cloud computing platform [44]. Thus, unless otherwise stated, any functions mentioned are GEE native functions, products or datasets. The same image ID was used for both the L1C and L2A products (Table 1). Clouds were excluded using both the QA band and the Sentinel-2: Cloud Probability dataset set to a 40% threshold. Although atmospheric correction for the air and water column is standard for coastal water image analysis [9], Lee et al. [16] bypassed atmospheric correction by using machine learning methods for their semi-analytical model on Landsat 8 TOA images, while Thomas et al. [45] deliberately omitted atmospheric correction in order to evaluate the errors between the Sentinel-2 L1C and L2A products for their SDB method. As preliminary trials had shown that no further atmospheric correction was required when applying this method to single scenes, no correction was performed. In line with Thomas et al. [45], sunglint correction was also not performed in order to investigate its effects on the methods of interest.
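As a rough illustration of this preprocessing, the sketch below uses the GEE Python API; the scene ID is a placeholder (the real IDs are in Table 1), and the 40% cloud probability threshold follows the text. This is a minimal sketch, not the study's exact script.

```python
import ee
ee.Initialize()

# Placeholder granule ID; the study reuses the same ID for L1C and L2A (Table 1).
IMAGE_ID = '20200101T000000_20200101T000000_T00XXX'

l1c = ee.Image('COPERNICUS/S2/' + IMAGE_ID)      # Top-of-atmosphere L1C
l2a = ee.Image('COPERNICUS/S2_SR/' + IMAGE_ID)   # Bottom-of-atmosphere L2A

# Companion cloud probability image for the same granule.
cloud_prob = (ee.ImageCollection('COPERNICUS/S2_CLOUD_PROBABILITY')
              .filter(ee.Filter.eq('system:index', IMAGE_ID))
              .first())

def mask_clouds(img, prob, threshold=40):
    """Combine the QA60 cloud bits with the cloud probability layer."""
    qa = img.select('QA60')
    # Bits 10 and 11 flag opaque and cirrus clouds, respectively.
    qa_clear = (qa.bitwiseAnd(1 << 10).eq(0)
                .And(qa.bitwiseAnd(1 << 11).eq(0)))
    prob_clear = prob.select('probability').lt(threshold)
    return img.updateMask(qa_clear.And(prob_clear))

l1c_masked = mask_clouds(l1c, cloud_prob)
l2a_masked = mask_clouds(l2a, cloud_prob)
```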
An adaptive Otsu’s threshold of the NDWI was then applied to all images of both products to mask out the land pixels [46].
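A minimal sketch of the Otsu step is given below, assuming NDWI values have already been sampled from the scene (e.g., via img.normalizedDifference on the green and NIR bands); the synthetic histogram stands in for real samples, and the adaptive, bounded variant of [46] is not reproduced here.

```python
import numpy as np

def otsu_threshold(values, bins=256):
    """Classic Otsu: pick the split that maximises between-class variance."""
    hist, edges = np.histogram(values, bins=bins)
    centers = (edges[:-1] + edges[1:]) / 2.0
    best_thresh, best_score = centers[0], -1.0
    for i in range(1, bins):
        w0, w1 = hist[:i].sum(), hist[i:].sum()
        if w0 == 0 or w1 == 0:
            continue  # degenerate split, skip
        m0 = (hist[:i] * centers[:i]).sum() / w0
        m1 = (hist[i:] * centers[i:]).sum() / w1
        score = w0 * w1 * (m0 - m1) ** 2  # proportional to between-class variance
        if score > best_score:
            best_score, best_thresh = score, centers[i]
    return best_thresh

# Synthetic NDWI samples: water pixels cluster high, land pixels low.
rng = np.random.default_rng(0)
ndwi = np.concatenate([rng.normal(0.5, 0.1, 1000), rng.normal(-0.3, 0.1, 1000)])
print(f"land/water threshold: {otsu_threshold(ndwi):.2f}")
```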

2.3. Vector Dataset and Indices

Fifty line vectors were manually digitised from their corresponding scenes by visual interpretation in the GEE web-based interface. Since the optically deep property is similar to an apparent optical property, it is an image-specific property related to bathymetric depth but scene and sensor dependent [5]. Unlike the SDI [5], only two classes, optically deep and optically shallow, were used; the quasi-optically deep class was ignored owing to the difficulty of visually distinguishing it from the other two. The deep water regions were identified as consistent, large areas of dark pixels connected to the larger ocean or sea in the image, which could thus be distinguished from deeper but still optically shallow dark benthic cover. From the visually identified shallow-deep boundary, line vectors were drawn on both sides of the boundary, perpendicular to the shore, to imitate a field survey's line transect from shallow to deep and to better represent the gradual gradient between classes. Each transect consists of one deep water and one shallow water vector of similar length. In areas where a clear and distinct shallow-deep boundary could not be identified, no vectors were drawn. Where the optically shallow-deep border was insufficient to contain all the vectors without possible spatial autocorrelation, the remaining vectors were annotated on suitable pixels elsewhere in the image (Figure 2).
As the line vectors were created via photointerpretation, some biases from the base image are unavoidable. As such, bathymetric inputs from the ETOPO1 Global Arc-Minute Elevation (ETOPO) [47], the General Bathymetric Chart of the Oceans (GEBCO) [48] and a Sentinel-2 image-based SDB [15] were used to evaluate the labels for all sites. As ICESAT-2 data for 2018 and 2021 are available for the Bahamian site, an SDB was created from the same image in Table 1 based on a 7:3 split using three approaches: spatial, random and temporal [45]. In addition, the Kd photosynthetically active radiation (PAR) and seabed PAR datasets for 2005–2009 by the European Marine Observation and Data Network (EMODnet) are available at the Wadden Sea site to assess light penetration into the water column [49]. Other than the image-based SDB, these auxiliary datasets do not match the time period of the images, because the images had been pre-selected as described in Section 2.1. A comparison of depth values between classes using Welch's unequal variance t-test was performed in R version 4.1.0 [50]. A continuous rather than a binary optically deep–optically shallow comparison was chosen because the binary version would require photointerpreting depths into class labels in the image(s), which would have defeated the purpose of this calibration. The combined analysis of all these datasets showed that the vectors were reasonably placed to detect and label pixels as optically deep and optically shallow (Figure 3), given their very significantly different group means (all p values < 0.001). However, a single depth threshold is insufficient as a blanket cut-off across a single image, as seen by some overlaps in bathymetry depth values between the optically deep and optically shallow vectors; this is especially true for the coarser-resolution ETOPO1, GEBCO and EMODnet datasets. Nonetheless, since the interquartile ranges overlap only in Tanzania (Figure 3), these line vectors were deemed sufficient for use in this study. The reader is directed to Appendix A Table A1 for the p values of all test combinations.
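The study ran Welch's test in R; the snippet below sketches an equivalent check in Python with SciPy's unequal-variance option, using invented depth samples in place of the real GEBCO or SDB values under the vectors.

```python
import numpy as np
from scipy import stats

# Invented depths (m) sampled under the optically shallow and deep vectors;
# real values would come from GEBCO, ETOPO or the image-based SDB.
shallow_depths = np.array([-1.8, -2.4, -3.1, -2.0, -4.2, -2.9])
deep_depths = np.array([-16.5, -22.0, -27.8, -19.3, -31.2, -24.6])

# equal_var=False selects Welch's unequal-variance t-test.
t_stat, p_value = stats.ttest_ind(shallow_depths, deep_depths, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.2e}")
```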
A 7:3 training–validation split (training data—TD, validation data—VD) of the vector dataset was performed for the subsequent k-fold cross-validation, where k = 10 as commonly used in the literature [51], based on GEE seeds 0 to 9 in the randomColumn function. The same images were used throughout because the optically deep–shallow boundary was observed to shift marginally over time under slightly different environmental conditions. Such a small border shift might affect the ends of the drawn transects and contribute to the mislabelling of border pixels, which are crucial for determining the threshold between the two optical classes.
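A sketch of the repeated 7:3 split is shown below, continuing the GEE Python sketch above and assuming the annotated transects are available as an ee.FeatureCollection named vectors.

```python
# Ten folds from GEE seeds 0..9: each seed adds a fresh uniform 'random'
# column, which is split 7:3 into training and validation subsets.
folds = []
for seed in range(10):
    randomised = vectors.randomColumn('random', seed)
    training = randomised.filter(ee.Filter.lt('random', 0.7))
    validation = randomised.filter(ee.Filter.gte('random', 0.7))
    folds.append((training, validation))
```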
Six indices were compared in this study: the three 1-2-3 false colour HSV bands and the band ratios B1/B2, B1/B3 and B2/B3 (Figure 4 and Figure 5). B1, B2 and B3 were selected as they have the best water penetration, meaning that they are more likely to pick up the bottom signal at deeper depths than the other bands, such as B4, which attenuates quickly [8,52]. A 3 × 3 low pass filter was used to smooth the image before the transformations.
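Continuing the sketch, the smoothing step might look as follows; ee.Kernel.square with normalize=True gives a 3 × 3 mean (low pass) kernel.

```python
# 3 x 3 normalised (mean) kernel as the low pass filter, applied to the
# three penetrative bands before the HSV and band ratio transformations.
low_pass = ee.Kernel.square(radius=1, units='pixels', normalize=True)
smoothed = l1c_masked.select(['B1', 'B2', 'B3']).convolve(low_pass)
```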
For the HSV bands, the rgbToHsv function was used to transform the 1-2-3 false colour image into its HSV components, as seen in Figure 4 and Figure 5. This GEE function is based on Java's Color.RGBtoHSB function, which normalises its outputs to a scale between 0 and 1 and follows the standard HSV transformation algorithms [28,29]:
$$\Delta = C_{\max} - C_{\min}$$

$$\mathrm{Saturation} = \begin{cases} 0, & C_{\max} = 0 \\ \Delta / C_{\max}, & C_{\max} \neq 0 \end{cases}$$

$$\mathrm{Hue} = \begin{cases} 0, & \Delta = 0 \\ \frac{1}{6} \cdot \frac{G - B}{\Delta}, & C_{\max} = R,\; G \geq B \\ 1 + \frac{1}{6} \cdot \frac{G - B}{\Delta}, & C_{\max} = R,\; G < B \\ \frac{1}{3} + \frac{1}{6} \cdot \frac{B - R}{\Delta}, & C_{\max} = G \\ \frac{2}{3} + \frac{1}{6} \cdot \frac{R - G}{\Delta}, & C_{\max} = B \end{cases}$$

$$\mathrm{Value} = \begin{cases} C_{\max}, & C_{\max} < 1 \\ 1, & C_{\max} \geq 1 \end{cases}$$

where $R$, $G$ and $B$ are the three input bands, $C_{\max}$ and $C_{\min}$ are their per-pixel maximum and minimum, and $\Delta$ is the chroma.
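In GEE, the transformation then reduces to a single call, sketched below as a continuation of the preprocessing above; rgbToHsv reads the first three bands as R, G and B and expects values in [0, 1], hence the division by the Sentinel-2 scaling factor of 10,000.

```python
# 1-2-3 false colour composite: B1 -> R, B2 -> G, B3 -> B, rescaled to [0, 1].
false_colour = smoothed.select(['B1', 'B2', 'B3']).divide(10000)

# Yields three bands named 'hue', 'saturation' and 'value'.
hsv = false_colour.rgbToHsv()
```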
The band ratios are an extension of Lyzenga's relative depth index [18,23,24]. While many studies employ log-transformed band ratios [20,43], the transformation is not required for the purpose of simple thresholding. The simple band ratio calculation was applied to the smoothed image to obtain the band ratio indices seen in Figure 4 and Figure 5. Given only three possible band ratio combinations, all of them were tested instead of using an optimised approach. Unlike the HSV bands, the ranges are not normalised and are instead arbitrarily capped between −10 and 10 to mitigate very large pixel values arising from divisions by a very small denominator (Figure 6).
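The band ratio indices are equally compact; a minimal sketch under the same assumptions as the snippets above:

```python
def band_ratio(img, numerator, denominator):
    """Simple band ratio, capped to [-10, 10] to tame tiny denominators."""
    return (img.select(numerator)
            .divide(img.select(denominator))
            .clamp(-10, 10)
            .rename(numerator + '_' + denominator))

b1_b2 = band_ratio(smoothed, 'B1', 'B2')
b1_b3 = band_ratio(smoothed, 'B1', 'B3')
b2_b3 = band_ratio(smoothed, 'B2', 'B3')
```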

2.4. Deep Water Extraction

Using only the training dataset, the thresholds of the HSV bands and band ratios were set according to Table 2. The unsupervised method was chosen because the difference in pixel values between the optically deep and shallow pixels (Figure 6), as well as a reported steep chromocline in the MODIS true colour hue at ~25 m [6], seemed similar to the difference in MNDWI values between land and water pixels [46]. Subsequently, the connectedPixelCount function was used to create clusters based on four-connected pixel connectivity. Clusters of fewer than 100 pixels were then identified and masked out to reduce noise and mitigate the "salt-and-pepper" commission of deep aquatic vegetation into the deep water regions [17].
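A sketch of this extraction step, continuing from the indices above: the 0.5 cut-off is a placeholder for a trained or Otsu-derived threshold, and whether deep water falls below or above the threshold depends on the index.

```python
# Placeholder threshold; in practice it comes from the supervised
# Best OA / Cross PAUA procedures or from Otsu's method (Table 2).
deep = hsv.select('saturation').lt(0.5)

# Four-connected clusters; patches of fewer than 100 pixels are treated
# as noise (e.g., dark vegetation speckle) and masked out.
cluster_size = deep.selfMask().connectedPixelCount(maxSize=128,
                                                   eightConnected=False)
deep_clean = deep.updateMask(cluster_size.gte(100)).unmask(0)
```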

2.5. Validation

Using the validation dataset of each k-fold, the overall accuracy (OA), F1 score (F1) and the Matthews correlation coefficient (MCC) were calculated. The former two metrics are commonly used in remote sensing, while the latter is recommended for evaluating binary classifications [53].
Given class i, with PA_i and UA_i as its producer's and user's accuracy, respectively, the class-specific F1 score is:

$$F1_i = \frac{2 \times PA_i \times UA_i}{PA_i + UA_i}$$
As the class of greater importance for a deep water mask is the shallow water class, the F1 score for the shallow water class is used for comparison.
By assuming that the two-class error matrix is similar to a standard statistical error matrix, we can relate both matrices together as seen in the table below (Table 3).
MCC is a binary metric that equally weighs both classes and thus only achieves high scores when both are correctly predicted. It ranges from −1 to 1, with 0 as an outcome that is equivalent to chance [53]. By assuming that class 1 represents positive and class 2 represents negative, the MCC can be easily transferred to a two-class classification. Thus, the MCC equation is:
$$MCC = \frac{(TP \times TN) - (FP \times FN)}{\sqrt{(TP + FP) \times (TP + FN) \times (TN + FP) \times (TN + FN)}}$$
In monoclass outcomes, where all predictions fall into a single class, MCC faces a potential issue whereby the denominator is zero and the result undefined. In such cases, any partial sum in the denominator that equals zero is substituted with a small arbitrary value [53]. With this substitution, either monoclass outcome yields an MCC of zero instead of an undefined value.
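Both metrics can be written directly from the confusion-matrix counts; the sketch below folds the PA/UA definition of F1 into its 2TP form and applies the small-value substitution for zero partial sums.

```python
import math

def f1_shallow(tp, fp, fn):
    """F1 for the shallow (positive) class; equals 2*PA*UA/(PA+UA)."""
    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom else 0.0

def mcc(tp, tn, fp, fn, eps=1e-9):
    """MCC with zero partial sums replaced by a small value, per [53]."""
    denom = math.sqrt(max(tp + fp, eps) * max(tp + fn, eps)
                      * max(tn + fp, eps) * max(tn + fn, eps))
    return (tp * tn - fp * fn) / denom
```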
Taking the different k-fold validation results into consideration, the mean and 95% confidence interval of each metric were then calculated in R version 4.1.0 [50] and compared across the various methods.
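The study computed these summaries in R; an equivalent Python sketch with hypothetical fold scores is given below, using the t distribution for the 95% half-width over k = 10 folds.

```python
import numpy as np
from scipy import stats

def mean_ci95(scores):
    """Mean and 95% confidence half-width over k-fold metric scores."""
    scores = np.asarray(scores, dtype=float)
    half_width = stats.t.ppf(0.975, df=len(scores) - 1) * stats.sem(scores)
    return scores.mean(), half_width

oa_folds = [0.81, 0.79, 0.83, 0.80, 0.82, 0.78, 0.81, 0.84, 0.80, 0.82]  # hypothetical
mean, ci = mean_ci95(oa_folds)
print(f"OA = {mean:.2f} ± {ci:.2f}")
```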

3. Results

3.1. Performance Comparison between Band Indices

The performances of the Sentinel-2 L1C products are generally similar to those of their L2A counterparts, except that saturation is the most consistent band for the former and hue for the latter. While saturation was at least on par with the second-best site-specific band ratio for the L1C product, hue performed best most of the time for the L2A product (Figure 7 and Figure 8). The best band index had large overlaps in 95% confidence intervals with the other band indices, except for the L1C product in the Bahamas, where the best index, B1/B2, had a non-overlapping 95% confidence interval with the second-best band index, hue. Meanwhile, the band ratios were more site-specific. For brevity, only the L1C classification maps are displayed; readers are referred to Appendix A Figure A1, Figure A2, Figure A3 and Figure A4 for the L2A classification maps.
The optimal waters of the Bahamas allowed all bands to perform quantitatively well, even the value band, whose metrics outperformed saturation in the L1C product despite a worse classification map (Figure 9). The B1/B2 band ratio performed best in these clear waters too, with the Cross PAUA method achieving a mean OA of 0.98 ± 0.00 and the Best OA method 0.99 ± 0.00 for the L1C product, in contrast to its relatively worse performance among the band ratios at all other sites with their less optimal environmental conditions. B1/B2 also produced the best classification map here, with the least under- or over-commission.
In the more turbid waters of the Caspian Sea and Wadden Sea, the B2/B3 band ratio was the best band combination (Figure 10 and Figure 11). There is a contrasting quantitative performance between hue and saturation for both products using the Cross PAUA method, which translated into an inverted map (Figure 11). This is reflected in the very low OA and F1 scores and, most evidently, in the highly negative MCC scores (Figure 7 and Figure 8), suggesting that the MCC can better highlight potential inversion because it places a lack of relationship at 0 rather than at an OA of 0.50. In particular, the lack of an inverted map for the Best OA method (Figure 10) when compared to the Cross PAUA method suggests that minimising estimation errors during training would produce better results than merely focusing on the best overall accuracy.
In the sunglinted waters of Tanzania, the B2/B3 band ratio was the best for the L1C product, with a mean OA of 0.81 ± 0.04 using the Best OA method, while saturation had a corresponding value of 0.77 ± 0.04. For the L2A product, hue was the best at 0.87 ± 0.03 using the Best OA method, while B1/B3 was the best band ratio, also at 0.87 ± 0.03, preserving more shallow water pixels than B2/B3. Notably, sunglint and the Sentinel-2 parallax effect posed a challenge, with noise in the eastern waters for the B1/B3 band ratio and saturation, and distinct commission of sunglinted pixels along the parallax strips for the other indices (Figure 12). The parallax effect is also subtly present in the Bahamas, for example as diagonal noise patterns in the B2/B3 classification map (Figure 9).

3.2. Performance Comparison between Threshold Methods

The Best OA and Cross PAUA methods had similar performances, with mean OAs usually above 0.70 and very close recommended threshold values. While the Cross PAUA method has an edge when considering the inverted maps shown in Section 3.1, the Best OA method allows other bands to produce a consistently better mean OA at almost all sites.
Comparatively, more of the unsupervised method's mean OA scores were below 0.80. As an unsupervised method, the Otsu-selected threshold remained constant throughout the k-fold. Yet, it yielded larger margins of error for the confidence intervals of the various metrics than the corresponding Best OA and Cross PAUA results in the Wadden Sea (Figure 7). While this implies that the k-fold was successful in producing different training and validation datasets, the unsuitability of Otsu's method for these spectral indices is also highly evident.

4. Discussion

This study explores the potential of using spectral features from another colour space to differentiate between optically deep and optically shallow waters. This is less commonly investigated, since many studies focus on shallower depths to reduce or avoid uncertainty errors [26,27]. As purely spectral methods, the band ratios and false colour HSV-transformed bands provide a user-friendly option to mask out deep waters.

4.1. Atmospheric Correction and Its Interaction with HSV

Interestingly, no additional atmospheric or water column correction is required before the masking process. This is atypical for coastal remote sensing [3] but not without precedent [16,45]. On the other hand, the lack of atmospheric correction leads to a classification that is more susceptible to environmental conditions. Sunglinted and turbid pixels, as well as pixels lying in the brighter parallax stripes, might introduce underestimations in depth [6,54]. Furthermore, the parallax effect is especially apparent over the non-Lambertian sea surface [54], and some noise currently persists despite a morphological filter. Thus, this study does not advocate disregarding atmospheric correction altogether. Regardless, this approach is transferable to other sites, since the optical properties of water are generally similar in tropical and temperate coastal aquatic regions [55].
The finding that the bottom-of-atmosphere L2A images work better with hue might explain the prominence of the true colour hue in ocean colour studies across different sensors [6,37,56] over saturation. Many of these studies are more interested in the water colour or water column signals, whereas our study targets the substratum or benthic seabed signal. This suggests possibilities in the form of alternative band combinations suited to different purposes or even to different types of aquatic environments, such as rivers.

4.2. Comparison with Systematic Optimisation

Since this study showed that different band pairs perform best at different sites, it would logically follow that a systematic optimisation approach such as OBRA, MODPA or SMART-SDB would provide competitive performances. This is in line with Lee et al. [16], who found that the Rrs of shallow waters are governed by multiple variables and no single band ratio can account for all these variations. However, owing to a lack of normalisation and the potential for overcorrection to produce extreme values, the dynamic ranges of band ratios need to be managed. The computational demand might be greater if the optimisation search algorithm is iterative and reliant on the range itself [29]. Regardless, the optimisation approaches are still available to users.

4.3. Thresholding

While the boundary between both classes might not align with the bathymetric contours, owing to possible heterogeneity across the images as well as between different seabed covers [57], the supervised methods proved the approach feasible. Unfortunately, the selected unsupervised method was not compatible with this application. Beyond Otsu's method, alternative possibilities could be explored, such as the recent development of a transferable SDB approach based on multitemporal compositing and a semi-analytical model that is currently restricted to clearer waters [15], feature subspace methods that separate spectral clusters before applying spectral models to obtain the classification [26,45], other adaptive threshold techniques, as well as object-based image analysis [7,58]. The development of a suitable automated thresholding approach would greatly assist the HSV approach and the other indices.

5. Conclusions

While an uncommon approach, hue has previously been established to identify different optical water types as well as vertical variations in the water column. Existing usage relies on original or reconstructed true colour images from remote sensing reflectance. In comparison, this study showed the viability of 1-2-3 false colour-derived saturation and hue bands to separate the optically deep and shallow waters in Sentinel-2 L1C and L2A images, respectively, over four wide-ranging tropical and temperate water quality environments without any additional atmospheric correction or bathymetry data.
Both the band ratios and 1-2-3 false colour HSV bands were able to separate optically deep and shallow waters using a supervised threshold. While some of the band ratios might have a slightly better performance quantitatively than the HSV bands, their individual performances are highly reliant on the water quality and, by extension, location. In comparison, the hue and saturation bands have a greater tolerance to more water types and are easily transferable to other sites or regions with minimal finetuning. Therefore, they show more consistency in performance and do not require optimisation. The current lack of an unsupervised adaptive threshold necessitates additional experimentation to improve its automation, scalability and effectiveness. GEE offers a computationally efficient environment for application and scalability of the HSV features across a wide range of spatial scales and coastal aquatic remote sensing applications using open and dense satellite time series, including turbidity and SDB estimations, and benthic habitat mapping.

Author Contributions

C.B.L. conceived the idea, designed the work, annotated the training data, developed the cloud-based framework and wrote the manuscript; C.B.L., D.T. and P.R. revised the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

C.B.L. is supported by the DLR-DAAD Research Fellowship (No. 57478193). D.T. is funded by DLR through the Global Seagrass Watch technology marketing project.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data available upon request.

Acknowledgments

We thank Nathan Marc Thomas (NASA) for processing the ICESAT2 data. We are also grateful to four anonymous reviewers for their comprehensive feedback on this manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Table A1. Welch's test p-values (scientific notation) showing the difference in group means between the optically deep and optically shallow classes over four single scene Sentinel-2 images. The results for the L1C and L2A products are highly similar. Dashes indicate an absence of valid data for the site. ETOPO—ETOPO1 Global Arc-Minute Elevation, GEBCO—General Bathymetric Chart of the Oceans, SDB—Sentinel-2 image-based satellite-derived bathymetry based on [15], I-S—ICESAT2-derived SDB based on [45] using pooled ICESAT2 data that was spatially split, I-R—the same ICESAT2 data randomly split, I-T—the same ICESAT2 data temporally split, E-K—European Marine Observation and Data Network (EMODnet) dataset for Kd PAR, E-S—EMODnet dataset for seabed PAR.

| Site | ETOPO | GEBCO | SDB | I-S | I-R | I-T | E-K | E-S |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Bahamas | 2.8 × 10⁻¹³ | 2.2 × 10⁻¹⁶ | 2.2 × 10⁻¹⁶ | 2.2 × 10⁻¹⁶ | 2.2 × 10⁻¹⁶ | 2.2 × 10⁻¹⁶ | — | — |
| Caspian Sea | 3.1 × 10⁻⁶ | 2.2 × 10⁻¹⁶ | 2.2 × 10⁻¹⁶ | — | — | — | — | — |
| Tanzania | 2.7 × 10⁻⁴ | 1.1 × 10⁻¹¹ | 2.2 × 10⁻¹⁶ | — | — | — | — | — |
| Wadden Sea | 2.2 × 10⁻¹⁶ | 2.2 × 10⁻¹⁶ | 2.2 × 10⁻¹⁶ | — | — | — | 2.2 × 10⁻¹⁶ | 2.2 × 10⁻¹⁶ |
Table A2. Mean and 95% confidence interval of the overall accuracy (OA) over four single scene Sentinel-2 L1C images, six band indices and three threshold methods, to 2 decimal places. CP—Cross PAUA, BO—Best OA, Otsu—unsupervised Otsu's threshold.

| Site | Threshold Type | B1/B2 | B1/B3 | B2/B3 | Hue | Saturation | Value |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Bahamas | CP | 0.98 ± 0.00 | 0.92 ± 0.01 | 0.82 ± 0.02 | 0.93 ± 0.02 | 0.92 ± 0.01 | 0.95 ± 0.03 |
| Bahamas | BO | 0.99 ± 0.00 | 0.93 ± 0.01 | 0.88 ± 0.02 | 0.94 ± 0.01 | 0.93 ± 0.01 | 0.95 ± 0.02 |
| Bahamas | Otsu | 0.79 ± 0.04 | 0.70 ± 0.04 | 0.68 ± 0.05 | 0.50 ± 0.07 | 0.70 ± 0.05 | 0.49 ± 0.07 |
| Caspian Sea | CP | 0.61 ± 0.05 | 0.71 ± 0.06 | 0.79 ± 0.05 | 0.16 ± 0.04 | 0.71 ± 0.06 | 0.56 ± 0.05 |
| Caspian Sea | BO | 0.59 ± 0.06 | 0.77 ± 0.06 | 0.77 ± 0.05 | 0.43 ± 0.04 | 0.77 ± 0.06 | 0.50 ± 0.06 |
| Caspian Sea | Otsu | 0.59 ± 0.05 | 0.66 ± 0.04 | NA | 0.50 ± 0.07 | 0.62 ± 0.04 | NA |
| Tanzania | CP | 0.74 ± 0.05 | 0.78 ± 0.04 | 0.79 ± 0.05 | 0.45 ± 0.05 | 0.78 ± 0.04 | 0.62 ± 0.08 |
| Tanzania | BO | 0.77 ± 0.07 | 0.79 ± 0.04 | 0.81 ± 0.04 | 0.51 ± 0.10 | 0.79 ± 0.04 | 0.57 ± 0.07 |
| Tanzania | Otsu | 0.82 ± 0.04 | 0.76 ± 0.04 | 0.79 ± 0.04 | 0.59 ± 0.08 | 0.86 ± 0.04 | NA |
| Wadden Sea | CP | 0.90 ± 0.03 | 0.94 ± 0.02 | 0.94 ± 0.02 | 0.08 ± 0.02 | 0.93 ± 0.02 | 0.85 ± 0.02 |
| Wadden Sea | BO | 0.91 ± 0.02 | 0.92 ± 0.02 | 0.93 ± 0.02 | 0.45 ± 0.04 | 0.93 ± 0.02 | 0.86 ± 0.03 |
| Wadden Sea | Otsu | 0.56 ± 0.06 | 0.89 ± 0.04 | 0.91 ± 0.04 | 0.52 ± 0.05 | 0.89 ± 0.03 | NA |
Table A3. Mean and 95% confidence interval of the F1 score for the shallow water class over four single scene Sentinel-2 L1C images, six band indices and three threshold methods, to 2 decimal places. CP—Cross PAUA, BO—Best OA, Otsu—unsupervised Otsu's threshold.

| Site | Threshold Type | B1/B2 | B1/B3 | B2/B3 | Hue | Saturation | Value |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Bahamas | CP | 0.98 ± 0.01 | 0.92 ± 0.02 | 0.82 ± 0.03 | 0.93 ± 0.02 | 0.92 ± 0.02 | 0.95 ± 0.03 |
| Bahamas | BO | 0.99 ± 0.00 | 0.93 ± 0.01 | 0.87 ± 0.02 | 0.94 ± 0.01 | 0.93 ± 0.01 | 0.95 ± 0.02 |
| Bahamas | Otsu | 0.74 ± 0.02 | 0.59 ± 0.04 | 0.55 ± 0.05 | 0.07 ± 0.01 | 0.58 ± 0.05 | 0.00 ± 0.00 |
| Caspian Sea | CP | 0.60 ± 0.07 | 0.71 ± 0.06 | 0.79 ± 0.04 | 0.14 ± 0.03 | 0.71 ± 0.06 | 0.55 ± 0.07 |
| Caspian Sea | BO | 0.63 ± 0.08 | 0.79 ± 0.06 | 0.79 ± 0.04 | 0.28 ± 0.16 | 0.79 ± 0.06 | 0.44 ± 0.13 |
| Caspian Sea | Otsu | 0.47 ± 0.07 | 0.56 ± 0.07 | NA | 0.08 ± 0.03 | 0.43 ± 0.08 | NA |
| Tanzania | CP | 0.74 ± 0.05 | 0.77 ± 0.05 | 0.78 ± 0.05 | 0.44 ± 0.04 | 0.77 ± 0.05 | 0.65 ± 0.06 |
| Tanzania | BO | 0.74 ± 0.06 | 0.77 ± 0.04 | 0.80 ± 0.04 | 0.43 ± 0.04 | 0.77 ± 0.04 | 0.58 ± 0.03 |
| Tanzania | Otsu | 0.78 ± 0.03 | 0.76 ± 0.05 | 0.78 ± 0.05 | 0.19 ± 0.02 | 0.82 ± 0.03 | NA |
| Wadden Sea | CP | 0.89 ± 0.04 | 0.93 ± 0.03 | 0.94 ± 0.03 | 0.09 ± 0.04 | 0.93 ± 0.03 | 0.83 ± 0.02 |
| Wadden Sea | BO | 0.90 ± 0.03 | 0.92 ± 0.02 | 0.92 ± 0.03 | 0.24 ± 0.18 | 0.92 ± 0.02 | 0.85 ± 0.02 |
| Wadden Sea | Otsu | 0.17 ± 0.06 | 0.89 ± 0.04 | 0.91 ± 0.04 | 0.03 ± 0.03 | 0.86 ± 0.04 | NA |
Table A4. Mean and 95% confidence interval of the Matthews correlation coefficient (MCC) over four single scene Sentinel-2 L1C images, six band indices and three threshold methods, to 2 decimal places. CP—Cross PAUA, BO—Best OA, Otsu—unsupervised Otsu's threshold.

| Site | Threshold Type | B1/B2 | B1/B3 | B2/B3 | Hue | Saturation | Value |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Bahamas | CP | 0.96 ± 0.01 | 0.84 ± 0.02 | 0.65 ± 0.04 | 0.85 ± 0.03 | 0.84 ± 0.02 | 0.89 ± 0.05 |
| Bahamas | BO | 0.97 ± 0.00 | 0.87 ± 0.02 | 0.77 ± 0.04 | 0.89 ± 0.02 | 0.87 ± 0.02 | 0.90 ± 0.04 |
| Bahamas | Otsu | 0.64 ± 0.04 | 0.51 ± 0.04 | 0.48 ± 0.05 | 0.13 ± 0.02 | 0.50 ± 0.05 | 0.00 ± 0.00 |
| Caspian Sea | CP | 0.25 ± 0.13 | 0.44 ± 0.12 | 0.60 ± 0.09 | −0.68 ± 0.07 | 0.44 ± 0.12 | 0.13 ± 0.12 |
| Caspian Sea | BO | 0.19 ± 0.12 | 0.56 ± 0.12 | 0.57 ± 0.08 | 0.08 ± 0.04 | 0.56 ± 0.12 | 0.08 ± 0.08 |
| Caspian Sea | Otsu | 0.23 ± 0.11 | 0.37 ± 0.09 | NA | 0.14 ± 0.03 | 0.33 ± 0.07 | NA |
| Tanzania | CP | 0.48 ± 0.09 | 0.56 ± 0.09 | 0.57 ± 0.09 | −0.06 ± 0.10 | 0.55 ± 0.09 | 0.27 ± 0.13 |
| Tanzania | BO | 0.57 ± 0.08 | 0.57 ± 0.08 | 0.60 ± 0.08 | 0.21 ± 0.10 | 0.57 ± 0.08 | 0.19 ± 0.11 |
| Tanzania | Otsu | 0.63 ± 0.05 | 0.52 ± 0.10 | 0.57 ± 0.09 | 0.24 ± 0.03 | 0.71 ± 0.05 | NA |
| Wadden Sea | CP | 0.82 ± 0.05 | 0.87 ± 0.04 | 0.89 ± 0.04 | −0.85 ± 0.03 | 0.87 ± 0.04 | 0.70 ± 0.04 |
| Wadden Sea | BO | 0.82 ± 0.04 | 0.85 ± 0.03 | 0.86 ± 0.03 | −0.01 ± 0.03 | 0.85 ± 0.03 | 0.71 ± 0.05 |
| Wadden Sea | Otsu | 0.22 ± 0.05 | 0.80 ± 0.07 | 0.84 ± 0.07 | 0.06 ± 0.05 | 0.79 ± 0.05 | NA |
Table A5. Mean and 95% confidence interval of the overall accuracy (OA) over four single scene Sentinel-2 L2A images, six band indices and three threshold methods, to 2 decimal places. CP—Cross PAUA, BO—Best OA, Otsu—unsupervised Otsu's threshold.

| Site | Threshold Type | B1/B2 | B1/B3 | B2/B3 | Hue | Saturation | Value |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Bahamas | CP | 0.96 ± 0.01 | 0.89 ± 0.02 | 0.80 ± 0.02 | 0.98 ± 0.01 | 0.88 ± 0.02 | 0.93 ± 0.03 |
| Bahamas | BO | 0.97 ± 0.00 | 0.90 ± 0.02 | 0.84 ± 0.02 | 0.98 ± 0.00 | 0.89 ± 0.02 | 0.95 ± 0.02 |
| Bahamas | Otsu | 0.96 ± 0.01 | 0.82 ± 0.04 | 0.48 ± 0.06 | 0.53 ± 0.07 | 0.76 ± 0.04 | 0.48 ± 0.07 |
| Caspian Sea | CP | 0.65 ± 0.06 | 0.85 ± 0.04 | 0.91 ± 0.02 | 0.91 ± 0.02 | 0.35 ± 0.05 | 0.66 ± 0.05 |
| Caspian Sea | BO | 0.61 ± 0.05 | 0.84 ± 0.03 | 0.91 ± 0.02 | 0.91 ± 0.02 | 0.38 ± 0.05 | 0.65 ± 0.06 |
| Caspian Sea | Otsu | 0.54 ± 0.06 | 0.60 ± 0.06 | 0.52 ± 0.07 | 0.49 ± 0.07 | 0.39 ± 0.06 | NA |
| Tanzania | CP | 0.82 ± 0.03 | 0.84 ± 0.04 | 0.81 ± 0.04 | 0.85 ± 0.04 | 0.47 ± 0.06 | 0.67 ± 0.07 |
| Tanzania | BO | 0.81 ± 0.04 | 0.87 ± 0.03 | 0.85 ± 0.04 | 0.87 ± 0.03 | 0.50 ± 0.07 | 0.79 ± 0.04 |
| Tanzania | Otsu | 0.47 ± 0.08 | 0.61 ± 0.05 | 0.57 ± 0.05 | 0.83 ± 0.03 | 0.52 ± 0.05 | 0.54 ± 0.09 |
| Wadden Sea | CP | 0.85 ± 0.04 | 0.88 ± 0.03 | 0.92 ± 0.01 | 0.93 ± 0.01 | 0.13 ± 0.03 | 0.93 ± 0.02 |
| Wadden Sea | BO | 0.87 ± 0.04 | 0.90 ± 0.04 | 0.93 ± 0.02 | 0.93 ± 0.01 | 0.44 ± 0.04 | 0.93 ± 0.03 |
| Wadden Sea | Otsu | 0.50 ± 0.05 | 0.69 ± 0.05 | 0.79 ± 0.05 | 0.84 ± 0.05 | 0.13 ± 0.04 | NA |
Table A6. Mean and 95% confidence interval of the F1 score for the shallow water class over four single scene Sentinel-2 L2A images, six band indices and three threshold methods, to 2 decimal places. CP—Cross PAUA, BO—Best OA, Otsu—unsupervised Otsu's threshold.

| Site | Threshold Type | B1/B2 | B1/B3 | B2/B3 | Hue | Saturation | Value |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Bahamas | CP | 0.96 ± 0.01 | 0.89 ± 0.02 | 0.80 ± 0.03 | 0.97 ± 0.01 | 0.88 ± 0.02 | 0.94 ± 0.03 |
| Bahamas | BO | 0.97 ± 0.01 | 0.90 ± 0.01 | 0.82 ± 0.02 | 0.98 ± 0.00 | 0.88 ± 0.01 | 0.95 ± 0.02 |
| Bahamas | Otsu | 0.96 ± 0.02 | 0.83 ± 0.04 | 0.65 ± 0.06 | 0.17 ± 0.02 | 0.70 ± 0.03 | 0.00 ± 0.00 |
| Caspian Sea | CP | 0.63 ± 0.09 | 0.84 ± 0.04 | 0.91 ± 0.03 | 0.90 ± 0.03 | 0.37 ± 0.06 | 0.65 ± 0.07 |
| Caspian Sea | BO | 0.60 ± 0.07 | 0.84 ± 0.04 | 0.91 ± 0.03 | 0.91 ± 0.02 | 0.19 ± 0.15 | 0.70 ± 0.07 |
| Caspian Sea | Otsu | 0.68 ± 0.06 | 0.72 ± 0.06 | 0.68 ± 0.06 | 0.01 ± 0.00 | 0.45 ± 0.06 | NA |
| Tanzania | CP | 0.80 ± 0.04 | 0.83 ± 0.05 | 0.80 ± 0.05 | 0.84 ± 0.05 | 0.47 ± 0.03 | 0.69 ± 0.06 |
| Tanzania | BO | 0.79 ± 0.05 | 0.85 ± 0.03 | 0.82 ± 0.05 | 0.85 ± 0.04 | 0.58 ± 0.10 | 0.75 ± 0.04 |
| Tanzania | Otsu | 0.62 ± 0.08 | 0.68 ± 0.06 | 0.66 ± 0.06 | 0.75 ± 0.03 | 0.58 ± 0.05 | 0.00 ± 0.00 |
| Wadden Sea | CP | 0.84 ± 0.04 | 0.88 ± 0.03 | 0.91 ± 0.02 | 0.92 ± 0.02 | 0.12 ± 0.05 | 0.92 ± 0.03 |
| Wadden Sea | BO | 0.87 ± 0.05 | 0.90 ± 0.04 | 0.93 ± 0.02 | 0.92 ± 0.02 | 0.30 ± 0.19 | 0.92 ± 0.04 |
| Wadden Sea | Otsu | 0.66 ± 0.05 | 0.76 ± 0.05 | 0.82 ± 0.05 | 0.86 ± 0.05 | 0.06 ± 0.03 | NA |
Table A7. Mean and 95% confidence interval of the Matthews correlation coefficient (MCC) over four single scene Sentinel-2 L2A images, six band indices and three threshold methods, to 2 decimal places. CP—Cross PAUA, BO—Best OA, Otsu—unsupervised Otsu's threshold.

| Site | Threshold Type | B1/B2 | B1/B3 | B2/B3 | Hue | Saturation | Value |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Bahamas | CP | 0.93 ± 0.02 | 0.78 ± 0.03 | 0.61 ± 0.04 | 0.95 ± 0.01 | 0.76 ± 0.03 | 0.86 ± 0.06 |
| Bahamas | BO | 0.93 ± 0.01 | 0.81 ± 0.03 | 0.70 ± 0.03 | 0.95 ± 0.01 | 0.78 ± 0.03 | 0.90 ± 0.03 |
| Bahamas | Otsu | 0.92 ± 0.02 | 0.65 ± 0.06 | −0.16 ± 0.02 | 0.21 ± 0.02 | 0.60 ± 0.04 | 0.00 ± 0.00 |
| Caspian Sea | CP | 0.33 ± 0.13 | 0.70 ± 0.07 | 0.82 ± 0.05 | 0.81 ± 0.05 | −0.29 ± 0.12 | 0.34 ± 0.12 |
| Caspian Sea | BO | 0.25 ± 0.11 | 0.69 ± 0.06 | 0.82 ± 0.04 | 0.82 ± 0.04 | −0.09 ± 0.11 | 0.31 ± 0.12 |
| Caspian Sea | Otsu | 0.09 ± 0.08 | 0.26 ± 0.12 | 0.00 ± 0.00 | 0.05 ± 0.01 | −0.25 ± 0.12 | NA |
| Tanzania | CP | 0.64 ± 0.05 | 0.67 ± 0.08 | 0.61 ± 0.10 | 0.70 ± 0.08 | −0.05 ± 0.13 | 0.36 ± 0.13 |
| Tanzania | BO | 0.62 ± 0.06 | 0.72 ± 0.07 | 0.68 ± 0.08 | 0.73 ± 0.07 | 0.09 ± 0.16 | 0.59 ± 0.05 |
| Tanzania | Otsu | 0.06 ± 0.03 | 0.29 ± 0.11 | 0.19 ± 0.12 | 0.66 ± 0.03 | 0.03 ± 0.14 | 0.00 ± 0.00 |
| Wadden Sea | CP | 0.72 ± 0.06 | 0.78 ± 0.05 | 0.84 ± 0.02 | 0.85 ± 0.02 | −0.74 ± 0.06 | 0.86 ± 0.04 |
| Wadden Sea | BO | 0.75 ± 0.07 | 0.82 ± 0.07 | 0.86 ± 0.03 | 0.86 ± 0.03 | −0.01 ± 0.01 | 0.86 ± 0.05 |
| Wadden Sea | Otsu | 0.08 ± 0.03 | 0.49 ± 0.06 | 0.65 ± 0.08 | 0.72 ± 0.07 | −0.75 ± 0.06 | NA |
Figure A1. S2 L2A surface reflectance true colour RGB image of the Bahamas with the deep-water-masked images using the Best OA method of the different band indices. Dark blue denotes optically deep waters and light blue denotes optically shallow waters. The Sentinel-2 striping is slightly noticeable post-masking, in particular in the B2/B3 and saturation bands.
Figure A2. S2 L2A surface reflectance true colour RGB image of the Caspian Sea with the deep-water-masked images using the Cross PAUA method of the different band indices. Dark blue denotes optically deep waters and light blue denotes optically shallow waters. The red arrow points to the shallow areas that are easily omitted.
Figure A3. S2 L2A surface reflectance true colour RGB image of the Wadden Sea with deep-water-masked images using the Cross PAUA method of the different band indices. Dark blue denotes optically deep waters and light blue denotes optically shallow waters. Notice that the saturation-masked image is inverted and thus scored very badly in the accuracy metrics (Main text Figure 8).
Figure A4. S2 L2A surface reflectance true colour RGB image of Tanzania with deep-water-masked images using the Best OA method of the different band indices. Dark blue denotes optically deep waters and light blue denotes optically shallow waters. The red arrow points to a challenging shallow area that is susceptible to omission. The B2/B3 band had the least omission in this region, at the cost of commissioning some sunglinted pixels in the east. The Sentinel-2 parallax effect striping is particularly apparent in this image.

References

  1. Duffy, J.; Benedetti-Cecchi, L.; Trinanes, J.; Muller-Karger, F.; Ambo-Rappe, R.; Boström, C.; Buschmann, A.; Byrnes, J.E.; Coles, R.; Creed, J.; et al. Toward a coordinated global observing system for marine macrophytes. Front. Mar. Sci. 2019, 6, 1–26. [Google Scholar] [CrossRef] [Green Version]
  2. Dunic, J.C.; Brown, C.J.; Connolly, R.M.; Turschwell, M.P.; Côté, I.M. Long-term declines and recovery of meadow area across the world’s seagrass bioregions. Glob. Change Biol. 2021, 27, 4096–4109. [Google Scholar] [CrossRef] [PubMed]
  3. Phinn, S.; Roelfsema, C.; Kovacs, E.; Canto, R.; Lyons, M.; Saunders, M.; Maxwell, P. Mapping, Monitoring and Modelling Seagrass Using Remote Sensing Techniques. In Seagrasses of Australia; Springer: Berlin/Heidelberg, Germany, 2018; pp. 445–487. [Google Scholar]
  4. Jay, S.; Guillaume, M.; Minghelli, A.; Deville, Y.; Chami, M.; Lafrance, B.; Serfaty, V. Hyperspectral remote sensing of shallow waters: Considering environmental noise and bottom intra-class variability for modeling and inversion of water reflectance. Remote Sens. Environ. 2017, 200, 352–367. [Google Scholar] [CrossRef]
  5. Brando, V.E.; Anstee, J.M.; Wettle, M.; Dekker, A.G.; Phinn, S.R.; Roelfsema, C. A physics based retrieval and quality assessment of bathymetry from suboptimal hyperspectral data. Remote Sens. Environ. 2009, 113, 755–770. [Google Scholar] [CrossRef]
  6. Lee, Z.; Shang, S.; Li, Y.; Luis, K.; Dai, M.; Wang, Y. Three-Dimensional Variation in Light Quality in the Upper Water Column Revealed With a Single Parameter. IEEE Trans. Geosci. Remote Sens. 2021, 60, 1–10. [Google Scholar] [CrossRef]
  7. Topouzelis, K.; Makri, D.; Stoupas, N.; Papakonstantinou, A.; Katsanevakis, S. Seagrass mapping in Greek territorial waters using Landsat-8 satellite images. Int. J. Appl. Earth Obs. Geoinf. 2018, 67, 98–113. [Google Scholar] [CrossRef]
  8. Li, J.; Fabina, N.S.; Knapp, D.E.; Asner, G.P. The Sensitivity of Multi-spectral Satellite Sensors to Benthic Habitat Change. Remote Sens. 2020, 12, 532. [Google Scholar] [CrossRef] [Green Version]
  9. Petit, T.; Bajjouk, T.; Mouquet, P.; Rochette, S.; Vozel, B.; Delacourt, C. Hyperspectral remote sensing of coral reefs by semi-analytical model inversion–Comparison of different inversion setups. Remote Sens. Environ. 2017, 190, 348–365. [Google Scholar] [CrossRef]
  10. Li, J.; Knapp, D.E.; Schill, S.R.; Roelfsema, C.; Phinn, S.; Silman, M.; Mascaro, J.; Asner, G.P. Adaptive bathymetry estimation for shallow coastal waters using Planet Dove satellites. Remote Sens. Environ. 2019, 232, 111302. [Google Scholar] [CrossRef]
  11. Rowan, G.S.; Kalacska, M. A Review of Remote Sensing of Submerged Aquatic Vegetation for Non-Specialists. Remote Sens. 2021, 13, 623. [Google Scholar] [CrossRef]
  12. Coffer, M.M.; Schaeffer, B.A.; Zimmerman, R.C.; Hill, V.; Li, J.; Islam, K.A.; Whitman, P.J. Performance across WorldView-2 and RapidEye for reproducible seagrass mapping. Remote Sens. Environ. 2020, 250, 112036. [Google Scholar] [CrossRef] [PubMed]
  13. Traganos, D.; Poursanidis, D.; Aggarwal, B.; Chrysoulakis, N.; Reinartz, P. Estimating Satellite-Derived Bathymetry (SDB) with the Google Earth Engine and Sentinel-2. Remote Sens. 2018, 10, 859. [Google Scholar] [CrossRef] [Green Version]
  14. Traganos, D.; Aggarwal, B.; Poursanidis, D.; Topouzelis, K.; Chrysoulakis, N.; Reinartz, P. Towards Global-Scale Seagrass Mapping and Monitoring Using Sentinel-2 on Google Earth Engine: The Case Study of the Aegean and Ionian Seas. Remote Sens. 2018, 10, 1227. [Google Scholar] [CrossRef] [Green Version]
  15. Li, J.; Knapp, D.E.; Lyons, M.; Roelfsema, C.; Phinn, S.; Schill, S.R.; Asner, G.P. Automated Global Shallow Water Bathymetry Mapping Using Google Earth Engine. Remote Sens. 2021, 13, 1469. [Google Scholar] [CrossRef]
  16. Lee, Z.; Shangguan, M.; Garcia, R.A.; Lai, W.; Lu, X.; Wang, J.; Yan, X. Confidence Measure of the Shallow-Water Bathymetry Map Obtained through the Fusion of Lidar and Multiband Image Data. J. Remote Sens. 2021, 2021, 16. [Google Scholar] [CrossRef]
  17. Poursanidis, D.; Topouzelis, K.; Chrysoulakis, N. Mapping coastal marine habitats and delineating the deep limits of the Neptune’s seagrass meadows using very high resolution Earth observation data. Int. J. Remote Sens. 2018, 39, 8670–8687. [Google Scholar] [CrossRef]
  18. Lyzenga, D.R. Remote sensing of bottom reflectance and water attenuation parameters in shallow water using aircraft and Landsat data. Int. J. Remote Sens. 1981, 2, 71–82. [Google Scholar] [CrossRef]
  19. Astuty, I.S.; Wicaksono, P. Seagrass species composition and above-ground carbon stock mapping in Parang Island using Planetscope image. In Proceedings of the Sixth Geoinformation Science Symposium, Yogyakarta, Indonesia, 21 November 2019; p. 1131103. [Google Scholar]
  20. Legleiter, C.J.; Roberts, D.A.; Lawrence, R.L. Spectrally based remote sensing of river bathymetry. Earth Surf. Processes Landf. 2009, 34, 1039–1059. [Google Scholar] [CrossRef]
  21. Stumpf, R.P.; Holderied, K.; Sinclair, M. Determination of water depth with high-resolution satellite imagery over variable bottom types. Limnol. Oceanogr. 2003, 48, 547–556. [Google Scholar] [CrossRef]
  22. Poursanidis, D.; Traganos, D.; Reinartz, P.; Chrysoulakis, N. On the use of Sentinel-2 for coastal habitat mapping and satellite-derived bathymetry estimation using downscaled coastal aerosol band. Int. J. Appl. Earth Obs. Geoinf. 2019, 80, 58–70. [Google Scholar] [CrossRef]
  23. Lyzenga, D.R. Passive remote sensing techniques for mapping water depth and bottom features. Appl. Opt. 1978, 17, 379–383. [Google Scholar] [CrossRef] [PubMed]
  24. Lyzenga, D.R. Shallow-water bathymetry using combined lidar and passive multispectral scanner data. Int. J. Remote Sens. 1985, 6, 115–125. [Google Scholar] [CrossRef]
  25. Maritorena, S.; Morel, A.; Gentili, B. Diffuse reflectance of oceanic shallow waters: Influence of water depth and bottom albedo. Limnol. Oceanogr. 1994, 39, 1689–1703. [Google Scholar] [CrossRef]
  26. Niroumand-Jadidi, M.; Bovolo, F.; Bruzzone, L. SMART-SDB: Sample-specific multiple band ratio technique for satellite-derived bathymetry. Remote Sens. Environ. 2020, 251, 112091. [Google Scholar] [CrossRef]
  27. Niroumand-Jadidi, M.; Vitti, A.; Lyzenga, D.R. Multiple Optimal Depth Predictors Analysis (MODPA) for river bathymetry: Findings from spectroradiometry, simulations, and satellite imagery. Remote Sens. Environ. 2018, 218, 132–147. [Google Scholar] [CrossRef]
  28. Malacara, D. Color Vision and Colorimetry: Theory and Applications, 2nd ed.; Spie: Bellingham, WA, USA, 2011. [Google Scholar]
  29. Zhao, Y.; Shen, Q.; Wang, Q.; Yang, F.; Wang, S.; Li, J.; Zhang, F.; Yao, Y. Recognition of Water Colour Anomaly by Using Hue Angle and Sentinel-2 Image. Remote Sens. 2020, 12, 716. [Google Scholar] [CrossRef] [Green Version]
  30. Huang, W.; Wang, Y.; Chen, X. Cloud detection for high-resolution remote-sensing images of urban areas using colour and edge features based on dual-colour models. Int. J. Remote Sens. 2018, 39, 6657–6675. [Google Scholar] [CrossRef]
  31. Han, H.; Han, C.; Lan, T.; Huang, L.; Hu, C.; Xue, X. Automatic Shadow Detection for Multispectral Satellite Remote Sensing Images in Invariant Color Spaces. Appl. Sci. 2020, 10, 6467. [Google Scholar] [CrossRef]
  32. Hamuda, E.; Mc Ginley, B.; Glavin, M.; Jones, E. Automatic crop detection under field conditions using the HSV colour space and morphological operations. Comput. Electron. Agric. 2017, 133, 97–107. [Google Scholar] [CrossRef]
  33. Xu, L.; Ming, D.; Zhou, W.; Bao, H.; Chen, Y.; Ling, X. Farmland Extraction from High Spatial Resolution Remote Sensing Images Based on Stratified Scale Pre-Estimation. Remote Sens. 2019, 11, 108. [Google Scholar] [CrossRef] [Green Version]
  34. Park, C.W.; Jeon, J.J.; Moon, Y.H.; Eom, I.K. Single Image Based Algal Bloom Detection Using Water Body Extraction and Probabilistic Algae Indices. IEEE Access 2019, 7, 84468–84478. [Google Scholar] [CrossRef]
  35. Ngoc, D.D.; Loisel, H.; Jamet, C.; Vantrepotte, V.; Duforêt-Gaurier, L.; Minh, C.D.; Mangin, A. Coastal and inland water pixels extraction algorithm (WiPE) from spectral shape analysis and HSV transformation applied to Landsat 8 OLI and Sentinel-2 MSI. Remote Sens. Environ. 2019, 223, 208–228. [Google Scholar] [CrossRef]
  36. Li, J.; Feng, K.; Yu, J.; Gu, H. River extraction of color remote sensing image based on HSV and shape detection. In Proceedings of the Seventh Symposium on Novel Photoelectronic Detection Technology and Applications, Kunming, China, 12 March 2021; p. 117635W. [Google Scholar]
  37. Van der Woerd, H.J.; Wernand, M.R. Hue-angle Product for Low to Medium Spatial Resolution Optical Satellite Sensors. Remote Sens. 2018, 10, 180. [Google Scholar] [CrossRef] [Green Version]
  38. Niroumand-Jadidi, M.; Bovolo, F.; Bruzzone, L. Novel spectra-derived features for empirical retrieval of water quality parameters: Demonstrations for OLI, MSI, and OLCI Sensors. IEEE Trans. Geosci. Remote Sens. 2019, 57, 10285–10300. [Google Scholar] [CrossRef]
  39. Pitarch, J.; van der Woerd, H.J.; Brewin, R.J.; Zielinski, O. Optical properties of Forel-Ule water types deduced from 15 years of global satellite ocean color observations. Remote Sens. Environ. 2019, 231, 111249. [Google Scholar] [CrossRef]
  40. Spyrakos, E.; O’donnell, R.; Hunter, P.D.; Miller, C.; Scott, M.; Simis, S.G.; Neil, C.; Barbosa, C.C.; Binding, C.E.; Bradt, S. Optical types of inland and coastal waters. Limnol. Oceanogr. 2018, 63, 846–870. [Google Scholar] [CrossRef] [Green Version]
  41. Liu, H.; Lee, S.-H.; Chahl, J.S. Transformation of a high-dimensional color space for material classification. J. Opt. Soc. Am. A 2017, 34, 523–532. [Google Scholar] [CrossRef]
  42. Pekel, J.-F.; Cottam, A.; Gorelick, N.; Belward, A.S. High-resolution mapping of global surface water and its long-term changes. Nature 2016, 540, 418–422. [Google Scholar] [CrossRef]
  43. Warren, M.A.; Simis, S.G.; Martinez-Vicente, V.; Poser, K.; Bresciani, M.; Alikas, K.; Spyrakos, E.; Giardino, C.; Ansper, A. Assessment of atmospheric correction algorithms for the Sentinel-2A MultiSpectral Imager over coastal and inland waters. Remote Sens. Environ. 2019, 225, 267–289. [Google Scholar] [CrossRef]
  44. Gorelick, N.; Hancher, M.; Dixon, M.; Ilyushchenko, S.; Thau, D.; Moore, R. Google Earth Engine: Planetary-scale geospatial analysis for everyone. Remote Sens. Environ. 2017, 202, 18–27. [Google Scholar] [CrossRef]
  45. Thomas, N.; Pertiwi, A.P.; Traganos, D.; Lagomasino, D.; Poursanidis, D.; Moreno, S.; Fatoyinbo, L. Space-Borne Cloud-Native Satellite-Derived Bathymetry (SDB) Models Using ICESat-2 And Sentinel-2. Geophys. Res. Lett. 2021, 48, e2020GL092170. [Google Scholar] [CrossRef]
  46. Donchyts, G.; Schellekens, J.; Winsemius, H.; Eisemann, E.; Van de Giesen, N. A 30 m Resolution Surface Water Mask Including Estimation of Positional and Thematic Differences Using Landsat 8, SRTM and OpenStreetMap: A Case Study in the Murray-Darling Basin, Australia. Remote Sens. 2016, 8, 386. [Google Scholar] [CrossRef] [Green Version]
  47. Amante, C.; Eakins, B.W. ETOPO1 1 Arc-Minute Global Relief Model: Procedures, Data Sources and Analysis. Natl. Geophys. Data Cent. 2009, 10, V5C8276M. [Google Scholar]
  48. GEBCO Compilation Group. GEBCO 2020 Grid; British Oceanographic Data Centre: Liverpool, UK, 2020.
  49. Populus, J.; Vasquez, M.; Albrecht, J.; Manca, E.; Agnesi, S.; Al Hamdani, Z.; Andersen, J.; Annunziatellis, A.; Bekkby, T.; Bruschi, A.; et al. EUSeaMap. A European broad-scale seabed habitat map. Arch. Inst. L'ifremer 2017, 10, 49975.
  50. R Core Team. R: A Language and Environment for Statistical Computing; R Foundation for Statistical Computing: Vienna, Austria, 2021.
  51. Espel, D.; Courty, S.; Auda, Y.; Sheeren, D.; Elger, A. Submerged macrophyte assessment in rivers: An automatic mapping method using Pléiades imagery. Water Res. 2020, 186, 116353.
  52. Xu, J.; Zhao, J.; Wang, F.; Chen, Y.; Lee, Z. Detection of Coral Reef Bleaching Based on Sentinel-2 Multi-Temporal Imagery: Simulation and Case Study. Front. Mar. Sci. 2021, 8, 268.
  53. Chicco, D.; Jurman, G. The advantages of the Matthews correlation coefficient (MCC) over F1 score and accuracy in binary classification evaluation. BMC Genom. 2020, 21, 6.
  54. European Space Agency. Sentinel-2 Data Quality Report; Tech. Rep. S2-PDGS-MPC-DQR. 2021. Available online: https://sentinel.esa.int/documents/247904/3897638/Sentinel-2_L1C_Data_Quality_Report (accessed on 1 November 2021).
  55. Kutser, T.; Hedley, J.; Giardino, C.; Roelfsema, C.; Brando, V.E. Remote sensing of shallow waters–A 50 year retrospective and future directions. Remote Sens. Environ. 2020, 240, 111619.
  56. Wernand, M.; Hommersom, A.; van der Woerd, H.J. MERIS-based ocean colour classification with the discrete Forel–Ule scale. Ocean Sci. 2013, 9, 477–487.
  57. Wei, J.; Wang, M.; Lee, Z.; Briceño, H.O.; Yu, X.; Jiang, L.; Garcia, R.; Wang, J.; Luis, K. Shallow water bathymetry with multi-spectral satellite ocean color sensors: Leveraging temporal variation in image data. Remote Sens. Environ. 2020, 250, 112035.
  58. Kovacs, E.; Roelfsema, C.; Lyons, M.; Zhao, S.; Phinn, S. Seagrass habitat mapping: How do Landsat 8 OLI, Sentinel-2, ZY-3A, and Worldview-3 perform? Remote Sens. Lett. 2018, 9, 686–695.
Figure 1. Map showing the study sites mentioned in Table 1.
Figure 2. Demonstration of how the line vectors were drawn. Wherever possible, the vectors of both classes were paired along a continuous line to mimic a line transect (A). Where the deep-shallow boundary was too short to fit all the vectors without risking spatial autocorrelation, the remaining vectors were created by conventional sampling (B).
Figure 3. Boxplots of the various bathymetric (m) and PAR (mol photon m−2 d−1) datasets with respect to the optically deep and optically shallow vector labels: the ETOPO1 Global Arc-Minute Elevation (ETOPO), the General Bathymetric Chart of the Oceans (GEBCO), a Sentinel-2 image-based satellite-derived bathymetry (SDB) based on [15], the ICESat-2-derived SDB based on [45] using pooled ICESat-2 data that was spatially split (ICESAT-S), randomly split (ICESAT-R) and temporally split (ICESAT-T), as well as the European Marine Observation and Data Network datasets for Kd PAR (EMODnet-K) and seabed PAR (EMODnet-S).
Figure 4. S2 L1C Top-of-Atmosphere true colour RGB image (left) with the band indices in greyscale (right).
Figure 5. S2 L2A Bottom-of-Atmosphere true colour RGB image (left) with the band indices in greyscale (right).
Figure 6. Spectral profile box plots of the two classes in the Bahamas derived from the Sentinel-2 L1C (A) and L2A (B) images for all spectral bands as well as band indices. H denotes hue, S denotes saturation and V denotes value.
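The H, S and V indices compared in Figure 6 come from the paper's false colour HSV transformation of Sentinel-2 bands 1, 2 and 3. A minimal sketch in the Earth Engine Python API follows; the asset id is taken from the Bahamas granule name in Table 1, and the assignment of B1, B2 and B3 to the R, G and B channels is an assumption here, as the exact channel order is not restated in this back matter.

```python
import ee

ee.Initialize()

# Bahamas L1C granule from Table 1 (asset id assumed from the granule name).
img = ee.Image('COPERNICUS/S2/20191119T155531_20191119T155526_T17RRH')

# Scale TOA reflectance (stored as integers x 10,000) to the 0-1 range
# expected by rgbToHsv(), keeping the coastal aerosol, blue and green bands.
rgb = img.select(['B1', 'B2', 'B3']).divide(10000)

# False-colour HSV transformation: the three bands are read as R, G and B,
# and the result carries 'hue', 'saturation' and 'value' bands.
hsv = rgb.rgbToHsv()

# One of the band-ratio indices compared against H, S and V in Figure 6.
b2_b3 = rgb.select('B2').divide(rgb.select('B3')).rename('B2_B3')
```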
Figure 7. Mean and 95% confidence interval of the overall accuracy (OA), the F1 score for the shallow water class and the Matthews correlation coefficient (MCC) over four single-scene Sentinel-2 L1C images, six band indices and three threshold methods. The y-axis range for the MCC is between −1 and 1, unlike that for the OA and F1. For a tabular version, readers are directed to Appendix A, Tables A2–A4.
Figure 8. Mean and 95% confidence interval of the overall accuracy (OA), the F1 score for the shallow water class and the Matthews correlation coefficient (MCC) over four single-scene Sentinel-2 L2A images, six band indices and three threshold methods. The y-axis range for the MCC is between −1 and 1, unlike that for the OA and F1. For a tabular version, readers are directed to Appendix A, Tables A5–A7.
Figure 9. S2 L1C Top-of-Atmosphere true colour RGB image of the Bahamas with the deep-water-masked images using the Best OA method for the different band indices. Dark blue denotes optically deep waters and light blue denotes optically shallow waters. Only the B2/B3 band ratio preserved the shallow areas, at the cost of retaining the sunglinted waters in the east.
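The Best OA masks in Figures 9, 10 and 12 rely on a supervised threshold found by exhaustive search over the annotated samples (see Table 2 below). A minimal NumPy sketch of such a sweep, with hypothetical array names and the inequality direction treated as unknown:

```python
import numpy as np

def best_oa_threshold(index, labels):
    """Exhaustive sweep for the supervised Best OA threshold: try every
    observed index value in both comparison directions and keep the one
    with the highest resubstitution overall accuracy.
    labels: 1 = optically shallow, 0 = optically deep."""
    best_oa, best_t, best_dir = 0.0, None, None
    for t in np.unique(index):
        for op in (np.greater, np.less):
            oa = np.mean(op(index, t).astype(int) == labels)
            if oa > best_oa:
                best_oa, best_t, best_dir = oa, t, op.__name__
    return best_t, best_dir, best_oa

# Hypothetical usage with B2/B3 ratio samples along the annotated vectors:
# t, direction, oa = best_oa_threshold(b2_b3_samples, shallow_labels)
```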
Figure 10. S2 L1C Top-of-Atmosphere true colour RGB image of the Caspian Sea with the deep-water-masked images using the Best OA method for the different band indices. Dark blue denotes optically deep waters and light blue denotes optically shallow waters. The red arrow points to the shallow areas that are easily omitted. While the Cross PAUA method did not perform quantitatively better than the Best OA method, its hue map was strongly inverted, albeit with some errors.
Figure 11. S2 L1C Top-of-Atmosphere true colour RGB image of the Wadden Sea with the deep-water-masked images using the Cross PAUA method for the different band indices. Dark blue denotes optically deep waters and light blue denotes optically shallow waters. Notice that the hue-masked image is inverted and thus scored very low in the accuracy metrics (Figure 7).
Figure 12. S2 L1C Top-of-Atmosphere true colour RGB image of Tanzania with the deep-water-masked images using the Best OA method for the different band indices. Dark blue denotes optically deep waters and light blue denotes optically shallow waters. The red arrow points to a challenging shallow area that is susceptible to omission. Only the B2/B3 band ratio preserved this area, at the cost of retaining the sunglinted waters in the east. The Sentinel-2 parallax-effect striping is particularly apparent in this image.
Table 1. Test site descriptions.

Site, Country                   | Site Description                                   | Satellite Image Id
Unguja, Tanzania                | Single scene, coastal/marine, tropical, sunglinted | 20191207T073209_20191207T074734_T37MEP
Andros and Nassau, The Bahamas  | Single scene, coastal/marine, tropical             | 20191119T155531_20191119T155526_T17RRH
Caspian Sea, Kazakhstan         | Single scene, freshwater, temperate, turbid        | 20190610T072629_20190610T073416_T39TVK
Wadden Sea, Germany and Denmark | Single scene, coastal/marine, temperate, turbid    | 20190617T104029_20190617T104030_T32UMF
Table 2. List of thresholding methods.

Method Name | Description
Best OA     | Supervised; the threshold corresponding to the best possible resubstitution overall accuracy (OA)
Cross PAUA  | Supervised; the threshold corresponding to the minimum sum of differences between the producer's accuracy (PA) and user's accuracy (UA) for both classes, based on the resubstitution error matrix
Otsu        | Unsupervised; adaptive threshold using Otsu's threshold and Canny edge detection on an index [46]; a simplified sketch follows this table
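A simplified sketch of the unsupervised Otsu row, loosely following the edge-constrained approach of [46]: scikit-image and SciPy stand in here for the cloud-native implementation, and the sigma and buffer-size values are assumed parameters, not those of the paper.

```python
from scipy.ndimage import binary_dilation
from skimage.feature import canny
from skimage.filters import threshold_otsu

def edge_otsu(index_img, sigma=1.0, buffer_px=3):
    """Edge-constrained Otsu threshold for a 2D NumPy index image:
    detect Canny edges, then compute Otsu's threshold only from the
    pixels in a small buffer around those edges, so that the histogram
    is dominated by the deep-shallow boundary rather than by either class."""
    edges = canny(index_img.astype(float), sigma=sigma)
    near_edges = binary_dilation(edges, iterations=buffer_px)
    return threshold_otsu(index_img[near_edges])

# Hypothetical usage; whether shallow pixels sit above or below the
# threshold depends on the chosen index:
# shallow_mask = index_img > edge_otsu(index_img)
```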
Table 3. Framework of a standard two-class statistical error matrix.

                   | Actual: Class 1     | Actual: Class 2
Predicted: Class 1 | True positive (TP)  | False positive (FP)
Predicted: Class 2 | False negative (FN) | True negative (TN)
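The overall accuracy, F1 score and Matthews correlation coefficient reported in Figures 7 and 8 follow directly from the four cells of this matrix. A self-contained Python helper with the standard formulas:

```python
import math

def binary_metrics(tp, fp, fn, tn):
    """Overall accuracy, F1 for the positive (shallow) class, and the
    Matthews correlation coefficient from the Table 3 error matrix."""
    oa = (tp + tn) / (tp + fp + fn + tn)
    f1 = 2 * tp / (2 * tp + fp + fn)
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    return oa, f1, mcc

# e.g. binary_metrics(tp=90, fp=10, fn=5, tn=95) ~ (0.925, 0.923, 0.851)
```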
Publisher's Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.