Article

Performance Assessment of High Resolution Airborne Full Waveform LiDAR for Shallow River Bathymetry

by Zhigang Pan 1,2,*, Craig Glennie 1,2,†, Preston Hartzell 1,2,†, Juan Carlos Fernandez-Diaz 1,2,†, Carl Legleiter 3,† and Brandon Overstreet 3,†
1 Department of Civil and Environmental Engineering, University of Houston, Houston, TX 77204, USA
2 National Center for Airborne Laser Mapping, University of Houston, 5000 Gulf Freeway Building 4 Room 216, Houston, TX 77204, USA
3 Department of Geography, University of Wyoming, 1000 E University Ave, Laramie, WY 82071, USA
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Remote Sens. 2015, 7(5), 5133-5159; https://doi.org/10.3390/rs70505133
Submission received: 17 February 2015 / Revised: 13 April 2015 / Accepted: 20 April 2015 / Published: 24 April 2015

Abstract: We evaluate the performance of full waveform LiDAR decomposition algorithms with a high-resolution, single band airborne LiDAR bathymetry system in shallow rivers. A continuous wavelet transformation (CWT) is proposed and applied in two fluvial environments, and the results are compared to existing echo retrieval methods. LiDAR water depths are also compared to independent field measurements. In both clear and turbid water, the CWT algorithm outperforms the other methods if only green LiDAR observations are available. However, both the definition of the water surface and the turbidity of the water significantly influence the performance of the LiDAR bathymetry observations. The results suggest that there is no single best full waveform processing algorithm for all bathymetric situations. Overall, the optimal processing strategies yielded water depths with a 6 cm mean error and a 14 cm standard deviation for clear water, and a 16 cm mean error and a 27 cm standard deviation in more turbid water.


1. Introduction

Airborne Light Detection and Ranging (LiDAR) is an active remote sensing technique used to acquire 3D representations of objects with very high resolution [1,2]. LiDAR systems emit short laser pulses to illuminate the Earth’s surface and then capture the reflected light with photodiode detectors. By measuring the laser flight time propagating through the medium, a distance to target (range) can be determined assuming a known constant speed of light in the medium [3,4]. With superior performance for acquiring 3D measurements and easy deployment, LiDAR data has been used for many scientific applications such as biomass estimation, archaeological application, power line detection, and earth science applications including temporal change detection [5,6,7,8].
A conventional discrete-return LiDAR system records only a few (≤5) discrete returns for each outgoing laser pulse. A hardware ranging method called constant fraction discrimination (CFD) [9] is implemented in most current LiDAR systems to discriminate vertically cluttered illuminated targets along the laser path [10]. With critical requirements for vertical resolution and the advancement of processing and storage capacity over the last decade, Full Waveform LiDAR (FWL) has emerged as a viable alternative to discrete-return LiDAR. FWL records the entire digitized backscattered laser pulse received by the detector at a very high sampling rate (1–2 GHz) [11].
FWL was introduced in commercial topographic LiDAR systems in 2004 and a number of LiDAR systems now have the capability to store the entire digitized return waveform [12,13,14]. FWL enables better vertical resolution because discrete-return LiDAR resolution is largely influenced by laser pulse width or Full Width at Half Maximum (FWHM) [10,15]. However, sophisticated digital signal processing methods are required to extract points and other target information (e.g., radiometry) from current FWL systems. To date, many full waveform processing algorithms have been proposed and are widely used in the research community. An overview of waveform processing techniques can be found in [11]. Gaussian decomposition [16,17,18] and deconvolution [15,19] are two processing strategies that have been applied in a number of previous studies. Hartzell et al. [20] proposed an empirical system response model which is estimated from single return waveforms over the dynamic range of the instrument. The empirical response is then used as a template for actual return waveforms, resulting in improved ranging.
Recently, the analysis of FWL processing has focused on evaluating the different methods in parallel to find superior algorithms for specific applications. For example, Wu et al. [21] compared three deconvolution methods: Richardson-Lucy, Wiener filter, and nonnegative least squares to determine the best performance using simulated full waveforms from radiative transfer modeling; the Richardson-Lucy method was found to have superior performance for deconvolution of the simulated full waveforms. Parrish et al. [10] presented an empirical technique to compare three different methods for full waveform processing: Gaussian decomposition, Expectation-Maximization (EM) deconvolution and a hybrid method (deconvolve-decompose). Using precisely located screen-targets in a laboratory, they arrived at the conclusion that no single best full waveform method can be found for all applications.
Despite the recent focus on applications of FWL for topographic studies, it was first evaluated for the processing of LiDAR bathymetry [22]. In recent years, however, full waveform bathymetric LiDAR has not received much attention in the literature, especially compared to topographic FWL. This is likely due to the lack of available bathymetric LiDAR datasets for the scientific community and the more complicated modeling required for LiDAR bathymetry to compensate for factors such as water surface reflection and refraction, water volume scattering, and turbidity, which complicate the propagation models and attenuate return strength, resulting in a lower signal to noise ratio (SNR). Water volume scattering can be difficult to rigorously model, especially for shallow water environments where water surface backscatter, water volume scattering, and benthic layer backscattering are mixed into a single complex waveform that makes discrimination of the individual responses difficult [23]. The complex waveform signals in a bathymetric environment demand a noise-resistant and adaptive signal processing methodology. In order to reduce the complexity of bathymetric LiDAR, multi-wavelength systems (usually a NIR laser for water surface detection and a green laser for water penetration) are normally used to facilitate benthic layer retrieval [12]. For example, Allouis et al. [24] compared two new processing methods for depth extraction using near-infrared (NIR), green and Raman LiDAR signals; by combining NIR and green waveforms, significantly more points were extracted by full waveform processing and better accuracy was achieved. Even though multi-wavelength LiDAR systems are common for bathymetry, single band systems have emerged recently as well [14,25,26]. Wang et al. [27] compared several full waveform processing algorithms for single band shallow water bathymetry using both simulated and actual full waveform data, and concluded that Richardson-Lucy deconvolution performed the best of the tested waveform processing techniques; however, the performance with the actual full waveform data was not verified against external high accuracy truth data. Several studies have also examined the performance of single band full waveform bathymetry using simulated LiDAR datasets. Abady et al. [23] proposed a mixture of Gaussian and quadrilateral functions for bathymetric LiDAR waveform decomposition using nonlinear recursive least squares; both satellite and airborne configurations were simulated and showed significant improvement for bathymetry retrieval. However, the simulation has not yet been validated with observations from real-world studies, especially for very shallow water bathymetry in turbid conditions. The performance of full waveform LiDAR in shallow water has received little attention in the literature beyond the study by McKean et al. [25]. Limited water depths and significant turbidity impose challenges for bathymetric LiDAR, especially for longer pulse width laser systems where the water surface, water column and benthic layer returns mix together. A bathymetric full waveform processing strategy that accounts for the longer pulse width and the excessive noise present in the bathymetric waveform would enable more accurate bathymetry determination.
In this paper, we first propose a novel full waveform processing algorithm using a continuous wavelet transformation (CWT) to decompose single band bathymetric waveforms. The seed peak locations acquired from the CWT are then used as input to both an empirical system response (ESR) algorithm and a Gaussian decomposition method. As a benchmark for comparison, a common Gaussian decomposition algorithm is also used with candidate seed locations acquired from second derivative peaks, similar to that presented in [17,18]. The waveform processing methods are applied to two distinct fluvial environments with varying degrees of water turbidity. Water depths extracted from both a discrete point cloud and the full waveform processed point clouds are then compared to water depths measured in the field with an Acoustic Doppler Current Profiler (ADCP). Finally, we analyze the accuracy of the water surfaces extracted from the discrete and full waveform processed point clouds, for both green wavelength and near-infrared detected water surfaces, by comparison to GNSS RTK field measurements. The rest of the paper is organized as follows: Section 2 presents the waveform processing mathematical models and the procedure used for water depth generation from the LiDAR observations, Section 3 provides a description of the airborne datasets and the ground truthing used to evaluate the processing methodology, and Section 4 presents the experimental results. The paper closes with a discussion of the study conclusions and areas for future research.

2. Method and Mathematical Model

FWL return profiles are normally fixed-length discrete time signals containing backscatter information for a large region of interest. For return profiles where the echoes are clustered in a short range window, a significant portion of the full waveform carries no useful information (i.e., it represents only the noise floor); pre-processing that removes this extraneous information from the original waveform therefore reduces the total processing time required. A noise level can be defined as the minimum meaningful amplitude and can be estimated from the full waveform data itself, for example as the median absolute deviation of each waveform [28]. For our study, amplitudes within 10% of the return gate value are considered noise (Figure 1). The return gate is an instrument-specific configuration parameter used to reduce the effect of sun glint and noise returns. Herein, all bins below the noise level were removed, and only the remaining signal was examined. The removal of data below the noise threshold significantly speeds up the calculations due to the decreased data volume to be analyzed. It should be noted that bathymetric LiDAR waveforms can have quite complicated return energy profiles; representative samples of bathymetric waveforms are given in Figure 2.
Figure 1. Pre-processing of the return waveform by removing data below the noise threshold of the original waveform. The noise level is defined as 10% above the return gate which is given by the manufacturer specifications. The truncated waveform is saved for posterior processing.
Figure 2. Typical bathymetric return waveforms. (a–d) are from clear water with multiple visible peaks of varying peak amplitude; (e,f) contain more subtle evidence of multiple peaks; (g–i) contain multiple peaks that overlap and are not visibly discernible; (j) contains a single peak. Multiple peaks are critical for benthic layer retrieval, as the first return is normally from the water surface and the later return more probably from the benthic layer.
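The thresholding step described above can be sketched in a few lines of Python. This is a minimal illustration rather than the authors' implementation; the function and variable names are our own, and the return-gate amplitude is assumed to be known from the instrument configuration.

```python
import numpy as np

def truncate_waveform(waveform, return_gate, margin=0.10):
    """Remove samples at or below the noise level, defined here as the
    return-gate amplitude plus a 10% margin (Section 2, Figure 1)."""
    noise_level = return_gate * (1.0 + margin)
    keep = waveform > noise_level
    # Keep the sample indices so that ranges (in ns) can still be
    # recovered from the truncated waveform.
    indices = np.flatnonzero(keep)
    return indices, waveform[indices]

# Example: a synthetic 100-sample waveform with two echoes over a noise floor.
rng = np.random.default_rng(0)
t = np.arange(100)
wf = (30 + 200 * np.exp(-0.5 * ((t - 40) / 4) ** 2)
         + 120 * np.exp(-0.5 * ((t - 55) / 4) ** 2)
         + rng.normal(0, 2, t.size))
idx, wf_trunc = truncate_waveform(wf, return_gate=40)
```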

2.1. Continuous Wavelet Transformation

The wavelet transformation can be used to project a continuous time signal into multiple subspaces consisting of wavelets [29]. By examining this projection, objectives such as denoising, compression, and filtering can be achieved. A continuous wavelet transformation (CWT) projects the signal into a continuous time and scale subspace (instead of discrete subspaces), whereby the signal can be reconstructed from the resulting continuous components [29,30]. The CWT is a very effective method for detecting peak locations in an overlapped waveform [31]. Extending the use of the CWT to FWL is thus natural, since the return waveforms can be highly mixed due to potentially closely spaced backscatterers along the laser path.
The CWT constructs a time-frequency representation of a signal that offers very good time and frequency localization, making it suitable for localizing peak positions as initial approximations for subsequent peak estimation algorithms. The mother wavelet should be continuously differentiable, compactly supported, and have a high number of vanishing moments. Considering that most FWL systems produce Gaussian-like signals, the Lorentzian of Gaussian mother wavelet is used in this study [31], and is given in Equation (1):
\omega_{a,b}(t) = \left[ 1 - \left( \frac{t-b}{a} \right)^2 \right] \exp\left[ -\left( \frac{t-b}{2a} \right)^2 \right]    (1)
Here, ω_{a,b} is the mother wavelet used in the CWT, a dilates the mother wavelet, b translates the mother wavelet, and t is time. Special caution is needed in the determination of a and b. A smaller a can assist in discriminating highly overlapped peaks, but a slight undulation of the waveform may then result in a false peak; larger values of a resist disturbing undulations (i.e., noise) but may miss weak returns and merge multiple echoes into a single return; the smallest usable a cannot be less than the digitizing interval of the signal. The ridge defined in [31] is a good implementation for the detection of peaks (and the determination of a) but requires a significant amount of computation, so instead we directly chose a single value of a to detect potential peaks. In our study, a is set to 1.0 ns because the interval of the full waveform samples is 1.0 ns, and b was set to 0.1 ns for both data types; a value of 0.1 ns for b is equivalent to 1.5 cm in air. The parameters a and b can be adjusted to fit different applications and different FWL systems. The wavelet decomposition provides a noise-resistant subspace representation of the signal, and therefore a simple local maximum filter can be used to find the peak locations after the wavelet transformation. In our study, a window with a size of 15 ns was used to detect the local maxima, as the Full Width at Half Maximum (FWHM) is 8.3 ns for the Optech Aquarius LiDAR system used in this study [14].
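A single-scale CWT peak detector of this kind can be sketched as follows. This is an illustrative Python sketch under our own naming; translations are evaluated only on the native 1 ns sample grid rather than the 0.1 ns step quoted above, and the choice of kernel support is an assumption on our part.

```python
import numpy as np

def log_wavelet(t, a=1.0, b=0.0):
    """Lorentzian-of-Gaussian style mother wavelet of Equation (1),
    dilated by a and translated by b (t in ns)."""
    u = (t - b) / a
    return (1.0 - u**2) * np.exp(-((t - b) / (2.0 * a))**2)

def cwt_peaks(waveform, a=1.0, window_ns=15, dt=1.0):
    """Single-scale CWT of a digitized waveform followed by a local-maximum
    filter with a 15 ns window (Section 2.1)."""
    half = int(np.ceil(4 * a / dt))
    kernel = log_wavelet(np.arange(-half, half + 1) * dt, a=a)
    coeffs = np.convolve(waveform, kernel, mode="same")

    peaks = []
    w = int(round(window_ns / dt))
    for i in range(len(coeffs)):
        lo, hi = max(0, i - w // 2), min(len(coeffs), i + w // 2 + 1)
        # A sample is a peak if it is the maximum of its window and positive.
        if coeffs[i] == coeffs[lo:hi].max() and coeffs[i] > 0:
            peaks.append(i)
    return np.asarray(peaks), coeffs
```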

2.2. Gaussian Decomposition Method

Gaussian decomposition is a popular approach for FWL processing as it can simultaneously provide estimates of peak locations and widths. Gaussian decomposition is implemented using Expectation-Maximization (EM) in this study. EM is an iterative method, normally used in signal and image processing, to obtain maximum likelihood estimates of the parameters of a statistical model. As the name indicates, there is an expectation (E) step and a maximization (M) step, and EM iterates between the E step and the M step until a convergence criterion is satisfied [28,32].
A LiDAR waveform return can be represented as the sum of multiple Gaussian distributions [17], and mathematically this can be expressed as:
f(t) \sim \sum_{j=1}^{n} N(\mu_j, \sigma_j)    (2)
Here, f(t) is the full waveform, modeled as the sum of n Gaussian components, and t is time; N(μ_j, σ_j) represents a Gaussian component with mean μ_j and standard deviation σ_j. The number of peaks and the initial peak locations are needed as initial values for the EM algorithm described by the following equations:
Q_{ij} = \frac{p_j f_j(i)}{\sum_{j=1}^{k} p_j f_j(i)}    (3)

p_j = \frac{\sum_{i=1}^{S} N_i Q_{ij}}{\sum_{i=1}^{S} N_i}    (4)

\mu_j = \frac{\sum_{i=1}^{S} N_i Q_{ij}\, i}{p_j \sum_{i=1}^{S} N_i}    (5)

\sigma_j = \sqrt{\frac{\sum_{i=1}^{S} N_i Q_{ij} (i - \mu_j)^2}{p_j \sum_{i=1}^{S} N_i}}    (6)
Here, p_j is the relative weight of the component distribution f_j;
  • Q_{ij} is the probability that sample i belongs to component j;
  • N_i is the amplitude of sample i;
  • S is the number of samples in the waveform;
  • μ_j is the mean (peak location) of component j; and
  • σ_j is the standard deviation of component j, proportional to the pulse width or FWHM.
As EM is a local maximum searching method, peaks with spurious μ_j or σ_j are removed to ensure a reasonable result. Extremely weak returns, for example peaks with p_j less than 0.05, are also removed to guarantee algorithm convergence. From Equations (2)–(6), it is evident that the EM procedure is actually a Gaussian decomposition because its underlying model is a Gaussian mixture model. To assess the performance of Gaussian decomposition with different seed peak locations, both CWT detected peak locations and peaks acquired from second derivative analysis [18] are used to initialize the EM estimation.
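The amplitude-weighted EM updates of Equations (3)–(6) can be written compactly with NumPy. The sketch below is a simplified illustration (fixed iteration count, an assumed initial width of 4 ns, and our own variable names) rather than the exact implementation used in the paper.

```python
import numpy as np

def em_gaussian_decomposition(waveform, seed_peaks, n_iter=50, min_weight=0.05):
    """Amplitude-weighted EM fit of a Gaussian mixture to a waveform,
    following Equations (3)-(6). `seed_peaks` are initial peak locations
    (sample indices), e.g. from the CWT or second-derivative analysis."""
    t = np.arange(len(waveform), dtype=float)      # sample index i
    N = np.asarray(waveform, dtype=float)          # amplitude N_i
    mu = np.asarray(seed_peaks, dtype=float)       # component means mu_j
    sigma = np.full(mu.shape, 4.0)                 # initial widths in ns (assumed)
    p = np.full(mu.shape, 1.0 / len(mu))           # component weights p_j

    for _ in range(n_iter):
        # E step: responsibility Q_ij of component j for sample i (Equation (3))
        f = np.exp(-0.5 * ((t[:, None] - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
        Q = p * f
        Q /= Q.sum(axis=1, keepdims=True) + 1e-12
        # M step: amplitude-weighted updates of p_j, mu_j, sigma_j (Equations (4)-(6))
        W = N[:, None] * Q
        p = W.sum(axis=0) / N.sum()
        mu = (W * t[:, None]).sum(axis=0) / (p * N.sum() + 1e-12)
        sigma = np.sqrt((W * (t[:, None] - mu) ** 2).sum(axis=0) / (p * N.sum() + 1e-12))
        sigma = np.maximum(sigma, 0.5)             # guard against component collapse
    keep = p > min_weight                          # drop extremely weak returns
    return mu[keep], sigma[keep], p[keep]
```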

2.3. Empirical System Response Waveform Decomposition

An alternative to the Gaussian model for waveform decomposition is an empirical system response (ESR) model that represents the convolution of the emitted pulse shape and the sensor response. Decomposition with an ESR model has the potential to reduce decomposition residuals and improve ranging precision compared to Gaussian decomposition [20]. The method described in [20] requires an ESR model spanning the dynamic range of a terrestrial LiDAR sensor to accommodate nonlinear response characteristics. However, for a FWL sensor with a predominantly linear response, which includes the airborne systems used in this study, a simplified ESR waveform decomposition method can be derived.
In lieu of an ESR model spanning the sensor dynamic range, a single empirical response model can be approximated by averaging waveforms from a single, diffuse, extended target illuminated at normal incidence. Using standard nonlinear least squares, the model is iteratively shifted (μ parameter), scaled in amplitude (A parameter), and scaled in width (w parameter) until the parameter corrections are negligible, i.e., the model is fit to the observed waveform in an optimal sense. An un-weighted Gauss-Newton least squares expression can be written in matrix form as [33]:
J X = K + V    (7)

X = (J^{T} J)^{-1} J^{T} K    (8)
where J is the m × 3 matrix (m = number of waveform data points) of partial derivatives of the ESR model with respect to the unknown μ, A, and w parameters evaluated at each waveform data point; K is the column vector of differences between the observed waveform amplitudes and the amplitudes computed from the ESR model; V is the column vector of residuals; and X is the column vector of ESR model parameter corrections. The partial derivatives required to populate the J matrix are numerically computed from the ESR model using the current parameter values at each iteration in the adjustment. Figure 3 illustrates the numeric partial derivatives. As with Gaussian decomposition, the least squares algorithm can be extended to accommodate a superposition of multiple ESR models when overlapping return echoes are detected in the observed waveform.
Figure 3. Graphical representation of numeric partial derivatives necessary for the empirical system response least squares waveform decomposition algorithm.
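A Gauss-Newton fit of Equations (7) and (8) with numeric partial derivatives can be sketched as follows for a single echo. This is illustrative only: the names, the unit-peak template assumption, and the interpolation-based template evaluation are ours, not the authors' code.

```python
import numpy as np

def fit_esr(waveform, esr_template, mu0, A0=1.0, w0=1.0, n_iter=20, eps=1e-3):
    """Gauss-Newton fit of a single empirical system response (ESR) template
    to an observed waveform. The template is shifted (mu), scaled in amplitude
    (A) and in width (w); partials are computed numerically, as in Figure 3.
    `esr_template` is a 1-D array sampled at the waveform rate and normalized
    to unit peak amplitude (an assumption made here)."""
    t = np.arange(len(waveform), dtype=float)
    tt = np.arange(len(esr_template), dtype=float)
    centre = float(np.argmax(esr_template))

    def model(mu, A, w):
        # Evaluate the shifted, width-scaled, amplitude-scaled template.
        return A * np.interp((t - mu) / w + centre, tt, esr_template,
                             left=0.0, right=0.0)

    x = np.array([mu0, A0, w0], dtype=float)
    for _ in range(n_iter):
        f0 = model(*x)
        K = waveform - f0                          # misfit vector K
        J = np.empty((len(t), 3))
        for k in range(3):                         # numeric partials d(model)/d(param)
            dx = np.zeros(3); dx[k] = eps
            J[:, k] = (model(*(x + dx)) - f0) / eps
        dX, *_ = np.linalg.lstsq(J, K, rcond=None) # solves (J^T J) dX = J^T K
        x += dX
        if np.max(np.abs(dX)) < 1e-4:              # stop when corrections are negligible
            break
    return x  # fitted [mu, A, w]
```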

2.4. Water Depth Generation

As the field measurements used in this paper are water depth records collected with an Acoustic Doppler Current Profiler (ADCP), we need to infer water depths from the 3D LiDAR points as a basis for comparison. We also need to segment the raw point clouds from each of the target extraction techniques to separate water column and bottom returns and properly identify the benthic layer. The basic strategy for benthic classification is to first classify the last of multiple returns as initial candidate benthic returns, and then use a region growing method with the initial benthic points and regionally lowest elevation points to refine the total set of benthic surface points using the TerraScan software package. The classification algorithm is similar to that used to determine ground returns in topographic LiDAR surveys and is based on the methodology presented in [34]. It should be noted that each of the green LiDAR returns from below the water surface has been corrected for both refraction of the pulse at the air/water interface and for the change in the speed of light within water [22]. To define the water boundary, a fluvial river-line was acquired from aerial orthoimages by visually identifying and digitizing the land/water border.
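The refraction and speed-of-light correction mentioned above can be illustrated with a small sketch. This is our own illustration, not the authors' code; it assumes a refractive index of about 1.33 for fresh water and a known laser incidence angle at the water surface.

```python
import numpy as np

N_WATER = 1.33  # refractive index of fresh water (assumed value)

def correct_subsurface_return(incidence_deg, apparent_extra_range):
    """Correct a green-laser return below the water surface for refraction at
    the air/water interface (Snell's law) and for the slower speed of light in
    water. `apparent_extra_range` is the extra range measured beyond the water
    surface, computed as if the pulse had travelled in air. Returns the
    corrected vertical depth increment below the surface."""
    theta_air = np.radians(incidence_deg)
    theta_water = np.arcsin(np.sin(theta_air) / N_WATER)   # refracted angle
    true_path_in_water = apparent_extra_range / N_WATER    # slower light speed
    return true_path_in_water * np.cos(theta_water)        # vertical component
```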
To convert the benthic layer points extracted from the point cloud to water depths, a water surface is required so that the benthic layer elevation can be subtracted from the water surface elevation for each benthic point. To highlight the differences in depth determination between single-band and multi-band bathymetric LiDAR sensors, we examine two realizations of the water surface for each river: the first water surface is extracted from alternative sources (the NIR water surface for the Snake River, the RTK water surface for the Blue/Colorado River) and the second water surface is extracted from each of the green LiDAR point clouds alone. For the green LiDAR point clouds, the water surface is defined as the remaining LiDAR returns within the boundary of the water body after benthic classification. The NIR water surface was acquired by extracting all NIR returns within the water boundary, as NIR LiDAR can theoretically only be retro-reflected from the water surface [35].
Figure 4. Definition of point to plane distance. For each target point, the neighbor points are those within the cylinder with a radius of R . A fitted plane is constructed by least squares estimation, and the distance of the candidate point to the fitted plane is defined as the point to plane distance.
Point clouds created by airborne LiDAR are generally irregularly distributed, and therefore conventional image processing techniques that assume raster input are not suitable for subsequent analysis. As an alternative, we utilize a point to plane distance to compute the distance between an individual LiDAR return and its neighboring points [36]. Figure 4 shows the schematic steps to compute the point to plane distance. For each candidate point, neighboring points are selected within a cylinder with a specified search radius R, and a plane is fitted to them by least squares estimation. The distance from the candidate point to the fitted plane is defined as the point to plane distance d. The point to plane distance is used in this study to calculate the water depth given a cloud of water surface points (reference points) and benthic points (target points).
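A minimal sketch of the point to plane distance follows (our own illustration; the plane is parameterized as z = ax + by + c, which assumes the fitted surface is not vertical). Applied with a benthic point as the target and water surface returns as the reference cloud, the returned distance is the water depth used in the comparisons below.

```python
import numpy as np

def point_to_plane_distance(target, reference_points, radius=10.0):
    """Distance from `target` (x, y, z) to a plane fitted by least squares to
    the reference points falling within a horizontal search radius R (Figure 4).
    Returns None if fewer than three neighbours are found."""
    ref = np.asarray(reference_points, dtype=float)
    d_xy = np.hypot(ref[:, 0] - target[0], ref[:, 1] - target[1])
    nbr = ref[d_xy <= radius]
    if len(nbr) < 3:
        return None
    # Fit z = a*x + b*y + c to the neighbours by least squares.
    A = np.c_[nbr[:, 0], nbr[:, 1], np.ones(len(nbr))]
    (a, b, c), *_ = np.linalg.lstsq(A, nbr[:, 2], rcond=None)
    # Signed distance from the target point to the plane a*x + b*y - z + c = 0.
    return (a * target[0] + b * target[1] + c - target[2]) / np.sqrt(a*a + b*b + 1.0)
```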

3. Description of Datasets

3.1. Airborne Bathymetric LiDAR Datasets

To assess the performance of single band full waveform bathymetric LiDAR and the processing algorithms described in this paper, two datasets representing different river conditions are investigated: the Snake River in Wyoming's Grand Teton National Park and the confluence of the Blue and Colorado Rivers in north-central Colorado. Both the Snake and Blue/Colorado Rivers originate in the Rocky Mountains and their flow conditions are dominated by the annual snowmelt hydrograph; all remote sensing data collection occurred during low flow conditions in late summer. The Snake River is predominantly clear water after snowmelt runoff recedes. In this reach, the river bed mainly consists of gravel and cobble (see Figure 5a) coated with varying degrees of periphyton and bright green filamentous algae. The varying water depths present in the study area of the Snake River are well suited for a performance assessment of bathymetric LiDAR. The Colorado River originates in Rocky Mountain National Park, and the Blue River enters the Colorado River from the south near the town of Kremmling, CO. The Blue/Colorado River site has lower gradients than the Snake River, and bed materials consist mainly of sand and fine sediment. This site has variable water conditions because the Colorado River is also joined by a smaller tributary called Muddy Creek, which, as the name implies, was turbid due to rainfall and surface erosion a few days before the LiDAR data collection. The Blue River also contains dense aquatic vegetation extending into the water column from the bed (see Figure 5b). These varying water conditions present an opportunity to assess how water clarity influences bathymetric LiDAR performance.
Figure 5. Study areas. (a) Overview of the Snake River with ADCP profile locations highlighted and colored by depth. (b) Overview of the Blue/Colorado River study area; the circled area is the turbid plume from Muddy Creek. The Colorado River flows from east to west in the image and the Blue River enters the channel from the south. The ADCP profile locations are highlighted and colored by depth.
The airborne datasets were collected by the National Center for Airborne Laser Mapping (NCALM) with Optech Aquarius and Gemini systems. The Aquarius sensor is a single band LiDAR based on a Q-switched, frequency-doubled Nd:YAG laser with a resultant wavelength of 532 nm, pulse repetition frequencies (PRFs) of 33, 50, and 70 kHz, a pulse energy of 30 μJ (at 70 kHz), and a beam divergence of 1 mrad. The scanner is a conventional side-to-side oscillating mirror (saw-tooth pattern) with an adjustable field of view of up to ±25° and a maximum mirror frequency of 70 Hz. The return signal is both analyzed in real time by a constant fraction discriminator (CFD) and stored using a waveform recorder with 12 bit amplitude quantization and a sampling speed of 1 GHz for post-mission processing. The Gemini system is similar to the Aquarius system, with a Nd:YAG laser at 1064 nm, a smaller, adjustable divergence angle, and a PRF of up to 167 kHz. Table 1 shows the principal data acquisition parameters for both project sites.
Table 1. Airborne LiDAR acquisition parameters.
Parameter | Snake River (Aquarius, 532 nm) | Snake River (Gemini, 1064 nm) | Blue/Colorado River (Aquarius, 532 nm)
Laser wavelength (nm) | 532 | 1064 | 532
Pulse width (FWHM, ns) | 8.3 | 12 | 8.3
Digitization frequency (GHz) | 1 | N/A | 1
Resolution of full waveform (bits) | 12 | N/A | 12
Field of view (FOV) | 40° | 46° | 40°
Beam divergence (mrad) | 1 | 0.25/0.8 | 1
Pulse rate (kHz) | 33 | 100 | 33
Date of survey | August 2012 | August 2012 | September 2012
Flight height (AGL, m) | 510 | 580 | 580
Point density (pts/m²) | 4.2 | 6.3 | 4.0
It should be noted in Table 1 that there is no NIR data listed for the Blue/Colorado River. NIR data were collected for this study, but unfortunately were acquired at a high flight elevation (2600 m AGL); the laser pulses were mostly absorbed at the water surface, effectively no water surface returns were found, and therefore the NIR data for the Blue/Colorado River were not used. It should also be noted that flights with the Gemini and Aquarius systems cannot be performed at the same time. The Aquarius data were collected three days after the Gemini data collection for the Snake River, but negligible water surface elevation change was found for the Snake River, as verified using USGS river gauge station data. Both the ADCP field data and the Aquarius data were collected on the same day for the Blue/Colorado River, and thus the water surface elevation change is negligible.

3.2. Acoustic Doppler Current Profiler Data

To assess the ability of full waveform bathymetric LiDAR to measure river morphology, ground reference datasets were collected with a SonTek RiverSurveyor S5 Acoustic Doppler Current Profiler (ADCP) deployed from a kayak. SonTek reports a depth resolution of 0.001 m and an accuracy of 1% over the range of 0.2–15 m. The ADCP data are our primary ground reference data, as the accuracy should be better than 3 cm for these two projects because most of the water was shallower than 3 m. Real-Time Kinematic (RTK) GPS observations were collected concurrently with the ADCP observations to provide measurements of water surface elevation. The vertical datum difference between the LiDAR and RTK GPS was corrected by using commonly observed ground control points in a parking lot and then applying the offset to the LiDAR observations. The ADCP depth observation locations for both projects are shown in Figure 5. The distributions of ADCP water depths for the Snake River (Figure 6a) and Blue/Colorado River (Figure 6b) show that most water depths for the Snake River are less than 2 m, while most water depths for the Blue/Colorado River are less than 1.5 m.
Figure 6. Field measured Acoustic Doppler Current Profiler (ADCP) water depth distribution for (a) Snake River (6968 measurements) and (b) Blue/Colorado River (16,681 measurements). Most water depths for the Snake River are less than 2 m while most water depths for the Blue/Colorado River are less than 1.5 m.

3.3. Water Turbidity Data

In this study, a WET Labs EcoTriplet was deployed from a kayak on the Blue/Colorado River to measure the portion of the total back-scattering associated with particulates (i.e., suspended sediment and organic material) in the water column. Turbidity, a common metric of water clarity, is derived from the measured backscatter. Figure 7 shows the spatial distribution of turbidity measurements across areas with distinct levels of water clarity at the Blue/Colorado River confluence site. The northern part of the river is distinctly more turbid than the southern portion of the river. Note that turbidity measurements and ADCP measurements were collected on separate deployments of the kayak. Figure 8 shows the bimodal distribution of the turbidity measurements.
Figure 7. Turbidity measurements for the Blue/Colorado River (4663 measurements) with Muddy Creek entering from the north.
Figure 8. Distribution of turbidity data for the Blue/Colorado River confluence. An obvious bimodal distribution is displayed; more turbid water is present in the northern part of the river due to Muddy Creek (Figure 7), and clearer water is present in the southern portion.

4. Experimental Results

4.1. Experiment I: Snake River Bathymetry Study

4.1.1. Distribution of Number of Full Waveform Returns

Four different full waveform processing algorithms have been applied in this study. The full waveform data for the Snake River were first preprocessed to reduce the computing load by thresholding (Section 2). To analyze the effect of the initial peak location estimates on nonlinear least squares Gaussian decomposition, peak locations detected with the second derivative and peak locations detected with the CWT were both used as initial approximations for Gaussian decomposition. The resulting point clouds are referred to as s_G (Gaussian decomposition initiated with second derivatives) and c_G (Gaussian decomposition initiated with CWT), respectively. The peak locations detected by the CWT were also used as initial seed values for the ESR pulse fitting. A point cloud was also generated by using just the peak locations derived from the CWT without further Gaussian or ESR refinement. The four point clouds from these full waveform fitting methods are then used for further analysis.
For the Snake River, the CWT and s_G algorithms generated 24.32% and 43.35% more points, respectively, compared to the discrete points provided by the manufacturer software. The distributions of the number of returns for the discrete points, CWT, and s_G are shown in Figure 9. This suggests that s_G performs better than the CWT for peak detection in the fluvial environment of the Snake River. More importantly, both the CWT and s_G methods are markedly better at resolving multiple returns; almost all discrete points are single return points. More return points directly benefit bathymetric mapping because they provide better coverage and higher density data. In addition, multiple returns are critical for shallow water bathymetric mapping, as the surface and benthic returns are then more likely to both be represented. It should be noted that the ESR and c_G methods are not shown in Figure 9 because they were both seeded using the CWT peak locations and therefore theoretically have the same statistics as the CWT results.
Figure 9. Distribution of the number of full waveform returns using different peak detection methods for the Snake River. CWT and s_G methods are better able to detect multiple returns while almost all discrete points are single return.

4.1.2. Water Depth Analysis

To avoid local anomalies (e.g., floating wood, submerged objects, facets of waves), for each benthic point the point to plane distance (see Section 2.4) is calculated as the water depth with a search radius of 10 m for both the NIR and green water surfaces. To evaluate full waveform bathymetric LiDAR performance, the retrieved water depths were compared to field measured ADCP depths. Figure 10 shows all possible combinations of water depths compared to ADCP water depths, and Table 2 shows the statistical comparison results for each water depth estimate.
Figure 10. Comparison of LiDAR depth to ADCP depth for the Snake River (top row: LiDAR depths generated with NIR estimated water surface, bottom row: LiDAR depths with green return water surface). Black lines in the figures are the 1:1 line while the red lines are regression lines, and the green shaded areas highlight extremely shallow water (<0.8 m), (a) water depths from discrete points; (b) water depths from s_G points; (c) water depths from c_G points; (d) water depths from CWT points; and (e) water depths from ESR points.
Table 2. Comparison of LiDAR retrieved water depths to field measured ADCP water depths for the Snake River. Results in meters.
Point Type | Water Surface | Mean(Zf−Zr) (m) | Std.(Zf−Zr) (m) | Slope | Intercept (m) | R2
Discrete | NIR | −0.02 | 0.17 | 1.08 | −0.06 | 0.87
Discrete | Green | 0.13 | 0.20 | 1.16 | −0.29 | 0.87
s_G | NIR | −0.02 | 0.16 | 0.93 | 0.09 | 0.87
s_G | Green | 0.18 | 0.18 | 0.79 | 0.04 | 0.81
c_G | NIR | 0.13 | 0.14 | 0.91 | −0.04 | 0.90
c_G | Green | 0.32 | 0.17 | 0.75 | −0.06 | 0.83
CWT | NIR | −0.11 | 0.15 | 1.12 | 0.00 | 0.91
CWT | Green | 0.06 | 0.14 | 1.08 | −0.14 | 0.92
ESR | NIR | −0.13 | 0.13 | 1.08 | 0.05 | 0.92
ESR | Green | 0.17 | 0.14 | 0.95 | −0.12 | 0.88
* Zf is the field measurement; Zr is the LiDAR derived water depth.
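For reference, the comparison statistics reported in Tables 2 and 4 (mean and Std. of Zf − Zr, regression slope, intercept, and R2) can be computed as in the following sketch. We assume here that the regression is of the LiDAR depth against the ADCP depth; the paper does not state the convention explicitly, and the function name is our own.

```python
import numpy as np

def depth_comparison_stats(z_field, z_lidar):
    """Summary statistics of the kind reported in Tables 2 and 4: mean and
    standard deviation of (Zf - Zr), plus the slope, intercept, and R^2 of a
    linear regression of LiDAR depth on field (ADCP) depth."""
    z_field = np.asarray(z_field, dtype=float)
    z_lidar = np.asarray(z_lidar, dtype=float)
    diff = z_field - z_lidar
    slope, intercept = np.polyfit(z_field, z_lidar, 1)
    r2 = np.corrcoef(z_field, z_lidar)[0, 1] ** 2
    return {"mean": diff.mean(), "std": diff.std(ddof=1),
            "slope": slope, "intercept": intercept, "r2": r2}
```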
With the NIR water surface, ESR performs the best with the lowest standard deviation (Std.) of 13 cm and the highest R2 of 0.92; water depths retrieved from discrete points have a slightly higher Std. of 17 cm and a lower R2 of 0.87. With a green water surface, the CWT performs the best with a Std. of 14 cm and the highest R2 of 0.92, while the s_G and c_G water depths have the worst performance with Std. values of 18 cm and 17 cm and R2 values of 0.81 and 0.83, respectively. The mean bias of water depth using a NIR water surface is lower than the mean bias with a green water surface except for the CWT derived points; this is likely caused by water volume scattering and the overlap of benthic and surface returns in shallow water. In addition, the R2 values for water depths retrieved with a NIR water surface are equal to or higher than those for water depths retrieved with a green water surface, with the exception of the CWT points (0.87 vs. 0.87 for discrete points, 0.87 vs. 0.81 for s_G points, 0.90 vs. 0.83 for c_G points, 0.92 vs. 0.88 for ESR). These differences indicate that NIR returns give a more accurate water surface than green returns. The CWT methodology is the lone outlier and shows the opposite behavior, as water depths retrieved with a green water surface are better than water depths retrieved with a NIR water surface (−11 cm vs. 6 cm for mean depth error, 15 cm vs. 14 cm for Std., and 0.91 vs. 0.92 for R2, respectively). This suggests that the CWT is more effective than the other methods for green LiDAR waveform processing as it provides a better estimate of the water surface.
The water depths retrieved from c_G points are slightly better than those retrieved from s_G points (with the NIR water surface: 14 cm vs. 16 cm for Std., 0.90 vs. 0.87 for R2; with the green water surface: 17 cm vs. 18 cm for Std., 0.83 vs. 0.81 for R2). This suggests that the initial peak location estimates have an effect on the final least squares estimates, and that the CWT provides marginally better seed locations.
The green shaded areas (depths < 0.8 m) in Figure 10 indicate that all shallow water depths retrieved from LiDAR observations have been underestimated. Theoretically, LiDAR can underestimate water depth because of overlap between the surface return and benthic return for extremely shallow water. Also, any suspended particulate matter in the water body, or a rough benthic layer can stretch the incident laser pulse. For very shallow water (green shaded area), the final laser return will be a superposition waveform of water surface backscatter, water volume backscatter and benthic layer backscatter.
Because Table 2 shows significant differences between water depths derived with either a NIR or a green water surface definition, a further inspection of these water surface definitions is warranted. The NIR water surface shows the best overall internal consistency, with a Std. of 11.76 cm for planar fits of points within a 2 m search radius. Therefore, the NIR water surface is used as a common basis for comparison for all the green water surfaces by calculating the point to plane distance, with a 2 m search radius, from the green LiDAR points to the NIR surface plane. As Table 3 shows, the different green water surfaces have significantly different mean vertical errors, with ESR having the largest at 45 cm and c_G the smallest at 17 cm. The discrete water surface has only a 10 cm Std., indicating that the discrete point cloud estimates the water surface well (at least for the Snake River conditions). However, the overall performance (i.e., determining water depths) of the discrete returns is not as good as the CWT, which has a Std. of 24 cm for the water surface; this implies that a CFD is unable to properly estimate benthic returns in the presence of water column backscatter. The c_G method performs better than s_G for water surface detection, with 17 cm vs. 34 cm for mean vertical error and 28 cm vs. 31 cm for Std., respectively. Again, this is further evidence that an accurate initial peak estimate is necessary for nonlinear Gaussian decomposition.
Table 3. Statistical mean vertical error and Std. for different green water surfaces. NIR water surface has an 11.76 cm Std.
Water Surface | Discrete | s_G | c_G | CWT | ESR
Mean (m) | 0.18 | 0.34 | 0.17 | 0.29 | 0.45
Std. (m) | 0.10 | 0.31 | 0.28 | 0.24 | 0.33

4.2. Experiment II: Blue/Colorado River Study

4.2.1. Distribution of Number of Full Waveform Returns

To further assess full waveform bathymetric LiDAR performance, we performed another study on the Blue/Colorado River, which has significantly more turbid water than the Snake River. Similar to the Snake River analysis, all four full waveform processing algorithms were applied to extract individual point clouds. Only 4.62% more points were detected with the CWT than were present in the discrete returns, and the s_G method actually gave 2.07% fewer points than the discrete returns. The distribution of returns for this fluvial environment is shown in Figure 11. Note that the CWT extracted significantly more multiple returns, while almost all discrete returns are single returns. Again, more multiple returns in general mean better separation between the water surface and the benthic layer. The same region growing classification methodology described for the Snake River was also applied to the Blue/Colorado River point clouds.
Figure 11. Distribution of the number of full waveform returns for different peak detection methods on the Blue/Colorado River. CWT and s_G methods are better able to detect multiple returns while almost all discrete points are single return.

4.2.2. Water Depth Analysis

After extracting benthic returns from the full waveform and discrete point clouds, a water surface was required to retrieve water depths for comparison with the ADCP measurements. In contrast to the Snake River, no effective NIR water surface was acquired during the airborne LiDAR data collection because of the high flight altitude (2.6 km above ground) of the NIR collection (see Figure 3b in [14]). Therefore, instead of using a NIR water surface we used a field measured RTK water surface. The RTK water surface locations were recorded during the ADCP water depth collection, as shown in Figure 5b. In addition, the water surface returns from the discrete bathymetric points proved to have extremely low density, and therefore no water surface was estimated from the discrete returns. For the Blue/Colorado River, only four sets of water depths were therefore compared with the green water surface. For each benthic point, the point to plane distance is calculated with a search radius of 10 m for both the RTK water surface and the green water surface. The comparisons between the LiDAR and ADCP depths are given in Figure 12 and Table 4.
Figure 12. Comparison of LiDAR depth to ADCP depth for the Blue/Colorado River ((top) LiDAR depths generated with RTK estimated water surface, (bottom) LiDAR depths with green return water surface). Black lines in the figures are the 1:1 lines while the red lines are regression lines, (a) water depths from discrete points; (b) water depths from s_G points; (c) water depths from c_G points; (d) water depths from CWT points; and (e) water depths from ESR points.
Table 4. Comparison of LiDAR retrieved water depths to field measured ADCP water depths for the Blue/Colorado River. Results in meters.
Point Type | Water Surface | Mean(Zf−Zr) (m) | Std.(Zf−Zr) (m) | Slope | Intercept (m) | R2
Discrete | RTK | −0.17 | 0.29 | 0.67 | 0.42 | 0.44
Discrete | Green | N/A | N/A | N/A | N/A | N/A
s_G | RTK | −0.03 | 0.27 | 0.54 | 0.37 | 0.41
s_G | Green | 0.60 | 0.27 | 0.45 | −0.12 | 0.29
c_G | RTK | −0.04 | 0.25 | 0.70 | 0.26 | 0.53
c_G | Green | 0.55 | 0.24 | 0.63 | −0.21 | 0.43
CWT | RTK | −0.16 | 0.27 | 0.85 | 0.28 | 0.57
CWT | Green | 0.35 | 0.24 | 0.84 | −0.22 | 0.58
ESR | RTK | −0.10 | 0.28 | 0.39 | 0.49 | 0.25
ESR | Green | 0.35 | 0.22 | 0.48 | −0.01 | 0.39
* Zf is the field measurement; Zr is the LiDAR derived water depth.
The performance of all waveform algorithms degraded in the turbid water of the Blue/Colorado River. The mean biases for s_G and c_G water depths with a green water surface are significantly higher than those for the Snake River, with values of 60 cm and 55 cm, respectively. The Std. for all water depths retrieved with a green water surface is slightly lower than the Std. of water depths with the RTK water surface, but with significantly higher mean biases. The highest R2 of 0.58 was achieved by CWT water depths with a green water surface, while the CWT also gave the highest R2 of 0.57 with the RTK surface. The more consistent results from the purely peak-finding CWT algorithm suggest that the water turbidity substantially distorts the return waveform shape, which causes significant problems for algorithms such as Gaussian decomposition or ESR that make assumptions about the shape of the return energy profile. ESR performed relatively poorly in the Blue/Colorado River, with an R2 of only 0.25 for water depths with an RTK water surface and 0.39 for water depths with a green water surface.
The overall Std. for the c_G method is slightly better than for the s_G method (with the RTK water surface: 25 cm vs. 27 cm; with the green water surface: 24 cm vs. 27 cm), and the c_G method has a higher R2 value (with the RTK water surface: 0.53 vs. 0.41; with the green water surface: 0.43 vs. 0.29). This difference reinforces that accurate initial peak estimates are critical for nonlinear least squares Gaussian decomposition.
The differences in depth estimation between an RTK water surface and a green laser water surface necessitate a further assessment of the water surfaces used to infer water depths. Given the water turbidity, we would expect the RTK water surface to perform better. Therefore, we compare each green LiDAR water surface using the RTK surface as a common reference. For each green water surface point, the RTK points within 10 m are used to form a water surface plane, and the distance of each green water surface point to this plane is defined as the planar uncertainty. Table 5 shows that all green water surfaces for the Blue/Colorado River have a high mean error (s_G: 82 cm, c_G: 79 cm, CWT: 72 cm, ESR: 63 cm). The Std. values (s_G: 16 cm, c_G: 13 cm, CWT: 17 cm, ESR: 18 cm) of all water surfaces are marginally better than those for the Snake River because the RTK water surface is less noisy than the NIR water surface used for comparison on the Snake River (the NIR surface has an 11.76 cm Std., the RTK surface a 4.11 cm Std.). The significant mean vertical biases highlight the overall poorer performance of bathymetric LiDAR for the Blue/Colorado River. Comparing with the results in Table 4, water depths calculated using an RTK water surface have a smaller mean bias than those using green water surfaces. This suggests that the increased water volume scattering caused by the turbid water has skewed the mixture of water surface and volume scattering toward the bottom, causing a larger mean error for the green water surfaces. The relatively poor performance of green water surface extraction is troubling because it suggests that an independent accurate water surface, i.e., a NIR water surface, is a necessity for turbid water depth determination.
Table 5. Statistical mean vertical error and Std. for different green water surfaces. RTK water surface has a 4.11 cm Std.
Water Surface | s_G | c_G | CWT | ESR
Mean (m) | 0.82 | 0.79 | 0.72 | 0.63
Std. (m) | 0.16 | 0.13 | 0.17 | 0.18

4.2.3. Water Surface Detection Performance Analysis

In order to better study the impacts of water turbidity, we selected a few representative waveforms with CWT detected peaks and the actual water surface locations calculated from RTK surveyed points. Figure 13 displays these individual bathymetric waveforms under different water conditions, varying from shallower to deeper water and from lower to higher turbidity. A single peak can be detected for shallow water with lower turbidity (Figure 13a,b), shallow water with higher turbidity (Figure 13f–h), and deeper water with higher turbidity (Figure 13i,j). The CWT detected peaks are closer to the actual water surface for more turbid water (Figure 13f–h) and move away from the actual water surface for lower turbidity water (Figure 13a,b). The different behavior of full waveform detection in less turbid and more turbid water suggests that the significant amount of water volume scattering in more turbid water skews the bathymetric returns toward the actual water surface.
Figure 13. Individual waveforms with CWT detected peaks (red lines) and the actual water surface location (from RTK, black line) under different water conditions with depth (D, unit: m) and turbidity (T, unit: NTU). (a,b) shallow water with lower turbidity, (c–e) deeper water with lower turbidity, (f–h) shallow water with higher turbidity, and (i,j) deeper water with higher turbidity.
However, a further analysis of Figure 13 shows the actual water surface (as measured by RTK) is located at the very beginning of the waveform. Therefore, it would appear that a simple leading edge detection method would be able to accurately estimate the actual water surface. We have set a leading edge detector with an amplitude threshold of 210 to define the water surface. Figure 14 shows the leading edge detected water surface as well as the CWT detected water surface. A significant vertical bias is present for the CWT detected water surface in profile A and profile B. This visual vertical bias confirms the significant increase of water surface error in Table 5. The leading edge detected water surface matches the RTK water surface very well, confirming that leading edge detection is effective for estimating the water surface in the Blue/Colorado River. In order to generalize the leading edge detection, the same method was also applied to the Snake River to independently assess performance. Figure 15 shows two profiles of the Snake River with leading edge water surface detection. A significant vertical bias is present in the Snake River leading edge water surface; the CWT detected water surface agrees much better with the NIR detected water surface. This result confirms that the biases in the waveform water surfaces for the Blue/Colorado River are caused by the increased water turbidity, and not by the waveform processing methodology. The different performance of leading edge water surface detection and the CWT water surface indicates that there may be no single solution that can be applied to all rivers to accurately estimate the water surface for single band LiDAR bathymetry.
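A leading edge detector of the kind used here can be sketched in a few lines. This is illustrative only; the sub-sample interpolation of the threshold crossing is our own addition and was not necessarily part of the processing described above.

```python
import numpy as np

def leading_edge_surface(waveform, threshold=210, dt=1.0):
    """Return the time (ns) of the first sample whose amplitude exceeds the
    leading-edge threshold (210 digitizer counts in this study), taken here
    as the water surface position. Linear interpolation between the two
    samples bracketing the crossing refines the estimate below one sample."""
    waveform = np.asarray(waveform, dtype=float)
    above = np.flatnonzero(waveform > threshold)
    if above.size == 0:
        return None                      # no surface return detected
    i = int(above[0])
    if i == 0:
        return 0.0
    # Interpolate the threshold crossing between samples i-1 and i.
    w0, w1 = waveform[i - 1], waveform[i]
    frac = (threshold - w0) / (w1 - w0)
    return (i - 1 + frac) * dt
```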
Figure 14. Profiles for leading edge detected water surface on the Blue/Colorado River. The CWT detected waveform shows a clear vertical bias from the RTK water surface while the leading edge detected water surface is much closer to the RTK water surface. Coordinates are in UTM 13N (NAD83).
Figure 15. Profiles for leading edge detected water surface on the Rusty Bend of Snake River. The water surface detected with CWT matches the NIR water surface well. Coordinates are in UTM 12N (NAD83).
Table 6 lists the statistical results for leading edge water surface detection for both the Blue/Colorado and Snake Rivers. For each leading edge detected point, the reference points (RTK points for the Blue/Colorado River, NIR points for the Snake River) were used to fit a plane by least squares estimation, and the distance from the leading edge point to this water surface plane is defined as the error. The leading edge detection is poorer than the waveform derived surfaces for the Snake River, as the water volume scattering at low turbidity is not significant. However, if the water becomes more turbid, then leading edge detection performs better than the peak detection or waveform fitting methods.
Table 6. Statistical mean vertical error and Std. for leading edge detected water surfaces. RTK water surface and NIR water surface are used as reference for the Blue/Colorado and Snake Rivers, respectively.
Statistic | Blue/Colorado River | Snake River
Mean (m) | −0.01 | −0.60
Std. (m) | 0.19 | 0.27

4.3. Best Performance for Single Band Bathymetric LiDAR

If we specify only single band (green) LiDAR observations, then Table 3 shows that the most consistent water surface estimate for the Snake River is given by the discrete returns with an 18 cm mean bias and a 10 cm Std., and Table 6 indicates that leading edge detection yields the best representation of the water surface for the Blue/Colorado River with 1 cm of mean bias and 19 cm Std. Therefore, we can assess the best performance for single band green LiDAR in each study by combining the best estimate of water surface with the best discriminator of benthic layer returns. For the Snake River, we combined a discrete water surface with CWT benthic layer returns to infer water depth. For the Blue/Colorado River we used leading edge detection for the water surface and combined it with CWT benthic layer returns. The optimal single band water depth maps are shown in Figure 16. Both optimized estimates of depth were compared to field measured ADCP water depths and the results are shown in Table 7.
Figure 16. Optimal single band water depth map for: (a) the Snake River, coordinates are in UTM 12N (NAD83); and (b) the Blue/Colorado River, coordinates are in UTM 13N (NAD83).
Table 7. Best performance for single band bathymetric LiDAR for both the Snake River and the Blue/Colorado River.
                      Snake River   Blue/Colorado River
Mean (Zf − Zr) (m)     0.06         −0.16
Std. (Zf − Zr) (m)     0.14          0.27
Slope                  1.11          0.85
Intercept (m)         −0.06          0.27
R2                     0.93          0.58
The Snake River water depth inferred from a combination of the discrete water surface and the CWT benthic layer results in a 6 cm mean bias and a 14 cm Std., with an R2 of 0.93. These results are comparable to the CWT water depth using a NIR water surface estimate in Table 2. The water depth inferred from a combination of the leading edge detected water surface and the CWT detected benthic returns for the Blue/Colorado River also shows similar performance to the CWT water depth with the RTK water surface given in Table 4 (16 cm mean bias and 27 cm Std., with an R2 of 0.58). This reinforces the finding that the leading edge detected water surface is close to the RTK water surface, and that the relatively large errors present in the Blue/Colorado River results depend heavily on the accuracy of the benthic layer estimation.
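For reference, the statistics reported in Table 7 (mean and standard deviation of Zf − Zr, regression slope, intercept, and R2) can be computed as in the following sketch, assuming an ordinary least squares regression of the LiDAR depths on the ADCP depths; the regression convention used to produce the table may differ. Function and variable names are illustrative only.

```python
import numpy as np

def depth_comparison_stats(z_f, z_r):
    """Statistics comparing LiDAR-derived depths z_f with ADCP depths z_r:
    mean and std of (z_f - z_r), plus slope, intercept and R2 of an ordinary
    least squares regression of z_f on z_r."""
    z_f, z_r = np.asarray(z_f, float), np.asarray(z_r, float)
    diff = z_f - z_r
    slope, intercept = np.polyfit(z_r, z_f, 1)   # fit z_f = slope * z_r + intercept
    predicted = slope * z_r + intercept
    ss_res = np.sum((z_f - predicted) ** 2)
    ss_tot = np.sum((z_f - z_f.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    return diff.mean(), diff.std(), slope, intercept, r2
```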

5. Discussion

For bathymetry, full waveform LiDAR processing produces a significantly denser point cloud, with more multiple-return reflections, than CFD. The ability of the waveform methods to recover multiple returns is especially significant because, in fluvial environments, the additional returns are more likely to be benthic returns. The multiple returns also benefit benthic layer classification, as the last of the multiple returns are assigned as seed benthic positions for the region growing classification algorithm. Our approach differs from the method proposed by Allouis et al. [24], who used NIR returns to estimate the water surface; here the mixed LiDAR signal produced by water surface and water bottom reflections is processed directly with the CWT to extract both surface and benthic locations. One of the challenges for single band bathymetric LiDAR is to recover both the water surface and bottom position from the full waveform. The longer pulse width of the laser used in the Aquarius system exacerbated the mixing of water surface, water column and benthic returns. In the future we plan to examine our methodology with short pulse width bathymetric full waveform LiDAR systems such as the Riegl VQ 820-G, AHAB Hawkeye III, EAARL, and Optech Titan.
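For readers who wish to experiment with CWT based peak detection on their own waveforms, a generic version is sketched below using SciPy's find_peaks_cwt. The wavelet choice, scale range, and thresholding of our own detector differ from this default configuration, so the sketch is only a starting point under assumed settings, not the algorithm used for the results above.

```python
import numpy as np
from scipy.signal import find_peaks_cwt

def cwt_candidate_peaks(waveform, widths=None):
    """Candidate return peaks in a green full waveform found with a continuous
    wavelet transform (generic SciPy detector).

    Returns sorted sample indices; the first is a candidate water surface
    return and the last a candidate benthic return."""
    if widths is None:
        widths = np.arange(1, 11)        # wavelet scales in samples (assumed range)
    peaks = find_peaks_cwt(np.asarray(waveform, float), widths)
    return np.sort(np.asarray(peaks))
```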
The results of the study also suggest that there is no single superior full waveform processing algorithm for all bathymetric situations, which agrees with the conclusions of Parrish et al. [10]. ESR performed the best in the Snake River using a NIR water surface, with an R2 of 0.92 and the lowest Std. of 13 cm. However, the c_G and CWT results for the Snake River with the NIR surface were statistically quite similar to the ESR results. With a green water surface, the CWT performed marginally better than ESR (R2 of 0.92 vs. 0.88), with an identical Std. of 14 cm. LiDAR bathymetry for the Blue/Colorado River did not perform nearly as well as for the Snake River due to the significant water turbidity. CWT water depths with either an RTK or a green water surface gave the best performance (R2 of 0.57 and 0.58, respectively). In general, the approaches that model an expected signal shape (Gaussian and ESR) performed quite poorly for the Blue/Colorado River, suggesting that water turbidity causes significant distortion of the return waveform shape. Based on this, we can safely conclude that the CWT is more stable than the other full waveform processing algorithms for shallow water fluvial environments. Both the ESR and CWT showed good bathymetric performance for different cases, confirming that it is critical for commercial software to include a variety of full waveform processing strategies. Unfortunately, the optimal processing strategy is not known a priori, and therefore some level of performance assessment is necessary for users to determine the best processing strategy for their study conditions.
We also compared water surfaces estimated from both NIR and green LiDAR returns. There is a definite vertical bias between the two surface estimates. The comparison of the NIR and green water surfaces for the Snake River study showed a maximum mean vertical offset of 45 cm with a 33 cm Std. for ESR, and a minimum mean offset of 18 cm with a 10 cm Std. for the discrete green water surface. Overall, it appears that the NIR water surface gives slightly better results than a green surface (for clear water). Turbid water greatly degraded the green water surface performance, with large mean errors (s_G: 82 cm, c_G: 79 cm, CWT: 72 cm, ESR: 63 cm). The deterioration of water surface performance relative to the clear Snake River indicates that turbidity can skew the return full waveform toward the benthic layer. McKean et al. [25] suggested that suspended sediment and dissolved organic materials can scatter and absorb incident laser radiation. They also reported that laser penetration for the EAARL system was hampered when turbidity reached 4.5 to 12 NTU. This agrees with our results: the Blue/Colorado River presented turbidity ranging from 2 to 12 NTU, which negatively impacted Aquarius performance due to substantial water column scattering. This further confirms that a multiple wavelength LiDAR may be essential for bathymetric applications, especially in turbid water. A leading edge detection method was proposed and tested under both river conditions; it was found that leading edge detection is effective when more water volume scattering is present (i.e., high turbidity), whereas waveform fitting methods are more effective at low turbidity because they identify more water surface returns.

6. Conclusion

The objective of this study was to evaluate the performance of a single band full waveform bathymetric LiDAR with different processing algorithms and water surface definitions in two distinct fluvial environments. We proposed a novel full waveform processing algorithm based on a continuous wavelet transformation (CWT); the peaks detected by the CWT are used as candidate seed peaks for both the Gaussian and ESR decompositions. The wavelet transformation was assessed in comparison with the more standard approach of Gaussian decomposition with initial peak estimates from a second derivative analysis. Water depths from each waveform method, along with discrete points produced by the real-time constant fraction discriminator, were compared to field measured water depths. All of the methods were applied to two fluvial environments: the clear and shallow (mostly < 2 m) water of the Snake River, and the turbid and shallow (mostly < 1.5 m) fluvial environment of the Blue/Colorado River.
In summary, full waveform processing can produce more points than discrete CFD processing, providing better coverage and more multiple returns for better discrimination of benthic returns from water surface returns. However, with all approaches it is difficult to acquire good quality data in turbid water, especially when the water is shallow. The proposed CWT method shows better stability across varying water clarity conditions than the Gaussian or ESR decomposition methods that were also tested. A single band full waveform bathymetric LiDAR does not appear to be as accurate as a two-wavelength system that recovers the water surface using a NIR laser. However, with an appropriate full waveform processing algorithm, the error in determining the water surface from a single band green LiDAR can be mitigated; these results are encouraging because they indicate that, with improved detection of the water surface from the green LiDAR, a single band LiDAR bathymetry system can be expected to perform similarly to a two band (NIR and green) bathymetric system. For this to be realized, however, the water surface must be successfully extracted from the relatively complex backscatter at the air/water interface, which we were unable to do with the waveform processing algorithms presented. Abady et al. [23] proposed a quadrilateral signal to model the effect of water column scattering and showed it to be effective with simulated bathymetric LiDAR data. However, our initial analysis of this methodology has not shown a significant improvement in water surface estimation for the Aquarius datasets examined. Future work will therefore focus on decoupling the water surface and water column scattering at the air/water interface.

Acknowledgments

Partial support for authors one, two and four was provided by a United States National Science Foundation facility grant to the National Center for Airborne Laser Mapping (EAR #1339015). Funding for the Blue/Colorado River data collection was provided by the Office of Naval Research (awards N000141010873 and N000141210737) and a supplement from NSF to the National Center for Airborne Laser Mapping (EAR #1339015). The third author (Hartzell) was supported through an appointment to the Student Research Participation Program at the U.S. Army Cold Regions Research and Engineering Laboratory (CRREL) administered by the Oak Ridge Institute for Science and Education through an interagency agreement between the U.S. Department of Energy and CRREL. The National Park Service granted permission to collect field measurements and conduct remote sensing flights within Grand Teton National Park. The University of Wyoming-National Park Service Research Station provided logistical support.

Author Contributions

Zhigang Pan developed the continuous wavelet algorithm and processed the full waveform data and water surface estimates for the study, interpreted the results, prepared the manuscript and coordinated the revisions of the manuscript. Craig Glennie assisted in the overall study design, interpretation of the results and revisions of the manuscript. Preston Hartzell developed the ESR model, processed the full waveform using this approach, and assisted in manuscript review. Juan Carlos Fernandez-Diaz collected the airborne LiDAR data for the study and reviewed the manuscript. Carl Legleiter and Brandon Overstreet collected all of the in situ observations including field water depth data and turbidity measurements and also assisted in review of the manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Wehr, A.; Lohr, U. Airborne laser scanning—An introduction and overview. ISPRS J. Photogramm. Remote Sens. 1999, 54, 68–82. [Google Scholar] [CrossRef]
  2. Baltsavias, E.P. Airborne laser scanning: Existing systems and firms and other resources. ISPRS J. Photogramm. Remote Sens. 1999, 54, 164–198. [Google Scholar] [CrossRef]
  3. Glennie, C.; Brooks, B.; Ericksen, T.; Hauser, D.; Hudnut, K.; Foster, J.; Avery, J. Compact multipurpose mobile laser scanning system—Initial tests and results. Remote Sens. 2013, 5, 521–538. [Google Scholar] [CrossRef]
  4. Williams, K.; Olsen, M.J.; Roe, G.V.; Glennie, C. Synthesis of transportation applications of mobile LiDAR. Remote Sens. 2013, 5, 4652–4692. [Google Scholar] [CrossRef]
  5. Sheridan, R.D.; Popescu, S.C.; Gatziolis, D.; Morgan, C.L.S.; Ku, N.-W. Modeling forest aboveground biomass and volume using airborne LiDAR metrics and forest inventory and analysis data in the Pacific Northwest. Remote Sens. 2015, 7, 229–255. [Google Scholar] [CrossRef]
  6. Fernandez-Diaz, J.; Carter, W.; Shrestha, R.; Glennie, C. Now you see it…Now you don’t: Understanding airborne mapping LiDAR collection and data product generation for archaeological research in Mesoamerica. Remote Sens. 2014, 6, 9951–10001. [Google Scholar] [CrossRef]
  7. Kim, H.B.; Sohn, G. Point-based classification of power line corridor scene using random forests. Photogramm. Eng. Remote Sens. 2013, 79, 821–833. [Google Scholar] [CrossRef]
  8. Glennie, C.L.; Hinojosa-Corona, A.; Nissen, E.; Kusari, A.; Oskin, M.E.; Arrowsmith, J.R.; Borsa, A. Optimization of legacy lidar data sets for measuring near-field earthquake displacements. Geophys. Res. Lett. 2014, 41, 3494–3501. [Google Scholar] [CrossRef]
  9. Lakowicz, J.R. Principles of Fluorescence Spectroscopy, 3rd ed.; Springer: New York, NY, USA, 2006. [Google Scholar]
  10. Parrish, C.E.; Jeong, I.; Nowak, R.D.; Brent Smith, R. Empirical comparison of full-waveform lidar algorithms: Range extraction and discrimination performance. Photogramm. Eng. Remote Sens. 2011, 77, 825–838. [Google Scholar] [CrossRef]
  11. Mallet, C.; Bretar, F. Full-waveform topographic lidar: State-of-the-art. ISPRS J. Photogramm. Remote Sens. 2009, 64, 1–16. [Google Scholar] [CrossRef]
  12. Irish, J.L.; McClung, J.K.; Lillycrop, W.J.; Chust, G.; Grande, M.; Galparsoro, I.; Uriarte, A.; Borja, Á.; Tuell, G.; Barbor, K.; et al. Airborne lidar bathymetry: The SHOALS system. Bull. Navig. Assoc. 2010, 7695, 43–54. [Google Scholar]
  13. Richardson, J.J.; Moskal, L.M. Assessing the utility of green LiDAR for characterizing bathymetry of heavily forested narrow streams. Remote Sens. Lett. 2014, 5, 352–357. [Google Scholar] [CrossRef]
  14. Fernandez-Diaz, J.C.; Glennie, C.L.; Carter, W.E.; Shrestha, R.L.; Sartori, M.P.; Singhania, A.; Legleiter, C.J.; Overstreet, B.T. Early results of simultaneous terrain and shallow water bathymetry mapping using a single-wavelength airborne LiDAR sensor. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 623–635. [Google Scholar] [CrossRef]
  15. Jutzi, B.; Stilla, U. Range determination with waveform recording laser systems using a Wiener Filter. ISPRS J. Photogramm. Remote Sens. 2006, 61, 95–107. [Google Scholar] [CrossRef]
  16. Hofton, M.A.; Minster, J.B.; Blair, J.B. Decomposition of laser altimeter waveforms. IEEE Trans. Geosci. Remote Sens. 2000, 38, 1989–1996. [Google Scholar] [CrossRef]
  17. Wagner, W.; Ullrich, A.; Ducic, V.; Melzer, T.; Studnicka, N. Gaussian decomposition and calibration of a novel small-footprint full-waveform digitising airborne laser scanner. ISPRS J. Photogramm. Remote Sens. 2006, 60, 100–112. [Google Scholar] [CrossRef]
  18. Chauve, A.; Mallet, C.; Bretar, F.; Durrieu, S.; Deseilligny, M.P.; Puech, W. Processing full-waveform lidar data: Modelling raw signals. In Proceedings of ISPRS Workshop on Laser Scanning 2007, Espoo, Finland, 12 September 2007; Volume 36, Part 3/W52. pp. 102–107.
  19. Roncat, A.; Bergauer, G.; Pfeifer, N. B-spline deconvolution for differential target cross-section determination in full-waveform laser scanning data. ISPRS J. Photogramm. Remote Sens. 2011, 66, 418–428. [Google Scholar] [CrossRef]
  20. Hartzell, P.J.; Glennie, C.L.; Finnegan, D.C. Empirical waveform decomposition and radiometric calibration of a terrestrial full-waveform laser scanner. IEEE Trans. Geosci. Remote Sens. 2014, 53, 162–172. [Google Scholar] [CrossRef]
  21. Wu, J.; van Aardt, J.A.N.; Asner, G.P. A comparison of signal deconvolution algorithms based on small-footprint LiDAR waveform simulation. IEEE Trans. Geosci. Remote Sens. 2011, 49, 2402–2414. [Google Scholar] [CrossRef]
  22. Guenther, G.C.; Cunningham, A.G.; LaRocque, P.E.; Reid, D.J. Meeting the accuracy challenge in airborne lidar bathymetry. In Proceedings of EARSeL-SIG-Workshop LIDAR, Dresden/FRG, Germany, 16–17 June 2000.
  23. Abady, L.; Bailly, J.-S.; Baghdadi, N.; Pastol, Y.; Abdallah, H. Assessment of quadrilateral fitting of the water column contribution in Lidar waveforms on bathymetry estimates. IEEE Geosci. Remote Sens. Lett. 2014, 11, 813–817. [Google Scholar] [CrossRef]
  24. Allouis, T.; Bailly, J.S.; Pastol, Y.; Le Roux, C. Comparison of LiDAR waveform processing methods for very shallow water bathymetry using Raman, near-infrared and green signals. Earth Surf. Process. Landf. 2010, 35, 640–650. [Google Scholar]
  25. McKean, J.; Nagel, D.; Tonina, D.; Bailey, P.; Wright, C.W.; Bohn, C.; Nayegandhi, A. Remote sensing of channels and riparian zones with a narrow-beam aquatic-terrestrial LIDAR. Remote Sens. 2009, 1, 1065–1096. [Google Scholar] [CrossRef]
  26. Pfennigbauer, M.; Ullrich, A.; Steinbacher, F.; Aufleger, M. High-resolution hydrographic airborne laser scanner for surveying inland waters and shallow coastal zones. Proc. SPIE 2011, 8037, 803706. [Google Scholar]
  27. Wang, C.; Li, Q.; Liu, Y.; Wu, G.; Liu, P.; Ding, X. A comparison of waveform processing algorithms for single-wavelength LiDAR bathymetry. ISPRS J. Photogramm. Remote Sens. 2015, 101, 22–35. [Google Scholar] [CrossRef]
  28. Persson, Å.; Söderman, U.; Töpel, J.; Ahlberg, S. Visualization and analysis of full-waveform airborne laser scanner data. In Proceedings of International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Enschede, the Netherlands, 12–14 September, 2005; ISPRS 2005. 36.3/W19, pp. 103–108.
  29. Vetterli, M.; Herley, C. Wavelets and filter banks: Theory and design. IEEE Trans. Signal Process. 1992, 40, 2207–2232. [Google Scholar] [CrossRef]
  30. Heil, C.E.; Walnut, D.F. Continuous and discrete wavelet transforms. SIAM Rev. 1989, 31, 628–666. [Google Scholar] [CrossRef]
  31. Gregoire, J.M.; Dale, D.; van Dover, R.B. A wavelet transform algorithm for peak detection and application to powder x-ray diffraction data. Rev. Sci. Instrum. 2011, 82, 015105. [Google Scholar] [CrossRef] [PubMed]
  32. Oliver, J.; Baxter, R.; Wallace, C. Unsupervised learning using MML. In Machine Learning, Proceedings of the Thirteenth International Conference, Bari, Italy, 3–6 July, 1996.
  33. Ghilani, C.D. Adjustment Computations: Spatial Data Analysis; John Wiley & Sons, Inc.: Hoboken, NJ, USA, 2010; p. 672. [Google Scholar]
  34. Axelsson, P. DEM generation from laser scanner data using adaptive TIN models. Int. Arch. Photogramm. Remote Sens. 2000, 33, 111–118. [Google Scholar]
  35. Irish, J.L.; Lillycrop, W.J. Scanning laser mapping of the coastal zone: The SHOALS system. ISPRS J. Photogramm. Remote Sens. 1999, 54, 123–129. [Google Scholar] [CrossRef]
  36. Hauser, D.L. Three-Dimensional Accuracy Analysis of a Mapping-Grade Mobile Laser Scanning System. Master’s Thesis, University of Houston, Houston, TX, USA, 2013; p. 94. [Google Scholar]
