
Snow Depth Retrieval with UAS Using Photogrammetric Techniques

1 School of Earth Sciences & Byrd Polar Research Center, The Ohio State University, Columbus, OH 43202, USA
2 School of Land and Food, University of Tasmania, Hobart, TAS 7001, Australia
3 Math and Geospatial Sciences, RMIT, Melbourne, VIC 3000, Australia
* Author to whom correspondence should be addressed.
Geosciences 2015, 5(3), 264-285; https://doi.org/10.3390/geosciences5030264
Submission received: 8 April 2015 / Accepted: 24 June 2015 / Published: 10 July 2015
(This article belongs to the Special Issue Advances in Remote Sensing and GIS for Geomorphological Mapping)

Abstract

Alpine areas pose challenges for many existing remote sensing methods for snow depth retrieval, leading to uncertainty in water forecasting and budgeting. Herein, we present the results of a field campaign conducted in Tasmania, Australia, in 2013, from which estimates of snow depth were derived using a low-cost photogrammetric approach on board a micro unmanned aircraft system (UAS). Using commercial off-the-shelf (COTS) sensors mounted on a multi-rotor UAS and photogrammetric image processing techniques, the results demonstrate that snow depth can be accurately retrieved by differencing two surface models corresponding to the snow-free and snow-covered scenes, respectively. In addition to accurate snow depth retrieval, we show that high-resolution (50 cm) spatially continuous snow depth maps can be created using this methodology. Two types of photogrammetric bundle adjustment (BA) routines are implemented in this study to determine the optimal estimates of sensor position and orientation, in addition to 3D scene information: conventional BA (which relies on measured ground control points) and direct BA (which does not require ground control points). Error sources that affect the accuracy of the BA and the subsequent snow depth reconstruction are discussed. The results indicate the UAS is capable of providing high-resolution, high-accuracy (<10 cm) estimates of snow depth over a small alpine area (~0.7 ha) with significant snow accumulation (depths greater than one meter) at a fraction of the cost of full-size aerial survey approaches. The RMSE of estimated snow depths using the conventional BA approach is 9.6 cm, whereas the direct BA is characterized by larger error, with an RMSE of 18.4 cm. If a simple affine transformation is applied to the point cloud derived from the direct BA, the overall RMSE is reduced to 8.8 cm.

1. Introduction

1.1. The Need for Accurate Snow Depth Estimates and Existing Methods

Many regions of the world depend on snow and snowmelt for the majority of their water needs [1]. Snowmelt runoff is used for crop irrigation, industrial and manufacturing processes, municipal water supply, and recreation. Additionally, snowmelt has been found to impact carbon uptake in vegetated areas, thus playing a role in climate variability [2]. The ability to provide accurate and reliable estimates of snow and its water equivalent (SWE) is attractive for hydrologic modeling, flood prediction, and risk mitigation.
Efforts to measure snow depth and water equivalent have an extensive historical record, starting with in situ methods (e.g., [3,4]). In situ sampling methods such as snow courses, snow pillows, and snow pits, while accurate, often only provide point estimates of a spatially varying quantity. Spatial interpolation of in situ measurements has been investigated; however, even the most accurate methods (e.g., kriging) explain only 30 percent of the overall variability in snow depth [5]. Additionally, in situ methods are subject to logistical and safety restrictions that limit when and where measurements can be made, especially in mountainous areas, which are often characterized by high snowfall. Because of these limitations, there is a need for direct observations of snow depth with improved spatial coverage and reduced personnel risk compared to existing conventional methods.
Remote sensing provides a means by which measurements can be made in a temporally and spatially distributed manner, while minimizing the risk associated with traditional in situ methods. Historically, most space-borne and airborne remote sensing methods for SWE retrieval have relied on passive microwave (PM) observations, which are sensitive to the overall mass of snow contained within the microwave footprint [6,7,8]. The recent GlobSnow product [9] has produced global maps of SWE; however, due to the challenges posed by vegetation, spatial heterogeneity, and deep snow, GlobSnow does not estimate SWE in mountainous areas. Forested and alpine areas, where vegetation and deep snowpack interfere with the PM signal, have posed significant challenges for PM-based approaches [10]. Thus, measurement methods capable of accurately characterizing and quantifying snow properties in such environments are desirable as a way to augment the existing coarse-resolution PM observations, which typically have pixel resolutions on the order of 10–25 km at the frequencies used for snow retrieval.
Deems et al. (2013) survey recent progress in the development of LiDAR-based snow depth measurement from manned airborne platforms. These techniques have the potential to provide both high-accuracy and high-resolution estimates of snow depth, with submeter sampling intervals and decimeter-level accuracy [11]. However, monitoring snow depth with manned airborne platforms can be prohibitively expensive, due to the costs associated with equipping, flying, and maintaining an aircraft outfitted with a mapping-quality LiDAR sensor. Thus, a method that produces data with the resolution and accuracy of conventional LiDAR, but at reduced cost, would be well received within the snow science community; developing such a method is the main motivation of this study.
The objectives of this study are threefold. First, to describe a novel technique for remotely measuring high-resolution spatially continuous snow depths using UAS-based photogrammetry. Second, to accurately estimate snow depths using a low-cost platform with commercial off the shelf (COTS) components. Lastly, to determine the extent to which snow depth information can accurately be obtained using airborne GPS aerotriangulation, as compared to conventional triangulation techniques that incorporate ground control points (GCPs) in the adjustment.
The paper is organized as follows. The remainder of Section 1 describes the increasing use of UAS in scientific applications, as well as previous efforts to measure snow depth using photogrammetric methods. In Section 2, we describe the study area, UAS, and COTS sensor components used in this study. In Section 3, we explain in detail the two bundle adjustment workflows employed herein, as well as the method by which dense point clouds are generated from overlapping imagery. In Section 4, we present the results, including snow depth estimates generated from both methods; in Section 5, we discuss the results and suggest ways in which our methodology can be improved.

1.2. Use of Micro Unmanned Aerial Systems (UAS)

The use of UAS for conducting scientific research continues to grow at a rapid pace [12]. There are many reasons for this, including the decreased cost of operating and maintaining a UAS for data collection as compared to a space-borne or manned airborne platform, as well as the ease of deployment for time-sensitive campaigns. While current UAS technology is not capable of the same spatial and/or temporal coverage as space-borne or manned airborne platforms, UAS provide high-resolution datasets and create new opportunities for scientific measurement at local scales.
Current research into the use of UAS as a 3D data-capture platform includes archaeological surveys, landslide deformation, glacier dynamics, and vegetation monitoring [13,14,15,16]. With the miniaturization of navigation sensors and laser scanners in recent years, it is now possible to integrate a complete airborne LiDAR mapping system that weighs less than three kilograms [17], and produces data accuracy suitable for scientific studies.
Photogrammetric methodologies have enjoyed renewed scientific interest following the recent proliferation of UAS [12,18]. These studies make use of image matching techniques in conjunction with optimized bundle adjustment (BA) routines, and allow for high-density 3D point clouds to be generated from the multi-view imagery collected by UAS. Due to the lower altitude at which UAS are typically operated, 3D scene reconstruction can be performed with increased resolution as compared to conventional airborne platforms. This increased resolution has facilitated certain forestry metrics to be more easily and accurately measured [19], and thus lends credence to our study methodology.
To our knowledge, this is the first attempt to use imagery obtained from UAS in conjunction with photogrammetric techniques to resolve spatially continuous snow depths.

1.3. Prior Photogrammetric Efforts for Snow Depth Retrieval

Previous efforts have been made to map the snow surface using photogrammetric techniques from imagery captured by airborne platforms; the earliest and most notable of these were [20,21]. Photogrammetric techniques cannot measure snow depth directly; rather, they are used to construct two point cloud datasets, one corresponding to the ground (typically observed in fall) and the other(s) captured during the winter snow accumulation season. The points are subsequently differenced to yield snow depths. Both [20,21] relied on the physical installation of ground control points (in the form of wooden posts) throughout the study area to properly orient the stereo model generated from multiple image pairs. Additionally, snow depth was only determined at discrete locations manually delineated by the photogrammetric analyst who processed the imagery. Subsequent studies [22] have also used networks of ground control points for the computation of the snow surface; however, the accuracy of snow depth retrieval in those studies has not surpassed that achieved in [20] or [21], both of which were themselves characterized by considerable error. The snow depths measured in [20] exhibited a consistent bias of ~23 cm, with a 1σ of 23.7 cm. Similarly, after a mean bias of ~15 cm was removed from the differenced surface points, [21] was able to resolve snow depth with a 1σ precision of 21 cm. It should be noted that the [20,21] studies were completed in 1965 and 1967, respectively, and, to their credit, are still considered the benchmark to improve upon. At the time of this writing, another manuscript in review [23] utilizes imagery taken with COTS cameras aboard a manned aircraft to map spatially continuous snow depth over an Alaskan field transect. Its results are not yet published, but this modern follow-up to [20,21] looks very encouraging.
As described above, all previous photogrammetric studies have relied on photo-identifiable ground control points (GCPs) distributed throughout the study area to determine the position and orientation parameters of the camera through bundle adjustment (BA), prior to multi-ray intersection for surface computation. Because this represents significant field labor, it arguably negates the benefits of using a remote sensing methodology. In this study, we examine the feasibility of airborne GPS aerotriangulation in addition to conventional GCP-based techniques. With airborne aerotriangulation, only the platform's navigation solution trajectory is used to provide positional and stochastic constraints to the adjustment. Intuitively, this means that control points are measured in the sky during the UAS flight, rather than on the ground. There has been a considerable amount of published literature on this topic, and an excellent overview can be found in [24]. However, most previous efforts have used metric cameras and expensive, high-accuracy inertial navigation systems [25,26], whereas we rely on consumer-grade COTS digital cameras. The benefits of a direct photogrammetric approach are clear: if snow depth can be accurately resolved without GCP information, areas could be mapped directly, without the need to expose field personnel to dangerous weather conditions.

2. Study Area and Data

The chosen study site was an alpine transect within Mount Field National Park, located in south central Tasmania, Australia (see Figure 1). The site lies at 42.68°S, 146.6°E, at an elevation of approximately 1150 m above sea level. Because mountainous environments are characterized by variable and sloped terrain, they present unique challenges for photogrammetric measurements, in contrast to planar surfaces [27]. Our study area was characterized by a strong elevation gradient, thick vegetation, and various soil/rock types.
Figure 1. (a) Elevation map of the Mt Field study area in south central Tasmania, with inset of Australia; (b) Dense vegetation and variable terrain were characteristic of our study area.
Because snowfall and subsequent accumulation are quite unpredictable in Tasmania, the snow-covered dataset was collected after the first major snowfall of the season, which occurred in mid-July 2013. The dataset was collected on the eastern (leeward) side of Mt. Mawson, due to favorable weather conditions for UAS flight. The snow-free dataset was collected on 5 February 2014, after snow ablation. The total size of the study area was approximately 6900 m2, or 0.007 km2. Thirty-one ground control targets were spread throughout the scene to provide independent validation of the photogrammetric measurements. Following the methods of [28], we used square targets (45 × 45 cm) for the validation. The ground control was surveyed using dual-frequency differential GPS observations, with coordinate accuracies in the range of 2–4 cm [29].
We measured snow depth by observing GPS positions at the snow surface, then extending the survey pole to the ground surface, and observing the position once more. Thirty-seven observations of snow depth were made throughout the study area for validation of our methodology.
The snow depths ranged from 33 to 121 cm, with a mean and standard deviation of 61.8 cm and 24.4 cm, respectively. A small number of measurements were taken in wind drifts, where depths exceeded 1 m (Figure 2). As will be described in Section 3.2, the technique by which the in situ snow depth measurements were made is problematic due to unseen vegetation below the snow surface.
The UAS used in this study is an electric-powered multi-rotor platform (Droidworx SkyJiB). The octocopter was developed by the TerraLuma research group at the University of Tasmania and was modified to accommodate the sensor suite used in this study (Figure 3). Multi-rotor UAS offer increased stability and maneuverability compared to other platforms such as fixed-wing or single-rotor UAS, as well as low vibration. To reduce blur in the images, the camera payload is mounted on a rigid frame that is isolated from motor vibration by four silicon mounts. The UAS has eight brushless motors that operate at different rotor speeds to achieve directional flight. The total sensor payload was ~2.4 kg. Three flights were flown to ensure adequate data collection, with flight durations of 2.6–3.1 min. The flying height above the terrain varied considerably but averaged ~29 m, with a corresponding ground sample distance (GSD) of 6 mm. To minimize the effects of shadows in subsequent processing, the flights were flown between 11:00 and 14:00 local time.
Figure 2. GPS measured snow depths from our study area that have been interpolated to a uniform grid at 3 m resolution. The black dots indicate image exposure locations along the unmanned aircraft system (UAS) flight trajectory, whereas the black triangles indicate measured snow depths.
Figure 3. The UAS used in this study is shown (left); along with an example image (right) that was characteristic of our dataset. Footprints in the snow provide a sense of scale.
For imagery collection, we used a Canon 550D DSLR camera with a fixed 20-mm lens. The camera recorded full-resolution 18-megapixel images at an approximate rate of 3.8 Hz. Given the altitude (~30–45 m) and flight speed of the UAS, such dense image sampling is not needed; to improve computation times, we reduced the number of photos used in the bundle adjustment by 50%. While there were slight variations in overlap, forward overlap generally ranged from 85% to 90%, while sidelap ranged from 50% to 75%. Figure 3 shows an example image from our study area during the snow-covered flight. The frame images were synced to GPS time using the camera flash to send an event trigger to the GPS receiver, to which the time information was ascribed [28]. The camera was calibrated prior to flight using standard methods. While the interior orientation parameters of the camera are often included as additional unknowns in the bundle adjustment, the camera and lens were calibrated with sufficient accuracy to carry the interior orientation parameters as observations with small measurement uncertainty.
GPS observations of camera position were recorded at 20 Hz at both L1 and L2 frequencies using a Novatel OEMV1-df receiver. A fixed ambiguity phase solution was obtained using differential processing, with predicted coordinate accuracies of 2–3 cm. These are predicted values, as it is impossible to validate the accuracy of the solution while the platform is in motion. Inertial measurement unit (IMU) data was collected using the Microstrain 3DM-GX3 35 IMU, which records angular rate and acceleration at a rate of 100 Hz. The acceleration and angular rate data streams were time-synchronized with an internal GPS receiver, thus providing the position and orientation data needed for the bundle adjustment methods discussed below. The fusion of the IMU data with observations from the GPS receiver was done within a Sigma Point Kalman Smoother, in order to overcome the presence of large orientation errors which are prevalent when using MEMS-IMU sensors [17]. The state-based estimator used in this work is the Square Root Unscented variant of the Sigma Point Kalman filter (SPKF) outlined in [30]. For more specific details on the fusion algorithm, we refer readers to [17].
One of the benefits of UAS is their affordability and accessibility via COTS sensors. A listing of the platform/sensors used in this study, with corresponding prices, is given in Table 1. We are not advocating for the particular manufacturers listed in Table 1; the table is only meant to serve as a general cost outline for such a system.
Table 1. Price comparison of the components used in this study.

Indirect Bundle Adjustment
  Platform/Sensor    Manufacturer/Model
  UAS                Droidworx SkyJiB
  Camera             Canon 550D DSLR
  Total Cost (USD)   $10,900

Direct Bundle Adjustment
  Platform/Sensor    Manufacturer/Model
  UAS                Droidworx SkyJiB
  Camera             Canon 550D DSLR
  IMU                Microstrain 3DM-GX3 35
  Computer           Gumstix Verdex Pro
  GNSS Antenna       Maxtenna L1/L2 helical
  GNSS Receiver      Novatel OEMV1-df
  Total Cost (USD)   $17,900

3. Methods

3.1. Photogrammetric Reconstruction

The collinearity equations are the standard model used in photogrammetry [31] to relate 2D image projections to 3D space. The coordinates of an unknown object point A (e.g., a point on the snow surface) in a 3D Cartesian coordinate system are related to a vector of image coordinates in the following manner. Note that Equation (1) below is often augmented with terms that model the distortion of the camera lens, as described in [32]; for brevity, these are omitted.
$$\begin{pmatrix} x_{im} \\ y_{im} \end{pmatrix} = \begin{pmatrix} x_0 + f\,\dfrac{r_{11}(X_A - X_0) + r_{21}(Y_A - Y_0) + r_{31}(Z_A - Z_0)}{r_{13}(X_A - X_0) + r_{23}(Y_A - Y_0) + r_{33}(Z_A - Z_0)} \\[2ex] y_0 + f\,\dfrac{r_{12}(X_A - X_0) + r_{22}(Y_A - Y_0) + r_{32}(Z_A - Z_0)}{r_{13}(X_A - X_0) + r_{23}(Y_A - Y_0) + r_{33}(Z_A - Z_0)} \end{pmatrix} \qquad (1)$$
where
  • x_im, y_im: measured image coordinates
  • f: focal length
  • X_0, Y_0, Z_0: Cartesian coordinates of the camera perspective center
  • X_A, Y_A, Z_A: Cartesian coordinates of object point A
  • r_ij: elements of the rotation matrix R formed from the angles (ω, φ, κ)
If only one image is available, then only the direction to the object point A can be determined, but not its absolute position in space. The 3D coordinates of A can only be computed if this spatial direction intersects another ray from at least one different image. The process of ray intersection and the simultaneous refinement of interior and exterior geometry according to an optimality criterion involving the image projections of all points is known as the bundle adjustment [31]. An excellent overview of bundle adjustment is given in [33].
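To make Equation (1) concrete, the sketch below implements the forward projection in Python with NumPy (our choice of language, function, and variable names; the paper provides no code). Lens distortion is omitted, as in Equation (1), and the rotation convention is assumed to match the r_ij layout above.

```python
import numpy as np

def collinearity_project(p_obj, p_cam, R, f, x0=0.0, y0=0.0):
    """Project object point A = (X_A, Y_A, Z_A) into image coordinates via
    Equation (1). R is the 3x3 rotation matrix of (omega, phi, kappa);
    lens distortion terms are omitted for brevity."""
    d = np.asarray(p_obj) - np.asarray(p_cam)  # (X_A - X_0, Y_A - Y_0, Z_A - Z_0)
    q = R.T @ d                                # components give the r_ij sums in Eq. (1)
    x_im = x0 + f * q[0] / q[2]
    y_im = y0 + f * q[1] / q[2]
    return np.array([x_im, y_im])
```

A bundle adjustment minimizes, over all camera and point parameters, the discrepancy between such projected coordinates and the measured image coordinates.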
The first stage of this photogrammetric workflow to reconstruct snow depth is to identify image projections of the same features in space from all of the camera views. For image matching, we used the Scale Invariant Feature Transform (SIFT), a technique developed by [34]. SIFT key features are identified in each image frame and then matched between frames using an approximate nearest neighbor kd-Tree approach [35]. After the keypoints have been matched, an iterative Random Sample Consensus (RANSAC) approach following [36] is used to identify and eliminate outliers. After outlier removal, the image matches are stored for subsequent integration into the bundle adjustment.
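A minimal sketch of this matching stage is given below, using OpenCV's SIFT implementation, a FLANN kd-tree matcher, and fundamental-matrix RANSAC; these library choices and the filenames are our illustrative assumptions, not the authors' actual pipeline.

```python
import cv2
import numpy as np

img1 = cv2.imread("frame_041.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical filenames
img2 = cv2.imread("frame_042.jpg", cv2.IMREAD_GRAYSCALE)

# SIFT keypoint detection and description in each frame
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Approximate nearest-neighbor matching with a kd-tree (FLANN) plus ratio test
flann = cv2.FlannBasedMatcher(dict(algorithm=1, trees=5), dict(checks=50))
matches = [m for m, n in flann.knnMatch(des1, des2, k=2)
           if m.distance < 0.7 * n.distance]

# RANSAC outlier rejection via the fundamental matrix; only inliers are kept
pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.999)
inliers = [m for m, keep in zip(matches, mask.ravel()) if keep]
```

The surviving inlier matches are the image-coordinate observations fed into the bundle adjustment.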
Two bundle adjustment methodologies are used in this study. The first employs a methodology similar to that of [20,21], in which GCP information is used to constrain the bundle adjustment; this is termed conventional aerotriangulation or indirect georeferencing. In the indirect case, the ground coordinates of each ground control point (XA, YA, ZA) are assigned high weights to constrain the bundle adjustment. The second case uses solely the perspective center coordinates (Xo, Yo, Zo) obtained from the UAS navigation solution to constrain the weighted bundle adjustment. This method is similar to the one proposed in [28] and is also known as airborne aerotriangulation or, alternatively, direct georeferencing. Because there is a spatial offset between the GPS antenna and the perspective center of the camera, a lever arm offset was measured in the laboratory a priori and then applied post-flight. The a priori stochastic weighting criteria for airborne aerotriangulation are based on the uncertainty in the computed positions of the UAS platform using airborne GPS. The second method is preferable to the first for obvious reasons, as no ground personnel are required to set ground control points within the measured scene. One-sigma values of 2 cm were assigned to the GCP coordinates in the indirect case, as shown in Table 2. In both the direct and indirect BA, we assume that the interior orientation parameters are constant from frame to frame, which is known as the "block invariant" approach [37].
Table 2. The observations and their associated weights used in the bundle adjustment.

Observation               Indirect (1σ)    Direct (1σ)
Xo, Yo, Zo (UAS GNSS)     1 m              0.03 m
X, Y, Z (GCP)             0.02 m           none
ω, φ, κ
xo, yo                    3 pixels         3 pixels
f                         3 pixels         2 pixels
k1, k2                    0.1 pixels       0.1 pixels
Bundle adjustment is a non-linear process that requires reasonable initial approximations to ensure correct convergence. To generate initial approximations of the scene structure, we use the well-known structure-from-motion (SfM) pipeline developed by [35], entitled Bundler; a comprehensive description of SfM algorithms can be found in [38]. The main input to Bundler is the set of image coordinates of the previously matched images. The output of the Bundler SfM algorithm is the reconstructed camera poses and a sparse set of scene structure, both of which are generated in an arbitrary coordinate system. To transform the camera positions and initial structure estimated using Bundler into a global coordinate system, we perform a seven-parameter Helmert transformation using the trajectory information provided by the UAS navigation solution. Once converted into a mapping coordinate system, the camera poses and structure serve as initial approximations for a final bundle adjustment. At this point, stochastic information that exists for the different observations is included in the form of a covariance matrix to weight the adjustment; this ensures that the least squares solution converges to optimal values based on a priori error models. Should ground control observations be used to constrain the final adjustment, they are included in this step.
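The seven-parameter Helmert (similarity) transformation can be estimated in closed form from corresponding camera centers; the sketch below uses the Umeyama SVD solution with NumPy. This is a standard estimator we assume for illustration; the paper does not state which solver was used.

```python
import numpy as np

def helmert_transform(src, dst):
    """Estimate scale s, rotation R, and translation t so that dst ~ s*R@src + t.
    src: (n, 3) camera centers in Bundler's arbitrary frame;
    dst: (n, 3) corresponding centers from the GPS navigation solution."""
    mu_src, mu_dst = src.mean(axis=0), dst.mean(axis=0)
    a, b = src - mu_src, dst - mu_dst
    U, S, Vt = np.linalg.svd(b.T @ a / len(src))             # cross-covariance matrix
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])  # guard against reflection
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / a.var(axis=0).sum()       # optimal isotropic scale
    t = mu_dst - s * R @ mu_src
    return s, R, t

# The same (s, R, t) is then applied to the sparse structure before the final BA:
# points_map = (s * (R @ points_arbitrary.T)).T + t
```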
To properly integrate all available information about a priori error sources in the covariance matrix (or equivalently, the weight matrix) and generate a linearized approximation of the a posteriori covariance matrix, we utilize the Gauss-Helmert adjustment model, which is of the following form
$$B\,l + A\,x = w$$
where A is the Jacobian matrix with respect to the parameters, B is the Jacobian matrix with respect to the observations, and l and x are the observations and parameters, respectively. Assuming a known weight matrix P for the observations, the weighted least squares solution is the following.
$$x = \left(A^T (B P^{-1} B^T)^{-1} A\right)^{-1} A^T (B P^{-1} B^T)^{-1} l$$
Or equivalently,
$$x = N^{-1} t$$
where
$$N = A^T (B P^{-1} B^T)^{-1} A, \qquad t = A^T (B P^{-1} B^T)^{-1} l$$
Because of the large number of photos typically encountered in UAS photogrammetric applications, and the sparse nature of the A and B matrices, conventional matrix inversion is not an optimal way to solve the normal equations. To improve performance and stability, the normal equations are partitioned into block form consisting of the camera and scene parameters [39], as shown below.
$$\begin{bmatrix} N_c & N_{cp} \\ N_{cp}^T & N_p \end{bmatrix} \begin{bmatrix} x_c \\ x_p \end{bmatrix} = \begin{bmatrix} t_c \\ t_p \end{bmatrix}$$
Note that self-calibration parameters (i.e., principal point, focal length, and lens distortion) can also be incorporated into this system of normal equations by extending the block structure above.
It should be noted that Np is a block diagonal matrix with blocks of size 3 × 3, each of which is trivial to invert. Once Np has been inverted block-by-block, we arrive at a linear system consisting of just the camera parameters, as shown below.
$$S\,x_c = r$$
where
$$r = t_c - N_{cp} N_p^{-1} t_p, \qquad S = N_c - N_{cp} N_p^{-1} N_{cp}^T$$
S, also known as the reduced camera matrix, is symmetric and positive-definite and can be solved using Cholesky factorization, after which the scene points can be estimated through back substitution in the following manner.
$$x_p = N_p^{-1}\left(t_p - N_{cp}^T x_c\right)$$
This method of solving the normal equations is known as the Schur complement trick [29,39]. The adjustment iterates toward the solution that minimizes the sum of the squares of the residuals, stopping when a convergence criterion is reached or when the number of iterations exceeds a user-defined maximum. We performed the bundle adjustment in a projected coordinate system, specifically Universal Transverse Mercator Zone 55S.
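A dense-matrix sketch of this reduction is shown below; production bundle adjusters exploit sparsity throughout, and the block layout and names here are our assumptions for illustration (NumPy/SciPy).

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def schur_solve(Nc, Ncp, Np_blocks, tc, tp):
    """Solve the partitioned normal equations via the Schur complement trick.
    Nc: camera block; Ncp: camera-point coupling block;
    Np_blocks: list of the 3x3 diagonal blocks of Np; tc, tp: right-hand sides."""
    # Invert Np block-by-block; each block is only 3x3, so this is trivial
    n = len(Np_blocks) * 3
    Np_inv = np.zeros((n, n))
    for i, blk in enumerate(Np_blocks):
        Np_inv[3*i:3*i+3, 3*i:3*i+3] = np.linalg.inv(blk)
    # Reduced camera system S xc = r; S is symmetric positive-definite -> Cholesky
    S = Nc - Ncp @ Np_inv @ Ncp.T
    r = tc - Ncp @ Np_inv @ tp
    xc = cho_solve(cho_factor(S), r)
    # Back-substitution for the scene points
    xp = Np_inv @ (tp - Ncp.T @ xc)
    return xc, xp
```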
After the bundle adjustment has been performed, the final stage of this workflow is to perform dense multiview stereo reconstruction of the aligned images to generate the dense 3D scene geometry. Many different approaches can be taken, and an excellent review of multiview stereo techniques can be found in Seitz et al. [40]. For this task, we utilize an existing multiview stereo reconstruction routine, specifically Patch-based Multiview Stereo (PMVS2), developed by Furukawa & Ponce [41]. The algorithm starts by detecting features in each image and matching them across multiple images to form an initial set of patches, after which visibility constraints are applied to filter out false matches. The patch model is then converted to a mesh, and a refinement algorithm is used to achieve higher accuracy. Because snow is a relatively homogeneous surface, we modified PMVS in several ways for better performance, including raising the normalized cross-correlation threshold for matching to 0.8 and enlarging the kernel size used to compute the photometric consistency score. We were fortunate that most of our images were well illuminated, sharp, and not overexposed. Image quality had a direct effect on reconstruction precision, and suboptimal imagery was discarded a priori.
The output of PMVS is a dense point cloud of the estimated scene structure that can then be used for subsequent surface computations. The point clouds are generally quite dense (up to 200–400 points/m2), though certain areas were much sparser due to camera exposure, scene illumination (shadows), and image blur. To compute the surface height at each depth-measurement location for differencing, we collected all points falling within a 5 cm radius of the GPS snow depth observation and computed their mean height. This was done for both the snow-free and snow-covered scenes. Figure 4 shows the overall workflow employed in this study.
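A sketch of this radius-averaging step appears below, using a k-d tree for the neighborhood query; SciPy and the array names are our assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def mean_surface_height(cloud, xy, radius=0.05):
    """Mean Z of all cloud points within `radius` metres (horizontally) of xy.
    cloud: (n, 3) array of point coordinates; xy: length-2 location."""
    tree = cKDTree(cloud[:, :2])
    idx = tree.query_ball_point(xy, r=radius)
    return np.nan if not idx else cloud[idx, 2].mean()

# Snow depth at each in situ GPS observation: snow-covered minus snow-free surface
# depths = [mean_surface_height(sc_cloud, p) - mean_surface_height(sf_cloud, p)
#           for p in gps_locations_xy]
```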
Figure 4. The overall workflow of our study is shown. Note that two different bundle adjustment strategies are implemented; one involving conventional methods (left); the other using airborne aero-triangulation (right).
To ensure that our techniques and workflow are commensurate in accuracy with the current state of the art, we compared our results to those obtained using Agisoft PhotoScan, a commercial software package used extensively within the UAS scientific community. We sampled both dense point clouds at 5 cm resolution using nearest-neighbor interpolation and computed the resulting difference in Z. As shown in Figure 5, our results are very similar to those obtained using Agisoft: after outliers were removed, the mean difference was −0.01 m with a standard deviation of 0.04 m.
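A sketch of this comparison is given below, assuming both dense clouds are available as (n, 3) arrays (`ours`, `agisoft`) in the same projected coordinate system; SciPy's nearest-neighbor gridding stands in for whatever sampler was actually used.

```python
import numpy as np
from scipy.interpolate import griddata

def grid_cloud(cloud, xi, yi):
    """Sample a point cloud onto a regular grid via nearest-neighbor interpolation."""
    XX, YY = np.meshgrid(xi, yi)
    return griddata(cloud[:, :2], cloud[:, 2], (XX, YY), method="nearest")

# Common 5 cm grid over the overlapping extent of the two clouds
lo = np.maximum(ours[:, :2].min(axis=0), agisoft[:, :2].min(axis=0))
hi = np.minimum(ours[:, :2].max(axis=0), agisoft[:, :2].max(axis=0))
xi, yi = np.arange(lo[0], hi[0], 0.05), np.arange(lo[1], hi[1], 0.05)

dz = grid_cloud(ours, xi, yi) - grid_cloud(agisoft, xi, yi)
dz = dz[np.abs(dz - dz.mean()) < 3 * dz.std()]   # crude 3-sigma outlier screen
print(f"mean dZ = {dz.mean():.2f} m, std = {dz.std():.2f} m")
```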

3.2. Error Sources: Snow and Vegetation

The vegetation in the study area consists of eucalyptus and spruce trees, along with considerable dense shrubbery ranging from 0.3–1.2 m in height (e.g., Figure 1). During the snow-covered data collection, the GPS observations of snow depth were made without a priori knowledge of the underlying vegetation. For many of the depth measurements, the survey pole was inadvertently placed directly within the middle of shrubs lying below the snow surface; such in situ measurements therefore recorded the combined snow depth and shrub height. The photogrammetric technique, by contrast, effectively measures the depth of snow above the shrub layer. In situ measurements of this kind should not be used to evaluate the performance of the photogrammetric depth estimates.
Figure 5. Comparison of our workflow as compared to the current commercial state of the art (left) using differenced DEMs. Histogram of the Z differences (right).
We used two methods to reduce the effect of vegetation on the retrieved snow depths. First, based on visual analysis of the point cloud, we eliminated snow depth observations taken in the midst of thick stands of shrubs, as these were most likely to have produced incorrect in situ GPS depths. While removing measurements is not ideal, it is necessary because the accuracy of those "truth" observations cannot be guaranteed. This reduced the number of in situ snow depth observations from 37 to 20.
Second, we filtered the raw point clouds of the snow-free and snow-covered datasets to create bare-earth models using the Multiscale Curvature Classification (MCC) filtering technique [42]. MCC uses a thin-plate spline to filter point clouds based on an iterative multi-scale assessment of non-ground curvature. While the filtering was effective at removing larger trees, small shrubs and grass patches are non-trivial to remove because their characteristics are very similar to the overall terrain variability.
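The published MCC implementation should be used in practice; purely to illustrate the curvature test at its core, below is a greatly simplified, single-scale sketch in which gridding plus a mean filter stands in for MCC's thin-plate spline, and all thresholds are invented for illustration.

```python
import numpy as np
from scipy.interpolate import griddata
from scipy.ndimage import uniform_filter

def simple_curvature_filter(points, cell=0.5, tol=0.05, iterations=3):
    """Single-scale curvature-style ground filter (NOT the full MCC algorithm):
    points rising more than `tol` metres above a smoothed interpolation of the
    current ground estimate are reclassified as vegetation and dropped."""
    ground = points.copy()
    for _ in range(iterations):
        xi = np.arange(ground[:, 0].min(), ground[:, 0].max(), cell)
        yi = np.arange(ground[:, 1].min(), ground[:, 1].max(), cell)
        XX, YY = np.meshgrid(xi, yi)
        grid = griddata(ground[:, :2], ground[:, 2], (XX, YY), method="linear")
        smooth = uniform_filter(np.nan_to_num(grid), size=3)  # 3x3 mean smoothing
        # Height of the smoothed surface beneath each remaining point
        zs = griddata(np.column_stack([XX.ravel(), YY.ravel()]),
                      smooth.ravel(), ground[:, :2], method="nearest")
        keep = ground[:, 2] <= zs + tol
        if keep.all():
            break
        ground = ground[keep]
    return ground
```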
While our study area required filtering of the point cloud for bare earth calculation, this may not be required for areas that have little to no vegetation. Such study sites are desirable for snow depth retrieval based on surface differencing, but are not always available in practice.

4. Results

4.1. Predicted Snow Depth Accuracy

Snow depths determined photogrammetrically are computed by differencing two or more surface datasets. Because each dataset is calculated independently, the accuracy of the estimated snow depth is thus a function of the accuracy of the datasets from which it is computed. Assuming the snow-free (SF) and snow-covered (SC) surfaces have an unbiased, normally distributed measurement error, the predicted error in snow depth can be estimated from the overall error of the two surface datasets, σ S F 2 , σ S C 2 , in the following manner.
$$\sigma_{PE} = \sqrt{\sigma_{SF}^2 + \sigma_{SC}^2} \qquad (4)$$
To estimate the error of the two photogrammetric bundle adjustments (σ²_SF, σ²_SC) used in this study, we used a jackknife validation [43]. Five measured ground control points were withheld from a conventional BA constrained by the remaining ground control points, and their GPS field-measured coordinates were compared to those estimated from the 3D point cloud resulting from the reconstruction process, as shown in Figure 6. The overall error estimate is the RMSE computed from ∆X, ∆Y, and ∆Z. The errors for the two surface datasets reconstructed using conventional BA are 6.2 cm and 5.9 cm, respectively. Using Equation (4), the predicted error (σ_PE) for the snow depth reconstruction is 8.6 cm.
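A sketch of the check-point comparison follows; we assume the RMSE is taken over the full 3D coordinate differences (NumPy, our array names).

```python
import numpy as np

def checkpoint_rmse(gps_xyz, recon_xyz):
    """RMSE over withheld check points: GPS-surveyed target coordinates vs. the
    same targets located in the reconstructed cloud (both (n, 3) arrays)."""
    d = recon_xyz - gps_xyz                      # dX, dY, dZ per check point
    return np.sqrt(np.mean(np.sum(d**2, axis=1)))

# With the values obtained here (6.2 cm and 5.9 cm), Equation (4) gives:
sigma_pe = np.hypot(0.062, 0.059)   # ~0.086 m, i.e., 8.6 cm
```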
Figure 6. Bootstrap validation was done by finding the photo identifiable control points in the reconstructed point cloud (shown above), and comparing the estimated coordinates with those measured from RTK GPS.

4.2. Snow Depth via Conventional BA with GCPs

The snow-free and snow-covered datasets in the conventional case correspond to bundle adjustments that incorporated ground control points (GCPs) to constrain the 3D reconstruction. Figure 7 shows the in situ observed snow depth plotted against the photogrammetrically reconstructed snow depth estimated using the conventional BA workflow (Section 3). The root mean square error (RMSE) of the conventional BA derived snow depths is 9.6 cm, with a bias of 3.4 cm. The true RMSE is thus 1 cm (12%) larger than the error predicted from the two point cloud datasets via Equation (4), likely due to residual vegetation-related uncertainty in our study area.
Additionally, the predicted error computed via Equation (4) did not include the measurement uncertainty associated with the in situ snow depths calculated from differenced GPS observations; this is an incorrect assumption, as those observations are also characterized by random errors. If we assume the GPS observations have a 1σ of 2–4 cm, the overall predicted error is on the order of 10 cm, which is closer to the observed RMSE. One interesting statistic to note is the relative error of the estimates as a function of snow depth. At shallow depths (e.g., 35 cm), the relative error is 27.6% of the total snow depth; at greater depths (e.g., 120 cm), it amounts to only 8%. It is therefore reasonable to assume this technique is advantageous for areas characterized by steady snowfall accumulation and large snow depths.
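For example, with an assumed 4 cm (1σ) GPS depth error treated as an additional independent term in Equation (4):

$$\sigma_{total} = \sqrt{\sigma_{SF}^2 + \sigma_{SC}^2 + \sigma_{GPS}^2} = \sqrt{6.2^2 + 5.9^2 + 4.0^2}\ \text{cm} \approx 9.4\ \text{cm}$$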
Figure 7. Scatter plot of observed snow depth vs. estimated snow depth calculated from conventional BA techniques and GCPs (n = 20). The RMSE was 9.59 cm. The R2 value was 0.91.
The correlation coefficient (r) between the GPS observed in situ snow depth observations and the photogrammetrically estimated snow depths from the conventional BA workflow was 0.95. This means the photogrammetrically resolved snow depths explain 91% (r2) of the variance in the in situ observed GPS snow depths.

4.3. Snow Depth via Direct BA Using Airborne GPS

The snow-covered dataset in the direct case corresponds to a bundle adjustment that incorporated no GCPs to constrain the 3D reconstruction; only trajectory information from the kinematic airborne GPS observations was used. Figure 8 shows the GPS-observed snow depth plotted against the photogrammetrically reconstructed snow depth estimated using the direct BA workflow described in Section 3 above. There is a clear negative bias in the error associated with snow depth retrieval using the GPS-based aerial triangulation method, and the RMSE is 18.4 cm. The relative error of the snow depth estimates determined via direct BA (with bias) ranged from 52.7% at a depth of 35 cm to 15% at 120 cm.
The reason for this bias is unknown. Initially, it was assumed that the error was in the estimate of focal length calculated as part of the adjustment. However, when the focal length was constrained to the same calibrated values determined in the GCP-based adjustment described in Section 4.1, the vertical bias remained. Theoretically, this problem should not exist if proper care is taken to ensure that the correct stochastic information associated with the observations and parameters is incorporated into the bundle adjustment. Given that the integer ambiguities have been correctly resolved in the GPS solution process, and that the lever arm offset has been applied, airborne GPS-based AT should provide sufficient accuracy without the addition of ground control features [23]. In practice, however, the vertical bias exhibited in our data is commonly encountered in GPS-based AT projects. The most common remedy is to introduce a minimal number of ground control points into the BA solution; a description of such a process is found in [31]. Further analysis of why this systematic error exists remains an area of future research.
Figure 8. Scatter plot of the observed snow depth vs. estimated snow depth calculated using the direct BA techniques. The cyan dots represent the original estimated snow depths (with bias), whereas the blue dots represent snow depth estimates after the affine transformation is applied. The RMSE was 18.4 cm using only onboard GPS; after the transformation was applied, the RMSE is 8.8 cm.

4.4. Co-Registration Using Affine Transformation

For estimation of snow depth in many areas, the bias introduced by direct BA turns out to be of no consequence, because we are only interested in relative changes between the snow-free and snow-covered surfaces. Thus, if one or more fixed points in space are visible in both datasets, the datasets can be co-registered by means of an affine transformation [44]. This is done by determining the 3D difference at the fixed surface point(s) and adding a fixed translation (the difference) to each point in the corresponding dataset. The implied assumption is that the two surface datasets are conformal and free of warping.
In our experiment, we used a single fixed point on the corner of a large boulder on the southern end of our test site, as shown in Figure 9. The difference at the boulder corner is determined between the snow-covered and snow-free point clouds, after which the translation is applied to co-register the two surfaces. After co-registration, the overall RMSE of the reconstructed snow depth is 8.8 cm (Figure 8), with an r2 value of 0.96. Such results are attainable because the relative accuracy of photogrammetrically derived surfaces is very high, owing to the reprojection cost function minimized in the bundle adjustment. Thus, while absolute errors such as those described in Section 4.3 above may be difficult to mitigate, computations based on the relative relationships between the two surfaces can be made quickly and accurately.
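A sketch of this translation-only co-registration follows; the function and the picked-corner workflow are our illustrative assumptions.

```python
import numpy as np

def coregister(cloud, corner_in_cloud, corner_reference):
    """Shift an entire point cloud so that a fixed point (e.g., the boulder
    corner) picked in it coincides with the same point in the reference cloud.
    cloud: (n, 3) array; corner_*: length-3 coordinates of the common point."""
    t = np.asarray(corner_reference) - np.asarray(corner_in_cloud)  # 3D offset
    return cloud + t                                # translate every point by t

# Hypothetical usage: align the snow-covered cloud to the snow-free frame,
# then difference the surfaces exactly as in the conventional workflow.
# sc_aligned = coregister(sc_cloud, corner_sc, corner_sf)
```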
Figure 9. For co-registering the two datasets, we used the corner of a large boulder (red star) within the study site to estimate the translational components for the affine transformation.

4.5. High-Resolution Snow Depth Estimates

The spatially continuous snow depths over our study area are shown in Figure 10. The spatial variation in snow depth is readily apparent, and the overall trends are consistent with the interpolated in situ measurements in Figure 2. The mean snow depth according to the in situ GPS observations was 61.8 cm, with a standard deviation of 14.3 cm. The mean value of the spatially continuous snow depths computed from the indirect BA workflow is 61.9 cm. While such close agreement is somewhat fortuitous given the overall magnitude of the error budget, it nonetheless provides additional methodological validation.
Figure 10. High-resolution (50 cm) snow depths over our study area, as computed from the differenced point cloud measurements. Units of depth are in cm.

5. Discussion and Conclusions

5.1. Discussion

The experiences gained so far from extracting snow depth from optical imagery via BA show that the two methodologies outlined herein have different strengths and weaknesses. Conventional BA routines with ground control points provide accurate resolution of snow depths, with an overall RMSE of 9.6 cm. Direct BA removes the need to obtain ground control points (GCPs) to constrain the adjustment, but the overall accuracy (RMSE) is 18.4 cm. However, if a fixed surface point is co-visible in both the snow-free and snow-covered datasets, the bias can be removed and the point clouds aligned via an affine transformation; in our case, this effectively improved the accuracy to 8.8 cm RMSE. If co-registration is possible, we hypothesize that navigation solutions based on pseudorange measurements alone (~3 m accuracy) could be used to successfully measure spatially continuous snow depths, further lowering the cost associated with this type of measurement. Both direct and indirect reconstructions exceeded the snow depth accuracy, precision, and resolution of prior efforts [19,20,21].
While the spatial scale of our study site is relatively small, the techniques and methods described in this study are well suited for integration at larger scales with platforms that have longer flight times, such as fixed-wing UAS. The overall workflow is relatively straightforward, and few physical assumptions need to be made in the retrieval process, in contrast to, e.g., passive microwave-based retrievals (e.g., [45,46]). This simple, straightforward approach is advantageous for government agencies looking for accurate, high-resolution measurement techniques that can be used at a moment's notice. It should be mentioned that snow depth estimates made via photogrammetric surface differencing cannot automatically be converted to the snow water equivalent (SWE) amount that most forecasting agencies desire, as this requires knowledge of snow density. However, recent literature has suggested it is possible to characterize SWE based on depth measurements alone, because SWE is more closely linked to depth than to density [47].
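For reference, depth converts to SWE through the bulk snow density relative to that of water; with an illustrative (assumed) density of 400 kg m−3, the 62 cm mean depth observed here would correspond to roughly

$$SWE = h_s\,\frac{\rho_s}{\rho_w} = 0.62\ \text{m} \times \frac{400\ \text{kg m}^{-3}}{1000\ \text{kg m}^{-3}} \approx 25\ \text{cm}$$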
In our study area, vegetation has adverse effects on the accuracy of snow depth measurements made using photogrammetric techniques because of its correlation with terrain variability. One way to potentially mitigate the vegetation problem encountered in surface differencing approaches such as this is to augment the photogrammetric observations with co-registered LiDAR measurements. With the miniaturization of LiDAR sensors in recent years [17], a combined UAS-based LiDAR/photogrammetric system could provide ground returns in areas obscured by vegetation to the photogrammetric reconstruction, along with providing general redundancy to the overall surface dataset. This is an exciting possibility and should be investigated in future studies.
Additionally, the photogrammetric method described herein can be combined with existing remote sensing platforms and retrieval approaches to enhance snow property estimates at larger scales using data assimilation techniques. Data assimilation optimally combines the large spatial coverage of spaceborne measurements, physically-based land surface models for prediction, and the high accuracy and resolution of UAS snow depth retrievals to generate highest likelihood estimates of snow states based on the stochastic properties of each input. This is the current state of the art in snow forecasting, and is used globally with encouraging results [9]. Future studies should examine the feasibility of integrating snow depth estimates obtained from photogrammetric measurements on-board UAS platforms into such large-scale retrieval algorithms.
While our results indicate that this method holds great promise, we believe there is still considerable room for improvement. For discussion's sake, we have identified three areas that deserve further exploration. The first is understanding how the physical state of the snow itself affects the reconstruction quality. Does older, metamorphosed snow (sublimation, melt/refreeze, etc.) provide more accurate results than freshly fallen flakes because of its increased spectral variability? In this study, the snow was quite fresh (two days old) but had significant water content and was beginning to melt, which may have been beneficial. The second area is gaining a better understanding of how the overall accuracy of the snow surface reconstruction is affected by the acquisition process. For example, do sunny days provide the best results, or is the overall accuracy invariant to weather conditions? What is the tradeoff between snow depth accuracy and flying height? In general, these concepts are well understood within the photogrammetric community, but they have perhaps yet to be applied to the specific application of snow depth retrieval. The third and final area is simply improvement and innovation in the workflow. Custom algorithms specifically tailored to this application are certain to perform with higher reliability than generalized techniques, which is why we believe it is in the community's best interest to become less dependent on commercial software offerings. In that respect, the proposed workflow provides an excellent starting point for continued exploration and innovation.

5.2. Conclusions

A novel technique has been described to measure high-resolution spatial distributions of snow depths accurately and affordably. While it will not provide the global spatial and temporal coverage of existing airborne and spaceborne sensors (e.g., passive microwave), it can nonetheless provide reliable information about snow processes at local measurement scales. This represents an important contribution to the snow science community, as accurate representations of snow distribution are needed for improvements to hydrological forecasts, climate models, and for the future testing and validation of remote-sensing retrieval algorithms. Smaller-scale process-oriented snow studies could benefit from such observations, especially as difficulty persists in existing methods to create spatially-continuous snow depth maps from point observations [5].
We successfully estimated snow depth at high spatial resolution in mountainous terrain with thick vegetation cover, based on photogrammetrically derived 3D surface models from aerial photography acquired by a micro unmanned aircraft system (UAS). Snow depths were estimated using conventional BA techniques with an overall RMSE of 9.6 cm, whereas direct BA techniques without GCP information yielded an RMSE of 18.4 cm. If an affine transformation is applied using a fixed surface point that is co-visible in both the snow-free and snow-covered datasets, the RMSE is reduced to 8.8 cm. This is the most convenient way to resolve any systematic error that may be present; however, it depends on having points on both surfaces that can be co-registered. Physical characteristics of this environment (e.g., variable terrain, deep snow) that can be highly problematic for other remote sensing based snow retrieval methods did not significantly affect the accuracy of the estimated snow depth in our case.
The research results demonstrate that airborne remote sensing using UAS platforms that implement low-cost COTS digital camera systems and standard photogrammetric techniques can provide snow depth retrievals at a level of accuracy (<10 cm) that can effectively support those in the snow community who need high-resolution measurements of snow depth at local scales. This technology will only continue to improve in reliability, accuracy, and spatial scalability. The proliferation of UAS technology and improved sensor capabilities should enable rapid advances in the ability to retrieve snow properties in mountainous areas, which are often characterized by high snowfall amounts but dangerous and remote measurement conditions.

Acknowledgments

This project was funded through a National Science Foundation East Asia and Pacific Studies Institute (EAPSI) fellowship, as well as National Aeronautics and Space Administration NESSF fellowship grant NNX11AL41H. Additionally, the authors acknowledge the Winfred Violet Scott Trust for funding support in purchasing the OktoKopter used in this study. Helge Aasen and Juliane Bendig are acknowledged for their help with the GPS survey during the February 2014 field campaign.

Author Contributions

Ben Vander Jagt, Arko Lucieer, Luke Wallace, and Darren Turner conceived the experiment. Ben Vander Jagt, Arko Lucieer, Luke Wallace, and Darren Turner performed the data collection. Ben Vander Jagt, Arko Lucieer, Luke Wallace, and Michael Durand contributed to the manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Barnett, T.P.; Adam, J.C.; Lettenmaier, D.P. Potential impacts of a warming climate on water availability in snow-dominated regions. Nature 2005, 438, 303–309. [Google Scholar] [CrossRef] [PubMed]
  2. Hu, J.I.A.; Moore, D.J.; Burns, S.P.; Monson, R.K. Longer growing seasons lead to less carbon sequestration by a subalpine forest. Glob. Chang. Biol. 2010, 16, 771–783. [Google Scholar] [CrossRef]
  3. Peck, E.L. Snow measurement predicament. Water Resour. Res. 1972, 8, 244–248. [Google Scholar] [CrossRef]
  4. Hall, D.K.; Sturm, M.; Benson, C.S.; Chang, A.T.C.; Foster, J.L.; Garbeil, H.; Chacho, E. Passive microwave remote and in situ measurements of artic and subarctic snow covers in Alaska. Remote Sens. Environ. 1991, 38, 161–172. [Google Scholar] [CrossRef]
  5. Erxleben, J.; Elder, K.; Davis, R. Comparison of spatial interpolation methods for estimating snow distribution in the Colorado Rocky Mountains. Hydrol. Process. 2002, 16, 3627–3649. [Google Scholar] [CrossRef]
  6. Chang, A.T.C.; Foster, J.L.; Hall, D.K. Nimbus-7 SMMR derived global snow cover parameters. Ann. Glaciol. 1987, 9, 39–44. [Google Scholar]
  7. Foster, J.L.; Hall, D.K.; Chang, A.T.C.; Rango, A. An overview of passive microwave snow research and results. Rev. Geophys. 1984, 22, 195–208. [Google Scholar] [CrossRef]
  8. Foster, J.L.; Chang, A.T.C.; Hall, D.K. Comparison of snow mass estimates from a prototype passive microwave snow algorithm, a revised algorithm and a snow depth climatology. Remote Sens. Environ. 1997, 62, 132–142. [Google Scholar] [CrossRef]
  9. Takala, M.; Luojus, K.; Pulliainen, J.; Derksen, C.; Lemmetyinen, J.; Kärnä, J.P.; Bojkov, B. Estimating northern hemisphere snow water equivalent for climate research through assimilation of space-borne radiometer data and ground-based measurements. Remote Sens. Environ. 2011, 115, 3517–3529. [Google Scholar] [CrossRef]
  10. Vander Jagt, B.J.; Durand, M.T.; Margulis, S.A.; Kim, E.J.; Molotch, N.P. The effect of spatial variability on the sensitivity of passive microwave measurements to snow water equivalent. Remote Sens. Environ. 2013, 136, 163–179. [Google Scholar] [CrossRef]
  11. Deems, J.S.; Painter, T.H.; Finnegan, D.C. Lidar measurement of snow depth: A review. J. Glaciol. 2013, 59, 467–479. [Google Scholar] [CrossRef]
  12. Colomina, I.; Molina, P. Unmanned aerial systems for photogrammetry and remote sensing: A review. ISPRS J. Photogramm. Remote Sens. 2014, 92, 79–97. [Google Scholar] [CrossRef]
  13. Eisenbeiss, H.; Zhang, L. Comparison of DSMs generated from mini UAS imagery and terrestrial laser scanner in a cultural heritage application. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2006, 36, 90–96. [Google Scholar]
  14. Jaakkola, A.; Hyyppä, J.; Kukko, A.; Yu, X.; Kaartinen, H.; Lehtomäki, M.; Lin, Y. A low-cost multi-sensoral mobile mapping system and its feasibility for tree measurements. ISPRS J. Photogramm. Remote Sens. 2010, 65, 514–522. [Google Scholar] [CrossRef]
  15. Lucieer, A.; de Jong, S.; Turner, D. Mapping landslide displacements using Structure from Motion (SfM) and image correlation of multi-temporal UAS photography. Progress Phys. Geogr. 2013. [Google Scholar] [CrossRef]
  16. Ryan, J.C.; Hubbard, A.L.; Todd, J.; Carr, J.R.; Box, J.E.; Christoffersen, P.; Snooke, N. Repeat UAV photogrammetry to assess calving front dynamics at a large outlet glacier draining the Greenland Ice Sheet. Cryosphere Discuss. 2014, 8, 2243–2275. [Google Scholar] [CrossRef]
  17. Wallace, L.; Lucieer, A.; Watson, C.; Turner, D. Development of a UAS-LiDAR system with application to forest inventory. Remote Sens. 2012, 4, 1519–1543. [Google Scholar] [CrossRef]
  18. Harwin, S.; Lucieer, A. Assessing the accuracy of georeferenced point clouds produced via multi-view stereopsis from unmanned aerial vehicle (UAS) imagery. Remote Sens. 2012, 4, 1573–1599. [Google Scholar] [CrossRef]
  19. Tao, W.; Lei, Y.; Mooney, P. Dense point cloud extraction from UAV captured images in forest area. In Proceedings of the 2011 IEEE International Conference on Spatial Data Mining and Geographical Knowledge Services (ICSDM), Fuzhou, China, 29 June–1 July 2011; pp. 389–392.
  20. Cooper, C.F. Snow cover measurement. Photogramm. Eng. 1965, 31, 611–619. [Google Scholar]
  21. Smith, F.M.; Cooper, C.F.; Chapman, E.G. Measuring Snow Depths by Aerial Photography. In Proceedings of the 35th Annual Western Snow Conference, Boise, ID, USA, 18–20 April 1967; pp. 66–72.
  22. Nolan, M.; Larsen, C.F.; Sturm, M. Mapping snow-depth from manned-aircraft on landscape scales at centimeter resolution using Structure-from-Motion photogrammetry. Cryosphere Discuss. 2015, 9, 333–381. [Google Scholar] [CrossRef]
  23. Ledwith, M.; Lundén, B. Digital photogrammetry for air-photo-based construction of a digital elevation model over snow-covered areas-Blamannsisen, Norway. Nor. J. Geogr. 2001, 55, 267–273. [Google Scholar] [CrossRef]
  24. Ebadi, H. A Comprehensive Study on GPS-Assisted Aerial Triangulation. Ph.D. Dissertation, University of Calgary, Calgary, AB, Canada, 1997. [Google Scholar]
  25. Cramer, M.; Stallmann, D.; Haala, N. Direct georeferencing using GPS/inertial exterior orientations for photogrammetric applications. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2000, 33, 198–205. [Google Scholar]
  26. Mostafa, M.M.; Schwarz, K.P. Digital image georeferencing from a multiple camera system by GPS/INS. ISPRS J. Photogramm. Remote Sens. 2001, 56, 1–12. [Google Scholar] [CrossRef]
  27. Lee, C.Y.; Jones, S.D.; Bellman, C.J.; Buxton, L. DEM creation of a snow covered surface using digital aerial photography. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2008, 37, 18–27. [Google Scholar]
  28. Lemmon, T.R.; Gerdan, G.P. The Influence of the Number of Satellites on the Accuracy of RTK GPS Positions. Aust. Surv. 1990, 44, 64–70. [Google Scholar] [CrossRef]
  29. Turner, D.; Lucieer, A.; Wallace, L. Direct georeferencing of ultrahigh-resolution UAV imagery. IEEE Trans. Geosci. Remote Sens. 2014, 52, 2738–2745. [Google Scholar] [CrossRef]
  30. Van der Merwe, R.; Wan, E.A. Sigma-point Kalman filters for integrated navigation. In Proceedings of the 60th Annual Meeting of the Institute of Navigation, Dayton, OH, USA, 7–9 June 2004; pp. 641–654.
  31. Mikhail, E.M.; Bethel, J.S.; McGlone, J.C. Introduction to Modern Photogrammetry; John Wiley & Sons Inc.: New York, NY, USA, 2001; Volume 1. [Google Scholar]
  32. Brown, D. Close-range camera calibration. Photogramm. Eng. 1971, 37, 855–866. [Google Scholar]
  33. Triggs, B.; McLauchlan, P.F.; Hartley, R.I.; Fitzgibbon, A.W. Bundle adjustment—A modern synthesis. In Vision Algorithms: Theory and Practice; Springer: Berlin/Heidelberg, Germany, 2000; pp. 298–372. [Google Scholar]
  34. Zhang, L.; Gruen, A. Multi-image matching for DSM generation from IKONOS imagery. ISPRS J. Photogramm. Remote Sens. 2006, 60, 195–211. [Google Scholar] [CrossRef]
  35. Scambos, T.A.; Dutkiewicz, M.J.; Wilson, J.C.; Bindschadler, R.A. Application of image cross-correlation to the measurement of glacier velocity using satellite image data. Remote Sens. Environ. 1992, 42, 177–186. [Google Scholar] [CrossRef]
  36. Lowe, D.G. Object recognition from local scale-invariant features. In Proceedings of the Seventh IEEE International Conference on Computer Vision, Kerkyra, Greece, 20–27 September 1999; Volume 2, pp. 1150–1157.
  37. Snavely, N.; Seitz, S.M.; Szeliski, R. Photo tourism: Exploring photo collections in 3D. ACM Trans. Graph. 2006, 25, 835–846. [Google Scholar] [CrossRef]
  38. Nistér, D. Preemptive RANSAC for live structure and motion estimation. Mach. Vis. Appl. 2005, 16, 321–329. [Google Scholar] [CrossRef]
  39. Brown, D.C. The bundle adjustment—Progress and prospects. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 1976, 21, 1. [Google Scholar]
  40. Huang, T.S.; Netravali, A.N. Motion and structure from feature correspondences: A review. Proc. IEEE 2004, 82, 252–268. [Google Scholar] [CrossRef]
  41. Agarwal, S.; Snavely, N.; Seitz, S.M.; Szeliski, R. Bundle adjustment in the large. In Computer Vision–ECCV 2010; Springer: Berlin/Heidelberg, Germany, 2010; pp. 29–42. [Google Scholar]
  42. Seitz, S.M.; Curless, B.; Diebel, J.; Scharstein, D.; Szeliski, R. A comparison and evaluation of multi-view stereo reconstruction algorithms. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Rio de Janeiro, Brazil, 17–22 June 2006; Volume 1, pp. 519–528.
  43. Furukawa, Y.; Ponce, J. Accurate, dense, and robust multiview stereopsis. IEEE Trans. Pattern Anal. Mach. Intell. 2010, 32, 1362–1376. [Google Scholar] [CrossRef] [PubMed]
  44. Evans, J.S.; Hudak, A.T. A multiscale curvature algorithm for classifying discrete return Lidar in forested environments. IEEE Trans. Geosci. Remote Sens. 2007, 45, 1029–1038. [Google Scholar] [CrossRef]
  45. Efron, B.; Gong, G. A leisurely look at the bootstrap, the jackknife, and cross-validation. Am. Stat. 1983, 37, 36–48. [Google Scholar]
  46. Howat, I.M.; Smith, B.E.; Joughin, I.; Scambos, T.A. Rates of southeast Greenland ice volume loss from combined ICESat and ASTER observations. Geophys. Res. Lett. 2008, 35, L17505. [Google Scholar] [CrossRef]
  47. Durand, M.; Kim, E.J.; Margulis, S.A. Radiance assimilation shows promise for snowpack characterization. Geophys. Res. Lett. 2009, 36. [Google Scholar] [CrossRef]
  48. Sturm, M.; Taras, B.; Liston, G.E.; Derksen, C.; Jonas, T.; Lea, J. Estimating snow water equivalent using snow depth data and climate classes. J. Hydrometeorol. 2010, 11, 1380–1394. [Google Scholar] [CrossRef]

