Article

Photogrammetric UAV Mapping of Terrain under Dense Coastal Vegetation: An Object-Oriented Classification Ensemble Algorithm for Classification and Terrain Correction

1 Department of Geography & Anthropology, Louisiana State University, Baton Rouge, LA 70803, USA
2 Coastal Studies Institute, Louisiana State University, Baton Rouge, LA 70803, USA
3 Department of Oceanography and Coastal Sciences, Louisiana State University, Baton Rouge, LA 70803, USA
4 School of Environment and Natural Resources, Ohio Agriculture Research and Development Center, Ohio State University, Wooster, OH 44691, USA
5 Department of Geography, Geology and Planning, Missouri State University, Springfield, MO 65897, USA
* Author to whom correspondence should be addressed.
Remote Sens. 2017, 9(11), 1187; https://doi.org/10.3390/rs9111187
Submission received: 31 August 2017 / Revised: 9 November 2017 / Accepted: 13 November 2017 / Published: 19 November 2017
(This article belongs to the Section Remote Sensing in Agriculture and Vegetation)

Abstract

Photogrammetric UAVs are seeing a surge in use for high-resolution mapping, but their use to map terrain under dense vegetation cover remains challenging due to a lack of exposed ground surfaces. This paper presents a novel object-oriented classification ensemble algorithm that leverages the height, texture, and contextual information of UAV data to improve landscape classification and terrain estimation. Its implementation incorporates multiple heuristics, including multi-input machine learning-based classification, object-oriented ensembling, and the integration of UAV and GPS surveys for terrain correction. Experiments at a densely vegetated wetland restoration site showed classification improvement from 83.98% to 96.12% in overall accuracy and from 0.7806 to 0.947 in Kappa value. Standard, existing UAV terrain mapping algorithms and software produced a reliable digital terrain model only over exposed bare ground (mean error = −0.019 m and RMSE = 0.035 m) but severely overestimated the terrain by approximately 80% of the mean vegetation height in vegetated areas. The terrain correction method successfully reduced the mean error from 0.302 m to −0.002 m (RMSE from 0.342 m to 0.177 m) in low vegetation and from 1.305 m to 0.057 m (RMSE from 1.399 m to 0.550 m) in tall vegetation. Overall, this research validated a feasible solution for integrating UAV and RTK GPS for terrain mapping in densely vegetated environments.


1. Introduction

Accurate topographic mapping is critical for various environmental applications in low-lying coastal areas, including inter-tidal zones, because elevation affects hydrodynamics and vegetation distributions [1]. Small elevation changes can alter sediment stability, nutrients, organic matter, tides, salinity, and vegetation growth, and may therefore cause significant vegetation transitions in relatively flat wetlands [2,3]. Furthermore, topographic information is a prerequisite for extracting vegetation structural characteristics such as canopy height, vegetation cover, and biomass from remote sensing data [4,5,6,7,8].
For decades, researchers have used various remote sensing techniques such as RADAR, Light Detection And Ranging (LiDAR), stereo photogrammetric mapping, altimetry, and GPS to map landscapes at various scales [9,10,11,12,13,14,15]. However, accurate topographic mapping in coastal areas remains challenging due to complications from hydrodynamics, ever-changing landscapes, low elevation, and dense vegetation cover [2,3]. As a common tool for topographic mapping, airborne LiDAR has typical sensor measurement accuracies between 0.10 m and 0.20 m [3,7]. Previous studies based on airborne LiDAR commonly reported mean terrain errors from 0.07 m to 0.17 m in marshes [3] and as high as 0.31 m in areas with relatively tall vegetation [2,16]. Errors increase further under denser and taller vegetation conditions and may reach a challenging "dead zone" when marsh vegetation height approaches or exceeds 2 m [3]. Since many airborne LiDAR missions collect data during winter, when many vegetation species die off or become sparse and flagging, for better laser penetration, terrain mapping in seasons or wetland types with fuller vegetation biomass will produce even lower accuracies. These studies confirm that severe errors exist in current coastal topographic mapping, with significant impacts on broad applications such as wetland habitat mapping [2], change monitoring [9], and modeling of floods [17], inundation [10], storm surge, and sea level rise [17].
Terrain mapping under dense wetland vegetation remains challenging because several characteristics of coastal environments complicate its application in vegetated areas. First, the low-relief topography, combined with tidal and weather-induced variations in water level, affects how much land is exposed to air. Second, dense vegetation with large coverage often leaves too little exposed ground for ground filtering and terrain interpolation. Third, the limited spectral information from high-density or high-resolution remote sensors, combined with the presence of water and high soil moisture, makes land cover mapping extremely difficult. Fourth, the intense land dynamics and the inability to obtain data promptly hinder many applications such as wetland restoration. Therefore, there is a great need for more accurate and rapid topographic mapping in coastal environments.
In recent years, terrain correction has emerged as a new approach to improving terrain mapping in areas with dense vegetation. Hladik, Schalles and Alber [2] demonstrated an approach to correct wetland terrain from airborne LiDAR data by integrating it with hyperspectral images and GPS surveys of terrain samples. However, terrain correction based on multiple historical datasets faces limitations of data availability, resolution differences, time discrepancies, and restriction to unchanged areas. Discrepancies among the data, or seasonal and annual landscape changes, can introduce significant errors in ever-changing coastal environments affected by hydrodynamics, hazards, and human activities. Furthermore, although LiDAR remains the best current technology for high-resolution terrain mapping [18,19], the equipment is costly. Frequent deployment of airborne LiDAR for surveys is cost prohibitive, creating a need for an affordable method for rapid and accurate measurements that does not rely on outdated historical data [2,3].
Recent developments in unmanned aerial vehicles (UAVs) provide a promising solution for general mapping applications. Salient features of UAV systems and data include low cost, ultra-high spatial resolution, non-intrusive surveying, high portability, flexible mapping schedules, rapid response to disturbances, and ease of multi-temporal monitoring [20]. In particular, the expedited data acquisition process and the lack of any requirement for ground mounting locations make UAVs a favorable surveying method given the challenging mobility and accessibility of wetland environments.
In the past few decades, the UAV community has made significant progress in lightweight sensor and UAV system development [21,22], data preprocessing [23], registration [24,25,26,27], and image matching [28,29,30]. As a result, low-cost systems and automated mapping software have allowed users to conveniently apply UAVs to various applications, including change detection [31], leaf area index (LAI) mapping [32], vegetation cover estimation [33], vegetation species mapping [34], water stress mapping [35], power line surveys [36], building detection and damage assessment [37,38], and comparison with terrestrial LiDAR surveys [39]. For more details, Watts et al. [40] gave a thorough review of UAV technologies with an emphasis on hardware and technical specifications. Colomina and Molina [20] reviewed general UAV usage in photogrammetry and remote sensing. Klemas [4] reviewed UAV applications in coastal environments.
Despite these successful applications, challenges remain, especially when deploying UAVs to map densely vegetated areas. Difficulty in deriving under-canopy terrain is common for point-based ground-filtering processes. Use of UAVs to map terrain is therefore best suited to areas with sparse or no vegetation, such as sand dunes and beaches [41], coastal flat landscapes dominated by beaches [42], and arid environments [43]. Thus far, examination of the efficacy of UAVs for topographic mapping over vegetated areas has been limited to only a few studies. Jensen and Mathews [5], for instance, constructed a digital terrain model (DTM) from UAV images for a woodland area and reported an overestimation error of 0.31 m. Their study site was relatively flat with sparse vegetation, where the ground-filtering algorithm picked up points from nearby exposed ground areas. Simpson et al. [44] used the LAStools suite to conduct ground filtering of UAV data in a burned tropical forest, showing that the impact of DTM resolution varied with vegetation type and that high errors remained in tall vegetation. More recently, Gruszczyński, Matwij and Ćwiąkała [39] introduced a ground-filtering method for UAV mapping of low vegetation in a partially mowed grass field. Successful application of these ground-filtering methods depends on the assumption that adjusting the size of search windows can help pick up nearby ground points in sparsely vegetated, relatively flat areas with fragmented vegetation distribution. However, ground filtering in dense and tall vegetation has been considered "pointless" due to the difficulty of penetration and the lack of ground points [39]. Thus, current developments in the UAV community provide no solution to the challenging issue of terrain mapping in densely vegetated coastal environments.
The goal of this study is to alleviate the difficulties of terrain mapping under dense vegetation cover and to develop a rapid mapping solution that does not depend on historical data. Because researchers often use higher-accuracy external GPS units to register UAV point clouds against targets, and because the vegetation crown is visible under most hydrological conditions, this research proposes an algorithmic framework that corrects terrain based on land cover classification by integrating GPS samples with crown points measured by the UAV. Among the three main UAV types (photogrammetric, multispectral, and hyperspectral), photogrammetric UAVs are the most commonly used, with relatively low cost and a natural color camera. This means that many users have access to photogrammetric UAV systems and often face the challenge of high-resolution mapping with limited spectral bands, which has a significant impact on classification accuracy. On the other hand, high-resolution images contain a tremendous amount of textural information, which is not always directly usable as input bands for traditional classification but can be exploited through logical reasoning based on texture and object-oriented analyses. Therefore, this research developed a novel object-oriented classification ensemble algorithm to integrate texture and contextual indicators with multi-input classifications. Evaluated using data for a coastal region in Louisiana, the validated approach provides a field survey solution for applying UAVs to terrain mapping in densely vegetated environments.

2. Study Site and Data Collection

2.1. Study Site

The study site is located at the Buras Boat Harbor, Plaquemines Parish, Louisiana (Figure 1). Much of coastal Louisiana has experienced rapid land loss in the past few decades, accounting for the majority of wetland loss in the United States [45,46]. Located in an area well known for severe wetland loss, the study site suffered serious wetland degradation from 1998 to 2013, as illustrated in Figure 1b,c. The process started with bank erosion and progressed gradually inward toward the main Mississippi River levee. To alleviate further wetland degradation and enhance levee safety, government agencies reconstructed sand berms along the previous wetland banks in 2014 by dredging sediment from the bottom of nearby water. Figure 1d shows the status after nearly one year of reconstruction.
To prevent sediment erosion from waves and tides, the restoration project placed sand bags and eco-friendly EcoShield™ protective mats planted with Spartina alterniflora (smooth cordgrass) and Panicum vaginatum Sw. (seashore paspalum) on the southwestern half of the first sand berm in August 2014, forming an artificial shoreline for the wetlands and levees. After one year (Figure 1e), Spartina alterniflora had grown to an average height of 1.63 m in dense clusters on both the southwest and northeast edges of the berm, dominating areas with relatively low elevation and frequent tides of brackish water. The low vegetation of Panicum vaginatum Sw. had an average height of 0.37 m, reaching up to 0.5 m, and grew densely in areas with relatively higher elevation. A few stems of naturally grown local species, approximately 0.5 to 1 m tall, grew sparsely in areas with relatively higher elevation. A narrow zone of bare ground remained in the center of the berm, at the highest elevation along its curve. The lack of exposed ground in the vegetated areas of this site makes ground filtering with traditional methods difficult. Based on the needs of terrain correction, the surface covers of the berm fall into four categories: low vegetation, tall vegetation, ground, and water. Low vegetation refers to areas dominated by Panicum vaginatum Sw. Tall vegetation includes Spartina alterniflora and the few tall local species, which are grouped together due to their limited number and similar height.

2.2. UAV and GPS Field Data Collection

To assess the reliability of mapping densely vegetated coastal landscapes through field collection of high-resolution data, we conducted UAV flights over the study site on 1 October 2015 (Figure 1e). The flight time was around 6:00 p.m. and the wind speed was 4.8 kph. The mapped segment of the sand berm is approximately 195.5 m long and 30 m wide, with an elevation range between −24.21 m (submerged under tidal water) and −22.975 m (Louisiana South State Plane, NAD83), equivalent to −0.29 m to 0.945 m in NAVD 88, based on GPS surveys. The UAV was a hexacopter carrying a Sony NEX-5 camera and using the Pixhawk flight controller system (Figure 2c). The system has a battery endurance of approximately 20 min and can resist wind speeds up to 15 mph. The research team used the Mission Planner (version 1.3.48) ground control station of the ArduPilot system to plan a round-trip flight line along the sand berm at a flight height of 42 m, collecting images every 10–15 m of ground distance with an image dimension of 4592 × 3056 pixels. The overlap along the flight lines is around 74%, and the sidelap between flight lines is around 55%. To cover the entire study site and ensure image quality, the operator conducted four trial flights with a total flight length of 800 m. The final coastal morphological mapping process used thirty-seven images acquired during the last flight.
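As a rough consistency check on these acquisition settings, the along-track overlap can be reproduced from the image geometry. The short Python sketch below uses the 1.37 cm ground sampling distance reported in Section 4.1 and assumes the 3056-pixel image axis lies along the flight line; the axis orientation and the sampled spacing values are illustrative assumptions, not values stated in the paper.

```python
# Back-of-the-envelope check of along-track overlap from reported values.
# Assumption (not stated in the paper): the 3056-pixel image axis lies
# along the flight line; the 1.37 cm GSD from Section 4.1 applies at nadir.
gsd = 0.0137                          # ground sampling distance (m)
along_track_px = 3056                 # short image axis (pixels)
footprint = along_track_px * gsd      # ~41.9 m along-track ground footprint
for spacing in (10.0, 11.0, 15.0):    # photo spacing within the 10-15 m range
    overlap = 1.0 - spacing / footprint
    print(f"spacing {spacing:4.1f} m -> overlap {overlap:.0%}")
# ~76% at 10 m and ~74% at 11 m, consistent with the reported ~74% overlap.
```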
For accurate georeferencing and uncertainty assessment of the UAV image-registration process, the team distributed ten targets as ground control points (GCPs) along the sand berm in bare ground areas. These targets were 0.44 m-wide boards painted black and white in a diagonal chessboard pattern. After the flight, the team surveyed the targets with a Trimble R10 RTK GNSS unit with an RMSE of 0.015 m in horizontal accuracy and 0.021 m in vertical accuracy. The coordinate system for all GPS surveys was Louisiana South State Plane in NAD83.

3. Methodology

3.1. Overview of an Object-Oriented Classification Ensemble Algorithm for Classification and Terrain Correction

This section presents an object-oriented classification ensemble algorithm that applies photogrammetric UAV and GPS sampling for improved terrain mapping in densely vegetated environments. The conceptual workflow includes four main stages, as illustrated in Figure 3. Stage 1 conducts UAV mapping and generates an initial DTM using the well-established Structure from Motion (SfM) method. Stage 2 compares the classification result based on the orthophoto alone with those based on the orthophoto plus a point-statistic layer to derive two optimal classification results. Stage 3 uses the object-oriented classification ensemble method to correct the classification based on a set of spectral, texture, and contextual indicators. Stage 4 assigns a terrain correction factor to each class based on elevation samples from GPS surveys and conducts the final terrain correction. The ensemble method used here refers to several integrations at different stages: the multiple classification results, the analytical use of indicators, and the mechanism for integrating UAV and GPS surveys. The following sections illustrate and validate the algorithm using a densely vegetated coastal wetland restoration site.

3.2. Stage 1: Generation of Initial DTM

To ensure the quality of the matching points from stereo mapping, a preprocessing step first screens the images, removing low-quality images (e.g., blurred by motion or out of focus) and enhancing sharpness, contrast, and saturation. The Pix4Dmapper software then processes the images to construct matching points, an orthophoto, and a DTM by combining the SfM algorithm with the Multi-View Stereo photogrammetry algorithm (SfM-MVS) [47]. The SfM-MVS method first applies the Scale Invariant Feature Transform (SIFT) to detect features in images with wide baselines and then identifies correspondences between features through the high-dimensional descriptors from the SIFT algorithm. After eliminating erroneous matches, SfM uses bundle-adjustment algorithms to construct the 3D structure. The purpose of using MVS is to increase the density of the point clouds through the use of images from wider baselines. In addition, Pix4Dmapper supports 3D georeferencing through manual input of GCP coordinates acquired from the GPS surveys. In the first step, the software automatically aligns the UAV images and generates the DSM, orthophoto, and matching points based on the SfM-MVS algorithms. Pix4Dmapper provides an interactive approach for selecting the corresponding targets in the 3D point clouds for registration. A classification function then classifies the matching points into terrain vs. object based on the minimum object height, minimum object length, and maximum object length, and interpolates the identified terrain points into a DTM. In some situations, the terrain in water areas may contain abnormal elevation outliers due to mismatched noisy points. Users should inspect the quality of the DTM and eliminate these outliers. For detailed information about SfM-MVS and the Pix4Dmapper software, please refer to Smith, Carrivick and Quincey [47].
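The outlier removal described above was performed manually in this study. Purely as an illustration, an automated screen could compare the DTM against a local median surface; the window size and deviation threshold below are assumed values, not parameters from the paper.

```python
import numpy as np
from scipy.ndimage import median_filter

def remove_dtm_spikes(dtm, window=9, max_dev=0.5):
    """Replace isolated spikes or drops in a DTM grid with a local median.
    Cells deviating from the median surface by more than max_dev metres
    are treated as mismatch outliers; both parameters are illustrative."""
    smooth = median_filter(dtm, size=window)   # robust local surface
    outliers = np.abs(dtm - smooth) > max_dev  # flag abnormal cells
    return np.where(outliers, smooth, dtm), outliers
```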

3.3. Stage 2: Multi-Input Classification Comparison

One major challenge in this study is conducting land classification based on photogrammetric UAV data without depending on historical data, which limits the data inputs to the three orthophoto bands and the matching points. To overcome this limitation and exploit the detailed textural and contextual characteristics of high-resolution, high-density data, this research explores the use of statistical and morphological characteristics of the matching points, including point density and the maximum, minimum, maximum–minimum, and mean elevations, to enhance classification.
For classification algorithms, Ozesmi and Bauer [48] provided a thorough review of various classification methods for coastal and wetland studies. Among these methods, the machine-learning support vector machine (SVM) classifier has demonstrated high robustness and reliability across numerous successful applications, even with limited sample sizes, which is critical for sample-based studies given the challenging field accessibility of wetland environments [49,50,51,52]. Therefore, this research selected SVM to classify the UAV-based orthophoto and compared the results with those obtained by adding one of the statistical layers to the classification inputs, identifying two optimal classification results for the multi-classification ensemble in the following steps.
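A minimal sketch of this multi-input comparison in Python follows: an SVM is trained on a stack of the three orthophoto bands plus one elevation-statistic layer and applied to every pixel. The kernel and parameters are illustrative defaults; the paper does not report its SVM settings.

```python
import numpy as np
from sklearn.svm import SVC

def classify_stack(bands, train_rc, train_labels):
    """Classify a stack of co-registered layers (e.g., R, G, B, and mean
    elevation) with an SVM. train_rc is an (n, 2) array of row/col indices
    of training pixels; kernel and parameters are assumed defaults."""
    stack = np.dstack(bands).astype(float)          # rows x cols x n_bands
    X = stack[train_rc[:, 0], train_rc[:, 1], :]    # training feature vectors
    clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X, train_labels)
    flat = stack.reshape(-1, stack.shape[2])        # classify every pixel
    return clf.predict(flat).reshape(stack.shape[:2])
```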

3.4. Stage 3: Classification Correction

Stage 3 uses an object-oriented classification ensemble method to improve classification based on the classifications from multiple inputs and a set of indicators. The classification results from Stage 2, produced by a traditional classification method, may reach accuracies acceptable for many classification applications (e.g., over 80%). However, when applied to coastal topographic mapping for terrain correction, this statistically acceptable percentage of errors may produce widely distributed artificial spikes and ditches in the terrain and therefore needs correction. A recent trend for improving classification is to utilize results from multiple classifiers [53,54]. A commonly used ensembling strategy is majority voting, which assigns each pixel the class receiving the majority of votes from the classifiers [50]. However, the limited spectral bands of photogrammetric UAVs restrict this application, as changing classifiers is unlikely to correct spectrally confused areas. Therefore, this research explores a comprehensive approach that integrates spectral, texture, and contextual statistics, as well as results from multiple classifications, to improve classification.
For UAV applications, the orthophoto is the main source for land cover classification. Adding layers of topographic statistics may improve classification in certain areas but reduce accuracy in others, especially where large gaps exist in the matching points due to lack of texture or shadow effects resulting from low contrast or sunlight illumination. Compared to traditional pixel-based classification, the object-oriented ensemble approach provides a promising solution because of its flexibility in integration and logical realization. Figure 4 conceptualizes the method for integrating multiple classification results with other indicators. Figure 5 illustrates the need for classification correction and the effect of the analytical steps on six representative plots, which demonstrate the scenarios and procedures in the following sections.
Let Classification 1 represent the classification based on the orthophoto and Classification 2 denote the other optimal classification result, which in this experiment is the classification based on the orthophoto and the mean elevation statistic layer. Other useful information derivable from the matching points for the following steps includes the point density; the maximum, minimum, maximum–minimum, and mean elevations; texture (variance) from the natural color images; and the band with high contrast in brightness between vegetated and shadowed areas. Among the three RGB bands, the red band of the orthophoto provides the highest contrast in troubled areas, based on visual examination of the individual bands in this experiment, and hence differentiates dark and bright objects.
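A minimal sketch of deriving these per-cell point statistics from the matching points is shown below. The 2 cm cell size follows the DTM resolution noted in the Discussion; the gridding scheme itself is an assumption, since the paper produced its indicator layers with its own GIS tooling.

```python
import numpy as np

def point_statistics(x, y, z, cell=0.02):
    """Per-cell density, min, max, max-min, and mean elevation of the
    matching points (the indicator layers used in Stages 2 and 3)."""
    x0, y0, x1, y1 = x.min(), y.min(), x.max(), y.max()
    ncols = max(int(np.ceil((x1 - x0) / cell)), 1)
    nrows = max(int(np.ceil((y1 - y0) / cell)), 1)
    col = np.clip(((x - x0) / cell).astype(int), 0, ncols - 1)
    row = np.clip(((y1 - y) / cell).astype(int), 0, nrows - 1)
    idx = row * ncols + col                    # flat cell index per point
    n = nrows * ncols
    density = np.bincount(idx, minlength=n).astype(float)
    mean = np.bincount(idx, weights=z, minlength=n) / np.maximum(density, 1)
    zmin = np.full(n, np.inf)
    zmax = np.full(n, -np.inf)
    np.minimum.at(zmin, idx, z)                # per-cell minimum elevation
    np.maximum.at(zmax, idx, z)                # per-cell maximum elevation
    for a in (mean, zmin, zmax):
        a[density == 0] = np.nan               # cells with no matching points
    stats = {"density": density, "min": zmin, "max": zmax,
             "range": zmax - zmin, "mean": mean}
    return {k: v.reshape(nrows, ncols) for k, v in stats.items()}
```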
After comparative classification experiments, Classification 1 based on the orthophoto and Classification 2 based on the orthophoto and mean elevation serve as the multi-classification inputs. The four main correction steps shown on the right side of Figure 4 correspond to the four main types of class confusion observed in Classification 2 in this experiment.
(1) Correction of water
In coastal environments, water is frequently present in wetlands and in areas with high soil moisture saturation. Water-related errors often result from the following phenomena. First, users should carefully examine DTM quality, especially in water areas, and remove outliers as stated in the UAV mapping section. For example, two mismatched pixels created two sharp artificial drops in water areas in this experiment. Because of the lack of nearby matching points and the interpolation process, the DTM, classification, and object-oriented statistics further propagated and expanded these errors. Therefore, a manual preprocess removed these points before DTM interpolation. Second, in order to acquire high-quality pictures, UAV field data collection usually occurs under sunny weather with relatively calm water surfaces, which results in no matching points (no point density value) in some water areas due to lack of texture. After removing the mismatched points, the interpolation process may derive reasonable elevation values based on nearby points. Therefore, part of these water areas may carry a correct classification label, and the absence of point density may be a useful indicator for other misclassified water areas where applicable. Third, due to the confusion between shallow water and dark ground surfaces with high moisture, as demonstrated in Plots 1 and 2 in Figure 5, many misclassified water pixels may scatter over land in fragmented shapes or salt-and-pepper patterns. These errors are usually located in dark soil or shadowed areas between vegetation leaves at this study site and need correction first, as the later processes compare object elevations with nearby ground surfaces. Users can extract and correct these errors based on object size, elevation, and other characteristics such as the absence of point density. This study used the Region Group tool in ArcGIS to form objects. To estimate the elevation of nearby ground, this research uses the zonal statistics tool based on the objects and the mean ground elevation calculated with a 3 × 3 kernel from the ground elevation image. This mean elevation calculation expands the edges of the ground elevation layer into the boundaries of water objects so that the zonal statistics return valid values. Fourth, some areas of shallow water (a few centimeters) next to land have dense matching points from the bathymetry and therefore may be misclassified as ground. Nevertheless, visual inspection of the DTM in these areas showed reasonable topography in this experiment, indicating no need for classification or terrain correction there. However, users should assess this need at their own sites, as water conditions may differ in color, clarity, waves, and bed texture.
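The edge-expansion trick just described, growing the mean ground elevation outward with a 3 × 3 kernel so that zonal statistics over adjacent objects return valid values, could be sketched as follows. The class code, iteration count, and NaN-aware filtering scheme are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def nearby_ground_elevation(dtm, class_map, ground=3, iters=25):
    """Grow a mean ground-elevation surface outward one 3x3 ring per pass
    so that nearby-ground values exist under non-ground objects."""
    z = np.where(class_map == ground, dtm, np.nan)
    for _ in range(iters):
        known = ~np.isnan(z)
        if known.all():
            break
        zsum = uniform_filter(np.where(known, z, 0.0), size=3) * 9
        cnt = uniform_filter(known.astype(float), size=3) * 9
        fill = zsum / np.maximum(cnt, 1e-9)          # 3x3 mean of known cells
        z = np.where(~known & (cnt > 0.5), fill, z)  # expand the edge
    return z
```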
Users can apply the above characteristics to correct water-related classification errors. The analytical processes in Figure 4 show those that proved effective in this experiment. Starting from the result of Classification 2, the correction procedure first extracts the pixels labeled as water to form objects and then changes an object to ground if it is too small, too high, or contains no matching points at all (no cell with point density ≥ 1). Based on observation of the water pixels and the terrain model, this experiment used five pixels as the object-size threshold and 0.22 m (NAVD 88) as the elevation threshold. The no-point-density condition does not affect large water objects covering actual water areas, as part of such an object likely contains some matching points. The corrected result, Classification 3, serves as the input for the next process.
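A hedged reimplementation of this water-correction rule with open-source tools might look like the following, with scipy connected-component labeling standing in for the ArcGIS Region Group and zonal statistics tools used in the paper; the class codes are arbitrary placeholders. The same object-extraction pattern extends to correction steps (2) through (5) below.

```python
import numpy as np
from scipy import ndimage

def correct_water(classes, dtm, density, water=4, ground=3,
                  min_size=5, max_elev=0.22):
    """Step (1): relabel water objects as ground when they are too small,
    too high, or contain no matching points anywhere."""
    labels, nobj = ndimage.label(classes == water)   # form water objects
    if nobj == 0:
        return classes
    ids = np.arange(1, nobj + 1)
    size = ndimage.sum_labels(np.ones_like(dtm), labels, ids)  # object sizes
    elev = ndimage.mean(dtm, labels, ids)                      # zonal mean z
    dmax = ndimage.maximum(density, labels, ids)   # any matching points?
    flip = (size < min_size) | (elev > max_elev) | (dmax < 1)
    out = classes.copy()
    out[np.isin(labels, ids[flip])] = ground
    return out
```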
(2) Correction of ground
When correcting confusion between vegetation types, the procedure needs to compare an object's elevation to nearby ground pixels to estimate vegetation height. However, some shadowed pixels in tall vegetation are misclassified as ground, as demonstrated in Plot 2 in Figure 5, resulting in underestimated vegetation height and hence a failure to separate tall from low vegetation. The goal of this step is therefore to relabel these ground pixels as vegetation. Unlike the typically larger exposed ground areas, the objects formed from misclassified ground pixels are fragmented in shape and scattered between leaves, with elevations much higher than nearby ground surfaces but close to those of the tall vegetation. Therefore, this step changes an object from ground to tall vegetation if it is too small and its elevation difference from nearby tall vegetation is smaller than a threshold. Based on the extracted ground objects, this experiment used ten pixels for the object-size threshold and 0.2 m for the elevation threshold.
(3) Correction of tall vegetation with low contrast
One type of severe confusion between tall and low vegetation occurs in clusters of tall vegetation, as illustrated in Plots 1 and 2 in Figure 5. Further examination reveals a common characteristic of low contrast due to shadow or sparse leaves on edges, which often causes gaps (no point density) in the point clouds. The spectral band with the larger contrast, the red band in this true-color orthophoto, helps differentiate objects by contrast. However, ground areas among tall vegetation may have no point density, and interpolation then overestimates the elevations in the topographic statistics. These errors lead to misclassification as low vegetation in Classification 2. When the elevation statistic layer is not involved, however, these areas are classified as ground or water in Classification 1. Therefore, the method extracts those pixels classified as low vegetation in the current result but as ground or water in Classification 1 for further analysis.
If a low vegetation object in this collection has no nearby ground pixels (no value in the nearby ground elevation layer), it is likely an isolated ground area among tall vegetation causing the problem above. Therefore, these low vegetation pixels are first corrected to ground. The corrected classification is then used to extract low vegetation with no point density, and the objects with mean red-band values less than 70 are reclassified as tall vegetation. Note that this contrast threshold applies to an extracted subset of potentially troubled areas rather than the entire image; correctly classified healthy low vegetation among tall vegetation is usually not in this subset and hence does not cause problems. The corrected result, Classification 5, concludes the correction of tall vegetation with low contrast.
(4) Correction of taller low vegetation
One type of widely distributed error in low vegetation, demonstrated in Plots 3 and 4 in Figure 5, is that low vegetation with relatively higher elevation is classified as tall vegetation when a topographic layer is used in the classification. These areas cause no trouble when only the orthophoto is used. Thus, the correction procedure extracts those pixels classified as tall vegetation in the current result but as low vegetation in Classification 1. Among these tall vegetation candidates, the objects with variance less than or equal to a threshold are corrected to low vegetation; this experiment used a threshold of 20, determined by examining the data (the variance indicator is sketched below). These procedures correct a significant amount of the errors in this category.
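The variance indicator used here, and again in step (5), is a standard local texture measure. A sketch follows, with an assumed 5 × 5 window since the paper does not specify the neighborhood size.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_variance(band, window=5):
    """Per-pixel variance of a brightness band in a moving window,
    computed as Var = E[x^2] - E[x]^2; the window size is assumed."""
    band = band.astype(float)
    mean = uniform_filter(band, size=window)
    mean_sq = uniform_filter(band * band, size=window)
    return np.maximum(mean_sq - mean * mean, 0.0)  # clamp fp round-off
```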
(5) Correction of low vegetation labeled as tall vegetation
In this last step, the classification correction process gradually extracts and corrects classification labels by examining the different spectral and contextual information available from photogrammetric UAV imagery. As in the above analyses, the thresholds are derivable from the extracted subsets of interest. One major type of remaining error is low vegetation mislabeled as tall vegetation, as shown in Plot 5 in Figure 5. Many errors in this category share a relatively dark color in contrast to well-grown tall vegetation, and some are low vegetation located in the shadows of tall vegetation. In addition, the white parts of the GCP targets produce high reflectance and are confused with tall vegetation, which requires correction before the low vegetation correction. Thus, the method first corrects tall vegetation objects with mean red-band reflectance values greater than or equal to 200 to ground. Considering that tall vegetation at this site usually occurs in larger clusters, tall vegetation objects smaller than or equal to 400 pixels are relabeled as low vegetation; this corrects some portion of the well-grown low vegetation. The next procedure confirms well-grown tall vegetation objects with variances greater than or equal to 100 as tall vegetation and excludes them from further examination. From the remaining collection, tall vegetation objects whose average elevation difference from nearby low vegetation is greater than or equal to 0.03 m are also confirmed as tall vegetation and excluded, since low vegetation usually has an elevation similar to that of nearby low vegetation. With the updated low and tall vegetation, objects whose elevation difference from nearby low vegetation is less than −0.09 m (lower than nearby low vegetation) are corrected to low vegetation. Finally, the method changes tall vegetation objects with variance less than 40, or with red-band values smaller than 30 or larger than 72, to low vegetation. This concludes the classification correction process, as no significant errors remain according to visual inspection.

3.5. Stage 4: Terrain Correction

Theoretically, there are two main approaches to integrating field surveys with remotely sensed images for terrain correction in vegetated areas. The first relies on dense, well-distributed GPS surveys to generate an interpolated DEM or TIN model for use within the ground-filtering process. In densely vegetated areas, this approach requires a dense GPS survey with full coverage of the study site for better control, which can be challenging for wetland field surveys given the limitations on accessibility, weather, and cost. The second approach is to run the ground-filtering process first, based on the matching points, to generate a DTM, and then use the land-cover classification and per-class terrain correction factors derived from GPS points to correct the terrain. The second approach allows flexible sampling numbers and locations, has potential for large-area mapping and upscaling, and has proven effective [2]. Therefore, this research adopts terrain correction factors based on land-cover classification.
Given a set of field samples for each class, the terrain correction procedure starts by calculating the average elevation difference between the GPS surveys and a terrain model. This terrain model can be a DTM derived directly from the UAV terrain mapping process or from the statistic maps. Since the goal of this research is to refine the poorly mapped DTM from the UAV system, the terrain difference is calculated by subtracting the initial DTM values from the GPS survey values, so that negative values indicate ground lower than the mapped DTM values and hence a subtractive terrain correction factor. The final terrain correction factor for a class is the mean elevation difference over all samples located in that class. Finally, the terrain correction process applies the factors in reference to the classification map and produces the final corrected DTM.
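A minimal sketch of this per-class correction, assuming the GPS samples have already been matched to DTM cells and class labels:

```python
import numpy as np

def correction_factors(gps_z, dtm_z, sample_class):
    """Per-class factor: mean(GPS elevation - initial DTM elevation) over
    the samples in each class; negative values pull the DTM downward."""
    return {c: float(np.mean(gps_z[sample_class == c] -
                             dtm_z[sample_class == c]))
            for c in np.unique(sample_class)}

def apply_correction(dtm, class_map, factors):
    """Shift the initial DTM by each class's factor for the final DTM."""
    out = dtm.copy()
    for c, factor in factors.items():
        out[class_map == c] += factor
    return out
```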

4. Results

4.1. Data Processing and DTM Construction from UAV Images

From the thirty-seven images obtained along the final flight line, the Pix4D software generated a point cloud of 24,234,768 points covering an area of approximately 4600 m2 with an average ground sampling distance of 1.37 cm. After generation of the point cloud, the GCP coordinates of the ten targets were manually matched to the corresponding points in the point cloud for registration. The overall RMSE of the registration was 0.008 m, as reported by the target registration process. The point cloud classification and DTM extraction functions provided by Pix4Dmapper could filter the point cloud and create the DTM (Figure 6). However, the direct DTM output from the UAV mapping software showed significant overestimation of the topography under dense vegetation, with a mean error of 1.305 m, most obvious in the red zone on the southwest side of the sand berm dominated by tall Spartina alterniflora. These errors in vegetated areas are significant and need further correction.

4.2. Land Cover Classification Comparison

The SVM classified the data based on 109 samples for bare ground, 126 samples for low vegetation, 102 samples for tall vegetation, and 30 samples for water. A rectangular area of around six to nine pixels surrounding each training sample location was used for the training statistics, a strategy recommended by Congalton and Green [55]. The accuracy assessment used a separate set of samples: 58 for bare ground, 68 for low vegetation, 46 for tall vegetation, and 34 for water. The ground and vegetation samples came from profile-based surveys evenly distributed on the sand berm, and the water samples were randomly selected and interpreted points from the orthophoto. The profiles providing the training samples and those providing the accuracy assessment samples came from separate, independent surveys.
For the single-classification approach, this research assessed the classification based on the orthophoto alone and then compared the results with those obtained by adding one statistical layer (e.g., mean, maximum, minimum, maximum–minimum, or point density). Figure 7 shows the comparative results of selected classifications. The results in Table 1 show that the classification based on the orthophoto has an overall accuracy of 83.98% and a Kappa value of 0.7806. Adding the mean, maximum, or minimum statistic layer significantly improves the overall accuracy to over 92.72%, with Kappa values over 0.9005. Adding the maximum–minimum layer slightly improves on the orthophoto-only classification but remains significantly below the above three. Adding point density significantly reduces the overall accuracy to 62.62% and drops the Kappa value to 0.5266. This result shows that point density is not a suitable input layer for single-classification methods, due to its widely distributed gaps and density values that do not correspond distinctively to cover types.
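For reference, the overall accuracy and Kappa statistics compared throughout this section follow the standard confusion-matrix definitions, as in this minimal sketch:

```python
import numpy as np

def accuracy_and_kappa(confusion):
    """Overall accuracy and Cohen's Kappa from a confusion matrix whose
    rows are mapped classes and columns are reference classes."""
    c = np.asarray(confusion, dtype=float)
    n = c.sum()
    po = np.trace(c) / n                                  # overall accuracy
    pe = (c.sum(axis=0) * c.sum(axis=1)).sum() / n ** 2   # chance agreement
    return po, (po - pe) / (1.0 - pe)
```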
Among the first six classification results, the classification based on the orthophoto and mean elevation yields the highest accuracy, with an overall accuracy of 93.2% and a Kappa value of 0.9072. However, significant errors are visible on inspection of both classification results. The new classification method based on multiple classifications and object-oriented analyses further improves the overall accuracy to 96.12% and the Kappa value to 0.9468. In particular, the corrected result (Classification 7) significantly improves bare ground, low vegetation, and tall vegetation but has relatively lower accuracy for water compared to Classification 2. The confusion in water occurred mostly in the shallow land-water transition zones, where the resulting terrain is the bathymetry beneath the water surface; this type of classification error in water therefore needs no correction. In addition, visual inspection confirmed the effective removal of significant errors in Classifications 1 and 2, demonstrating the successful use of this object-oriented classification ensemble algorithm for photogrammetric UAV.
In this research, Classification 7 appears to correct a significant amount of the errors in Classification 2, yet the accuracy assessment shows only moderate improvement (2.92% in overall accuracy and 0.0396 in Kappa value). Further examination of the samples reveals a dominance effect from samples without class changes (Table 2), indicating that the assessment underestimates the improvement. To compensate for this effect, a visual inspection of the six plots in Figure 5 compares the classification results. Plots 1 and 2 show significant correction of water on land and of tall vegetation with low contrast. Plots 3, 4, and 6 show significant correction of low vegetation labeled as tall. Plot 6 additionally shows a vertical bank corrected from tall vegetation to low vegetation. Plot 5 shows correction of low vegetation within tall vegetation due to low contrast in the orthophoto. In conclusion, the proposed method based on multi-classification and classification ensemble significantly improves on the single-classification results.

4.3. Terrain Correction under Dense Vegetation

This research integrated GPS measurements of 361 samples in low vegetation and 219 samples in tall vegetation to estimate terrain correction factors from the mean errors between the initial DTM and the GPS surveys, which results in −0.302 m for low vegetation and −1.283 m for tall vegetation. The terrain correction factors with consideration of subclasses are −0.306 m and −0.310 m (subclass) for low vegetation and −1.342 m and −1 m (subclass) for tall vegetation. Figure 8 compares the DTM mapping and correction results. Table 3 shows the topographic mapping products before and after classification correction, with and without subclass consideration, assessed with 61 samples in low vegetation and 55 samples in tall vegetation. The results show that the direct terrain mapping output from the UAV software has mean errors of 0.302 m for low vegetation (RMSE = 0.342 m) and 1.305 m for tall vegetation (RMSE = 1.399 m), and mean absolute errors of 0.302 m for low vegetation and 1.306 m for tall vegetation. Given the mean vegetation heights of 0.37 m for low vegetation and 1.63 m for tall vegetation from the field survey of vegetation plots, these absolute errors amount to 81.6% of the low vegetation height and 80.1% of the tall vegetation height, indicating significant overestimation of the terrain under dense vegetation canopy.
Terrain correction based on the best single-classification result (Classification 2) significantly reduced the mean errors to −0.034 m for low vegetation (RMSE = 0.255 m) and 0.093 m for tall vegetation (RMSE = 0.546 m), and the mean absolute errors to 0.155 m for low vegetation and 0.420 m for tall vegetation. Terrain correction based on the new method further improves the accuracies to −0.002 m for low vegetation (RMSE = 0.187 m) and 0.075 m for tall vegetation (RMSE = 0.562 m) without subclasses, and to −0.002 m for low vegetation (RMSE = 0.177 m) and 0.057 m for tall vegetation (RMSE = 0.550 m) with subclass consideration. These results confirm that it is feasible to conduct land cover classification and significantly correct under-canopy terrain by integrating UAV data with GPS samples collected in the field. It is worth noting that many classifications corrected by Classification 7 are located near the edges of objects, where accuracy assessment sampling typically avoids due to class ambiguity, leading to underestimation of the improvement. Further examination confirms that the majority of the assessment samples (58 of the 61 in low vegetation and 52 of the 55 in tall vegetation) are located in unchanged areas and dominated the accuracy assessment results.
Traditional accuracy assessment tends to underestimate the improvement of the introduced method due to sampling limitations in remote sensing similar to those described by Congalton and Green [55]. This research conducted visual inspection based on six representative plots (Figure 9) to compensate for this problem. Affected by its poor ability to filter dense vegetation, the DTM before correction showed large artificial height contrasts among tall vegetation, low vegetation, and ground surface. Terrain correction significantly reduced these errors. Compared to the corrected terrain based on the single classification, the multi-classification, object-oriented correction method significantly reduced the amount of noise in low vegetation, as demonstrated in Plots 3, 4 and 6. In addition, Correction 7 successfully corrected the type of error at the vertical bank illustrated in Plot 6. However, Plots 1 and 2 show overcorrection relative to the landscape when the global terrain correction factor for tall vegetation is applied to places where the tall vegetation is shorter than average. These places usually occur on the edges of tall vegetation, especially the Spartina alterniflora sparsely grown on the northeast side. Comparing the terrain correction with subclasses to that without, the effects of the classification correction in Plots 3, 4 and 6 are obvious. However, the effects of the terrain correction are difficult to observe visually at this scale, mainly because the difference between the terrain correction factors for low vegetation is minor (0.011 m) at this study site. The difference for tall vegetation has a larger value of 0.5 m, visible in Plots 1 and 5 in Figure 9 after careful examination. The combined accuracy assessment and visual inspection confirm significant improvement from the new ensemble method.

5. Discussion

With a flight height of 42 m, the UAV mapping generated dense point clouds with an average point sampling distance of 1.37 cm and a registration RMSE of 0.008 m. The standard deviation in the bare ground area is 0.029 m, which is comparable to the approximately 0.029 m reported by Mancini, Dubbini, Gattelli, Stecchi, Fabbri and Gabbianelli [42] and better than the 3.5–5.0 cm standard deviation achieved by Goncalves and Henriques [41]. These results indicate reliable mapping accuracy and accurate registration between the GPS measurements and the DTM. In addition, the experiment shows that dense point mapping from UAV can produce DTMs with 2 cm resolution and centimeter-level vertical accuracy, comparable to RTK GPS control, in non-vegetated areas. However, significant overestimation, accounting for approximately 80% of the vegetation height, was present in densely vegetated areas (80.1% for tall vegetation and 81.6% for low vegetation). This demonstrates that large areas of dense vegetation are problematic when applying UAVs in coastal environments and therefore need further correction.
Before assigning terrain correction factors, this research compared multiple classification methods to identify the optimal one given the limited spectral information of photogrammetric UAVs. The single SVM classification yielded 83.98% accuracy based on the orthophoto alone. Adding the mean elevation statistic layer significantly improved the accuracy to 93.2%, showing that adding topographic information to the spectral information can significantly improve classification based solely on the orthophoto. In addition, adding the minimum or maximum statistic layer produced comparable classification accuracies due to their high correlation and the small sampling distance of 2 cm used in this research. Which of these three statistic layers provides the best classification may be site-dependent. The proposed method based on multi-classification and object-oriented analysis further improved the classification to 96.12%. More corrections are observable in particular areas, such as the edges of land cover objects and areas with low contrast, which represent areas spectrally confused by traditional classification methods but missed by the assessment results due to the sampling limitations of remote sensing. Furthermore, the developed hybrid analytical method differs from existing ensemble methods [53,54] by incorporating comprehensive texture, contextual, and logical information, which allows correction of the spectral confusion created by traditional classification methods.
Terrain correction based on the best single classification significantly corrects the overestimation in both low and tall vegetation, and the proposed method further improves the accuracies, especially in areas with spectral confusion. The strategy of extracting subclasses from the analytical procedures for adjusted terrain correction factors shows a slight further improvement in topographic mapping. The final corrected DTM based on the new method has a mean error of −2 cm in low grass areas, better than the 5 cm achieved by Gruszczyński, Matwij and Ćwiąkała [39]. Correction in tall vegetation successfully reduced the mean terrain error from 1.305 m to 0.057 m. Furthermore, it is worth noting that the Spartina alterniflora at this site grows on banks in much denser and taller condition than in most other reported experiments in frequently inundated wetlands, and areas with taller and denser vegetation are relatively more challenging for topographic mapping than areas covered by low and sparse vegetation.
In solving the challenge of classifying UAV data with limited spectral information, some useful findings and challenges are worth discussing. The following texture and contextual statistics proved useful for UAV data from natural color cameras: maximum elevation, minimum elevation, vegetation height (maximum–minimum), mean elevation, point density, the spectral band with relatively larger contrast, and the variance calculated from the orthophoto. The first three elevation statistics are usable as direct inputs for pixel-based classification, while the others may be more meaningful at the object level or in analytical steps. Under the dense point conditions of UAV mapping, point density was not helpful for classification when used directly as an additional input band. However, areas without matching points on land are useful indicators for extracting troubled areas for correction due to shadow, low contrast, or lack of texture. Among the three RGB bands of most commonly used natural color cameras, the red band showed the largest contrast in brightness in this experiment. When applying UAVs to densely vegetated coastal environments, another major challenge is the lack of matching points (gaps) in shadows and areas with low contrast. The most representative locations are on the edges of tall vegetation with relatively darker tone, or in areas shadowed by nearby taller objects.

6. Conclusions

This research aims to solve the problem of topographic mapping under dense vegetation through a cost-effective photogrammetric UAV system. By integrating GPS surveys with the UAV-derived points from the vegetation crown, this research developed an object-oriented classification ensemble algorithm for classification and terrain correction based on multiple classifications and a set of indicators derivable from the matching points and orthophoto. The research validated the method at a wetland restoration site, where dense tall vegetation and water cover the lower portion of the curved surface and dense low vegetation spreads over the relatively higher areas.
Overall, the results prove that it is feasible to map terrain under dense vegetation canopy by integrating UAV-derived points from the vegetation crown with GPS field surveys. The developed method provides a timely and cost-effective solution for applying UAVs to coastal topographic mapping under dense vegetation. The most intriguing applications of this method are high-resolution monitoring of sediment dynamics due to hazards, wetland restoration, and human activities, and multi-disciplinary integration with wetland ecologists and sedimentologists. The sizes of study sites receiving intensive surveys often match those of such sampling sites. For example, users can apply UAV mapping to mangrove and salt marsh wetlands for high-resolution 3D inundation modeling by integrating it with the hourly water level recorders installed by wetland ecologists.
Terrain estimation under dense vegetation is one of the most challenging issues in terrain mapping. At this early stage, the goal of this research is to test the hypothesis that meaningful terrain estimates can be derived by applying photogrammetric UAVs to densely vegetated wetlands, and to demonstrate a successful solution that offers strategic reference for other users. To apply the method to other study sites, users can follow the general processes and tools outlined in Figure 3. To tailor the classification correction to their own classification schemes, users can adapt the method by generating the two optimal classification results and all the listed indicators, observing the confusion patterns between each pair of classes, and analyzing how to use those indicators to correct the classification, referencing the logic used in this research. Since UAVs provide ultra-high-resolution mapping with detailed texture, users can determine all thresholds by observing the derived subset images. Examination methods may include, but are not limited to, displaying maps of the statistics in classified or contrasting colors, visual comparison with the orthophoto, trial-and-error experiments, histogram examination, and examining the values of selected samples (such as water and targets). In addition, replicating this method at other sites is not a pass-or-fail proposition but rather a matter of matching the invested effort to the desired level of improvement. For example, users may choose to apply the method to correct only one or two main troubling classification errors, depending on their needs.
A potential limitation of the data collection, as some may point out, is the limited number of images and the overlap. Although more overlap is generally better, quantified studies of minimum overlap rates for UAVs are rare. Traditional photogrammetric stereo mapping requires 60% overlap along flight lines and 30–40% overlap between flight lines; this experiment had overlap rates of around 74% and 55%, respectively, and an average of five images for each matching point. Moreover, visual inspection of the point matching results indicates no problems in areas with clear imagery, and the areas with no matching points are mostly shadowed or of low contrast or high moisture, which is not an issue solvable by increasing the number of images. Nevertheless, a future study is planned to quantify the uncertainties induced by data acquisition settings such as overlap rate, flight height, flight speed, and imaging time. Other directions for future work include validating the method in different types of landscapes, developing local terrain correction factors based on local vegetation status, nearby elevation samples, or bare ground elevations, and examining the impact of resampling cell sizes of these elevation models to alleviate the heterogeneity of the crown structure. Another direction is to develop more robust and global solutions.

Acknowledgments

This work was supported by Martin Ecosystems through the Buras wetland restoration project. Acquisition of the terrestrial LiDAR system was supported by the Office of Research & Economic Development (ORED), Louisiana State University. The authors sincerely thank the anonymous reviewers and the editorial team for their comments.

Author Contributions

Xuelian Meng developed the algorithm, guided the classification analysis, and wrote the initial draft; Nan Shang and Xukai Zhang helped process the data, assisted with the classification and terrain correction, and contributed to the writing; Chunyan Li supported the study by leading the collaborative wetland restoration project and contributed to the manuscript writing; Kaiguang Zhao and Xiaomin Qiu contributed to the intellectual design of the research and the writing; Eddie Weeks constructed the UAV system and helped collect and process the data for the experiment site.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Kulawardhana, R.W.; Popescu, S.C.; Feagin, R.A. Airborne lidar remote sensing applications in non-forested short stature environments: A review. Ann. For. Res. 2017. [Google Scholar] [CrossRef]
  2. Hladik, C.; Schalles, J.; Alber, M. Salt marsh elevation and habitat mapping using hyperspectral and Lidar data. Remote Sens. Environ. 2013, 139, 318–330. [Google Scholar] [CrossRef]
  3. Hladik, C.; Alber, M. Accuracy assessment and correction of a Lidar-derived salt marsh digital elevation model. Remote Sens. Environ. 2012, 121, 224–235. [Google Scholar] [CrossRef]
  4. Klemas, V.V. Coastal and environmental remote sensing from unmanned aerial vehicles: An overview. J. Coast. Res. 2015, 1260–1267. [Google Scholar] [CrossRef]
  5. Jensen, J.; Mathews, A. Assessment of image-based point cloud products to generate a bare earth surface and estimate canopy heights in a woodland ecosystem. Remote Sens. 2016, 8, 50. [Google Scholar] [CrossRef]
  6. Meng, X.L.; Wang, L.; Silvan-Cardenas, J.L.; Currit, N. A multi-directional ground filtering algorithm for airborne Lidar. ISPRS J. Photogramm. Remote Sens. 2009, 64, 117–124. [Google Scholar] [CrossRef]
  7. Zhao, K.; García, M.; Liu, S.; Guo, Q.; Chen, G.; Zhang, X.; Zhou, Y.; Meng, X. Terrestrial Lidar remote sensing of forests: Maximum likelihood estimates of canopy profile, leaf area index, and leaf angle distribution. Agric. For. Meteorol. 2015, 209–210, 100–113. [Google Scholar] [CrossRef]
  8. Zhao, K.; Popescu, S.; Meng, X.; Pang, Y.; Agca, M. Characterizing forest canopy structure with Lidar composite metrics and machine learning. Remote Sens. Environ. 2011, 115, 1978–1996. [Google Scholar] [CrossRef]
  9. Huang, C.; Peng, Y.; Lang, M.; Yeo, I.-Y.; McCarty, G. Wetland inundation mapping and change monitoring using Landsat and airborne Lidar data. Remote Sens. Environ. 2014, 141, 231–242. [Google Scholar] [CrossRef]
  10. Lang, M.W.; McCarty, G.W. Lidar intensity for improved detection of inundation below the forest canopy. Wetlands 2009, 29, 1166–1178. [Google Scholar] [CrossRef]
  11. Klemas, V. Remote sensing techniques for studying coastal ecosystems: An overview. J. Coast. Res. 2011, 27, 2–17. [Google Scholar] [CrossRef]
  12. Rundquist, D.C.; Narumalani, S.; Narayanan, R.M. A review of wetlands remote sensing and defining new considerations. Remote Sens. Rev. 2001, 20, 207–226. [Google Scholar] [CrossRef]
  13. Klemas, V. Remote sensing of emergent and submerged wetlands: An overview. Int. J. Remote Sens. 2013, 34, 6286–6320. [Google Scholar] [CrossRef]
  14. Zhu, H.; Jiang, X.; Meng, X.; Qian, F.; Cui, S. A quantitative approach to monitoring sand cay migration in Nansha Qundao. Acta Oceanol. Sin. 2016, 35, 102–107. [Google Scholar] [CrossRef]
  15. Guan, X.; Huang, C.; Liu, G.; Meng, X.; Liu, Q. Mapping rice cropping systems in Vietnam using an NDVI-based time-series similarity measurement based on DTW distance. Remote Sens. 2016, 8, 19. [Google Scholar] [CrossRef]
  16. McClure, A.; Liu, X.; Hines, E.; Ferner, M.C. Evaluation of error reduction techniques on a Lidar-derived salt marsh digital elevation model. J. Coast. Res. 2015, 424–433. [Google Scholar] [CrossRef]
  17. Neumann, B.; Vafeidis, A.T.; Zimmermann, J.; Nicholls, R.J. Future coastal population growth and exposure to sea-level rise and coastal flooding–A global assessment. PLoS ONE 2015, 10. [Google Scholar] [CrossRef] [PubMed]
  18. Meng, X.L.; Currit, N.; Zhao, K.G. Ground filtering algorithms for airborne Lidar data: A review of critical issues. Remote Sens. 2010, 2, 833–860. [Google Scholar] [CrossRef]
  19. Meng, X.; Zhang, X.; Silva, R.; Li, C.; Wang, L. Impact of high-resolution topographic mapping on beach morphological analyses based on terrestrial Lidar and object-oriented beach evolution. ISPRS Int. J. Geo-Inf. 2017, 6, 147. [Google Scholar] [CrossRef]
  20. Colomina, I.; Molina, P. Unmanned aerial systems for photogrammetry and remote sensing: A review. ISPRS J. Photogramm. Remote Sens. 2014, 92, 79–97. [Google Scholar] [CrossRef]
  21. Kim, J.; Lee, S.; Ahn, H.; Seo, D.; Park, S.; Choi, C. Feasibility of employing a smartphone as the payload in a photogrammetric UAV system. ISPRS J. Photogramm. Remote Sens. 2013, 79, 1–18. [Google Scholar] [CrossRef]
  22. Daakir, M.; Pierrot-Deseilligny, M.; Bosser, P.; Pichard, F.; Thom, C.; Rabot, Y.; Martin, O. Lightweight UAV with on-board photogrammetry and single-frequency GPS positioning for metrology applications. ISPRS J. Photogramm. Remote Sens. 2017, 127, 115–126. [Google Scholar] [CrossRef]
  23. Sieberth, T.; Wackrow, R.; Chandler, J.H. Automatic detection of blurred images in UAV image sets. ISPRS J. Photogramm. Remote Sens. 2016, 122, 1–16. [Google Scholar] [CrossRef]
  24. Yang, B.S.; Chen, C. Automatic registration of UAV-borne sequent images and Lidar data. ISPRS J. Photogramm. Remote Sens. 2015, 101, 262–274. [Google Scholar] [CrossRef]
  25. Yahyanejad, S.; Rinner, B. A fast and mobile system for registration of low-altitude visual and thermal aerial images using multiple small-scale UAVs. ISPRS J. Photogramm. Remote Sens. 2015, 104, 189–202. [Google Scholar] [CrossRef]
  26. Rango, A.; Laliberte, A.; Herrick, J.E.; Winters, C.; Havstad, K.M.; Steele, C.M.; Browning, D.M. Unmanned aerial vehicle-based remote sensing for rangeland assessment, monitoring, and management. J. Appl. Remote Sens. 2009, 3. [Google Scholar] [CrossRef]
  27. Wan, X.; Liu, J.; Yan, H.; Morgan, G.L.K. Illumination-invariant image matching for autonomous UAV localisation based on optical sensing. ISPRS J. Photogramm. Remote Sens. 2016, 119, 198–213. [Google Scholar] [CrossRef]
  28. Tsai, C.-H.; Lin, Y.-C. An accelerated image matching technique for UAV orthoimage registration. ISPRS J. Photogramm. Remote Sens. 2017, 128, 130–145. [Google Scholar] [CrossRef]
  29. Turner, D.; Lucieer, A.; Watson, C. An automated technique for generating georectified mosaics from ultra-high resolution Unmanned Aerial Vehicle (UAV) imagery, based on Structure from Motion (SFM) point clouds. Remote Sens. 2012, 4, 1392–1410. [Google Scholar] [CrossRef]
  30. Xu, Z.; Wu, L.; Gerke, M.; Wang, R.; Yang, H. Skeletal camera network embedded structure-from-motion for 3D scene reconstruction from UAV images. ISPRS J. Photogramm. Remote Sens. 2016, 121, 113–127. [Google Scholar] [CrossRef]
  31. Fytsilis, A.L.; Prokos, A.; Koutroumbas, K.D.; Michail, D.; Kontoes, C.C. A methodology for near real-time change detection between unmanned aerial vehicle and wide area satellite images. ISPRS J. Photogramm. Remote Sens. 2016, 119, 165–186. [Google Scholar] [CrossRef]
  32. Mathews, A.; Jensen, J. Visualizing and quantifying vineyard canopy LAI using an unmanned aerial vehicle (UAV) collected high density structure from motion point cloud. Remote Sens. 2013, 5, 2164–2183. [Google Scholar] [CrossRef]
  33. Breckenridge, R.P.; Dakins, M.E. Evaluation of bare ground on rangelands using unmanned aerial vehicles: A case study. GISci. Remote Sens. 2011, 48, 12. [Google Scholar] [CrossRef]
  34. Lu, B.; He, Y. Species classification using unmanned aerial vehicle (UAV)-acquired high spatial resolution imagery in a heterogeneous grassland. ISPRS J. Photogramm. Remote Sens. 2017, 128, 73–85. [Google Scholar] [CrossRef]
  35. Stagakis, S.; Gonzalez-Dugo, V.; Cid, P.; Guillen-Climent, M.L.; Zarco-Tejada, P.J. Monitoring water stress and fruit quality in an orange orchard under regulated deficit irrigation using narrow-band structural and physiological remote sensing indices. ISPRS J. Photogramm. Remote Sens. 2012, 71, 47–61. [Google Scholar] [CrossRef]
  36. Matikainen, L.; Lehtomaki, M.; Ahokas, E.; Hyyppa, J.; Karjalainen, M.; Jaakkola, A.; Kukko, A.; Heinonen, T. Remote sensing methods for power line corridor surveys. ISPRS J. Photogramm. Remote Sens. 2016, 119, 10–31. [Google Scholar] [CrossRef]
  37. Vetrivel, A.; Gerke, M.; Kerle, N.; Vosselman, G. Identification of damage in buildings based on gaps in 3D point clouds from very high resolution oblique airborne images. ISPRS J. Photogramm. Remote Sens. 2015, 105, 61–78. [Google Scholar] [CrossRef]
  38. Gevaert, C.M.; Persello, C.; Sliuzas, R.; Vosselman, G. Informal settlement classification using point-cloud and image-based features from UAV data. ISPRS J. Photogramm. Remote Sens. 2017, 125, 225–236. [Google Scholar] [CrossRef]
  39. Gruszczyński, W.; Matwij, W.; Ćwiąkała, P. Comparison of low-altitude UAV photogrammetry with terrestrial laser scanning as data-source methods for terrain covered in low vegetation. ISPRS J. Photogramm. Remote Sens. 2017, 126, 168–179. [Google Scholar] [CrossRef]
  40. Watts, A.C.; Ambrosia, V.G.; Hinkley, E.A. Unmanned aircraft systems in remote sensing and scientific research: Classification and considerations of use. Remote Sens. 2012, 4, 1671–1692. [Google Scholar] [CrossRef]
  41. Goncalves, J.A.; Henriques, R. UAV photogrammetry for topographic monitoring of coastal areas. ISPRS J. Photogramm. Remote Sens. 2015, 104, 101–111. [Google Scholar] [CrossRef]
  42. Mancini, F.; Dubbini, M.; Gattelli, M.; Stecchi, F.; Fabbri, S.; Gabbianelli, G. Using unmanned aerial vehicles (UAV) for high-resolution reconstruction of topography: The structure from motion approach on coastal environments. Remote Sens. 2013, 5, 6880–6898. [Google Scholar] [CrossRef] [Green Version]
  43. Oleire-Oltmanns, S.; Marzolff, I.; Peter, K.; Ries, J. Unmanned aerial vehicle (UAV) for monitoring soil erosion in Morocco. Remote Sens. 2012, 4, 3390–3416. [Google Scholar] [CrossRef]
  44. Simpson, J.; Wooster, M.; Smith, T.; Trivedi, M.; Vernimmen, R.; Dedi, R.; Shakti, M.; Dinata, Y. Tropical peatland burn depth and combustion heterogeneity assessed using UAV photogrammetry and airborne Lidar. Remote Sens. 2016, 8, 1000. [Google Scholar] [CrossRef]
  45. Morton, R.A.; Bernier, J.C.; Barras, J.A. Evidence of regional subsidence and associated interior wetland loss induced by hydrocarbon production, Gulf Coast region, USA. Environ. Geol. 2006, 50, 261–274. [Google Scholar] [CrossRef]
  46. Penland, S.; Beall, A.D.; Britsch, L.D.I.; Williams, S.J. Geological classification of coastal land loss between 1932 and 1990 in the Mississippi river delta plain, southeastern Louisiana. Gulf Coast Assoc. Geol. Soc. Trans. 2002, 52, 799–807. [Google Scholar]
  47. Smith, M.W.; Carrivick, J.L.; Quincey, D.J. Structure from motion photogrammetry in physical geography. Prog. Phys. Geogr. 2016, 40, 247–275. [Google Scholar] [CrossRef]
  48. Ozesmi, S.; Bauer, M. Satellite remote sensing of wetlands. Wetl. Ecol. Manag. 2002, 10, 381–402. [Google Scholar] [CrossRef]
  49. Mountrakis, G.; Im, J.; Ogole, C. Support vector machines in remote sensing: A review. ISPRS J. Photogramm. Remote Sens. 2011, 66, 247–259. [Google Scholar] [CrossRef]
  50. Zhang, C.; Xie, Z. Data fusion and classifier ensemble techniques for vegetation mapping in the coastal everglades. Geocarto Int. 2013, 29, 228–243. [Google Scholar] [CrossRef]
  51. Niu, X.; Ban, Y.F. Multi-temporal RADARSAT-2 polarimetric SAR data for urban land-cover classification using an object-based support vector machine and a rule-based approach. Int. J. Remote Sens. 2013, 34, 1–26. [Google Scholar] [CrossRef]
  52. Liu, Q.J.; Jing, L.H.; Wang, M.F.; Lin, Q.Z. Hyperspectral remote sensing image classification based on SVM optimized by clonal selection. Spectrosc. Spect. Anal. 2013, 33, 746–751. [Google Scholar]
  53. Angulo, C.; Ruiz, F.J.; Gonzalez, L.; Ortega, J.A. Multi-classification by using tri-class SVM. Neural Process. Lett. 2006, 23, 89–101. [Google Scholar] [CrossRef]
  54. Wang, X.Y.; Zhang, B.B.; Yang, H.Y. Active SVM-based relevance feedback using multiple classifiers ensemble and features reweighting. Eng. Appl. Artif. Intell. 2013, 26, 368–381. [Google Scholar] [CrossRef]
  55. Congalton, R.G.; Green, K. Assessing the Accuracy of Remotely Sensed Data: Principles and Practices; CRC Press: Boca Raton, FL, USA, 2008. [Google Scholar]
Figure 1. (a) Study site at the Buras Boat Harbor, Plaquemines Parish, Louisiana. Images (b) and (c) are aerial photographs from the USGS website and show the wetland degradation between 1998 and 2013. The 2015 Google Earth image (d) shows the landscape after sand berm reconstruction. Image (e) is the orthophoto from UAV data collected on 1 October 2015.
Figure 2. (a) Ground view from the sand berm center with dense Paspalum vaginatum Sw. (Seashore Paspalum) in elevated zones and Spartina alterniflora (Smooth Cordgrass) in low-relief zones. (b) Spartina alterniflora in reference to human height. (c) The hexacopter UAV system used in this research.
Figure 3. Conceptual overview of the object-oriented classification ensemble algorithm. The three colors in the first three tiers represent the inputs, processes, and outputs of the flowchart.
Figure 4. The flowchart of classification correction based on confusions between classes.
Figure 5. Representative plots for landscape scenarios and classification comparison. Classification 1 is based on the orthophoto alone; Classification 2 is based on the orthophoto plus the mean elevation statistic layer; Classification 7 is based on the ensemble method.
Figure 6. Orthophoto and DTM generated from UAV images acquired on 1 October 2015.
Figure 7. The comparative results of selected classifications.
Figure 8. The comparative results of DTM mapping and correction.
Figure 9. Comparison of terrain correction effects based on different classification methods. Correction 2 and Correction 7 correspond to the correction results based on Classification 2 and Classification 7, respectively.
Table 1. Comparison of classification accuracies based on different input layers and support vector machine classifier.
Classification Method | Bare Ground | Low Vegetation | Tall Vegetation | Water | Overall | Kappa
Classification 1 (Orthophoto) | 94.83% | 85.29% | 65.22% | 88.24% | 83.98% | 0.781
Classification 2 (Orthophoto + Mean) | 96.55% | 95.59% | 82.61% | 97.06% | 93.20% | 0.907
Classification 3 (Orthophoto + Max) | 96.55% | 95.59% | 80.43% | 97.06% | 92.72% | 0.901
Classification 4 (Orthophoto + Min) | 96.55% | 94.12% | 82.61% | 97.06% | 92.72% | 0.901
Classification 5 (Orthophoto + (Max − Min)) | 94.83% | 88.23% | 76.09% | 85.29% | 86.89% | 0.821
Classification 6 (Orthophoto + Point Density) | 96.55% | 79.41% | 41.30% | 0.00% | 62.62% | 0.527
Classification 7 (Corrected) | 100.00% | 100.00% | 89.10% | 91.20% | 96.12% | 0.947
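For reference, the overall accuracy and kappa values reported in Table 1 follow the standard confusion-matrix formulas (e.g., [55]). The sketch below illustrates the computation on a made-up confusion matrix whose reference totals match the sample counts in Table 2; it does not reproduce the study's actual matrix.

```python
# Overall accuracy and Cohen's kappa from a confusion matrix
# (rows = reference classes, columns = predicted classes).
# The matrix is illustrative, not the study's actual matrix.
import numpy as np

cm = np.array([
    [56,  1,  0,  1],   # bare ground    (58 reference samples)
    [ 2, 65,  1,  0],   # low vegetation (68)
    [ 0,  3, 41,  2],   # tall vegetation (46)
    [ 1,  0,  1, 32],   # water          (34)
], dtype=float)

n = cm.sum()
po = np.trace(cm) / n                                  # observed agreement
pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2    # chance agreement
kappa = (po - pe) / (1 - pe)
print(f"overall accuracy = {po:.2%}, kappa = {kappa:.3f}")
```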
Table 2. Comparison of samples with class change based on accuracy assessment between Classification 2 and Classification 7 (without subclass).
Class | Total Number of Samples | Samples with Class Change | Percentage of Unchanged Samples
Bare Ground | 58 | 2 | 96.55%
Low Vegetation | 68 | 3 | 95.59%
Tall Vegetation | 46 | 6 | 86.96%
Water | 34 | 2 | 94.12%
Table 3. Comparison of terrain correction results before and after classification correction and with or without subclass correction. “Before correction” refers to the SVM classification result based on orthophoto and mean statistics. “After correction (without subclass)” is the terrain correction result based on the corrected classification using the proposed new method. “After correction (with subclass)” further adds an additional subclass from both low and tall vegetation classes and applies adjusted terrain factors accordingly.
DTM Product | Class | Correction Factor (m) | Min. Error (m) | Max. Error (m) | Mean Error (m) | Std. Dev. (m) | Min. Abs. Error (m) | Max. Abs. Error (m) | Mean Abs. Error (m) | Std. Dev. of Abs. Error (m) | RMSE (m)
Before Correction | Bare Ground | N/A | −0.132 | 0.104 | −0.019 | 0.029 | 0.001 | 0.132 | 0.027 | 0.022 | 0.035
Before Correction | Low Vegetation | N/A | 0.002 | 0.591 | 0.302 | 0.160 | 0.002 | 0.591 | 0.302 | 0.160 | 0.342
Before Correction | Tall Vegetation | N/A | 0.029 | 2.154 | 1.305 | 0.502 | 0.027 | 2.154 | 1.306 | 0.500 | 1.399
Correction based on Classification 2 | Low Vegetation | −0.302 | −0.975 | 0.289 | −0.034 | 0.252 | 0.002 | 0.975 | 0.155 | 0.202 | 0.255
Correction based on Classification 2 | Tall Vegetation | −1.283 | −1.312 | 1.824 | 0.093 | 0.538 | 0.046 | 1.824 | 0.420 | 0.348 | 0.546
Correction based on Classification 7 (without subclass) | Low Vegetation | −0.302 | −0.776 | 0.289 | −0.002 | 0.187 | 0.002 | 0.776 | 0.125 | 0.140 | 0.187
Correction based on Classification 7 (without subclass) | Tall Vegetation | −1.283 | −1.312 | 1.258 | 0.075 | 0.529 | 0.046 | 1.312 | 0.429 | 0.319 | 0.562
Correction based on Classification 7 (with subclass) | Low Vegetation | −0.306 | −0.834 | 0.285 | −0.002 | 0.177 | 0.000 | 0.834 | 0.120 | 0.130 | 0.177
Correction based on Classification 7 (with subclass) | Tall Vegetation | −1.342 | −1.371 | 1.254 | 0.057 | 0.547 | 0.045 | 1.371 | 0.436 | 0.335 | 0.550
