Article

Evaluating the Effectiveness of Unmanned Aerial Systems (UAS) for Collecting Thematic Map Accuracy Assessment Reference Data in New England Forests

by Benjamin T. Fraser * and Russell G. Congalton
Department of Natural Resources and the Environment, University of New Hampshire, 56 College Road, Durham, NH 03824, USA
*
Author to whom correspondence should be addressed.
Forests 2019, 10(1), 24; https://doi.org/10.3390/f10010024
Submission received: 27 November 2018 / Revised: 18 December 2018 / Accepted: 27 December 2018 / Published: 3 January 2019
(This article belongs to the Special Issue Forestry Applications of Unmanned Aerial Vehicles (UAVs) 2019)

Abstract
Thematic mapping provides today’s analysts with an essential geospatial science tool for conveying spatial information. Advances in remote sensing and computer science have provided classification methods for mapping, using both pixel-based and object-based analysis, in increasingly complex environments. The resulting thematic maps serve as vital resources for a variety of research and management needs. However, to properly use a thematic map as a decision-making support tool, an assessment of map accuracy must be performed. The methods for assessing thematic accuracy have coalesced into a site-specific multivariate analysis of error, measuring uncertainty against an established reality known as reference data. Requirements of statistical validity, constraints on access and time, and immense costs limit the collection of reference data in many projects. Therefore, this research evaluates the feasibility of adopting the low-cost, flexible Unmanned Aerial Systems (UAS, UAV, or Drone) platform, capable of carrying high-resolution sensors, for collecting reference data to use in thematic map accuracy assessments of complex environments. This pilot study analyzed 377.57 ha of New England forests, over six University of New Hampshire woodland properties, to compare the similarity between UAS-derived orthomosaic samples and ground-based continuous forest inventory (CFI) plot classifications of deciduous, mixed, and coniferous forest cover types. Using an eBee Plus fixed-wing UAS, 9173 images were acquired and used to create six comprehensive orthomosaics. Agreement between our UAS orthomosaics and ground-based sampling forest compositions reached up to 73.68% for pixel-based classification and 85.71% for object-based classification reference data methods. Despite several documented sources of uncertainty or error, this research demonstrated that UAS are capable of highly efficient and effective thematic map accuracy assessment reference data collection. As UAS hardware, software, and implementation policies continue to evolve, the potential to meet the challenges of accurate and timely reference data collection will only increase.

1. Introduction

Growing disagreement over the causes and impacts of environmental change in the modern era has driven an ever-increasing need for data accuracy and certainty. Studied patterns of global change such as habitat alteration, loss of biodiversity, invasive species spread, and other system imbalances have designated humans as a ubiquitous disturbance to the natural world, leading to the current ‘Anthropocene’ era [1,2,3]. The degradation of natural systems also places notable pressure on human economies and quality of life through a diminished potential for ecosystem services [4]. These services include life-sustaining functions such as nutrient regulation, primary production products in agriculture and forestry, water quality management, and disease mitigation [1,2,5]. Modeling natural systems requires the inherently difficult task of identifying representative characteristics. Forested landscapes comprising high compositional and structural diversity (i.e., complexity), such as those in the Northeastern United States, further impede these efforts [6]. In many cases, land cover provides the ability to represent fundamental constructs of the earth’s surface [7]. We can then employ remote sensing as a tool to collect land cover data at scales sufficient to address environmental issues [8,9,10].
Remote sensing provides the leading source of land use and land cover data, supported by its scales of coverage, adaptability, and prolific modifications [7,11,12]. The classification of remotely sensed imagery, traditionally referred to as thematic mapping, labels objects and features in defined groups based on the relationships of their attributes [13,14]. This process incorporates the characteristics reflected within the source imagery and the motivations of the project to recognize both natural and artificial patterns and to increase our ability to make informed decisions [13,15,16].
In the digital age, the process of image classification has most often been performed on a per-pixel basis. Pixel-based classification (PBC) algorithms utilize spectral reflectance values to assign class labels based on specified ranges. More refined classification techniques have also been developed to integrate data such as texture, terrain, and observed patterns based on expert knowledge [17,18,19].
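As a minimal illustration of this idea (not code from this study), the following Python sketch labels each pixel by its nearest class centroid in RGB space. The centroid values are invented for demonstration; a real workflow would estimate them from training data.

```python
import numpy as np

# Hypothetical class centroids in RGB space (illustrative values only).
CENTROIDS = {
    "coniferous": np.array([60.0, 90.0, 55.0]),
    "deciduous": np.array([110.0, 150.0, 80.0]),
}

def classify_pixels(image: np.ndarray) -> np.ndarray:
    """Assign each pixel the label of its nearest class centroid.

    image: (rows, cols, 3) array of RGB values.
    Returns a (rows, cols) array of class-name strings.
    """
    names = np.array(list(CENTROIDS))
    # Euclidean distance of every pixel to every centroid: (rows, cols, k)
    dists = np.stack(
        [np.linalg.norm(image - c, axis=-1) for c in CENTROIDS.values()],
        axis=-1,
    )
    return names[dists.argmin(axis=-1)]

# A 1 x 2 test image: one dark green pixel, one brighter green pixel.
demo = np.array([[[55, 85, 60], [115, 155, 75]]], dtype=float)
print(classify_pixels(demo))  # [['coniferous' 'deciduous']]
```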
Technologies have recently advanced to allow users a more holistic approach to image analysis, one that better matches human vision, in the form of object-based image analysis (OBIA). Object-based classification (OBC) techniques work beyond individual pixels to distinguish image objects (i.e., polygons, areas, or features), applying additional data parameters to each individual unit [10,20,21]. OBC methods can also benefit users by reducing the noise found in land cover classifications at high spatial resolution, using class-defining thresholds of spectral variability and area [22]. The specific needs of the project and the characteristics of the remote sensing data help guide the decision of which classification method is most appropriate for creating a desired thematic layer [15,23].
Outside of the progression of classification algorithms, novel remote sensing and computer vision technologies have inspired new developments in high-resolution three-dimensional (3D) and digital planimetric modeling. Photogrammetric principles have been applied to simultaneously correct for sensor tilt, topographic displacement in the scene, relief displacement, and even lens geometric distortions [24,25]. To facilitate this process, Structure from Motion (SfM) software packages isolate and match image tie points (i.e., keypoints) within high-resolution images that have substantial forward and side overlap to form 3D photogrammetric point clouds and orthomosaic models [25,26,27]. Techniques for accurate and effective SfM modeling have been refined, even in complex natural environments, expanding the value of these products [28,29,30].
The appropriate use of these emergent remote sensing data products establishes a need for understanding their accuracy and sources of error. Validating data quality is a necessary step for incorporating conclusions drawn from remote sensing within the decision-making process. Spatial data accuracy is an aggregation of two distinct characteristics: positional accuracy and thematic accuracy [10]. Positional accuracy is the locational agreement between a remotely sensed data layer and known ground points, typically calculated through the Root Mean Square Error (RMSE) [31]. Thematic accuracy expresses a more complex measure of error, evaluating the agreement of the specific labels, attributes, or characteristics between what is on the ground and the spatial data product, typically in the form of an error matrix [10].
The immense costs and difficulty of validating mapping projects have brought about several historic iterations of methods for quantitatively evaluating thematic accuracy [10]. Once an afterthought, the assessment of thematic accuracy has matured from a visual, qualitative process into a multivariate evaluation of site-specific agreement [10]. Site-specific thematic map accuracy assessments utilize an error matrix (i.e., contingency table or confusion matrix) to evaluate individual class accuracies and the relations among sources of uncertainty [23,32]. While positional accuracy holds regulated standards for accuracy tolerance, thematic mapping projects must establish their own thresholds for the amount and types of justifiable uncertainty. Within thematic accuracy, two forms of error exist: errors of commission (related to user’s accuracy) and errors of omission (related to producer’s accuracy) [33]. Commission errors represent the user’s ability to accurately classify ground characteristics [10]. Omission errors assess whether the known ground reference points have been accurately captured by the thematic layer for each class [33]. For most uses, commission errors are preferable, because falsely adding area to a class of interest is of less consequence than erroneously missing critical features [10]. The error matrix presents a robust quantitative analysis tool for assessing the thematic map accuracy of maps created through both pixel-based and object-based classification methods.
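These accuracy measures follow directly from the error matrix: overall accuracy is the matrix trace over the total, user’s accuracies are diagonal values over row totals, and producer’s accuracies are diagonal values over column totals. A short Python sketch (our illustration, using the counts later reported in Table 3) makes the arithmetic concrete:

```python
import numpy as np

# Error matrix from Table 3: rows = UAS (map) labels, columns = ground
# (reference) labels, classes ordered Coniferous, Mixed, Deciduous.
matrix = np.array([
    [4, 1, 0],
    [3, 8, 2],
    [0, 3, 8],
])

overall = np.trace(matrix) / matrix.sum()         # 20/29 = 0.6897
users = np.diag(matrix) / matrix.sum(axis=1)      # commission, by row
producers = np.diag(matrix) / matrix.sum(axis=0)  # omission, by column

print(f"Overall accuracy: {overall:.2%}")             # 68.97%
print("User's accuracies:", np.round(users, 4))       # [0.8    0.6154 0.7273]
print("Producer's accuracies:", np.round(producers, 4))  # [0.5714 0.6667 0.8]
```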
Collecting reference data, whether using higher-resolution remotely sensed data, ground sampling, or previously produced sources, must be based on a sound statistical sample design. Ground sampling stands out as the most common reference data collection procedure; however, such methods generally carry a greater inherent cost. During the classification process, reference data can be used for two distinct purposes, depending on the applied classification algorithm. First, reference data can be used to train the classification (training data), generating the decision tree ruleset that forms the thematic layer. Secondly, reference data are used as the source of validation (validation data) during the accuracy assessment. These two forms of reference data must remain independent to ensure the process is statistically valid [10].
There are also multiple methods for collecting ground reference data, such as visual interpretation of an area, GPS locational confirmation, or full-record data sampling with precise positioning. The procedures of several professional and scientific fields have been adopted to promote the objective and efficient collection of reference data. Forest mensuration provides such a foundation for obtaining quantifiable information in forested landscapes, with systematic procedures that can mitigate the biases and inaccuracies of sampling [34,35]. For many decades now, forest mensuration (i.e., biometrics) has provided the most accurate and precise observations of natural characteristics through the use of mathematical principles and field-tested tools [34,35,36]. To observe long-term or large-area trends in forest environments, systematic Continuous Forest Inventory (CFI) plot networks have been established. Many national agencies (e.g., the U.S. Forest Service) have such a sampling design (e.g., the Forest Inventory and Analysis (FIA) Program) for monitoring large land areas in a proficient manner [37]. Despite these sampling designs for efficient and effective reference data collection, the overwhelming costs of performing a statistically valid accuracy assessment are a considerable limitation for most projects [10,23].
The maturation of remote sensing technologies in the 21st century has brought with it the practicality of widespread Unmanned Aerial Systems (UAS) applications. This low-cost and flexible platform generates on-demand, high-resolution products to meet the needs of society [38,39]. UAS represent an interconnected system of hardware and software technologies managed by a remote pilot in command [30,40]. Progressing from mechanical contraptions, UAS now incorporate microcomputer technologies that allow them to support forestry sampling [29,41], physical geography surveys [42], rangeland mapping [43], humanitarian aid [44], precision agriculture [45], and many other applications [12,39,46].
The added potential of the UAS platform has supported a wide diversity of data collection initiatives. UAS-SfM products provide analytical context beyond that of traditional raw imagery, including photogrammetric point clouds, Digital Surface Models (DSM), and planimetric (or orthomosaic) surfaces. While it is becoming increasingly common to use high-spatial-resolution satellite imagery as reference data for assessing maps generated from medium- to coarse-resolution imagery, UAS provide a new opportunity at even higher spatial resolutions. To properly apply the practice of using high-resolution remote sensing imagery as a source of validation data [36,47,48], our research focuses on whether UAS can collect thematic map accuracy assessment reference data of sufficient quality and operational efficiency to endorse their use. To do this, we evaluated the agreement between the UAS-collected samples and the ground-based CFI plot compositions. Specifically, this pilot study investigated whether UAS are capable of effectively and efficiently collecting reference data for use in assessing the accuracy of thematic maps created from either a (1) pixel-based or (2) object-based classification approach.

2. Materials and Methods

This research conducted surveys of six woodland properties comprising 522.85 ha of land, 377.57 ha of which were forest cover, in Southeastern New Hampshire (Figure 1). The University of New Hampshire (UNH) owns and manages these six properties, as well as many others, to maintain research integrity for natural communities [49]. These properties contain a wide range of structural and compositional diversity, ranging in size from 17 ha to 94.7 ha of forested land cover. Each property also contains a network of CFI plots for measuring landscape-scale forest characteristics over time.
The systematic network of CFI ground sampling plots was established on each of the six woodland properties to estimate landscape-level biophysical properties. These plot networks are sampled at a regular interval, not to exceed 10 years between measurements. Kingman Farm presents the oldest data (10 years since the previous sampling), while East Foss Farm, West Foss Farm, and Moore Field were each most recently sampled in 2014. CFI plots were located at a density of one plot per hectare (Figure 2), corresponding to the minimum management unit size. Each plot location used an angle-wedge prism sampling protocol to identify the individual trees to be included in the measurement at that location. Trees meeting the optical displacement threshold (i.e., “in” the plot) were recorded by species and measured for diameter at breast height (dbh) following horizontal point sampling guidelines [35]. Prism sampling formed variable-radius plots in relation to the basal area factor (BAF) applied. The proportional representation of species under this method is not unbiased: the basal area of species with larger dbh is overrepresented, while that of species with smaller dbh is underrepresented. Since photo interpretation of the plots is likewise performed from above, where the largest canopy trees are the most visible, a similar bias applies to both data sources; the use of this sampling method is therefore appropriate here.
The UNH Office of Woodlands and Natural Areas forest technicians used the regionally recommended BAF 4.59 m²/ha (20 ft²/ac) prism [50]. Additionally, a nested-plot “Big BAF” sampling integration applied a BAF 17.2176 m²/ha (75 ft²/ac) prism to identify a subset of trees for expanded measurements. These ‘measure’ trees had their height, bearing from plot center, distance from plot center, crown dimensions, and number of silvicultural logs recorded.
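For context, under horizontal point sampling a tree is tallied when its distance from plot center falls within a limiting distance proportional to its dbh. The following Python sketch (our own illustration of the standard mensuration relation, not code from the study) shows how the two prisms select trees:

```python
import math

def limiting_distance_m(dbh_cm: float, baf_m2_per_ha: float) -> float:
    """Maximum horizontal distance (m) at which a tree of the given dbh
    is tallied ("in") under horizontal point sampling with a metric BAF.
    Standard relation: R = dbh(cm) / (2 * sqrt(BAF))."""
    return dbh_cm / (2.0 * math.sqrt(baf_m2_per_ha))

# With the BAF 4.59 m^2/ha prism used here, a 40 cm dbh tree is counted
# when plot center is within about 9.3 m of the stem:
print(round(limiting_distance_m(40, 4.59), 2))      # 9.33
# The BAF 17.2176 m^2/ha "Big BAF" prism selects a much tighter subset:
print(round(limiting_distance_m(40, 17.2176), 2))   # 4.82
```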
Basal area was used to characterize species distributions and proportions throughout the woodland properties [34,35]. For our study, this meant quantifying the percentage of coniferous species basal area comprising each sample. Across the six study areas, a total of 31 tree species were observed (Table S1). Instead of a species-specific classification, our analysis centered on the conventional Deciduous Forest, Mixed Forest, and Coniferous Forest partitioning defined by MacLean et al. [5] and Justice et al. [6]. Here we used the Anderson et al. [7] classification scheme definition for forests, being any area with 10 percent or greater areal tree-crown density, which has the ability to produce timber, and influences either the climate or hydrologic regime. From this scheme we defined the following classes (a minimal encoding of these thresholds is sketched after the definitions):
  • “Coniferous”: any land surface dominated by large forest vegetation species, and managed as such, comprising an overstory canopy in which coniferous species make up greater than or equal to 65% of basal area per unit area.
  • “Mixed Forest”: any land surface dominated by large forest vegetation species, and managed as such, comprising an overstory canopy in which coniferous species make up less than 65% and greater than 25% of basal area per unit area.
  • “Deciduous”: any land surface dominated by large forest vegetation species, and managed as such, comprising an overstory canopy in which coniferous species make up less than or equal to 25% of basal area per unit area.
The presented classification scheme ensured that samples were mutually exclusive, totally exhaustive, hierarchical, and objectively repeatable [7,14].
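The thresholds above reduce to a simple decision rule on percent coniferous basal area, sketched here in Python for illustration:

```python
def cover_type(pct_coniferous_ba: float) -> str:
    """Map percent coniferous basal area to the three-class scheme:
    >= 65% is Coniferous, <= 25% is Deciduous, and values between
    are Mixed Forest."""
    if pct_coniferous_ba >= 65.0:
        return "Coniferous"
    if pct_coniferous_ba <= 25.0:
        return "Deciduous"
    return "Mixed Forest"

assert cover_type(70.0) == "Coniferous"
assert cover_type(45.0) == "Mixed Forest"
assert cover_type(25.0) == "Deciduous"
```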
The original ground-based datasets were collected for general-purpose analysis and research, and so needed to be cleaned, recoded, and refined using R version 3.3.2 in RStudio [51]. We used R to isolate individual tree dbh measurements in centimeters and then compute basal area (in cm²) for the deciduous or coniferous species. Of the original 359 CFI plots, six contained no recorded trees and were removed from the dataset, leaving 353 for analysis. Additionally, standing dead trees were removed due to the time lag between ground sampling and UAS operations. Percent coniferous composition by plot was calculated for the remaining locations based on the classification scheme.
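The cleaning was performed in R; an equivalent Python/pandas sketch is shown below for illustration only. The column names (plot_id, dbh_cm, type, status) are hypothetical, since the CFI export schema is not given here.

```python
import math
import pandas as pd

# Hypothetical CFI export: one row per tallied tree.
trees = pd.read_csv("cfi_trees.csv")
trees = trees[trees["status"] != "dead"]   # drop standing dead trees

# Basal area per stem in cm^2 from dbh in cm: BA = pi * (dbh / 2)^2
trees["ba_cm2"] = math.pi * (trees["dbh_cm"] / 2.0) ** 2

# Percent coniferous basal area per plot, ready for cover_type() above.
per_plot = (
    trees.groupby("plot_id")
    .apply(lambda g: 100.0 * g.loc[g["type"] == "conifer", "ba_cm2"].sum()
           / g["ba_cm2"].sum())
    .rename("pct_coniferous_ba")
)
```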
Once classified individually as either Coniferous, Mixed, or Deciduous in composition, the CFI plot network was used to delineate forest management units (stands). Leaf-off, natural color, NH Department of Transportation imagery with a 1-foot spatial resolution (0.3 × 0.3 m) [52] provided further visual context for delineating the stand edges (Figure 3). Non-managed forests and non-forested areas were also identified and removed from the study areas.
UAS imagery was collected using the eBee Plus (senseFly Parrot Group, Cheseaux-sur-Lausanne, Switzerland) fixed-wing platform during June and July 2017. The senseFly eBee Plus operated under autonomous flight missions, planned in eMotion3 software version 3.2.4 (senseFly SA, Cheseaux-sur-Lausanne, Switzerland), for approximately 45 minutes per battery. This system deployed the Sensor Optimized for Drone Applications (S.O.D.A.), a 20-megapixel, natural color, proprietary camera with a 1 in (2.54 cm) global-shutter sensor designed for photogrammetric analysis. In total, the system weighed 1.1 kg (Figure 4).
UAS mission planning was designed to capture plot- and stand-level forest composition. Our team predefined mission blocks that optimized image collection while minimizing time outside of the study area. For larger properties (e.g., College Woods), up to six mission blocks were required to comprehensively image the study area within legal restrictions. We used the maximum allowable flying height of 120 m above the forest canopy, with 85% forward overlap and 75% side overlap, for all photo missions [30,53]. This flying height was set relative to a canopy height model from the statewide LiDAR dataset provided by New Hampshire GRANIT [54]. Further conditions such as optimal sun angle (e.g., around solar noon), perpendicular wind directions, and consistent cloud coverage were considered during photo missions to maintain image quality and precision [28,30].
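As a back-of-envelope check on these parameters, the sketch below computes ground sampling distance and exposure spacing under assumed nominal specs for a 20 MP one-inch-sensor camera (13.2 mm sensor width, 5472 × 3648 px, 10.6 mm focal length). These specs are our illustrative assumptions, not values taken from the paper or a datasheet.

```python
# Assumed nominal camera geometry (illustrative only).
IMG_W_PX, IMG_H_PX = 5472, 3648
SENSOR_W_M = 13.2e-3
FOCAL_M = 10.6e-3

def gsd_m(height_m: float) -> float:
    """Ground sampling distance (m/px) at a given height above canopy."""
    return (SENSOR_W_M / IMG_W_PX) * height_m / FOCAL_M

def trigger_spacing_m(height_m: float, forward_overlap: float) -> float:
    """Along-track distance between exposures for a target forward overlap,
    assuming the short image side points along the flight line."""
    footprint_along_m = gsd_m(height_m) * IMG_H_PX
    return footprint_along_m * (1.0 - forward_overlap)

print(f"{gsd_m(120) * 100:.2f} cm/px")          # ~2.73 cm, near the ~3 cm achieved
print(f"{trigger_spacing_m(120, 0.85):.1f} m")  # ~14.9 m between exposures
```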
Post-flight processing began with joining the spatial data contained within the onboard flight log (.bb3 or .bbx) to each individual captured image. Next, we used Agisoft PhotoScan 1.3.2 [55] for a high accuracy photo alignment, image tie point calibration, medium-dense point cloud formation, and planimetric model processing workflow [30]. For all processing, we used a Dell Precision 7910, running an Intel Xeon E5-2697 v4 (18 core) CPU, with 64 GB of RAM, and a NVIDIA Quadro M4000 graphics card. Six total orthomosaics were created.
For each classification method, UAS reference data were extracted from the respective woodland property orthomosaic. West Foss Farm was used solely for establishing training data samples to guide the photo interpretation process. In total, there were six sampling methods for comparing the ground-based and UAS-derived reference data (Table 1) (Figures S1–S6).
For the first pixel-based classification reference data collection method (method one), 90 × 90 m extents, partitioned into nine 30 × 30 m cells, were positioned at the center of each forest stand. The center 30 × 30 m cell then acted as the effective area for visually classifying the given sample. Using an effective area in this way both precluded misregistration errors between the reference data and the thematic layer and ensured that the classified area was fully within the designated stand boundary [10]. The second PBC reference data collection method (method two) used the same 90 × 90 m partitioned extent but positionally aligned it with CFI-plot locations, avoiding overlaps with boundaries and other samples.
The first of four object-based classification reference data collection methods (method three) used a stratified random distribution to establish a maximum number of 30 × 30 m interpretation areas (subsamples) within each forest stand. In total, 268 of these samples were created throughout 35 forest stands, while remaining spatially independent and maintaining at least two samples per forest stand. Similar to both PBC sampling methods, these and the other OBC samples used 30 × 30 m effective areas for visually interpreting their classification. The second OBC reference data collection method (method four) used these previous 30 × 30 m classified areas as subsamples to represent the compositional heterogeneity at the image object (forest stand) level [5,10]. Forest stands that did not convey a clear majority, based on the subsamples, were classified using the decision ruleset shown in Table 2.
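The stand-level labeling of method four amounts to a majority vote over subsamples, with the Table 2 ruleset as a tie-breaker. A minimal Python sketch of that logic (our illustration, handling two-way ties only):

```python
from collections import Counter

# Tie-break ruleset from Table 2: a two-way split maps to a resulting label.
SPLIT_RULES = {
    frozenset(["Coniferous", "Mixed Forest"]): "Coniferous",
    frozenset(["Deciduous", "Mixed Forest"]): "Deciduous",
    frozenset(["Coniferous", "Deciduous"]): "Mixed Forest",
}

def stand_label(subsample_labels: list[str]) -> str:
    """Label a forest stand from its 30 x 30 m subsample labels: take the
    majority, falling back on the Table 2 ruleset for two-way ties."""
    counts = Counter(subsample_labels)
    (top, n1), *rest = counts.most_common()
    if not rest or rest[0][1] < n1:
        return top  # clear majority
    return SPLIT_RULES[frozenset([top, rest[0][0]])]

print(stand_label(["Coniferous", "Mixed Forest", "Coniferous"]))  # Coniferous
print(stand_label(["Coniferous", "Mixed Forest"]))                # Coniferous (tie rule)
```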
For the remaining two OBC reference data collection methods, we assessed individual 30 × 30 m samples (method five) and the overall forest stand classifications (method six) by direct comparison with the CFI-plot compositions. An internal buffer of 21.215 m (half the diagonal of the 30 × 30 m effective area) was applied to each forest stand to eliminate CFI plots subject to stand boundary overlap. This process resulted in 202 subsamples across 28 stands within the interior regions of the five classified woodland properties.
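This interior-buffer filtering is straightforward with standard GIS tooling; the sketch below uses shapely’s negative polygon buffer on a hypothetical square stand with invented plot coordinates:

```python
from shapely.geometry import Point, Polygon

# Hypothetical 120 x 120 m stand polygon and CFI plot coordinates (meters).
stand = Polygon([(0, 0), (120, 0), (120, 120), (0, 120)])
plots = [Point(10, 10), Point(60, 60), Point(118, 95)]

# A negative buffer shrinks the stand inward by half the diagonal of a
# 30 x 30 m effective area, so a sample centered on a retained plot
# cannot cross the stand boundary.
interior = stand.buffer(-21.215)
kept = [p for p in plots if interior.contains(p)]
print(len(kept))  # 1: only the plot at (60, 60) is far enough inside
```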
For each of the six orthomosaic sampling procedures, we relied on photo interpretation to derive the compositional cover type classification. Using a confluence of evidence within the imagery, including morphological and spatial distribution patterns, the relative abundance of coniferous and deciduous species was identified [24,56]. Supporting this process were the training data collected from West Foss Farm (Figure S7). A photo interpretation key was generated for plots with distinct compositional proportions, set at the boundaries between the coniferous, deciduous, and mixed forest classes. During the visual classification process, a blind interpretation method was used so that knowledge of the ground data or plot locations could not bias the results.
Error matrices were used to quantify the agreement between the UAS orthomosaic and ground-based thematic map reference data samples. Sample units for both PBC and OBC across all six approaches followed this method. These site-specific assessments reported producer’s, user’s, and overall accuracies for the five analyzed woodland properties [33].

3. Results

UAS imagery across the six woodland properties was used to generate six orthomosaics with a total land cover area of 398.71 ha. These UAS-SfM models represented 9173 images (Figure 5). The resulting ground sampling distances (gsd) were: Kingman Farm at 2.86 cm, Moore Field at 3.32 cm, East Foss Farm at 3.54 cm, West Foss Farm at 3.18 cm, Thompson Farm at 3.36 cm, and College Woods at 3.19 cm, for an average pixel size of 3.24 cm. Agisoft PhotoScan does not report XY positional errors for these orthomosaics; additional registration of the modeled woodland areas to another geospatial data layer could determine their relative error.
In our first analysis of pixel-based classification thematic map accuracy assessment reference data agreement, 29 sample units were located at the center of the forest stands. This method represented the photo interpretation potential of classifying forest stands from UAS image products. Overall agreement between ground-based and UAS-based reference data samples was 68.97% (Table 3). Producer’s accuracy was highest for deciduous stands, while user’s accuracy was highest for coniferous forest stands.
For our second PBC reference data analysis, in which orthomosaic samples were registered with CFI-plot locations, 19 samples were assessed. Reference data classification agreement was 73.68% (Table 4), with both user’s and producer’s accuracies highest for coniferous forest stands.
Four total OBC reference data error matrices were generated; two for the individual subsamples and two for the forest stands or image objects. Using the stratified random distribution for subsamples, our analysis showed an overall agreement of 63.81% between the ground-based forest stands and UAS orthomosaics across 268 samples. Producer’s accuracy was highest for deciduous forests while user’s accuracy was highest for mixed forest (Table 5).
At the forest stand level, the majority agreement of the stratified randomly distributed subsamples presented a 71.43% agreement when compared to the ground-based forest stands (Table 6). For the 35 forest stands analyzed, user’s accuracy was 100% for coniferous forest stands. Producer’s accuracy was highest for deciduous stands at 81.82%.
Next, UAS orthomosaic subsamples that were positionally aligned with individual CFI plots were assessed. A total of 202 samples were registered, with a 61.88% classification agreement (Table 7). User’s accuracy was again highest for coniferous stands at 91.80%. Producer’s accuracy for these subsamples was highest in mixed forest, with an 80.85% agreement.
Forest stand level classification agreement, based on the positionally registered orthomosaic samples, was 85.71%. In total, 28 forest stands were assessed (Table 8). User’s and producer’s accuracies for all three classes varied marginally, ranging from 84.62% to 87.50%. Commission and omission errors were both lowest for deciduous forest stands.

4. Discussion

This research set out to gauge whether UAS could adequately collect reference data for use in thematic map accuracy assessments, of both pixel-based and object-based classifications, in complex forest environments. To create UAS-based comparative reference data samples, six independent orthomosaic models, totaling 398.71 ha of land area, were formed from 9173 images (Figure 5). The resulting average gsd was 3.24 cm. For the six comparative analyses of UAS and ground-based reference data (Table 1), 581 samples were used.
Beginning with PBC, the resulting agreement for stratified randomly distributed samples was 68.97% (Table 3). For this sampling technique, we experienced high levels of commission error, especially between the coniferous and mixed forest types. One reason for this occurrence could have been the perceived dominance (a visual bias) of the conifer canopies within the orthomosaic samples. Mixed forests experienced the greatest mischaracterization here. The CFI plot-registered PBC method generated a slightly higher overall accuracy at 73.68% (Table 4). The mixed forest samples still posed issues for classification; coniferous samples, however, showed much improved agreement with ground-based classifications.
Next, we looked at the object-based classification reference data samples. Stratified randomly distributed subsamples had an agreement of 63.81% (Table 5), while at the forest stand level, agreement with the ground-based composition was 71.43% (Table 6). As before, mixed forest samples showed the highest degree of error. CFI plot-registered OBC subsamples had a 61.88% agreement (Table 7). For forest stand classifications based on these plot-registered subsamples, agreement was 85.71% (Table 8). Mixed forests once again led to large amounts of both commission and omission error. Other than the OBC subsample assessments, our results showed consistently lower accuracy for the stratified randomly distributed techniques. The patchwork composition of the New England forest landscape could be a major reason for this difficulty.
As part of our analysis, we wanted to understand the sources of intrinsic uncertainty in UAS reference data collection [10,18]. The compositional and structural complexity of these forests, although not to the degree of tropical forests, made working with even three classes difficult. Visual interpretation was especially hampered by this heterogeneity. To aid the interpretation process, branching patterns and species distribution trends were used [24,56]. All visual classification was performed by the same interpreter, who has significant experience in remote sensing photo interpretation as well as local knowledge of the area. Another source of error could have been setting fixed areas for UAS-based reference data samples while the CFI plots established variable-radius areas [35]. Our 30 × 30 m effective areas were intended to capture the majority of ground-measured trees, providing snapshots of similarly sized sampling areas. Lastly, there were possible sources of error stemming from the CFI plot ground sampling procedures. Some woodlots, such as Kingman Farm, were sampled up to 10 years ago, and slight changes in composition could have occurred. Also, GPS positional error for the CFI plots was a considerable concern given the dense forest canopies. Errors in GPS locations were minimized by removing points close to stand boundaries and by using pixel clusters when possible.
One of the first difficulties encountered in this project was the logistics of flight planning. While most practitioners may strive for flight line orientation in a cardinal direction, we were limited at some locations by FAA rules and abutting private properties [30,57]. As stated in the methods, UAS training missions and previously researched advice were used to guide comprehensive coverage of the woodland properties [28]. A second difficulty in UAS reference data collection was that, even with a sampling area of 377.57 ha, the minimum statistically valid sample size for a thematic map accuracy assessment was not reached [10]. Forest stand structure and arrangement limited the number of samples for most assessments to below the recommended sample size of approximately 30 per class. A considerably larger, preferably continuous, forested land area would be needed to generate a sufficient sampling design. Limited sample sizes also brought into perspective the restriction against a more complex classification scheme. Although some remote sensing studies have performed species-specific classification, MacLean et al. [5] and Justice et al. [6] have both shown that a broader, three-class scheme has potential for understanding local forest composition.
Despite the still-progressing nature of UAS data collection applications, this study has made the potential for cost reductions apparent. The volume of data collected and processed in only a few weeks opens the door for potential future research in digital image processing and computer vision. Automated classification processing, multiresolution segmentation [20], and machine learning were considered but could not be implemented in this study. A continuing goal is to integrate the added context of the digital surface model (DSM), texture, and multispectral image properties into automated forest classifications. We hope that in future studies more precise ground data can be collected, to alleviate the positional registration error and help match exact trees. Additionally, broader analyses should be conducted to compare UAS-based reference data to other forms of ground-based sampling protocols (e.g., FIA clustered sampling or fixed-area plots). Lastly, multi-temporal imagery could benefit all forms of UAS classification and should be studied further.
In well under a month’s time, this pilot study collected nearly 400 ha of forest land cover data to a reasonable accuracy. With added expert knowledge-driven interpretation or decreased landscape heterogeneity, this platform could prove to be a significant benefit to forested area research and management. Dense photogrammetric point clouds and ultra-high-resolution orthomosaic models were obtained, with the possibility of incorporating multispectral imagery in the future. These ultra-high-resolution products now have the potential to provide an accessible alternative to reference data collected using high-spatial-resolution satellite imagery. For the objective of collecting reference data that can train and validate environmental models, it must be remembered that reference data are not themselves without intrinsic error [58]. As hardware and software technologies continue to improve, the efficiency and effectiveness of these methods will continue to grow [39]. UAS positional accuracy assessment products are also gaining momentum [12,59,60]. Demonstrating the benefits of UAS should also support further legislative reform that better matches the needs of practitioners; FAA Remote Pilot in Command (RPIC) guidelines remain a sizeable limitation for UAS mapping of continuous, remote, or structurally complex areas [39,57,61]. We should also remember that these technologies should be used to augment and enhance data collection initiatives, not to replace the human element in sampling.

5. Conclusions

The collection of reference data for the training and validation of earth systems models bears considerable costs yet remains an essential component of prudent decision-making. The objectives of this pilot study were to determine whether the application of UAS could enhance or support the collection of thematic map accuracy assessment reference data for both pixel-based and object-based classification of complex forests. Comparative analyses quantified the level of agreement between ground-based CFI plot compositions and UAS-SfM orthomosaic samples. Despite diminished agreement in mixed forest areas, PBC showed 68.97% agreement for stratified randomly distributed samples and 73.68% for CFI plot-registered samples. For OBC classifications, forest stands reached 71.43% agreement for stratified randomly distributed samples and 85.71% for CFI plot-registered samples. Our results demonstrated the ability to comprehensively map nearly 400 ha of forest area, using a UAS, in only a few weeks’ time. They also showed the significant benefit that could be gained from deploying UAS to capture forest landscape composition. Low sample sizes, positional error in the CFI plot measurements, and photo interpretation insensitivity could have led to heightened commission and omission errors. Along with these sources of uncertainty, our results should be considered with the understanding that all reference data have intrinsic error and that UAS are not presented as total replacements for in situ data collection initiatives. The continual advancement of the platform, however, should be the basis for investigating its use in a greater number of environments, for comparison with more varied ground-based reference data frameworks, and with the inclusion of more technologically advanced classification procedures.

Supplementary Materials

The following are available online at https://www.mdpi.com/1999-4907/10/1/24/s1, Figure S1: Pixel-based classification, Stratified Random Distribution sampling design diagram; Figure S2: Pixel-based classification, CFI-plot Positionally Dependent sampling design diagram; Figure S3: Object-based Classification, Stratified Random, Individual Subsamples sampling design diagram; Figure S4: Object-based Classification, Stratified Random Image Object Majority Agreement sampling design diagram; Figure S5: Object-based Classification, CFI-plot Dependent Individual Subsamples sampling design diagram; Figure S6: Object-based Classification, CFI-plot Dependent, Image Objects Majority Agreement sampling design diagram.

Author Contributions

B.T.F. and R.G.C. conceived and designed the experiments; B.T.F. performed the experiments and analyzed the data with guidance from R.G.C.; R.G.C. provided the materials and analysis tools; B.T.F. wrote the paper; R.G.C. revised the paper and manuscript.

Funding

Partial funding was provided by the New Hampshire Agricultural Experiment Station. The Scientific Contribution Number is 2798. This work was supported by the USDA National Institute of Food and Agriculture McIntire Stennis Project #NH00077-M (Accession #1002519).

Acknowledgments

These analyses utilized processing within the Agisoft PhotoScan software package, with statistical outputs generated from its results. All UAS operations were conducted on University of New Hampshire woodland properties with permission from local authorities and under the direct supervision of pilots holding Part 107 Remote Pilot in Command licenses.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Chapin, F.S., III; Zavaleta, E.S.; Eviner, V.T.; Naylor, R.L.; Vitousek, P.M.; Reynolds, H.L.; Hooper, D.U.; Lavorel, S.; Sala, O.E.; Hobbie, S.E.; et al. Consequences of changing biodiversity. Nature 2000, 405, 234. [Google Scholar] [CrossRef] [PubMed]
  2. Pejchar, L.; Mooney, H.A. Invasive species, ecosystem services and human well-being. Trends Ecol. Evol. 2009, 24, 497–504. [Google Scholar] [CrossRef] [PubMed]
  3. McGill, B.J.; Dornelas, M.; Gotelli, N.J.; Magurran, A.E. Fifteen forms of biodiversity trend in the Anthropocene. Trends Ecol. Evol. 2015, 30, 104–113. [Google Scholar] [CrossRef] [PubMed]
  4. Kareiva, P.; Marvier, M. Conservation Science: Balancing the Needs of People and Nature, 1st ed.; Roberts and Company Publishing: Greenwood Village, CO, USA, 2011; p. 543. ISBN 1936221063, 9781936221066. [Google Scholar]
  5. MacLean, M.G.; Campbell, M.J.; Maynard, D.S.; Ducey, M.J.; Congalton, R.G. Requirements for labeling forest polygons in an object-based image analysis classification. Int. J. Remote Sens. 2013, 34, 2531–2547. [Google Scholar] [CrossRef]
  6. Justice, D.; Deely, A.K.; Rubin, F. Land Cover and Land Use Classification for the State of New Hampshire, 1996–2001; ORNL DAAC: Oak Ridge, TN, USA, 2016. [Google Scholar] [CrossRef]
  7. Anderson, J.R.; Hardy, E.E.; Roach, J.T.; Witmer, R.E. A Land Use and Land Cover Classification System for Use with Remote Sensor Data; Geological Survey Professional Paper 964; 1976. Available online: https://landcover.usgs.gov/pdf/anderson.pdf (accessed on July 2017).
  8. Field, C.B.; Randerson, J.T.; Malmström, C.M. Global net primary production: Combining ecology and remote sensing. Remote Sens. Environ. 1995, 51, 74–88. [Google Scholar] [CrossRef] [Green Version]
  9. Ford, E.D. Scientific Method for Ecological Research, 1st ed.; Cambridge University Press: Cambridge, UK, 2000; p. 564. ISBN 0521669731. [Google Scholar]
  10. Congalton, R.G.; Green, K. Assessing the Accuracy of Remotely Sensed Data: Principles and Practices, 2nd ed.; CRC Press, Taylor & Francis Group: Boca Raton, FL, USA, 2009; p. 183. ISBN 978-1-4200-5512-2. [Google Scholar]
  11. Turner, M.G. Landscape Ecology: What Is the State of the Science? Annu. Rev. Ecol. Evol. Syst. 2005, 36, 319–344. [Google Scholar] [CrossRef]
  12. Whitehead, K.; Hugenholtz, C.H. Remote sensing of the environment with small unmanned aircraft systems (UASs), part 1: A review of progress and challenges. J. Unmanned Veh. Syst. 2014, 02, 69–85. [Google Scholar] [CrossRef]
  13. Sokal, R.R. Classification: Purposes, Principles, Progress, Prospects. Science 1974, 185, 1115–1123. [Google Scholar] [CrossRef]
  14. Jensen, J.R. Introductory Digital Image Processing: A Remote Sensing Perspective, 4th ed.; Pearson Education Inc.: Glenview, IL, USA, 2016; p. 544. ISBN 978-0134058160. [Google Scholar]
  15. Pugh, S.A.; Congalton, R.G. Applying Spatial Autocorrelation Analysis to Evaluate Error in New England Forest-Cover-Type maps derived from Landsat Thematic Mapper Data. Photogramm. Eng. Remote Sens. 2001, 67, 613–620. [Google Scholar]
  16. Kerr, J.T.; Ostrovsky, M. From space to species: Ecological applications for remote sensing. Trends Ecol. Evol. 2003, 18, 299–305. [Google Scholar] [CrossRef]
  17. Harris, P.M.; Ventura, S.J. The integration of geographic data with remotely sensed imagery to improve classification in an urban area. Photogramm. Eng. Remote Sens. 1995, 61, 993–998. [Google Scholar]
  18. Lu, D.; Weng, Q. A survey of image classification methods and techniques for improving classification performance. Int. J. Remote Sens. 2007, 28, 823–870. [Google Scholar] [CrossRef] [Green Version]
  19. Caridade, C.M.R.; Marçal, A.R.S.; Mendonça, T. The use of texture for image classification of black & white air photographs. Int. J. Remote Sens. 2008, 29, 593–607. [Google Scholar] [CrossRef]
  20. Blaschke, T. Object based image analysis for remote sensing. ISPRS J. Photogramm. Remote Sens. 2010, 65, 2–16. [Google Scholar] [CrossRef] [Green Version]
  21. Radoux, J.; Bogaert, P.; Fasbender, D.; Defourny, P. Thematic accuracy assessment of geographic object-based image classification. Int. J. Geogr. Inf. Sci. 2011, 25, 895–911. [Google Scholar] [CrossRef]
  22. Robertson, L.D.; King, D.J. Comparison of pixel- and object-based classification in land cover change mapping. Int. J. Remote Sens. 2011, 32, 1505–1529. [Google Scholar] [CrossRef]
  23. Foody, G.M. Status of land cover classification accuracy assessment. Remote Sens. Environ. 2002, 80, 185–201. [Google Scholar] [CrossRef]
  24. Avery, T.E.; Berlin, G.L. Interpretation of Aerial Photographs, 4th ed.; Burgess Publishing Company: Minneapolis, MN, USA, 1985; p. 554. ISBN 978-0808700968. [Google Scholar]
  25. Westoby, M.J.; Brasington, J.; Glasser, N.F.; Hambrey, M.J.; Reynolds, J.M. ‘Structure-from-Motion’ photogrammetry: A low-cost, effective tool for geoscience applications. Geomorphology 2012, 179, 300–314. [Google Scholar] [CrossRef] [Green Version]
  26. Nex, F.; Remondino, F. UAV for 3D mapping applications: A review. Appl. Geomat. 2014, 6, 1–15. [Google Scholar] [CrossRef]
  27. Micheletti, N.; Chandler, J.H.; Lane, S.N. Structure from motion (SFM) photogrammetry. In Geomorphological Techniques; Cook, S.J., Clarke, L.E., Nield, J.M., Eds.; British Society for Geomorphology: London, UK, 2012; Chapter 2, Section 2.2; ISSN 2047-0371. [Google Scholar]
  28. Dandois, J.P.; Olano, M.; Ellis, E.C. Optimal Altitude, Overlap, and Weather Conditions for Computer Vision UAV Estimates of Forest Structure. Remote Sens. 2015, 7, 13895–13920. [Google Scholar] [CrossRef] [Green Version]
  29. Mikita, T.; Janata, P.; Surový, P. Forest Stand Inventory Based on Combined Aerial and Terrestrial Close-Range Photogrammetry. Forests 2016, 7, 165. [Google Scholar] [CrossRef]
  30. Fraser, B.T.; Congalton, R.G. Issues in Unmanned Aerial Systems (UAS) Data Collection of Complex Forest Environments. Remote Sens. 2018, 10, 908. [Google Scholar] [CrossRef]
  31. Bolstad, P. GIS Fundamentals: A First Text on Geographic Information Systems, 4th ed.; Eider Press: White Bear Lake, MN, USA, 2012; 688p, ISBN 978-0971764736. [Google Scholar]
  32. Congalton, R.G. A review of assessing the accuracy of classifications of remotely sensed data. Remote Sens. Environ. 1991, 37, 35–46. [Google Scholar] [CrossRef]
  33. Story, M.; Congalton, R.G. Accuracy Assessment: A User’s Perspective. Photogramm. Eng. Remote Sens. 1986, 52, 397–399. [Google Scholar]
  34. Husch, B.; Miller, C.I.; Beers, T.W. Forest Mensuration, 2nd ed.; Ronald Press Company: New York, NY, USA, 1972. [Google Scholar]
  35. Kershaw, J.A.; Ducey, M.J.; Beers, T.W.; Husch, B. Forest Mensuration, 5th ed.; John Wiley and Sons: Hoboken, NJ, USA, 2016; 632p, ISBN 9781118902035. [Google Scholar]
  36. Spurr, S.H. Forest Inventory; Ronald Press Company: New York, NY, USA, 1952; 476p. [Google Scholar]
  37. Smith, W.B. Forest inventory and analysis: A national inventory and monitoring program. Environ. Pollut. 2002, 116, S233–S242. [Google Scholar] [CrossRef]
  38. Marshall, D.M.; Barnhart, R.K.; Shappee, E.; Most, M. Introduction to Unmanned Aircraft Systems, 2nd ed.; CRC Press: Boca Raton, FL, USA, 2016; p. 233. ISBN 978-1482263930. [Google Scholar]
  39. Cummings, A.R.; McKee, A.; Kulkarni, K.; Markandey, N. The Rise of UAVs. Photogramm. Eng. Remote Sens. 2017, 83, 317–325. [Google Scholar] [CrossRef]
  40. Colomina, I.; Molina, P. Unmanned aerial systems for photogrammetry and remote sensing: A review. ISPRS J. Photogramm. Remote Sens. 2014, 92, 79–97. [Google Scholar] [CrossRef] [Green Version]
  41. Tang, L.; Shao, G. Drone remote sensing for forestry research and practices. J. For. Res. 2015, 26, 791–797. [Google Scholar] [CrossRef]
  42. Smith, M.W.; Carrivick, J.L.; Quincey, D.J. Structure from motion photogrammetry in physical geography. Prog. Phys. Geogr. 2016, 40, 247–275. [Google Scholar] [CrossRef]
  43. Laliberte, A.S.; Goforth, M.A.; Steele, C.M.; Rango, A. Multispectral Remote Sensing from Unmanned Aircraft: Image Processing Workflows and Applications for Rangeland Environments. Remote Sens. 2011, 3, 2529–2551. [Google Scholar] [CrossRef] [Green Version]
  44. Kakaes, K.; Greenwood, F.; Lippincott, M.; Dosemagen, S.; Meier, P.; Wich, S. Drones and Aerial Observation: New Technologies for Property Rights, Human Rights, and Global Development a Primer; New America: Washington, DC, USA, 22 July 2015; p. 104. [Google Scholar]
  45. Primicerio, J.; Di Gennaro, S.F.; Fiorillo, E.; Genesio, L.; Lugato, E.; Matese, A.; Vaccari, F.P. A flexible unmanned aerial vehicle for precision agriculture. Precis. Agric. 2012, 13, 517–523. [Google Scholar] [CrossRef]
  46. Watts, A.C.; Ambrosia, V.G.; Hinkley, E.A. Unmanned Aircraft Systems in Remote Sensing and Scientific Research: Classification and Considerations of Use. Remote Sens. 2012, 4, 1671–1692. [Google Scholar] [CrossRef] [Green Version]
  47. McRoberts, R.E.; Tomppo, E.O. Remote sensing support for national forest inventories. Remote Sens. Environ. 2007, 110, 412–419. [Google Scholar] [CrossRef]
  48. Yadav, K.; Congalton, R.G. Issues with Large Area Thematic Accuracy Assessment for Mapping Cropland Extent: A Tale of Three Continents. Remote Sens. 2018, 10, 53. [Google Scholar] [CrossRef]
  49. University of New Hampshire, Office of Woodlands and Natural Areas, General Information. Available online: https://colsa.unh.edu/woodlands/general-information (accessed on 24 July 2017).
  50. Ducey, M.J. Pre-Cruise Planning. In Workshop Proceedings: Forest Measurements for Natural Resource Professionals; Natural Resource Network: Connecting Research, Teaching and Outreach; University of New Hampshire Cooperative Extension: Durham, NH, USA, October 2001. [Google Scholar]
  51. RStudio Team. RStudio: Integrated Development for R; RStudio Inc.: Boston, MA, USA, 2016; Available online: http://www.rstudio.com/ (accessed on July 2017).
  52. New Hampshire GRANIT: New Hampshire Statewide GIS Clearinghouse, 2015, Aerial Photography. Available online: http://granit.unh.edu/resourcelibrary/specialtopics/2015aerialphotography/index.html (accessed on June 2017).
  53. Pix4DMapper User Manual Version 3.2; Pix4D SA: Lausanne, Switzerland, 2017.
  54. New Hampshire GRANIT LiDAR Distribution Site. Available online: http://lidar.unh.edu/map/ (accessed on 5 June 2017).
  55. Agisoft PhotoScan Professional Edition. Version 1.3.2 Software. Available online: http://www.agisoft.com/downloads/installer/ (accessed on July 2017).
  56. Avery, T.E. Interpretation of Aerial Photographs, 3rd ed.; Burgess Publishing Company: Minneapolis, MN, USA, 1977; p. 393. ISBN1 0808701304. ISBN2 9780808701309. [Google Scholar]
  57. Federal Aviation Administration, Fact Sheet-Small Unmanned Aircraft Regulations (Part 107). Available online: https://www.faa.gov/news/fact_sheets/news_story.cfm?newsId=20516 (accessed on July 2017).
  58. Fitzpatrick-Lins, K. Comparison of sampling procedures and data analysis for a land-use and land-cover map. Photogramm. Eng. Remote Sens. 1981, 47, 343–351. [Google Scholar]
  59. Naumann, M.; Geist, M.; Bill, R.; Niemeyer, F.; Grenzdörffer, G. Comparison on sampling procedures and data analysis for a land-use and land-cover map. In Proceedings of the International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences 2013, XL-1/W2, Rostock, Germany, 4–6 September 2013; pp. 281–286. [Google Scholar]
  60. Agüera-Vega, F.; Carvajal-Ramírez, F.; Martínez-Carricondo, P. Accuracy of Digital Surface models and Orthophotos Derived from Unmanned Aerial Vehicle Photogrammetry. J. Surv. Eng. 2017, 143. [Google Scholar] [CrossRef]
  61. Rango, A.; Laliberte, A. Impact of flight regulations on effective use of unmanned aircraft systems for natural resources applications. J. Appl. Remote Sens. 2010, 4, 043539. [Google Scholar] [CrossRef]
Figure 1. Woodland property boundaries for the six study areas. From North to South (with total area): Kingman Farm (135.17 ha), Moore Field (47.76 ha), College Woods (101.17 ha), West Foss Farm (52.27 ha), East Foss Farm (62.32 ha), and Thompson Farm (118.17 ha).
Figure 2. Woodland property continuous forest inventory (CFI) plot networks totaling 354 horizontal point sampling plots over 377.57 ha of forested land. Pictured are (Top left to bottom right): (a) Kingman Farm, (b) Moore Field, (c) Thompson Farm, (d) College Woods, (e) East Foss Farm, and (f) West Foss Farm.
Figure 3. Ground-based forest stands digitized from CFI-plot classifications. Pictured are (Top left to bottom right): (a) Kingman Farm, (b) Moore Field, (c) Thompson Farm, (d) College Woods, (e) East Foss Farm, and (f) West Foss Farm.
Figure 4. eBee Plus Unmanned Aerial System (UAS) with the Sensor Optimized for Drone Applications (S.O.D.A.) and eMotion3 flight planning software.
Figure 5. UAS orthomosaics for the six woodland properties (Top left to Bottom Right): (a) Kingman Farm, (b) Moore Field, (c) Thompson Farm, (d) College Woods, (e) East Foss Farm, (f) West Foss Farm.
Table 1. Six total methods used for UAS reference data collection, between pixel-based (PBC) and object-based (OBC) classification approaches.

Pixel-based Classification | Object-based Classification
1. Stratified Random Distribution | 3. Stratified Random, Individual Subsamples
2. CFI-plot Positionally Dependent | 4. Stratified Random, Image Object Majority Agreement
  | 5. CFI-plot Dependent, Individual Subsamples
  | 6. CFI-plot Dependent, Image Object Majority Agreement
Table 2. Decision support ruleset for forest stand (image object) classification of split decision areas.

Class 1 | Class 2 | Resulting Classification
Coniferous | Mixed | Coniferous
Deciduous | Mixed | Deciduous
Coniferous | Deciduous | Mixed
Table 3. Stratified random sampling PBC thematic map error matrix. Ground (reference) data are represented by the CFI plots and Unmanned Aerial Systems (UAS) data are derived from the corresponding orthomosaic.

UAS \ Ground Data | Coniferous | Mixed | Deciduous | Total | User’s Accuracy
Coniferous | 4 | 1 | 0 | 5 | 80.0%
Mixed | 3 | 8 | 2 | 13 | 61.54%
Deciduous | 0 | 3 | 8 | 11 | 72.73%
Total | 7 | 12 | 10 | 29 |
Producer’s Accuracy | 57.14% | 66.67% | 80.0% | | Overall Accuracy = 20/29 or 68.97%
Table 4. CFI plot-registered PBC thematic map error matrix. Ground (reference) data are represented by the CFI plots and Unmanned Aerial Systems (UAS) data are derived from the corresponding orthomosaic.

UAS \ Ground Data | Coniferous | Mixed | Deciduous | Total | User’s Accuracy
Coniferous | 5 | 0 | 0 | 5 | 100%
Mixed | 1 | 5 | 2 | 8 | 62.5%
Deciduous | 0 | 2 | 4 | 6 | 66.66%
Total | 6 | 7 | 6 | 19 |
Producer’s Accuracy | 83.33% | 71.43% | 66.66% | | Overall Accuracy = 14/19 or 73.68%
Table 5. Stratified randomly distributed OBC reference data subsample error matrix. Ground (reference) data are represented by the CFI plots and Unmanned Aerial Systems (UAS) data are derived from the corresponding orthomosaic.

UAS \ Ground Data | Coniferous | Mixed | Deciduous | Total | User’s Accuracy
Coniferous | 40 | 18 | 1 | 59 | 68.0%
Mixed | 23 | 90 | 15 | 128 | 70.31%
Deciduous | 3 | 37 | 41 | 81 | 50.62%
Total | 66 | 145 | 57 | 268 |
Producer’s Accuracy | 60.61% | 62.07% | 71.93% | | Overall Accuracy = 171/268 or 63.81%
Table 6. OBC sample unit thematic map error matrix for stratified randomly distributed subsamples. Ground (reference) data are represented by the CFI plots and Unmanned Aerial Systems (UAS) data are derived from the corresponding orthomosaic.

UAS \ Ground Data | Coniferous | Mixed | Deciduous | Total | User’s Accuracy
Coniferous | 7 | 0 | 0 | 7 | 100%
Mixed | 4 | 9 | 2 | 15 | 60.0%
Deciduous | 0 | 4 | 9 | 13 | 69.23%
Total | 11 | 13 | 11 | 35 |
Producer’s Accuracy | 63.64% | 69.23% | 81.82% | | Overall Accuracy = 25/35 or 71.43%
Table 7. CFI plot-registered UAS orthomosaic subsample thematic map error matrix. Ground (reference) data are represented by the CFI plots and Unmanned Aerial Systems (UAS) data are derived from the corresponding orthomosaic.

UAS \ Ground Data | Coniferous | Mixed | Deciduous | Total | User’s Accuracy
Coniferous | 56 | 1 | 4 | 61 | 91.80%
Mixed | 41 | 38 | 17 | 96 | 39.58%
Deciduous | 6 | 8 | 31 | 45 | 68.89%
Total | 103 | 47 | 52 | 202 |
Producer’s Accuracy | 54.37% | 80.85% | 59.62% | | Overall Accuracy = 125/202 or 61.88%
Table 8. UAS forest stand thematic map error matrix for CFI plot-registered samples. Ground (reference) data are represented by the CFI plots and Unmanned Aerial Systems (UAS) data are derived from the corresponding orthomosaic.

UAS \ Ground Data | Coniferous | Mixed | Deciduous | Total | User’s Accuracy
Coniferous | 6 | 1 | 0 | 7 | 85.71%
Mixed | 1 | 11 | 1 | 13 | 84.62%
Deciduous | 0 | 1 | 7 | 8 | 87.50%
Total | 7 | 13 | 8 | 28 |
Producer’s Accuracy | 85.71% | 84.62% | 87.50% | | Overall Accuracy = 24/28 or 85.71%
