Article

An Integrated Spectral–Structural Workflow for Invasive Vegetation Mapping in an Arid Region Using Drones

1
School of Geographical Sciences and Urban Planning, Arizona State University, Tempe, AZ 85281, USA
2
Green Drone AZ, the Grady Lab, Center for Adaptable Western Landscapes, Northern Arizona University, Flagstaff, AZ 86001, USA
3
Spatial Analysis Research Center (SPARC), School of Geographical Sciences and Urban Planning, Arizona State University, Tempe, AZ 85281, USA
*
Author to whom correspondence should be addressed.
Drones 2021, 5(1), 19; https://doi.org/10.3390/drones5010019
Submission received: 28 December 2020 / Revised: 10 February 2021 / Accepted: 22 February 2021 / Published: 8 March 2021
(This article belongs to the Special Issue Drones in Geography)

Abstract

Mapping invasive vegetation species in arid regions is a critical task for managing water resources and understanding threats to ecosystem services. Traditional remote sensing platforms, such as Landsat and MODIS, are ill-suited for distinguishing native and non-native vegetation species in arid regions due to their large pixels compared to plant sizes. Unmanned aircraft systems, or UAS, offer the potential to capture the high spatial resolution imagery needed to differentiate species. However, in order to extract the most benefits from these platforms, there is a need to develop more efficient and effective workflows. This paper presents an integrated spectral–structural workflow for classifying invasive vegetation species in the Lower Salt River region of Arizona, which has been the site of fires and flooding, leading to a proliferation of invasive vegetation species. Visible (RGB) and multispectral images were captured and processed following a typical structure from motion workflow, and the derived datasets were used as inputs in two machine learning classifications—one incorporating only spectral information and one utilizing both spectral data and structural layers (e.g., digital terrain model (DTM) and canopy height model (CHM)). Results show that including structural layers in the classification improved overall accuracy from 80% to 93% compared to the spectral-only model. The most important features for classification were the CHM and DTM, with the blue band and two spectral indices (normalized difference water index (NDWI) and normalized difference salinity index (NDSI)) contributing important spectral information to both models.

1. Introduction

Invasive species are a leading cause of biodiversity loss [1] and a threat to many ecosystem services [2]. These species are also referred to as non-native, alien, or exotic, and can cause harm to the environment by outcompeting native species for food and other resources, causing direct impacts on resources and indirect impacts on the growth and vitality of other native species [3]. In arid regions such as the American southwest, controlling certain introduced and invasive species that threaten water resources has become an issue of great concern during recent decades [4,5,6].
Accurately mapping vegetation at the species level is an essential step in managing invasion risk [7], guiding remediation efforts and intervention strategies [8], monitoring outcomes of management actions [9], and ultimately understanding what processes are facilitating growth or expansion [10]. These activities are particularly important for justifying and sustaining public support of management programs [11], especially on public lands. However, mapping invasive vegetation at the species level using traditional platforms, such as Landsat or MODIS, is difficult due to the coarse spatial resolution of the imagery (i.e., 30–500 m). Many plant species are much smaller than a Landsat pixel, making discrimination difficult [12]. A host of methods have been developed to spectrally unmix signals from platforms such as Landsat [5,13], but these methods are unable to map the spatial distribution of different species at the subpixel scale [14]. Imagery captured by unmanned aircraft systems (UAS, or drones) at high spatial resolutions can support targeted management efforts [15,16,17], but pipelines for capturing, processing, and analyzing these data do not always leverage the full range of possible products, and they can be expensive and time-consuming, reducing their effectiveness [18].
Structural characteristics, such as canopy height and terrain, have generally been omitted from remote sensing classification workflows, but they can provide key information on hydrology and geomorphology. UAS provide the ability to capture images with sufficient overlap to apply modern photogrammetric structure from motion (SfM; [19]) workflows to develop high resolution digital elevation and surface models [20]. Several studies have begun integrating 3D structural layers derived from UAS images using an SfM workflow with spectral orthomosaics in the classification process [21,22,23,24]. The benefits of an integrated spectral–structural approach are that the spatial and spectral information from the high resolution UAS images can be leveraged directly for species-level discrimination while also incorporating structural landscape characteristics that may not manifest in spectral signatures but might impact the spatial distribution of species, such as the geomorphology. Additionally, while spectral signatures may change throughout the year based on phenology and environmental changes, structural characteristics will remain more temporally stable and can thus potentially be used to predict which areas may be prone to invasion, overcoming a limitation of a purely spectral approach.
This paper develops a spectral–structural workflow for mapping several invasive species in an arid region that is prone to flooding. The workflow combines drone images with SfM and a machine learning classification approach to map vegetation species. Drone images were captured and processed into a spectral orthomosaic and structural models including a digital terrain model (DTM) and canopy height model (CHM). The spectral orthomosaic was then used to derive vegetation indices, while the DTM was used to derive a hydrology-based flow accumulation model of the site. The spectral and structural data layers were then used as inputs into spectral-only and spectral–structural random forest classification schemes to map vegetation species. The area of focus is the Lower Salt River area in central Arizona in the Tonto National Forest, which has been the site of both fire and flooding during recent years. The workflow described can be applied regionally or in other similar environments to effectively monitor and manage restoration efforts.

2. Materials and Data

2.1. Study Area and Species of Concern

The study area (33°31′11″ N; 111°40′14″ W) is an approximately 76-hectare site located about 10 km northeast of Mesa, Arizona (Figure 1) in the Tonto National Forest. Climatically, the area is classified as desert [25] and receives 8.4 inches (213.36 mm) of average annual rainfall [26]. Water is the main limiting factor for vegetation growth, and river floods and precipitation are the primary water sources. In the summertime, temperatures often reach 43.3 °C (110 °F) or more, but this area is generally 5–7 °C cooler than the Phoenix metropolitan area [27]. The Salt River is the major hydrological feature in the region (Figure 1), and there are several dams along its length that feed a system of canals that supply the Phoenix metropolitan area with water. Despite being dammed at various points, the river is subject to occasional flash floods, especially during monsoon storms in late July and August. These floods provide opportunities for invasive vegetation species to establish and proliferate. Some of the invasive vegetation in the area includes giant reed (Arundo donax), stinknet/globe chamomile (Oncosiphon piluliferum), Sahara mustard (Brassica tournefortii), saltcedar (Tamarix spp.), and southern cattail (Typha domingensis), while Fremont cottonwood (Populus fremontii), arrow weed (Pluchea sericea), and velvet mesquite (Prosopis velutina) are native. The region is also prone to wildfires, which can exacerbate the effects of flooding on invasive species establishment.
The two species of most concern to land managers are giant reed and saltcedar. Both consume more water than native species, displace them, and obstruct and narrow water flow channels [14,28]. Giant reed is an invasive grass common to riparian areas throughout the southwestern United States. It is a common hydrophytic plant found along disturbed and undisturbed streambanks, desert springs, flood plains, drainages, and irrigation waterways [29,30,31]. It thrives in moist soils (moderately saline or freshwater), on sand dunes, and in wetland or riparian areas. Saltcedar was introduced into the United States in the mid-19th century as an ornamental shrub and to assist with bank erosion control, but it has since become a threat, particularly in southwestern riparian ecosystems [29,32]. Once established, saltcedar is remarkably tolerant of environmental stresses including drought, flood inundation, and high soil salinity [4,33]. Saltcedar is associated with the host of negative impacts mentioned above as well as increasing soil salinity and lowering the water table [4]. Saltcedar will outcompete native vegetation species for water, ultimately replacing native stands with dense, impenetrable thickets. Dense saltcedar stands have also been found to support lower biodiversity than the natural communities they replace [34]. It is widely acknowledged that saltcedar is replacing native vegetation species along major rivers in the American southwest at an alarming rate [32,35].

2.2. Cactus Fire and Flooding Event

In 2017, the Cactus Fire ignited and burned 331 hectares in the Tonto National Forest (Figure 1) along the Salt River [36], creating ideal conditions for colonization by invasive species. Fire creates positive conditions for invasion [37] by eliminating existing native vegetation and promoting the establishment of propagules, creating a self-reinforcing feedback loop in the system [25,27]. Since invasive species are often better fire-adapted than the native species they replace [38], and the environment can lack the pathogens and insect enemies that might help to keep these species in check in their home ranges [39], certain invasive species can flourish after fire. As such, the invasion of many ecosystems by fire-adapted non-native plants is a threat to conservation [39].
Recognizing the potential for negative outcomes to the ecosystem after the Cactus Fire, Tonto National Forest personnel along with agencies such as the National Forest Foundation and Northern Arizona University have been actively managing the site—known as the Lower Salt River Restoration Project—to prevent the spread of species such as giant reed and saltcedar and encourage regrowth of native species. These management activities have included removing and/or treating invasive species through mechanical and chemical means along with planting native species to prevent future colonization by invasive species. However, land managers have been unable to assess the vegetation composition in many parts of the river channel because the terrain is rough and the vegetation is dense. Without first-hand knowledge of the distribution of vegetation in these areas, management activities (e.g., mechanical removal of invasive species) cannot be planned or executed. UAS offer an ideal tool to collect quality data from these inaccessible areas for ecosystem assessment. The pilot study area covers portions of two of the 46 grids under restoration management (Figure 2, grids A3 and A4).

2.3. Data Collection and Processing

2.3.1. Image Data Collection

Imagery was collected in two phases. During Phase 1, basic RGB digital imagery was collected with a DJI Phantom 4 Pro (DJI P4P) sensor (full details in Table A1). The purpose of the Phase 1 RGB imagery was to develop SfM three-dimensional (3D) models of the structural characteristics of the study area in order to analyze terrain elevation, vegetation heights, and flow patterns (discussed below) for classifying invasive species. Phase 1 data were collected on 3 March 2020 between 10:30 am and 12:45 pm local time. Image overlaps (Table A2) are in line with recommendations and provide sufficient overlap to identify key points for creating 3D point clouds for surface models [16]. During Phase 2, multispectral imagery was captured for enhanced vegetation species identification using a DJI Phantom 4 Multispectral (DJI P4M), which has six sensors including blue (450 nm), green (560 nm), red (650 nm), red edge (730 nm), and near-infrared (NIR; 840 nm) along with a visible light (RGB) sensor. The DJI P4M also includes a sun irradiance sensor on top of the platform for improved sensor calibration. Wavelengths are the band center with all bands being ±16 nm, except the NIR, which is ±26 nm. Full details on flight parameters are included in the Appendix A (Table A1). Due to the different sensor specifications of the DJI P4M compared to the P4P (2 vs. 20 megapixels, respectively), the Phase 2 data collection required a lower flight altitude to reach a similar ground sampling distance. Thus, five separate campaigns were needed to image the entire area, which were conducted over consecutive days in early April 2020. Again, all flights were conducted between 10:00 am and 1:00 pm local time. Flights were designed to include considerable overlap between the five sections. The images were processed in separate chunks and later stitched together into a single mosaic.
Many drones are not equipped with sufficiently accurate on-board Global Navigation Satellite System (GNSS) receivers to enable direct georeferencing of imagery from camera positions, so ground control points (GCPs) are needed to improve georeferencing accuracy [40]. Prior to image collection, 28 GCPs were placed throughout the study area following best practices to ensure visibility [41]. The GCPs consisted of round, gray plastic discs, approximately 20 cm in diameter, with an orange knob in the center on which a Trimble Geo 7 Series Premium Centimeter Kit RTK GPS unit was positioned to collect coordinates (Figure 3). The GCPs were fixed to the ground with a stake, left in place between the flight campaigns, and revisited prior to the second campaign to clear off any debris or obstructions. Images were georeferenced using the coordinates of the GCPs combined with the internal geotags and orientation parameters of the images. Radiometric corrections were applied during image processing using built-in software functionality.

2.3.2. Ground Reference Vegetation Data Collection

Reference data for existing plant species were collected in the field on 28 June 2020. These data were ultimately used to train and test the random forest classification (discussed below). Data were collected using Collector for ArcGIS software and a Bad Elf GPS Receiver to record the coordinates of field observations. Forty-eight reference points were collected for eight different species. Data included information on the vegetation type, description, and pictures so species could be verified by an expert if needed (Figure 4).

3. Methods

The integrated spectral–structural workflow for mapping invasive species involved three stages, which are detailed below. In the first stage, the Phase 1 P4P drone images were processed using SfM [19] to generate a series of structural data products including a digital terrain model (DTM), digital surface model (DSM), and a canopy height model (CHM; the DSM minus the DTM). The Phase 2 P4M images were processed using SfM into a multiband orthomosaic. During the second stage, these data products were used as inputs to derive vegetation indices (from the orthomosaic) and a hydrological flow accumulation raster (from the DTM). In the third stage, the original and derived data products were integrated into two separate machine learning random forest classifications to map vegetation. Full details are provided below.
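The relationship between the structural products is a simple cell-by-cell difference: the CHM is the DSM minus the DTM. A minimal NumPy sketch with illustrative elevation values (the real grids come from the SfM exports):

```python
import numpy as np

# Illustrative 3 x 3 elevation grids in metres; the real grids are the
# SfM-derived DSM (top of canopy) and DTM (bare earth)
dsm = np.array([[321.0, 322.5, 321.8],
                [320.9, 324.1, 322.0],
                [320.5, 321.0, 320.7]])
dtm = np.array([[320.8, 320.9, 321.0],
                [320.7, 320.8, 320.9],
                [320.4, 320.6, 320.5]])

chm = dsm - dtm                # canopy height = surface minus terrain
chm = np.clip(chm, 0.0, None)  # negative heights are noise; floor at zero
```

Clipping at zero is a common hygiene step, since small coregistration or reconstruction errors can push the surface below the terrain in bare areas.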

3.1. Image Data Processing

The images from both Phase 1 and Phase 2 were processed using Pix4Dmapper v. 4.5.6. The Phase 1 dataset was used to produce the 3D models (Figure 5) while the Phase 2 dataset, captured using the multispectral sensor, was used to create an orthomosaic and vegetation indices. Additional processing details are provided in the Appendix A (Table A1). The same 28 GCPs were incorporated into both image sets to improve georeferencing accuracy. The point cloud from Phase 1 was also used to orthorectify the Phase 2 data to improve geolocation compatibility between the two layers [41]. Both the orthomosaic and elevation product were coregistered to an output coordinate system of NAD83/UTM Zone 12 N and underwent positional checks to ensure coincidence. The DEM had a nominal resolution of 3.28 cm/pixel while the orthomosaic had a nominal resolution of 3.9 cm/pixel. Both layers were resampled to 0.16 m, which accounted for any minor geolocation errors between the two layers while also remaining at a high enough resolution to permit differentiation of the vegetation species.
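Bringing both products onto the shared 0.16 m grid can be sketched with a simple bilinear zoom. This is a simplified stand-in for the GIS resampling actually used; a production workflow should resample with a georeferenced raster library so the layers stay coregistered in the projected coordinate system.

```python
import numpy as np
from scipy.ndimage import zoom

src_res = 0.039      # orthomosaic nominal resolution (m/pixel)
target_res = 0.16    # common analysis resolution (m/pixel)

rng = np.random.default_rng(0)
band = rng.random((400, 400))            # stand-in for one orthomosaic band
factor = src_res / target_res            # < 1 means downsampling
resampled = zoom(band, factor, order=1)  # order=1: bilinear interpolation
```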

3.1.1. Vegetation Indices

Using the multispectral orthomosaic, we computed a set of vegetation indices to aid species discrimination during classification. The most commonly used vegetation index is the Normalized Difference Vegetation Index (NDVI), which highlights vegetation vigor, or “greenness” [42], but NDVI alone is not sufficient to discriminate species in the study area. We therefore computed five additional indices commonly used to discriminate vegetation, especially in arid regions [43] (Table 1). The Normalized Difference Water Index (NDWI) is sensitive to changes in plant water content [42]. The Normalized Difference Salinity Index (NDSI) highlights soil salinity levels [44,45], which can aid in identifying saltcedar since these plants excrete salt through their leaves, which then drop to the ground and salinate the soil. The Soil Adjusted Vegetation Index (SAVI) is similar to NDVI but performs better in areas with sparse vegetation coverage. A high NDSI value with a low NDVI or SAVI value indicates higher soil salinity with less greenness, which should help distinguish saltcedar from the other vegetation [45]. The Two Band Enhanced Vegetation Index (EVI-2) indicates plant vigor [46], while the Green Normalized Difference Vegetation Index (GNDVI) has been successful at differentiating varying plant greenness levels [47]. Indices were computed in ArcGIS Pro v. 2.7.
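As a sketch of this step, the indices can be computed per pixel from the band rasters with NumPy. The formulations below are common published forms of these indices; the exact equations used in the study are those of Table 1 (not reproduced here), so treat the band combinations as assumptions:

```python
import numpy as np

def normdiff(a, b):
    """Normalized difference (a - b) / (a + b); zero where the sum is zero."""
    s = a + b
    return np.divide(a - b, s, out=np.zeros_like(s), where=s != 0)

# Illustrative reflectance values; in practice each band is a full raster
red   = np.array([[0.10, 0.30]])
green = np.array([[0.12, 0.25]])
nir   = np.array([[0.45, 0.35]])

ndvi  = normdiff(nir, red)                           # vegetation vigor
gndvi = normdiff(nir, green)                         # greenness variation
ndwi  = normdiff(green, nir)                         # water signal (McFeeters form)
ndsi  = normdiff(red, nir)                           # soil salinity proxy
savi  = 1.5 * (nir - red) / (nir + red + 0.5)        # soil-adjusted NDVI (L = 0.5)
evi2  = 2.5 * (nir - red) / (nir + 2.4 * red + 1.0)  # two-band EVI
```

Note that in these forms NDSI is the negation of NDVI, which is consistent with the paper's observation that high NDSI paired with low NDVI/SAVI flags saline, sparsely vegetated ground.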

3.1.2. Hydrological Flow Accumulation

The DTM was used to create a raster representing hydrological flow accumulation throughout the study area, since flow accumulation can serve as a proxy for drainage patterns [49,50]. Water accumulation and drainage are known to contribute to invasive species establishment in the study area. Using the method of Garbrecht and Martz [51], we computed flow accumulation from the DTM as the accumulated weight of all cells flowing into each downslope cell in the output raster. Cells with a high flow accumulation can signal stream channels or other areas where water may persist after a flood. The flow accumulation analysis was conducted in ArcGIS Pro v. 2.7.
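The analysis itself was run in ArcGIS Pro, but the underlying idea can be illustrated with a minimal D8 sketch: each cell drains to its steepest downslope neighbor, and accumulation counts the upslope cells draining through each cell. This toy implementation ignores flats and depressions, which the Garbrecht and Martz method handles explicitly:

```python
import numpy as np

def d8_flow_accumulation(dem):
    """Count upslope cells draining into each cell via D8 steepest descent."""
    rows, cols = dem.shape
    acc = np.zeros(dem.shape)
    # Visit cells from highest to lowest so each cell's upslope total is
    # final before it is passed downstream
    for idx in np.argsort(dem, axis=None)[::-1]:
        r, c = divmod(int(idx), cols)
        steepest, receiver = 0.0, None
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                rr, cc = r + dr, c + dc
                if (dr, dc) != (0, 0) and 0 <= rr < rows and 0 <= cc < cols:
                    drop = (dem[r, c] - dem[rr, cc]) / np.hypot(dr, dc)
                    if drop > steepest:
                        steepest, receiver = drop, (rr, cc)
        if receiver is not None:            # cell has a downslope neighbor
            acc[receiver] += acc[r, c] + 1  # upslope count plus the cell itself
    return acc

# A small south-sloping terrain: every cell drains straight downhill
dem = np.array([[3.0, 3.0, 3.0],
                [2.0, 2.0, 2.0],
                [1.0, 1.0, 1.0]])
acc = d8_flow_accumulation(dem)  # bottom row accumulates 2 upslope cells each
```

On a real DTM the high-accumulation cells trace the channels where water would persist after a flood, which is exactly the proxy the classification uses.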

3.2. Image Classification

Advances in high performance computing have created an opportunity to speed up processing pipelines for semiautomated image classifiers, which are vital for natural resource monitoring [52]. Machine learning offers a means to integrate spectral and structural components into land cover classification, and random forest classification [53,54] in particular has been demonstrated to produce accurate classifications of multisource remote sensing data in land cover studies [11,55,56,57]. The advantage of the random forest machine learning approach to classification is its versatility along with its ability to assign relative importance rankings to the various input features, allowing users to understand which components (spectral or structural) are contributing most to the prediction.
Pixel-based classification using random forest (RF; [54]) was determined to be most appropriate for the study area based on the size and characteristics of the vegetation species present [58]. RF is a supervised classification technique that can identify learned characteristics in unclassified data [58] and is more robust than unsupervised approaches [59]. RF uses decision trees to vote on the most likely class for each pixel. A single decision tree is a weak learner, but many decision trees together form a strong ensemble learner. The 11 spectral and three structural layers discussed above were included in two separate classification schemes (Table 2). During the first run, only the 11 spectral layers were included in the RF classification. In the second run, the three structural layers (DTM, CHM, flow accumulation) were added to the complete set of spectral layers (14 total) to understand how the integration of structural information improves classification accuracy.
We classified the orthomosaic into 11 classes including the eight vegetation species (listed in Section 2.1) plus bare soil, water, and roads. Polygons of ground reference data were digitized based on the reference locations of the vegetation collected in the field. Since the pixel sizes of the orthomosaic are small (0.16 m), and the positional uncertainty of the GPS unit used to map reference species in the field was larger than a pixel, polygons are preferred over points to capture the variation in reflectance from a single species. Polygons of bare soil, water, and roads were manually digitized from the orthomosaic (Figure 6). The RF algorithm was implemented in Python 3.7 using the Scikit-Learn 0.23.1, GDAL 3.1.2, and NumPy 1.19.0 packages. All of the layers for each scheme (Table 2) were stacked into a single raster. The code was run in the PyCharm Integrated Development Environment (IDE), version 2019.3.1. The reference samples were split into 70% for training and 30% for testing, with bootstrap sampling (with replacement). The RF classifier was parameterized with 200 trees, a minimum leaf size of one, and no maximum tree depth. Node splitting used the gini criterion with a minimum of two samples (no maximum) for splitting. Classification accuracy was assessed using overall accuracy (%) along with user's and producer's accuracies.
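The classification step above can be sketched with scikit-learn using the reported parameterization. The random arrays here are synthetic stand-ins for the per-pixel feature vectors that would be extracted from the stacked raster under the training and testing polygons:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, confusion_matrix
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the stacked samples: 14 features per pixel
# (11 spectral + DTM, CHM, flow accumulation) and 11 classes
rng = np.random.default_rng(0)
X = rng.random((1000, 14))
y = rng.integers(0, 11, size=1000)

# 70/30 train/test split, as reported in the paper
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# Reported parameterization: 200 trees, gini splitting, minimum leaf size
# of one, minimum of two samples to split, no maximum depth, bootstrapping
rf = RandomForestClassifier(n_estimators=200, criterion="gini",
                            min_samples_leaf=1, min_samples_split=2,
                            max_depth=None, bootstrap=True, random_state=0)
rf.fit(X_train, y_train)

y_pred = rf.predict(X_test)
overall_accuracy = accuracy_score(y_test, y_pred)

# Per-class accuracies from the confusion matrix (rows = reference,
# columns = predicted, the scikit-learn convention)
cm = confusion_matrix(y_test, y_pred)
with np.errstate(invalid="ignore", divide="ignore"):
    producers = np.diag(cm) / cm.sum(axis=1)  # correct / reference totals
    users = np.diag(cm) / cm.sum(axis=0)      # correct / predicted totals

# Feature importance ranking, most important first
ranking = np.argsort(rf.feature_importances_)[::-1]
```

With random features the accuracies hover near chance; the point is the pipeline shape (split, fit, per-class accuracies, importance ranking), not the numbers.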

4. Results

4.1. Image Products

The DEM derived from the Phase 1 images had a root mean square (RMS) error of 0.056 m while the multispectral orthomosaic derived from the Phase 2 images had an RMS error of 0.046 m. Coregistration error was less than 1 cm. Ground elevation in the study area ranged from 319 to 390 m, with lower elevations around the eastern and western edges and higher elevations in the north and central portions of the study area (Figure 7a). Vegetation canopy heights (CHM) ranged from 0 to 6.80 m with the tallest canopies located along the eastern edge as well as in the central and southern portions of the study area, generally corresponding to areas of higher elevation (Figure 7b). Flow accumulation, derived from the DTM, ranged from 0 to 3.72 ha. Flow direction in the study area is from north to south. Accumulation generally followed the elevation profile with high accumulation in lower elevation areas corresponding to the primary river channel, which runs along the western edge of the study area. Greater accumulation was also found in areas to the southwest (Figure 7c). Areas with lower accumulation were located in the central part of the study area where elevations are higher.
The vegetation indices show heterogeneous value distributions across the study area (Figure 8). NDVI (Figure 8a) ranged from −0.92 to 0.93, with the highest values adjacent to the river channel on the west and in the area of high elevation in the center of the study area. NDWI (Figure 8b) ranged from −0.91 to 0.93, with wetter areas surrounding the high elevation area in the center of the study area. SAVI (Figure 8c) ranged from −1.39 to 1.39, with values highly correlated with NDVI. NDSI (Figure 8d) ranged from −0.93 to 0.927, with higher salinity areas corresponding to lower elevations. EVI-2 was also similar to NDVI but showed more contrast amongst the vegetated areas. GNDVI also followed the patterns of NDVI, SAVI, and EVI-2, but with less contrast.

4.2. Classification Results

The spectral-only model, which included just the 11 spectral layers, had an overall accuracy of 80%. Producer and user accuracies for each class are provided in the Appendix A (Table A2). The most important layers for classifying vegetation were the BLUE band along with two spectral indices, NDWI and NDSI (Figure 9). NDVI and the related GNDVI were found to be only moderately important, along with the NIR band. The remaining spectral bands (RE, RED, and GREEN) and the remaining indices (SAVI, EVI-2) contributed only marginally to the classification results.
The integrated spectral–structural model, which included all 14 layers, had an overall accuracy of 93%. Producer and user accuracies for each class and model are provided in the Appendix A (Table A2). In contrast to the spectral-only model, two structural layers, CHM and DTM, were the most important features for classification (Figure 10). Additionally, there was a sharp decline in feature importance scores after these two structural layers, with NDSI, NDWI, and the BLUE band remaining important predictors but with lower scores than they had in the spectral-only model. The remaining spectral layers were ranked in a similar order as they were in the spectral-only model, with the GREEN, RED, and RE bands having low feature importance scores. The flow accumulation variable (FLOW), the third structural variable included in the model, had a very low feature importance score.
Classification results for the spectral–structural model show large areas of saltcedar and giant reed—the two main species of concern—along the southern portion of the study area (Figure 11). Based on the confusion matrix for the spectral–structural model (Table A3), saltcedar was most often confused with cattail and giant reed. These three species were mapped in close proximity to each other in the southern part of the study area. Cattail, arrow weed, and giant reed were the most prolific species in the area, followed by mesquite and saltcedar. There is a clear pattern between the classification results and the structural (DTM, CHM; Figure 7) results, where the species-level classification closely follows the elevation and canopy height gradients from the terrain model. It should be noted that validation samples were captured primarily in the southern portions of the study area, so accuracies may be greater in those regions.

5. Discussion

Past studies using UAS for terrestrial investigations have tended to focus either on the development of 3D models (i.e., DTM, DSM, etc.) for terrain, geomorphic, or other similar analyses, or they have focused on the development of accurate, high spatial resolution multispectral orthomosaics for creating classifications. However, structural information from the 3D models also has value for classification, particularly vegetation discrimination. In this study, we compared a spectral-only model, which included only spectral layers, to a spectral–structural model that included both spectral and elevation layers to differentiate invasive species in an arid study region prone to fire and flooding. The spectral–structural model outperformed the spectral-only model, with overall accuracy increasing to 93% compared to 80% for the spectral-only model. The vegetation in the study area, particularly non-native species, is known for opportunistically establishing and spreading after flood events. Since flooding and water flow patterns are closely related to the terrain elevation, the importance of elevation uncovered in the classification here is not unexpected. However, the degree to which the accuracy of the classification model improved with the incorporation of the DTM and CHM structural layers suggests they may be more important than previously considered, particularly for classifying invasive vegetation in the study area.
The CHM and DTM combined to account for more than 30% of the variable importance in the spectral–structural model (Figure 10). According to the SEINet data portal for southwestern biodiversity, there is considerable variation in the height of the vegetation species in the study area (Table 3) [60], which may explain why the CHM layer was highly important in the combined spectral–structural model. In short, vegetation species in the region may be more differentiable by plant height than by their spectral differences. These differences likely led to the CHM being more important than any of the spectral layers for classifying vegetation. More broadly, the prominence of the structural variables in the classification model is noteworthy for future studies because the structural characteristics of a landscape often do not change as frequently as the spectral characteristics. While spectral signatures are prone to seasonal and phenological changes as well as water and nutrient inputs [61,62], structural signatures of the plants themselves such as canopy height and the elevation of the ground where they are growing do not change as dynamically. The findings from this study suggest there is great potential to use structural information such as terrain elevation to understand which areas may be at risk for future invasion. For example, if saltcedar distribution is predicted well through a digital terrain model, similar elevations can be proactively treated following a disturbance event (e.g., flood) to prevent establishment without having to wait for the plants to physically establish themselves in order to detect a spectral signal. Another benefit of this finding for land management is that digital terrain models already exist for much of the world (although not necessarily at the high spatial resolution used here). Depending on their resolutions, pre-existing digital terrain models may provide initial insights into which areas are at risk for future invasion.
Global canopy height and other 3D structure data have also recently become available through the Global Ecosystem Dynamics Investigation (GEDI), a high-resolution lidar deployed on the International Space Station [63].
Given the importance of the DTM layer in the classification, and the wide availability of DTM datasets worldwide, a logical next study would be to test how the spatial resolution of the DTM affects classification accuracy. An investigation into the optimal spatial scale could aid in determining the coarsest resolution at which structural information can provide key information for species discrimination. The spatial resolution used in this study was 0.16 m, which is moderate for a drone study. Selecting an appropriate minimum mapping unit (MMU) has long been recognized as important in remote sensing studies of land cover [64], where MMU is the area of the smallest entity to be mapped. For land cover studies, some scholars have suggested that the MMU should be 2–5 times smaller than the smallest object of interest [65], but similar thresholds have not yet been determined for classifications involving a terrain model. Elevation values vary continuously, whereas land cover can transition more discretely (e.g., from plant in one pixel to water in the next). Determining an appropriate MMU for structural layers such as DTMs can help users balance accuracy needs with data volume and processing costs [40]. Even when UAS-acquired DTMs are required, coarser MMUs translate into higher flying altitudes, with less flight time and fewer images needed to cover the study area, saving time, money, and resources.
Interestingly, while the CHM and DTM were important to the classification model, flow accumulation contributed only marginally. Because FLOW was derived directly from the DTM and is correlated with it, the FLOW layer may simply not have contributed any new information to the model; if the DTM were removed, the importance of FLOW might increase. The BLUE band was the most important spectral layer in both the spectral-only and the spectral–structural classifications. Blue light is scattered considerably by atmospheric constituents, which has made this band challenging to use in satellite imagery because it can be noisy. When flying UAS at low altitudes, atmospheric scattering effects are often reduced [40]. Recent research using other close-range remote sensing methods found that blue and even ultra-blue wavelengths hold potential for vegetation discrimination and for estimating biophysical components (e.g., chlorophyll) [66]. Our findings suggest that sensors specifically designed for close-range UAS, capturing reflectance in the blue wavelength regions where chlorophylls a and b absorb, may aid vegetation discrimination.
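For context on why FLOW is so tightly coupled to the DTM: flow accumulation follows the standard D8 logic, in which each cell drains to its steepest-descent neighbor and accumulation counts the cells draining through each location. A toy pure-Python sketch of that algorithm (the study itself used GIS tooling; this illustrative version is ours):

```python
def d8_flow_accumulation(dem):
    """D8 flow accumulation: each cell drains to its steepest-descent
    neighbor; acc[r][c] counts that cell plus all cells upstream of it."""
    rows, cols = len(dem), len(dem[0])
    acc = [[1] * cols for _ in range(rows)]
    # Visit cells from highest to lowest elevation so each cell's
    # upstream total is final before it is passed downstream.
    for z, r, c in sorted(((dem[r][c], r, c)
                           for r in range(rows) for c in range(cols)),
                          reverse=True):
        best_drop, target = 0.0, None
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                nr, nc = r + dr, c + dc
                if (dr or dc) and 0 <= nr < rows and 0 <= nc < cols:
                    # slope toward neighbor, diagonal distance-corrected
                    drop = (z - dem[nr][nc]) / (dr * dr + dc * dc) ** 0.5
                    if drop > best_drop:
                        best_drop, target = drop, (nr, nc)
        if target:  # sinks and flats keep their accumulation
            acc[target[0]][target[1]] += acc[r][c]
    return acc

# On a plane tilted toward the east, flow accumulates column by column
acc = d8_flow_accumulation([[3, 2, 1], [3, 2, 1], [3, 2, 1]])
```

Since every value here is a deterministic function of the elevations, FLOW adds no information that is not already latent in the DTM, consistent with its low importance score.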
Lastly, in terms of public land management, saltcedar is generally absent from the area that has been chemically treated for giant reed, and these areas also host a more heterogeneous mixture of vegetation (Figure 11). In the areas that have been mechanically treated for giant reed, saltcedar appears to be thriving, and in the areas that have not been treated at all, saltcedar appears in dense stands (Figure 11). Saltcedar is notoriously difficult to remove mechanically [67] and can resprout adventitiously, making follow-up treatments necessary. These classification results, together with the spatial overlays of treatment areas, suggest that treatments should be targeted to the type of vegetation and its location within the study area, particularly since mechanical treatments can be difficult due to the need for heavy machinery [67].

6. Conclusions

This study developed a spectral–structural workflow for classifying invasive species in an arid region prone to fire and flooding. Drone images were captured and processed into spectral orthomosaics and structural elevation models, including a digital terrain model and a canopy height model. The spectral mosaics were used to derive a suite of vegetation indices, while the digital terrain model was used to derive a hydrological flow accumulation model of the site. The spectral data layers, which included five multispectral bands and six vegetation indices, were used to develop a spectral-only random forest classification model to distinguish vegetation species, and the full set of spectral and structural layers was combined into a spectral–structural model for the same purpose. Comparison of the two models indicates that the spectral–structural model outperformed the spectral-only model, with the canopy height model and digital terrain model identified as the most important variables in the combined model. For land management, these findings imply that terrain models enable more robust forecasting of which areas are likely to be colonized by invasive species, but more research is needed to determine the ideal spatial resolution of such models for invasive species mapping in arid regions.

Author Contributions

Conceptualization, A.C.K., B.K., S.L., J.D., J.E., C.U., and A.E.F.; methodology, A.C.K., B.K., S.L., J.D., J.E., C.U., and A.E.F.; software, A.C.K., B.K., S.L., J.D., and J.E.; validation, A.C.K., B.K. and S.L.; formal analysis, A.C.K., B.K., S.L., and A.E.F.; investigation, A.C.K., B.K. and S.L.; data curation, A.C.K., B.K., S.L., J.D., and J.E.; writing—original draft preparation, A.C.K., B.K., S.L., and A.E.F.; writing—review and editing, A.C.K., B.K., S.L., J.D., J.E., C.U., and A.E.F.; visualization, A.C.K., B.K., S.L., and A.E.F.; supervision, J.D., J.E., C.U., A.E.F.; project administration, C.U. and A.E.F. All authors have read and agreed to the published version of the manuscript.

Funding

This research was partly funded by Green Drone AZ as part of the Lower Salt River Restoration Project. A.E.F. is supported by NSF Grant #1934759.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data were obtained by Green Drone AZ and are available from the corresponding author with permission from that organization.

Acknowledgments

The authors would like to thank Green Drone AZ (funded by National Forest Foundation, Boeing, and Northern Arizona University Foundation), the School of Geographical Sciences and Urban Planning (Arizona State University; ASU), the Spatial Analysis Research Center (SPARC) at ASU, and Geospatial Research Solutions at ASU. This work was part of an ASU Master of Advanced Study in Geographic Information Systems (MAS-GIS) capstone project by A.K., B.K. and S.L.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Table A1. Camera and processing parameters for the two flight campaigns. Processing completed using Pix4DMapper v. 4.5.6.
Parameters | Phantom 4 Pro | Phantom 4 Multispectral
Flight: altitude | 115 m | 61 m
Flight: sensor angle | Nadir | Nadir
Flight: forward lap | 90% | 85%
Flight: side overlap | 80% | 75%
Flight: image trigger rate | 3.0 s | 2.5 s
Flight: resolution | 3.28 cm | 3.89 cm
Flight: area covered | 121.05 ha | 121.05 ha
Camera: sensor | 1″ CMOS | 1/2.9″ CMOS
Camera: focal length | 8.6 mm | 5.740 mm
Camera: sensor width | 13.2 mm | 4.96 mm
Camera: field of view | 84° | 62.7°
Camera: effective pixels | 20 M | 2.12 M
Camera: gimbal angle | Nadir | Nadir
Camera: optimization | All internal and external | All internal and external
Images: total number | 2268 | 78,555 *
Images: % calibrated | 100% | 99%
Images: % geolocated | 100% | 100%
Georeferencing: no. GCPs | 28 | 28
Georeferencing: mean RMSE | 0.056 m | 0.046 m
Georeferencing: RMSE X, Y, Z | 0.3997, 0.0463, 0.0828 | 0.0550, 0.0517, 0.1365
Processing: no. key points | Automatic | Automatic
Processing: calibration | Standard | Standard
Processing: point cloud densification | ½ image scale | ½ image scale
Processing: point density | Optimized | Optimized
Processing: noise filtering | Yes | Yes
Processing: surface smoothing | Yes | Yes
Processing: type | Sharp | Sharp
Processing: raster type | GeoTIFF | GeoTIFF
* The multispectral camera had five separate sensors, so the effective number of images was 15,711 per sensor.
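As a sanity check, the reported flight resolution follows from the flight and camera parameters above via GSD = altitude × sensor width / (focal length × image width in pixels). The image width in pixels is not listed in Table A1, so the 5472 px value below is an assumption (a common width for a 20 MP sensor); with it, the estimate lands close to the reported 3.28 cm:

```python
def gsd_m(altitude_m, sensor_width_m, focal_length_m, image_width_px):
    """Ground sampling distance in metres per pixel (nadir view, flat terrain)."""
    return altitude_m * sensor_width_m / (focal_length_m * image_width_px)

# Phantom 4 Pro values from Table A1; 5472 px image width is an assumption
gsd_cm = 100 * gsd_m(115, 13.2e-3, 8.6e-3, 5472)  # roughly 3.2 cm
```

The same relation, rearranged for altitude, is how a coarser target MMU translates into a higher permissible flying height.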
Table A2. Producer’s and user’s accuracy for the random forest classification of 11 classes using the spectral-only and the spectral–structural model.
Class | Spectral-Only Model (Producer’s / User’s) | Spectral–Structural Model (Producer’s / User’s)
Arrow Weed | 0.62 / 0.68 | 0.92 / 0.92
Cattail | 0.82 / 0.89 | 0.90 / 0.97
Chamomile | 0.94 / 0.84 | 0.96 / 0.93
Cottonwood | 0.63 / 0.37 | 0.96 / 0.88
Giant Reed | 0.63 / 0.60 | 0.90 / 0.89
Mesquite | 0.55 / 0.32 | 0.90 / 0.67
Road | 0.98 / 0.99 | 0.99 / 1.00
Sahara Mustard | 0.56 / 0.29 | 0.82 / 0.43
Saltcedar | 0.69 / 0.78 | 0.91 / 0.94
Soil | 0.96 / 0.91 | 0.98 / 0.96
Water | 0.99 / 0.99 | 1.00 / 1.00
Table A3. Confusion matrix for the spectral–structural model for classes: Arrow weed, cattail, chamomile, cottonwood, giant reed, mesquite, road, Sahara mustard, saltcedar, soil, and water.
ArroCattChamCottGianMesqRoadSahaSaltSoilWater
Arro38,013211412063096056394480
Cat820149,501423181695102190494600124137
Cham35266043,3102701721343561030
Cott6232416,2721003702191201
GR1017251520760,08048130629286963
Mesq.75711833201208013,8751696247340
Road0810233463,004101070
SM21654979001160870519710160
Salt583210576599157756903788,80102
Soil66738190163460020,0690
Water01563040008088,935
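The producer’s and user’s accuracies in Table A2 derive directly from a confusion matrix like Table A3: producer’s accuracy divides each diagonal entry by its reference-row total (capturing omission error), and user’s accuracy divides it by its predicted-column total (capturing commission error). A sketch with a small hypothetical 3-class matrix, not the study’s actual counts:

```python
def classification_accuracies(cm):
    """cm[i][j] = number of reference-class-i samples predicted as class j.
    Returns overall accuracy plus per-class producer's and user's accuracy."""
    n = len(cm)
    total = sum(sum(row) for row in cm)
    overall = sum(cm[i][i] for i in range(n)) / total
    producers = [cm[i][i] / sum(cm[i]) for i in range(n)]                      # row-wise
    users = [cm[i][i] / sum(cm[r][i] for r in range(n)) for i in range(n)]     # column-wise
    return overall, producers, users

# Hypothetical 3-class confusion matrix (rows = reference, cols = predicted)
cm = [[8, 1, 1],
      [0, 9, 1],
      [2, 0, 8]]
overall, producers, users = classification_accuracies(cm)
```

On real data such as Table A3, the same function yields the overall and per-class figures reported in the text.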

References

1. Didham, R.K.; Tylianakis, J.M.; Hutchison, M.A.; Ewers, R.M.; Gemmell, N.J. Are invasive species the drivers of ecological change? Trends Ecol. Evol. 2005, 20, 470–474.
2. Pejchar, L.; Mooney, H.A. Invasive species, ecosystem services and human well-being. Trends Ecol. Evol. 2009, 24, 497–504.
3. The National Wildlife Federation. Invasive Species. Available online: https://www.nwf.org/Educational-Resources/Wildlife-Guide/Threats-to-Wildlife/Invasive-Species (accessed on 3 November 2020).
4. Di Tomaso, J.M. Impact, Biology, and Ecology of Saltcedar (Tamarix spp.) in the Southwestern United States. Weed Technol. 1998, 12, 326–336.
5. Frazier, A.E.; Wang, L. Characterizing spatial patterns of invasive species using sub-pixel classifications. Remote Sens. Environ. 2011, 115, 1997–2007.
6. Ji, W.; Wang, L. Phenology-guided saltcedar (Tamarix spp.) mapping using Landsat TM images in western U.S. Remote Sens. Environ. 2016, 173, 29–38.
7. Henderson, F.M.; Lewis, A.J. Radar detection of wetland ecosystems: A review. Int. J. Remote Sens. 2008, 29, 5809–5835.
8. Shaw, D.R. Translation of remote sensing data into weed management decisions. Weed Sci. 2005, 53, 264–273.
9. Roura-Pascual, N.; Richardson, D.M.; Krug, R.M.; Brown, A.; Chapman, R.A.; Forsyth, G.G.; Le Maitre, D.C.; Robertson, M.P.; Stafford, L.; Van Wilgen, B.W.; et al. Ecology and management of alien plant invasions in South African fynbos: Accommodating key complexities in objective decision making. Biol. Conserv. 2009, 142, 1595–1604.
10. Richardson, D.M. Fifty Years of Invasion Ecology: The Legacy of Charles Elton; Wiley-Blackwell: Chichester, UK; Hoboken, NJ, USA, 2011; ISBN 978-1-444-33585-9.
11. Stefanski, J.; Mack, B.; Waske, O. Optimization of Object-Based Image Analysis with Random Forests for Land Cover Mapping. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2013, 6, 2492–2504.
12. Madden, M.; Jordan, T.; Bernardes, S.; Cotten, D.; O’Hare, N.; Pasqua, A. Unmanned Aerial Systems and Structure from Motion Revolutionize Wetlands Mapping. In Remote Sensing of Wetlands; CRC Press: Boca Raton, FL, USA, 2015; pp. 195–220.
13. Silván-Cárdenas, J.L.; Wang, L. Retrieval of subpixel Tamarix canopy cover from Landsat data along the Forgotten River using linear and nonlinear spectral mixture models. Remote Sens. Environ. 2010, 114, 1777–1790.
14. Frazier, A.E.; Wang, L. Modeling landscape structure response across a gradient of land cover intensity. Landsc. Ecol. 2013, 28, 233–246.
15. Michez, A.; Piégay, H.; Jonathan, L.; Claessens, H.; Lejeune, P. Mapping of riparian invasive species with supervised classification of Unmanned Aerial System (UAS) imagery. Int. J. Appl. Earth Obs. Geoinf. 2016, 44, 88–94.
16. Kattenborn, T.; Lopatin, J.; Förster, M.; Braun, A.C.; Fassnacht, F.E. UAV data as alternative to field sampling to map woody invasive species based on combined Sentinel-1 and Sentinel-2 data. Remote Sens. Environ. 2019, 227, 61–73.
17. Hill, D.J.; Tarasoff, C.; Whitworth, G.E.; Baron, J.; Bradshaw, J.L.; Church, J.S. Utility of unmanned aerial vehicles for mapping invasive plant species: A case study on yellow flag iris (Iris pseudacorus L.). Int. J. Remote Sens. 2017, 38, 2083–2105.
18. Lippitt, C.D.; Zhang, S. The impact of small unmanned airborne platforms on passive optical remote sensing: A conceptual perspective. Int. J. Remote Sens. 2018, 39, 4852–4868.
19. Snavely, N.; Seitz, S.M.; Szeliski, R. Modeling the World from Internet Photo Collections. Int. J. Comput. Vis. 2008, 80, 189–210.
20. Westoby, M.J.; Brasington, J.; Glasser, N.F.; Hambrey, M.J.; Reynolds, J.M. ‘Structure-from-Motion’ photogrammetry: A low-cost, effective tool for geoscience applications. Geomorphology 2012, 179, 300–314.
21. Yang, M.-D.; Huang, K.-S.; Kuo, Y.-H.; Tsai, H.; Lin, L.-M. Spatial and Spectral Hybrid Image Classification for Rice Lodging Assessment through UAV Imagery. Remote Sens. 2017, 9, 583.
22. Díaz-Varela, R.A.; Calvo Iglesias, S.; Cillero Castro, C.; Díaz Varela, E.R. Sub-metric analisis of vegetation structure in bog-heathland mosaics using very high resolution rpas imagery. Ecol. Indic. 2018, 89, 861–873.
23. Kirsch, M.; Lorenz, S.; Zimmermann, R.; Tusa, L.; Möckel, R.; Hödl, P.; Booysen, R.; Khodadadzadeh, M.; Gloaguen, R. Integration of Terrestrial and Drone-Borne Hyperspectral and Photogrammetric Sensing Methods for Exploration Mapping and Mining Monitoring. Remote Sens. 2018, 10, 1366.
24. Husson, E.; Reese, H.; Ecke, F. Combining Spectral Data and a DSM from UAS-Images for Improved Classification of Non-Submerged Aquatic Vegetation. Remote Sens. 2017, 9, 247.
25. Keeley, J.; Syphard, A. Climate Change and Future Fire Regimes: Examples from California. Geosciences 2016, 6, 37.
26. Wikipedia. Mesa, Arizona. Available online: https://en.wikipedia.org/wiki/Mesa,_Arizona (accessed on 13 March 2012).
27. National Park Service. Tamarisk. Available online: https://www.nps.gov/sagu/learn/nature/tamarisk.htm (accessed on 11 May 2020).
28. Mokhtar, E.S.; Pradhan, B.; Ghazali, A.H.; Shafri, H.Z.M. Assessing flood inundation mapping through estimated discharge using GIS and HEC-RAS model. Arab. J. Geosci. 2018, 11, 682.
29. Wang, L.; Silván-Cárdenas, J.L.; Yang, J.; Frazier, A.E. Invasive Saltcedar (Tamarisk spp.) Distribution Mapping Using Multiresolution Remote Sensing Imagery. Prof. Geogr. 2013, 65, 1–15.
30. United States Department of Agriculture. Field Guide for Managing Giant Reed in the Southwest; United States Department of Agriculture: Washington, DC, USA, 2014.
31. Hawkins, N.C.T.O. Invasive Plants of the Sonoran Desert. 2002. Available online: https://www.resolutionmineeis.us/documents/chambers-hawkins-2002 (accessed on 22 February 2021).
32. Dudley, T.L.; Deloach, C.J. Saltcedar (Tamarix spp.), Endangered Species, and Biological Weed Control—Can They Mix? Weed Technol. 2004, 18, 1542–1551.
33. Cleverly, J.R.; Smith, S.D.; Sala, A.; Devitt, D.A. Invasive capacity of Tamarix ramosissima in a Mojave Desert floodplain: The role of drought. Oecologia 1997, 111, 12–18.
34. Verde River Cooperative Invasive Plant Management. Available online: https://verderiver.org/verde-watershed-restoration-coalition/vwrc-in-action/verde-river-cooperative-invasive-plant-management/ (accessed on 22 February 2021).
35. Glenn, E.P.; Nagler, P.L. Comparative ecophysiology of Tamarix ramosissima and native trees in western U.S. riparian zones. J. Arid Environ. 2005, 61, 419–446.
36. Arizona Emergency Information Network. Firefighters Successful Holding Fire Lines on the Cactus Fire. 2017. Available online: https://ein.az.gov/emergencyinformation/emergency-bulletin/firefighters-successful-holding-fire-lines-cactus-fire (accessed on 1 August 2020).
37. Matthew, B.; Michael, L. Fire Management and Invasive Plants: A Handbook; U.S. Fish and Wildlife Service: Arlington, VA, USA, 2008; p. 27.
38. Van Wilgen, B. Natural fires & plant invaders—What is the link? Quest 2015, 11, 22–23.
39. Rhoades, C.; Barnes, T.; Washburn, B. Prescribed Fire and Herbicide Effects on Soil Processes During Barrens Restoration. Restor. Ecol. 2002, 10, 656–664.
40. Singh, K.K.; Frazier, A.E. A meta-analysis and review of unmanned aircraft system (UAS) imagery for terrestrial applications. Int. J. Remote Sens. 2018, 39, 5078–5098.
41. Pix4D. Pix4Dmapper 4.1 User Manual. 2017. Available online: https://support.pix4d.com/hc/en-us/articles/204272989-Offline-Getting-Started-and-Manual-pdf (accessed on 6 June 2020).
42. Nguyen, U.; Glenn, E.P.; Dang, T.D.; Pham, L.T.H. Mapping vegetation types in semi-arid riparian regions using random forest and object-based image approach: A case study of the Colorado River Ecosystem, Grand Canyon, Arizona. Ecol. Inform. 2019, 50, 43–50.
43. Jensen, J.R. Introductory Digital Image Processing: A Remote Sensing Perspective; Prentice Hall Series in Geographic Information Science; Prentice Hall: Upper Saddle River, NJ, USA, 2005; ISBN 9780131453616.
44. Khan, N.M.; Rastoskuev, V.V.; Sato, Y.; Shiozawa, S. Assessment of hydrosaline land degradation by using a simple approach of remote sensing indicators. Agric. Water Manag. 2005, 77, 96–109.
45. Nguyen, K.-A.; Liou, Y.-A.; Tran, H.-P.; Hoang, P.-P.; Nguyen, T.-H. Soil salinity assessment by using near-infrared channel and Vegetation Soil Salinity Index derived from Landsat 8 OLI data: A case study in the Tra Vinh Province, Mekong Delta, Vietnam. Prog. Earth Planet. Sci. 2020, 7, 1.
46. Santos, W.J.R.; Silva, B.M.; Oliveira, G.C.; Volpato, M.M.L.; Lima, J.M.; Curi, N.; Marques, J.J. Soil moisture in the root zone and its relation to plant vigor assessed by remote sensing at management scale. Geoderma 2014, 221–222, 91–95.
47. Wahab, I.; Hall, O.; Jirström, M. Remote Sensing of Yields: Application of UAV Imagery-Derived NDVI for Estimating Maize Vigor and Yields in Complex Farming Systems in Sub-Saharan Africa. Drones 2018, 2, 28.
48. Huete, A. A soil-adjusted vegetation index (SAVI). Remote Sens. Environ. 1988, 25, 295–309.
49. Esri. Hillshade Function. Available online: https://desktop.arcgis.com/en/arcmap/10.3/manage-data/raster-and-images/hillshade-function.htm (accessed on 5 July 2020).
50. Esri. An Overview of the Neighborhood Toolset. Available online: https://pro.arcgis.com/en/pro-app/tool-reference/spatial-analyst/an-overview-of-the-neighborhood-tools.htm (accessed on 5 July 2020).
51. Garbrecht, J.; Martz, L.W. The assignment of drainage direction over flat surfaces in raster digital elevation models. J. Hydrol. 1997, 193, 204–213.
52. Mafanya, M.; Tsele, P.; Botai, J.; Manyama, P.; Swart, B.; Monate, T. Evaluating pixel and object based image classification techniques for mapping plant invasions from UAV derived aerial imagery: Harrisia pomanensis as a case study. ISPRS J. Photogramm. Remote Sens. 2017, 129, 1–11.
53. Kopeć, D.; Michalska-Hejduk, D.; Sławik, S.; Berezowski, T.; Borowski, M.; Rosadziński, S.; Chormański, J. Application of multisensoral remote sensing data in the mapping of alkaline fens Natura 2000 habitat. Ecol. Indic. 2016, 70, 196–208.
54. Breiman, L. Random forests. Mach. Learn. 2001, 45, 5–32.
55. Myint, S.W.; Gober, P.; Brazel, A.; Grossman-Clarke, S.; Weng, Q. Per-pixel vs. object-based classification of urban land cover extraction using high spatial resolution imagery. Remote Sens. Environ. 2011, 115, 1145–1161.
56. Millard, K.; Richardson, M. Wetland mapping with LiDAR derivatives, SAR polarimetric decompositions, and LiDAR–SAR fusion using a random forest classifier. Can. J. Remote Sens. 2013, 39, 290–307.
57. Jin, H.; Mountrakis, G.; Stehman, S.V. Assessing integration of intensity, polarimetric scattering, interferometric coherence and spatial texture metrics in PALSAR-derived land cover classification. ISPRS J. Photogramm. Remote Sens. 2014, 98, 70–84.
58. Belgiu, M.; Drăguţ, L. Random forest in remote sensing: A review of applications and future directions. ISPRS J. Photogramm. Remote Sens. 2016, 114, 24–31.
59. Niemeyer, J.; Rottensteiner, F.; Soergel, U. Contextual classification of lidar data and building object detection in urban areas. ISPRS J. Photogramm. Remote Sens. 2014, 87, 152–165.
60. SEINet Data Portal: Arizona-New Mexico Chapter. Available online: https://swbiodiversity.org/seinet/index.php (accessed on 15 October 2020).
61. Gausman, H.W. Reflectance of leaf components. Remote Sens. Environ. 1977, 6, 1–9.
62. Curran, P.J. Remote sensing of foliar chemistry. Remote Sens. Environ. 1989, 30, 271–278.
63. Dubayah, R.; Blair, J.B.; Goetz, S.; Fatoyinbo, L.; Hansen, M.; Healey, S.; Hofton, M.; Hurtt, G.; Kellner, J.; Luthcke, S.; et al. The Global Ecosystem Dynamics Investigation: High-resolution laser ranging of the Earth’s forests and topography. Sci. Remote Sens. 2020, 1, 100002.
64. Saura, S. Effects of minimum mapping unit on land cover data spatial configuration and composition. Int. J. Remote Sens. 2002, 23, 4853–4880.
65. O’Neill, R.V.; Hunsaker, C.T.; Timmins, S.P.; Jackson, B.L.; Jones, K.B.; Riitters, K.H.; Wickham, J.D. Scale problems in reporting landscape pattern at the regional scale. Landsc. Ecol. 1996, 11, 169–180.
66. Flynn, K.C.; Frazier, A.E.; Admas, S. Performance of chlorophyll prediction indices for Eragrostis tef at Sentinel-2 MSI and Landsat-8 OLI spectral resolutions. Precis. Agric. 2020, 21, 1057–1071.
67. O’mera, S.; Larsen, D.; Owens, C. Methods to Control Saltcedar and Russian Olive; Shafroth, P.B., Brown, C.A., Merritt, D.M., Eds.; U.S. Geological Survey Scientific Investigations Report 2009-5247; USGS: Washington, DC, USA, 2010.
Figure 1. (a) Inset map of Arizona, located in the southwestern United States, with major hydrology shown; (b) the study site along the Salt River in the Tonto National Forest, located northeast of Phoenix. The area burned by the Cactus fire is outlined.
Figure 2. (a) The Lower Salt River Restoration Project divided into 46 grids showing the Cactus fire perimeter, (b) the study area (black outline around A3 and A4) within the fire perimeter, and (c) aerial imagery with different treatment boundaries overlain.
Figure 3. (a) Distribution of the 28 ground control points (GCPs) throughout the study area; (b) RTK GPS unit used to capture coordinates.
Figure 4. Examples and locations of some of the ground reference data collected in the field for training and testing the classification.
Figure 5. Data processing flowchart.
Figure 6. (a) Inset map showing the focus area of interest; (b) polygons of the training and validation samples digitized based on field reference data.
Figure 7. The three structural layers including (a) Digital terrain model (DTM), (b) canopy height model (CHM), and (c) flow accumulation.
Figure 8. (a) Normalized Difference Vegetation Index (NDVI), (b) Normalized Difference Water Index (NDWI), (c) Soil Adjusted Vegetation Index (SAVI), (d) Normalized Difference Salinity Index (NDSI), (e) Two Band Enhanced Vegetation Index (EVI-2), and (f) Green Normalized Difference Vegetation Index (GNDVI). Lighter shades are higher values, darker shades are lower values.
Figure 9. Feature importance from the spectral-only classification using the 11 spectral features (five spectral bands plus six spectral indices) in the classification model.
Figure 10. Random forest feature importance scores for the integrated model with the entire suite of 14 spectral and structural variables.
Figure 11. (a) Classification results with areas for mechanical and chemical treatment of giant reed overlaid, and (b) comparison of the area mapped for each class with areas (in hectares) following each bar.
Table 1. Vegetation indices computed for plant species classification.
Index | Equation
Normalized Difference Vegetation Index (NDVI) | (NIR − RED) / (NIR + RED)
Normalized Difference Water Index (NDWI) | (GREEN − NIR) / (GREEN + NIR)
Soil Adjusted Vegetation Index (SAVI) * | ((NIR − RED) / (NIR + RED + L)) × (1 + L)
Normalized Difference Salinity Index (NDSI) | (RED − NIR) / (RED + NIR)
Two Band Enhanced Vegetation Index (EVI-2) | 2.5 × (NIR − RED) / (NIR + 2.4 × RED + 1)
Green Normalized Difference Vegetation Index (GNDVI) | (NIR − GREEN) / (NIR + GREEN)
* L was set to 0.5 following [48].
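The indices in Table 1 are simple band combinations computed per pixel from the reflectance mosaics. A sketch assuming numpy reflectance arrays (the function name is ours; EVI-2 is written in its standard two-band form):

```python
import numpy as np

def vegetation_indices(nir, red, green, L=0.5):
    """Per-pixel indices of Table 1 from reflectance arrays in [0, 1]."""
    nir, red, green = (np.asarray(b, dtype=float) for b in (nir, red, green))
    return {
        "NDVI":  (nir - red) / (nir + red),
        "NDWI":  (green - nir) / (green + nir),
        "SAVI":  (nir - red) / (nir + red + L) * (1 + L),
        "NDSI":  (red - nir) / (red + nir),
        "EVI-2": 2.5 * (nir - red) / (nir + 2.4 * red + 1),
        "GNDVI": (nir - green) / (nir + green),
    }

# One pixel with strong NIR reflectance, as healthy vegetation would show
idx = vegetation_indices([0.5], [0.1], [0.2])
```

Note that NDSI is the negation of NDVI by construction, one reason correlated index layers can split importance in a random forest.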
Table 2. List of layers included in the random forest classification.
Table 2. List of layers included in the random forest classification.
Layer NumberScheme NumberLayer Name
11, 2Red band (RED)
21, 2Green band (GREEN)
31, 2Blue band (BLUE)
41, 2Red Edge band (RE)
51, 2Near Infrared band (NIR)
61, 2Two Band Enhanced Vegetation Index (EVI-2)
71, 2Green Normalized Difference Vegetation Index (GNDVI)
81, 2Normalized Difference Salinity Index (NDSI)
91, 2Normalized Difference Vegetation Index (NDVI)
101, 2Normalized Difference Water Index (NDWI)
111, 2Soil-Adjusted Vegetation Index (SAVI)
122Canopy Height Model (CHM)
132Digital Terrain Model (DTM)
142Flow Accumulation (FLOW)
Table 3. Typical heights or height ranges for the vegetation species classified in the study area (from SEINet Southwest Data Portal).
Vegetation Species | Height (m)
Arrow Weed | 1.5–3
Cattail | 1–3
Cottonwood | up to 30
Chamomile | 0.5
Giant Reed | 2–5
Sahara Mustard | 0.3–1.2
Mesquite | up to 17
Saltcedar | usually 4–5, up to 8
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

Kedia, A.C.; Kapos, B.; Liao, S.; Draper, J.; Eddinger, J.; Updike, C.; Frazier, A.E. An Integrated Spectral–Structural Workflow for Invasive Vegetation Mapping in an Arid Region Using Drones. Drones 2021, 5, 19. https://doi.org/10.3390/drones5010019