Article

Tree Species Classification by Multi-Season Collected UAV Imagery in a Mixed Cool-Temperate Mountain Forest

1 Graduate School of Environmental Science, Hokkaido University, Sapporo 060-0810, Japan
2 Faculty of Environmental Earth Science, Hokkaido University, Sapporo 060-0810, Japan
3 Department of Civil Engineering, Chennai Institute of Technology, Chennai 600069, Tamil Nadu, India
4 Department of Architecture, College of Architecture & Planning, King Khalid University, Abha 61421, Saudi Arabia
5 Department of Earth and Environmental Sciences, Indian Institute of Science Education and Research Mohali, Punjab 140-306, India
6 Field Science Center for Northern Biosphere, Hokkaido University, Sapporo 060-0809, Japan
* Author to whom correspondence should be addressed.
Remote Sens. 2024, 16(21), 4060; https://doi.org/10.3390/rs16214060
Submission received: 12 September 2024 / Revised: 28 October 2024 / Accepted: 29 October 2024 / Published: 31 October 2024

Abstract

Effective forest management necessitates spatially explicit information about tree species composition. This information supports the safeguarding of native species, sustainable timber harvesting practices, precise mapping of wildlife habitats, and identification of invasive species. Tree species identification and geo-location by machine learning classification of UAV aerial imagery offer an alternative to tedious ground surveys. However, the timing (season) of the aerial surveys, the input variables considered for classification, and the model type all affect classification accuracy. This work evaluates how survey season and input variables affect the accuracy of species classification in a temperate broadleaf and mixed forest. Among the considered models, a Random Forest (RF) classifier demonstrated the highest performance, attaining an overall accuracy of 83.98% and a kappa coefficient of 0.80. Simultaneously using input data from the summer, winter, autumn, and spring seasons improved tree species classification accuracy by 14–18% over classifications made using only single-season input data. Models that included vegetation indices, image texture, and elevation data obtained the highest accuracy. These results strengthen the case for using multi-seasonal data for species classification in temperate broadleaf and mixed forests, since seasonal differences in species characteristics (e.g., leaf color, canopy structure) improve the ability to discern species.

1. Introduction

In Japan, approximately 25 million hectares are covered by forests, accounting for two-thirds of the country’s land surface [1]. In Japan and globally, natural forests play a significant role in maintaining carbon neutrality, biodiversity, and ecological equilibrium, making them an indispensable land cover type and an integral component of ecosystems [2]. To support the management and monitoring of these forests, it is important to understand species composition and spatial distributions. Such information supports the understanding of community dynamics and the contributions of different species to ecosystem functions and services [3,4]. Knowledge of the composition and spatial distribution of tree species also feeds into multiple other use cases, such as afforestation decision-making, estimating carbon cycles, assessing biodiversity, and analyzing tree–environment interactions in more depth [5]. For instance, forest biomass productivity can be enhanced by employing tree-species-specific biomass models [6]. Consequently, there is a high demand for effective techniques to delineate and classify tree species [7].
Ground-based stock-taking has traditionally been used to gather data on tree species, their locations, and other tree metrics. However, this approach involves high costs and significant human resources, and the inaccessibility of certain forest areas poses further challenges to field investigations [8]. Particularly in Japan, severe human-resource constraints in the forestry sector have underscored the need for alternative robust techniques to assist forest stock-taking efforts [1,9]. In various countries, Unmanned Aerial Vehicles (UAVs) are increasingly being used for tree species classification and mapping, driven by their potential to capture high-resolution data at low cost [10]. For instance, Heinzel and Koch [11] employed multi-source remote sensing data and a Support Vector Machine (SVM) classifier to differentiate four temperate tree species (pine, spruce, oak, and beech), reporting favorable classification accuracies of 83.1–90.7% and highlighting the improvement achieved by machine learning algorithms that utilize multiple data sources. In a similar vein, Natesan et al. [12] employed UAV-RGB images (true color images) captured in different seasons in conjunction with deep learning algorithms to train a classifier for two pine species. However, the results of such studies are difficult to extend to different regions and forest types, since the specific tree species present differ, and differences in local climate affect seasonal variations in forest phenology. For example, in temperate forests, changes in leaf color during autumn, leaf fall in winter, leaf expansion in spring, and blooming in summer significantly impact tree species classification due to variations in spectral reflectance [5].
To improve the robustness of remote-sensing-based vegetation classification models to seasonal phenological differences, recent studies have incorporated multi-temporal satellite imagery in order to capture the phenological aspects of vegetation [13,14,15]. Most of this work focused on satellite remote sensing missions such as Sentinel-1, Sentinel-2, and Landsat, which offer multiple years of coverage and frequent data acquisitions (e.g., every 2 weeks). The use of multi-seasonal data may be particularly effective for differentiating species in temperate mixed forests that consist of both deciduous and evergreen tree species. Seasonal variation in leaf color and canopy structure differs starkly between deciduous broadleaved species and evergreen conifers. Further, the timing of phenological changes differs among broadleaved species, which could further help to differentiate species in a classification model that uses multi-seasonal data. However, the resolution of the data considered in the mentioned studies is too low to enable the identification of individual trees in mixed-species forests, since each pixel in the satellite imagery covers a fairly large area on the ground (e.g., 100 m2 in the case of Sentinel-2). Therefore, to achieve robust individual-tree-level species classification in a temperate forest, it may be necessary to adapt the multi-temporal data collection approach to UAV-based surveys.
However, since UAV-based surveys need to be scheduled and require resources for each flight, it is necessary to determine the temporal frequency and timing of surveys (i.e., which combination of seasons’ data) that will lead to adequate tree classification accuracy. Moreover, quantifying the ‘signal’ of phenological changes in individual trees requires a variety of data sources that can capture the spectral variation in leaves (such as vegetation indices), image texture features (such as correlation and image entropy), and canopy structure information, as provided by Digital Surface Models (DSMs) or 3D point clouds. The use of such multi-source data has helped to improve forest species classification in previous studies [16]. Recent work by Liu [17] investigated these components in an urban setting, showing that a large set of 20 features, including 11 image texture features captured in different seasons, was optimal for classifying urban tree species. Veras et al. also showed that in tropical forests, fusion of multi-seasonal UAV data increased classification accuracy by up to 21% over single-season models [18]. However, the role of specific seasons’ data and the important features differ between ecosystems and forest types. For example, tropical forest data from the rainy season resulted in higher species classification accuracy than UAV data from the dry season [18], but these observations do not necessarily extend to warm-temperate deciduous forests with different species and phenological patterns [19]. It is, therefore, necessary that important data collection seasons and priority features be studied in different forest types and ecosystems.
In light of the above discussion, this study’s objective was to develop a multi-season UAV-based tree classification workflow that can classify prominent tree species in the temperate broadleaf and mixed forests that cover a large area of Japan’s northern island of Hokkaido. To achieve this goal, we conducted repeated UAV surveys in all four seasons, using RGB cameras, multispectral sensors, and LiDAR (Light Detection and Ranging, used to generate a 3D point cloud of the forest canopy structure). From these data, different potential texture features, vegetation indices, and elevation data in the form of Digital Surface Models (DSMs) were extracted and fed into machine learning classifiers for the target species. The study then investigated the effect of different input features on classification accuracy and evaluated alternative modeling frameworks available in the literature. By considering multispectral and LiDAR data as model input, along with different segmentation and modeling frameworks, this work extends related UAV-based tree classification studies in Japan [9]. Finally, the effect of data collection season was evaluated to determine suitable survey times as a reference for further work in this region.

2. Study Area

Hokkaido prefecture holds a prominent position among Japan’s forested regions, encompassing approximately 5.54 million hectares of forest [20]. This study was conducted within the Uryu Experimental Forest, situated in Horokanai Town, northern Hokkaido, and affiliated with Hokkaido University. The Uryu Experimental Forest is located at coordinates 44°22′N, 142°12′E, with an elevation of 280 m above sea level. The study area, shown by the blue rectangle in Figure 1, covers an area of 1.66 hectares. The prevailing forest type in this area is a cool-temperate mixed forest, characterized by the coexistence of evergreen coniferous and deciduous broadleaf trees. The primary tree species observed include Painted maple (Acer mono), Japanese oak (Quercus crispula), Japanese rowan (Sorbus commixta), Sakhalin fir (Abies sachalinensis), Castor aralia (Kalopanax septemlobus), Russian rock birch (Betula ermanii), and Sakhalin spruce (Picea glehnii). Notably, the study area is primarily dominated by broadleaved tree species.
The climate of the study area is characterized by an annual mean temperature of 3.1 °C and precipitation of 1390 mm, based on data from the period 1956 to 2014 [21]. Approximately 50% of the precipitation falls as rain between May and November, while the remaining 50% falls as snow. The region experiences particularly low temperatures and heavy snowfall during winter; the lowest temperature recorded in Japan, −41.2 °C, was observed here in 1978. The ground remains covered in snow for approximately seven months, from November to the following May, and the maximum snow height reaches up to 3 m [22].

3. Materials and Methods

The objective of this study was to determine the optimal input data and classifier combination that would yield the highest-accuracy tree classification for the forest stands in the study area. To achieve this, a methodology was followed as illustrated in Figure 2. It consisted of ground field surveys, UAV data collection, and image processing (seasonal information derivation, dimensionality reduction), followed by tree canopy segmentation, machine-learning-based species classification, and final accuracy assessments.

3.1. UAV Data and Preprocessing

In this study, we utilized multiple sources of Unmanned Aerial Vehicle (UAV) data, including RGB (true color images), multispectral, and LiDAR (Light Detection and Ranging) data. The multispectral and RGB images were captured using a DJI Phantom 4 Multispectral UAV (Table 1) equipped with a high-accuracy Global Navigation Satellite System (GNSS) receiver, which enabled positional accuracy of a few centimeters in Real-Time Kinematic (RTK) mode.
The LiDAR data were collected using a DJI Matrice 300 RTK UAV equipped with a Zenmuse L1 LiDAR sensor. The flight parameters were set as follows: 40% overlap and a flight height of 75–80 m above ground level. Orthomosaics and Digital Elevation Models (DEMs) were generated from the multispectral and LiDAR data using the DJI Terra software (Version 2.2.0.15).
Regarding the considered tree species and field surveys, the Uryu Experimental Forest’s managers maintain a comprehensive inventory of the trees in the forest (Supplementary Table S1). The forest contains 18 species, of which the dominant are Painted maple (26%), Japanese oak (18%), Japanese rowan (16%), Sakhalin fir (15.5%), and Castor aralia (8%), while the remaining tree species account for less than 20% (Supplementary Table S1). Field surveys were conducted five times, spanning from September 2021 to June 2022. During these surveys, GPS coordinates, species, diameter at breast height (DBH), and tree heights were recorded. However, the classification study focused on a subset of the 7 most prominent species. This decision was based on the small number of individuals observed for the remaining species in the study area, as well as field survey constraints. The ground survey teams repeatedly experienced difficulties in capturing tree-location GPS coordinates accurate enough for the study. Most GNSS receivers can obtain positional accuracy within 2 m, but accuracy degrades in densely covered areas like a forest floor [23]. Moreover, it was observed that even small spatial offsets (e.g., 1 m) could easily result in misclassification of trees due to their close proximity. The forest management also maintains a catalog of the GPS locations of individual trees, but it was likewise not sufficiently accurate for the study. As a result, follow-up surveys with a high-precision GNSS (EMLID Reach RTK and Reach M2 module) were conducted, and the locations of 181 specimens belonging to the 7 target species were confirmed (see Table 2; Figure 3). However, some spatial offsets were still observed, and thus validation points were manually adjusted to the locations of the corresponding trees in the georeferenced UAV orthomosaics. These were then used to assist in the validation of model results.

3.2. Image Classification

The classification procedure consisted of tree crown segmentation, followed by extraction of target variables from the segmented tree canopy, which were then fed into the classification models. We opted for automated image segmentation using the Simple Non-Iterative Clustering (SNIC) algorithm [24] within the Google Earth Engine (GEE) platform. The SNIC algorithm was applied with the parameters compactness = 7, connectivity = 8, and neighborhood size = 64 to all image combinations, after normalization of each input band to a 0–1 range. The parameters were selected by a manual trial-and-error approach: segmentation parameters were varied, and the correspondence between the segmentation result and the aerial imagery over the study site was assessed visually.
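For reference, this segmentation step can be expressed in the Earth Engine Python API roughly as follows. This is a minimal sketch rather than the authors’ script: the composite asset name, the band scaling, and the seed-spacing parameter `size` are assumptions, while the compactness, connectivity, and neighborhood size follow the values reported above.

```python
import ee

ee.Initialize()

# Hypothetical multi-band composite; the asset name is a placeholder.
composite = ee.Image("users/example/uryu_multiseason_composite")

# Normalize the bands to a 0-1 range (the min/max values are illustrative).
normalized = composite.unitScale(0, 10000)

# SNIC superpixel segmentation with the parameters reported in the text.
snic = ee.Algorithms.Image.Segmentation.SNIC(
    image=normalized,
    compactness=7,        # weighting of spatial vs. spectral distance
    connectivity=8,       # 8-neighbor pixel connectivity
    neighborhoodSize=64,  # tile neighborhood used to avoid edge artifacts
    size=5,               # seed-grid spacing; not reported, assumed here
)

# 'clusters' labels each pixel with a superpixel (candidate crown) id;
# the remaining bands hold the per-cluster means of the input bands.
clusters = snic.select("clusters")
```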
Prior to applying machine learning algorithms for tree species classification, it is necessary to extract relevant variables or features from the image objects identified by the segmentation procedure. In this study, we considered a combination of vegetation index, texture, and Digital Elevation Model (DEM) variables. This selection was based on previous research that highlighted the significant improvement in classification accuracy achieved through the use of multiple features [15,16]. In particular, for image texture quantification, the Gray-Level Co-occurrence Matrix (GLCM) technique proposed by Haralick et al. [25] was employed. The GLCM approach characterizes image texture by quantifying the spatial relationships between pixels. To quantify the spectral characteristics of the trees, four vegetation indices (VIs) were selected for the analysis, as outlined in Table 3. To quantify elevation and canopy height information, DEMs generated from the UAV surveys were included as input variables.
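The feature extraction can be sketched in the same environment as follows, under stated assumptions: the band names (‘R’, ‘G’, ‘RE’, ‘NIR’), the choice of the green band for texture, the DEM asset, and the GLCM window size are illustrative, while the VI formulas follow Table 3 and the texture metrics (correlation, entropy) follow Haralick et al. [25].

```python
# Vegetation indices from Table 3; band names are assumptions.
ndvi = normalized.normalizedDifference(["NIR", "R"]).rename("NDVI")
ndre = normalized.normalizedDifference(["NIR", "RE"]).rename("NDRE")
ndege = normalized.normalizedDifference(["RE", "G"]).rename("NDEGE")
gci = normalized.select("NIR").divide(normalized.select("G")).subtract(1).rename("GCI")

# GLCM texture (Haralick et al.): glcmTexture() expects an integer image,
# so the green band is rescaled to 8-bit first; the window size is assumed.
gray = normalized.select("G").multiply(255).toByte()
glcm = gray.glcmTexture(size=4)  # emits bands such as 'G_corr' and 'G_ent'
texture = glcm.select(["G_corr", "G_ent"], ["Correlation", "Entropy"])

# Hypothetical DEM asset providing the elevation variable.
dem = ee.Image("users/example/uryu_dem").rename("DEM")

# Stack all candidate predictors into one image.
features = ee.Image.cat([normalized, ndvi, ndre, ndege, gci, texture, dem])
```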
For tree species classification, we specifically focused on three algorithms that have been successfully applied in prior tree classification studies, namely, Random Forest (RF), Classification And Regression Tree (CART), and Support Vector Machine (SVM) [15,30,31]. These algorithms were selected based on their ability to achieve high classification accuracy in mixed-forest scenarios, as demonstrated in previous studies. Different modeling frameworks were considered, firstly, to compare their suitability for the present application and, secondly, to assess whether the effect of seasonal input data on classification accuracy is a case-specific or generalizable observation.
The three considered modeling frameworks have different parameters and training procedures, described below. For all models, data were normalized to a 0–1 range before SNIC clustering and model training. The relevant hyperparameters were determined by iterating over possible values and assessing the corresponding classification accuracy by 5-fold cross-validation. For hyperparameters that affect model flexibility, lower values were chosen to reduce the chance of overfitting in cases where different parameter values produced similar classification results. Each model type was trained five times: four models using a single season’s data as input, and one model combining all seasons’ data. Hyperparameters were selected separately for each model. The specific hyperparameter combinations for each model and season are reported in Table 4. For the SVM model, a Radial Basis Function (RBF) kernel was used, and its gamma and cost parameters were optimized following Hsu [32].
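As an illustration of this setup, the three classifiers can be instantiated in the Earth Engine Python API as sketched below, using the multi-seasonal hyperparameters from Table 4. The training FeatureCollection, its ‘species’ property, and the band list are placeholders rather than the authors’ actual data.

```python
# Hypothetical table of labeled segments (per-segment mean feature values).
training = ee.FeatureCollection("users/example/training_segments")
bands = features.bandNames()

# Random Forest with the multi-seasonal settings from Table 4.
rf = ee.Classifier.smileRandomForest(
    numberOfTrees=500, variablesPerSplit=6
).train(training, "species", bands)

# CART with the multi-seasonal settings from Table 4.
cart = ee.Classifier.smileCart(
    maxNodes=50, minLeafPopulation=1
).train(training, "species", bands)

# SVM with an RBF kernel; gamma follows Table 4, while the cost value is
# less certain because the flattened source table is ambiguous for that row.
svm = ee.Classifier.libsvm(
    kernelType="RBF", gamma=12, cost=88
).train(training, "species", bands)

# Classify the feature stack (per pixel / per segment-mean band values).
classified = features.classify(rf)
```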
For the collection of model training points, a visual approach was employed. Training points for coniferous tree species were visually identified from aerial images collected during the winter field survey, since the conifers are evergreen and thus easily distinguishable in winter, when deciduous species are without leaves. Training points for broadleaved species, on the other hand, were selected by visually evaluating aerial images from the summer and autumn field surveys, when species-specific characteristics like leaf shape and color allow for easier differentiation. Overall, a total of 145 training points, consisting of 131 tree samples and 14 non-tree points (grass, road, dead tree), were collected for the classification process. To validate the accuracy of our results, a separate set of 181 ground truth points was used, based on the locations of trees captured by GPS during the field surveys (Figure 3; Table 2).

3.3. Accuracy Metrics and Model Evaluation

To evaluate the overall accuracy of the classification results, Overall Accuracy (OA) was considered. OA is defined as the total number of correctly classified instances divided by the total number of reference samples (confirmed tree species locations, in this case; Table 2). To further evaluate class-specific classification results, we considered the User’s Accuracy (UA) and Producer’s Accuracy (PA), as recommended in [33,34]. The User’s Accuracy for class i can be interpreted as how often class i shown on the map corresponds to the same class in the ground truth/reference data, and is calculated as the proportion of samples correctly classified as class i out of all samples classified as class i. The Producer’s Accuracy for class i is the proportion of samples correctly classified as class i out of all samples labeled as class i in the reference dataset. If $C_{ij}$ is the number of samples classified as class i but labeled as class j in the reference data, then OA, $UA_i$, and $PA_i$ are given by Equations (1)–(3) below, respectively.
$$OA = \frac{\sum_{i=1}^{m} C_{ii}}{\sum_{i=1}^{m} \sum_{j=1}^{m} C_{ij}} \tag{1}$$

$$UA_i = \frac{C_{ii}}{\sum_{j=1}^{m} C_{ij}} \tag{2}$$

$$PA_i = \frac{C_{ii}}{\sum_{j=1}^{m} C_{ji}} \tag{3}$$
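As a worked example of Equations (1)–(3), the short sketch below computes OA, UA, and PA from a small, made-up three-class confusion matrix; the numbers are illustrative and unrelated to the study’s results.

```python
import numpy as np

# C[i, j] = number of samples classified as class i but labeled as class j.
C = np.array([
    [50,  3,  2],
    [ 4, 40,  6],
    [ 1,  5, 30],
])

oa = np.trace(C) / C.sum()        # Eq. (1): correct / all reference samples
ua = np.diag(C) / C.sum(axis=1)   # Eq. (2): correct / row sum (classified as i)
pa = np.diag(C) / C.sum(axis=0)   # Eq. (3): correct / column sum (labeled as i)

print(f"OA = {oa:.3f}")           # OA = 0.851
print("UA =", np.round(ua, 3))    # [0.909 0.8   0.833]
print("PA =", np.round(pa, 3))    # [0.909 0.833 0.789]
```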
To compare the roles of different input variables, variable importance was extracted for the RF and CART classifiers using the built-in explain() method for trained classifiers in Earth Engine. The Earth Engine documentation does not specify the exact variable importance measure implemented, but Earth Engine uses the RF model from the SMILE (Statistical Machine Intelligence and Learning Engine) Java library, which calculates the importance of a variable as the sum of the decrease in node impurity over all nodes where the variable is considered [35]. A more general discussion of Random Forest importance measures and their statistical properties can be found in [36]. The SVM classifier used in this study does not, however, provide a variable importance measure.
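A sketch of this extraction, assuming a trained smileRandomForest classifier named `rf` as in the earlier snippet, might look as follows; the ‘importance’ key reflects the SMILE-backed output of explain(), as described above.

```python
# explain() returns a dictionary describing the trained classifier;
# for smileRandomForest it includes an 'importance' entry per input band.
info = ee.Dictionary(rf.explain())
importance = ee.Dictionary(info.get("importance"))

# Normalize the raw impurity-decrease sums so they add up to 1,
# which makes importances easier to compare across models.
total = ee.Number(importance.values().reduce(ee.Reducer.sum()))
relative = importance.map(lambda key, value: ee.Number(value).divide(total))

print(relative.getInfo())
```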

4. Results

4.1. Data Capture Season and Input Variables

All three considered model frameworks obtained their highest classification accuracy with the multi-seasonal input data (Table 5). The variable importance calculated for the multi-seasonal RF model shows, however, that the autumn and spring VI variables NDEGE and NDVI contributed more to the classification than other variables (Figure 4). The SVM model does not report variable importance, but SVM models trained on only autumn or spring data obtained notably higher classification accuracy than models trained on other seasons’ data (OA higher by 5.5–9.9%, Table 5). The variable importance chart also reveals that for the other VIs, the DEM, and the image correlation variables, all seasons’ data contributed to the classification (Figure 4). The importance of these contributions is shown by the notable increase in classification accuracy of the multi-seasonal models over the single-season models, including the autumn and spring models (Table 5). The image texture metric ‘Entropy’ contributed least to the classification (Figure 4). The CART model obtained the lowest accuracy in almost all model combinations, and thus the remaining CART results are reported only in the Supplementary Materials.

4.2. Tree-Species-Specific Classification Accuracy

Although the multi-seasonal data yielded the highest overall accuracy for all modeling frameworks, individual species’ classification accuracy varied notably between seasons, as shown by the PA and UA values in Table 6. For example, Japanese rowan was classified most accurately in summer for both the RF and SVM models, whereas Sakhalin fir was classified accurately in autumn and winter. On the other hand, Japanese oak and Painted maple had high classification accuracy in spring, with accuracy increasing further in the multi-seasonal model. This suggests that specific seasons’ data could facilitate accurate classification of a subset of the species. However, the multi-seasonal dataset better captured the characteristics of all species, leading to higher overall classification accuracy (Table 5), despite lower accuracy for some individual species compared with their best single season.
The Sakhalin fir and Japanese oak obtained high classification accuracies, possibly due to the relatively large number of training samples for these species. There was, however, relatively high misclassification between the broadleaved species Castor aralia, Japanese rowan, Russian rock birch, and Painted maple, as shown in the confusion matrices in Figure 5. Sakhalin spruce had the highest misclassification (PA = 62.5% in all models). This could be a result of the small number of samples (n = 8) in the training and validation sets.
Maps of the classified tree species from the multi-seasonal RF model, overlaid with field-verified tree locations, are shown in Figure 6. Classification maps for the SVM and CART models are provided in Supplementary A. Confusion matrices for the individual-season models are reported in Supplementary B.

5. Discussion

This study investigated different data sources and modeling frameworks to understand how different seasons and input factors influence the accuracy of tree species classification models in temperate broadleaved and mixed forests. The main finding was that including aerial images from multiple seasons significantly improved classification accuracy (Table 5). This result probably stems from species-specific changes in tree canopy structure (e.g., due to the dropping of leaves in winter) and leaf color in different seasons. Including data from the different seasons thus ‘increases the signal’ of the species-specific characteristics, which facilitates more accurate classification. This study considered one data acquisition from each of the four seasons (summer, autumn, winter, and spring), since the variation in tree characteristics was deemed to be larger between seasons than within a single season. However, specific times within a season could further improve classification results. For example, since flowering (timing, color, and form) and the timing of regrowth after winter differ between broadleaved species, including multiple data acquisitions within the spring season could further differentiate broadleaved tree species in a temperate forest. Increasing model accuracy in this way, however, would come at the expense of study cost (field survey cost, total study time, etc.). Further studies can evaluate whether increasing the number of acquisitions, or changing the timing of surveys within a season, notably changes classification accuracy, and how this depends on the tree species considered. On the other hand, reducing the number of seasons considered might not come at the expense of classification accuracy for some species (Table 5). This study showed that data captured during autumn and spring contributed most to the classifications (Table 5, Figure 4); if studies have resource constraints, priority could therefore be given to data collection during these seasons.
The classifiers employed in this study exhibited several misclassifications between broadleaved tree species (Figure 5). This may be a result of the similar spectral characteristics of the broadleaved species, making them more difficult to distinguish. A similar observation of reduced model accuracy for phylogenetically close species was made by Onishi et al. [9], who used a deep learning model (EfficientNet B7) and true color UAV images for model training. In the current study, not only true-color images but also multispectral data, image texture variables, and elevation data were considered as model inputs, yet similar misclassification patterns were observed. This may result from several factors.
One possibility is errors introduced by faulty tree canopy segmentation. A recent review noted that the majority of studies in the 2013–2023 decade used non-supervised techniques such as region-growing or superpixel-based segmentation [37], as was also done in the current study. However, the SNIC segmentation algorithm showed some errors (based on visual interpretation) in correctly delineating individual tree crowns, especially where the tree canopy is dense and continuous. This could result in the canopies of adjacent trees of different species being grouped together, so that the input data mixes information from several adjacent trees and dilutes the signal of individual species’ characteristics. To alleviate this issue, follow-up studies can consider alternative tree segmentation algorithms that rely on 3D canopy structure information obtained from LiDAR point clouds [38]. Since the canopy cover of phylogenetically close species may be difficult to distinguish from visual information alone, an approach that considers the 3D structure may more effectively segment individual trees within the canopy, irrespective of their visual likeness.
A second reason for the misclassifications may be the small training sample size used for some species, such as Sakhalin spruce (Table 2, Table 5). The small sample sizes were a consequence of the small number of tree specimens in the study area. For example, according to provided data from the Uryu Experimental Forest’s management, there are in total only 13 Sakhalin spruce specimens in the study area (Supplementary Table S1), of which 8 were considered for model training (Table 2). This highlights the difficulty of creating robust tree classification models for (relatively) rare species. In such cases where few samples of similar species lead to low individual species classification accuracy, it may be preferable to generate a class that includes several related species, such as ‘mixed-broadleaved forest’, as performed by Lee et al. [39]. However, in the current study, the aim was for individual species classification, thus precluding such an approach.
However, the modeling frameworks considered in this study generally require less training data than deep-learning-based classifiers, as used by, e.g., Onishi et al. [9] or Ma et al. [40]. For comparison, Ma et al. [40] used field-sampled locations of 2104 tree specimens, with several hundred specimens per species, as input to a convolutional neural network (CNN)-based classifier, whereas this study considered per-species training sample sizes of only approximately a fifth of that. Therefore, the data collection and machine-learning-based modeling framework proposed in this study may be appropriate in cases where sufficient field validation data cannot be obtained due to inaccessibility or an insufficient number of species samples at the site. Still, the tree samples used for training and validation in this study are possibly too few, and similar studies would benefit from a higher number of tree samples for each species and, where possible, a similar number of samples per species. Also, both training and validation were conducted within a single forest patch, which weakens the validation result. Further iterations of this work will consider field surveys and collection of validation data in additional forest sites in the area to better gauge model robustness.
While the performance of classifiers in separating tree species can vary across regions and species (e.g., [8,15,16]), with the input sample size and data types considered in this study, Random Forest (RF) and Support Vector Machine (SVM) outperformed Classification And Regression Tree (CART) in terms of classification accuracy (Table 5). These results agree with those reported by Wessel et al. [41], who observed that SVMs achieved the highest accuracy in distinguishing broadleaved and coniferous trees within a temperate German forest. Finally, although our UAV-based method may have limitations compared with previous approaches employing airborne sensors, such as restricted coverage, the low cost and user-friendly nature of UAVs enable periodic monitoring and the acquisition of high-resolution images close to the canopies, providing detailed leaf-level information [42,43].

6. Conclusions

The study aimed to assess the influence of UAV data-collection season and input variables on tree species classification accuracy in mixed (broadleaved–coniferous) forests. To evaluate this question, four UAV data collections and field surveys were conducted in the Uryu Experimental Forest of Hokkaido University, Japan, a mixed cool-temperate mountain forest.
The main finding of the study is that considering UAV imagery and elevation data collected during all four seasons increased tree species classification accuracy over machine learning models that considered data from only a single season. In the study area, the data collected during autumn and spring were more important than those from winter and summer, and vegetation indices (especially NDEGE and NDVI) contributed most to classification accuracy. The results indicated that using UAV images captured during different seasons allowed for high classification accuracy (OA = 82–83%) despite a relatively small training dataset. The findings suggest that in mixed cool-temperate forests, multi-season machine-learning-based tree species classification may be a suitable approach for cases where inaccessibility or study resource constraints do not permit the field-based collection of the large training datasets needed for deep learning or other automatic tree classification approaches.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/rs16214060/s1, Supplementary A and Supplementary B. Table S1: Tree species and number of specimens in the Uryu Experimental Forest, as provided by the forest management. Figure S1: Variable importance for the RF single-season models; facets show the data capture season. Figure S2: Overlaid map of field-verified tree locations (colored circles) with the forest species classification result based on the SVM classifier using multi-seasonal data. Figure S3: Overlaid map of field-verified tree locations (colored circles) with the forest species classification result based on the CART classifier using multi-seasonal data.

Author Contributions

Conceptualization, R.A., J.F. and A.S.L.; methodology, R.A., X.C., H.S. and N.T.; software, R.A., J.F., Y.A.P., X.C. and A.S.L.; validation, R.A., J.F., X.C., A.S.L. and N.T.; formal analysis, R.A., J.F. and A.S.L.; resources, R.A., S.A. and N.T.; writing—original draft preparation, R.A., J.F., Y.A.P. and A.S.L.; writing—review and editing, R.A., J.F., X.C., S.A., H.S., Y.A.P., N.T. and A.S.L.; supervision, R.A. and N.T.; project administration, R.A.; funding acquisition, R.A. and S.A. All authors have read and agreed to the published version of the manuscript.

Funding

The work was partially supported by the Deanship of Scientific Research at King Khalid University through a large research group (Project group number RGP.2/279/45). We are also thankful to the Sumitomo Foundation (grant number 2330169), JST SICORP (grant number JPMJSC23A1), and the Global Change Observation Mission (GCOM: PI#116, #ER2GCF103) of the Japan Aerospace Exploration Agency (JAXA).

Data Availability Statement

All data produced or examined during this study are included in this published article.

Acknowledgments

The authors extend their appreciation to the Uryu Experimental Forest and TOEF, Hokkaido University, for supporting this research.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Japan Forestry Agency. Annual Report on Forest and Forestry in Japan Fiscal Year 2019; Ministry of Agriculture, Forestry and Fisheries: Tokyo, Japan, 2019.
  2. Schiefer, F.; Kattenborn, T.; Frick, A.; Frey, J.; Schall, P.; Koch, B.; Schmidtlein, S. Mapping Forest Tree Species in High Resolution UAV-Based RGB-Imagery by Means of Convolutional Neural Networks. ISPRS J. Photogramm. Remote Sens. 2020, 170, 205–215. [Google Scholar] [CrossRef]
  3. Chambers, L.E.; Altwegg, R.; Barbraud, C.; Barnard, P.; Beaumont, L.J.; Crawford, R.J.M.; Durant, J.M.; Hughes, L.; Keatley, M.R.; Low, M.; et al. Phenological Changes in the Southern Hemisphere. PLoS ONE 2013, 8, e75514. [Google Scholar] [CrossRef] [PubMed]
  4. Fassnacht, F.E.; Latifi, H.; Stereńczak, K.; Modzelewska, A.; Lefsky, M.; Waser, L.T.; Straub, C.; Ghosh, A. Review of Studies on Tree Species Classification from Remotely Sensed Data. Remote Sens. Environ. 2016, 186, 64–87. [Google Scholar] [CrossRef]
  5. Grabska, E.; Hostert, P.; Pflugmacher, D.; Ostapowicz, K. Forest Stand Species Mapping Using the Sentinel-2 Time Series. Remote Sens. 2019, 11, 1197. [Google Scholar] [CrossRef]
  6. Puc-Kauil, R.; Ángeles-Pérez, G.; Valdez-Lazalde, J.R.; Reyes-Hernández, V.J.; Dupuy-Rada, J.M.; Schneider, L.; Pérez-Rodríguez, P.; García-Cuevas, X. Species-Specific Biomass Equations for Small-Size Tree Species in Secondary Tropical Forests. Trop. Subtrop. Agroecosyst. 2019, 22, 735–754. [Google Scholar] [CrossRef]
  7. Lin, J.; Kroll, C.N.; Nowak, D.J.; Greenfield, E.J. A Review of Urban Forest Modeling: Implications for Management and Future Research. Urban For. Urban Green. 2019, 43, 126366. [Google Scholar] [CrossRef]
  8. Modzelewska, A.; Fassnacht, F.E.; Stereńczak, K. Tree Species Identification within an Extensive Forest Area with Diverse Management Regimes Using Airborne Hyperspectral Data. Int. J. Appl. Earth Obs. Geoinf. 2020, 84, 101960. [Google Scholar] [CrossRef]
  9. Onishi, M.; Watanabe, S.; Nakashima, T.; Ise, T. Practicality and Robustness of Tree Species Identification Using UAV RGB Image and Deep Learning in Temperate Forest in Japan. Remote Sens. 2022, 14, 1710. [Google Scholar] [CrossRef]
  10. Otero, V.; Van De Kerchove, R.; Satyanarayana, B.; Martínez-Espinosa, C.; Fisol, M.A.B.; Ibrahim, M.R.B.; Sulong, I.; Mohd-Lokman, H.; Lucas, R.; Dahdouh-Guebas, F. Managing Mangrove Forests from the Sky: Forest Inventory Using Field Data and Unmanned Aerial Vehicle (UAV) Imagery in the Matang Mangrove Forest Reserve, Peninsular Malaysia. For. Ecol. Manag. 2018, 411, 35–45. [Google Scholar] [CrossRef]
  11. Heinzel, J.; Koch, B. Investigating Multiple Data Sources for Tree Species Classification in Temperate Forest and Use for Single Tree Delineation. Int. J. Appl. Earth Obs. Geoinf. 2012, 18, 101–110. [Google Scholar] [CrossRef]
  12. Natesan, S.; Armenakis, C.; Vepakomma, U. ResNet-Based Tree Species Classification Using UAV Images. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2019, XLII-2/W13, 475–481. [Google Scholar] [CrossRef]
  13. Kollert, A.; Bremer, M.; Löw, M.; Rutzinger, M. Exploring the Potential of Land Surface Phenology and Seasonal Cloud Free Composites of One Year of Sentinel-2 Imagery for Tree Species Mapping in a Mountainous Region. Int. J. Appl. Earth Obs. Geoinf. 2021, 94, 102208. [Google Scholar] [CrossRef]
  14. Simonetti, D.; Simonetti, E.; Szantoi, Z.; Lupi, A.; Eva, H.D. First Results From the Phenology-Based Synthesis Classifier Using Landsat 8 Imagery. IEEE Geosci. Remote Sens. Lett. 2015, 12, 1496–1500. [Google Scholar] [CrossRef]
  15. Xie, B.; Cao, C.; Xu, M.; Duerler, R.S.; Yang, X.; Bashir, B.; Chen, Y.; Wang, K. Analysis of Regional Distribution of Tree Species Using Multi-Seasonal Sentinel-1&2 Imagery within Google Earth Engine. Forests 2021, 12, 565. [Google Scholar] [CrossRef]
  16. Shen, X.; Cao, L. Tree-Species Classification in Subtropical Forests Using Airborne Hyperspectral and LiDAR Data. Remote Sens. 2017, 9, 1180. [Google Scholar] [CrossRef]
  17. Liu, H. Classification of Tree Species Using UAV-Based Multi-Spectral and Multi-Seasonal Images: A Multi-Feature-Based Approach. New For. 2024, 55, 173–196. [Google Scholar] [CrossRef]
  18. Veras, H.F.P.; Ferreira, M.P.; da Cunha Neto, E.M.; Figueiredo, E.O.; Corte, A.P.D.; Sanquetta, C.R. Fusing Multi-Season UAS Images with Convolutional Neural Networks to Map Tree Species in Amazonian Forests. Ecol. Inform. 2022, 71, 101815. [Google Scholar] [CrossRef]
  19. Shi, W.; Wang, S.; Yue, H.; Wang, D.; Ye, H.; Sun, L.; Sun, J.; Liu, J.; Deng, Z.; Rao, Y.; et al. Identifying Tree Species in a Warm-Temperate Deciduous Forest by Combining Multi-Rotor and Fixed-Wing Unmanned Aerial Vehicles. Drones 2023, 7, 353. [Google Scholar] [CrossRef]
  20. Hokkaido Regional Forest Office. National Forests in Hokkaido. Available online: https://www.rinya.maff.go.jp/hokkaido/koho/koho_net/library/pdf/national_forest_in_hokkaido.pdf (accessed on 8 August 2024).
  21. Akitsu, T.K.; Nakaji, T.; Yoshida, T.; Sakai, R.; Mamiya, W.; Terigele; Takagi, K.; Honda, Y.; Kajiwara, K.; Nasahara, K.N. Field Data for Satellite Validation and Forest Structure Modeling in a Pure and Sparse Forest in Northern Hokkaido. Ecol. Res. 2020, 35, 750–764. [Google Scholar] [CrossRef]
  22. Xu, X.; Shibata, H. Landscape Patterns of Overstory Litterfall and Related Nutrient Fluxes in a Cool-Temperate Forest Watershed in Northern Hokkaido, Japan. J. For. Res. 2007, 18, 249–254. [Google Scholar] [CrossRef]
  23. OpenStreetMap Wiki. Accuracy of GNSS Data. Available online: https://wiki.openstreetmap.org/w/index.php?title=Accuracy_of_GNSS_data&oldid=2376050 (accessed on 6 August 2024).
  24. Achanta, R.; Susstrunk, S. Superpixels and Polygons Using Simple Non-Iterative Clustering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017. [Google Scholar]
  25. Haralick, R.M.; Shanmugam, K.; Dinstein, I. Textural Features for Image Classification. IEEE Trans. Syst. Man Cybern. 1973, SMC-3, 610–621. [Google Scholar] [CrossRef]
  26. Rouse, J.W.; Haas, R.H.; Schell, J.A.; Deering, D.W. Monitoring Vegetation Systems in the Great Plains with ERTS. NASA Spec. Publ. 1974, 351, 309. [Google Scholar]
  27. Gitelson, A.; Merzlyak, M.N. Spectral Reflectance Changes Associated with Autumn Senescence of Aesculus hippocastanum L. and Acer platanoides L. Leaves. Spectral Features and Relation to Chlorophyll Estimation. J. Plant Physiol. 1994, 143, 286–292. [Google Scholar] [CrossRef]
  28. Gitelson, A.A.; Gritz, Y.; Merzlyak, M.N. Relationships between Leaf Chlorophyll Content and Spectral Reflectance and Algorithms for Non-Destructive Chlorophyll Assessment in Higher Plant Leaves. J. Plant Physiol. 2003, 160, 271–282. [Google Scholar] [CrossRef]
  29. Buschmann, C.; Nagel, E. In Vivo Spectroscopy and Internal Optics of Leaves as Basis for Remote Sensing of Vegetation. Int. J. Remote Sens. 1993, 14, 711–722. [Google Scholar] [CrossRef]
  30. Modica, G.; Messina, G.; De Luca, G.; Fiozzo, V.; Praticò, S. Monitoring the Vegetation Vigor in Heterogeneous Citrus and Olive Orchards. A Multiscale Object-Based Approach to Extract Trees’ Crowns from UAV Multispectral Imagery. Comput. Electron. Agric. 2020, 175, 105500. [Google Scholar] [CrossRef]
  31. Tamiminia, H.; Salehi, B.; Mahdianpari, M.; Quackenbush, L.; Adeli, S.; Brisco, B. Google Earth Engine for Geo-Big Data Applications: A Meta-Analysis and Systematic Review. ISPRS J. Photogramm. Remote Sens. 2020, 164, 152–170. [Google Scholar] [CrossRef]
  32. Hsu, C.-W. A Practical Guide to Support Vector Classification; Department of Computer Science, National Taiwan University: Taipei City, Taiwan, 2003. [Google Scholar]
  33. Olofsson, P.; Foody, G.M.; Herold, M.; Stehman, S.V.; Woodcock, C.E.; Wulder, M.A. Good Practices for Estimating Area and Assessing Accuracy of Land Change. Remote Sens. Environ. 2014, 148, 42–57. [Google Scholar] [CrossRef]
  34. Liu, C.; Frazier, P.; Kumar, L. Comparative Assessment of the Measures of Thematic Classification Accuracy. Remote Sens. Environ. 2007, 107, 606–616. [Google Scholar] [CrossRef]
  35. Li, H. Smile Class RandomForest. Available online: https://haifengl.github.io/api/java/smile/regression/RandomForest.html (accessed on 14 October 2024).
  36. Louppe, G.; Wehenkel, L.; Sutera, A.; Geurts, P. Understanding Variable Importances in Forests of Randomized Trees. In Advances in Neural Information Processing Systems; Burges, C.J., Bottou, L., Welling, M., Ghahramani, Z., Weinberger, K.Q., Eds.; Curran Associates, Inc.: New York, NY, USA, 2013; Volume 26, Available online: https://proceedings.neurips.cc/paper_files/paper/2013/file/e3796ae838835da0b6f6ea37bcf8bcb7-Paper.pdf (accessed on 14 October 2024).
  37. Chehreh, B.; Moutinho, A.; Viegas, C. Latest Trends on Tree Classification and Segmentation Using UAV Data—A Review of Agroforestry Applications. Remote Sens. 2023, 15, 2263. [Google Scholar] [CrossRef]
  38. Yun, T.; Jiang, K.; Li, G.; Eichhorn, M.P.; Fan, J.; Liu, F.; Chen, B.; An, F.; Cao, L. Individual Tree Crown Segmentation from Airborne LiDAR Data Using a Novel Gaussian Filter and Energy Function Minimization-Based Approach. Remote Sens. Environ. 2021, 256, 112307. [Google Scholar] [CrossRef]
  39. Lee, E.-R.; Baek, W.-K.; Jung, H.-S. Mapping Tree Species Using CNN from Bi-Seasonal High-Resolution Drone Optic and LiDAR Data. Remote Sens. 2023, 15, 2140. [Google Scholar] [CrossRef]
  40. Ma, Y.; Zhao, Y.; Im, J.; Zhao, Y.; Zhen, Z. A Deep-Learning-Based Tree Species Classification for Natural Secondary Forests Using Unmanned Aerial Vehicle Hyperspectral Images and LiDAR. Ecol. Indic. 2024, 159, 111608. [Google Scholar] [CrossRef]
  41. Wessel, M.; Brandmeier, M.; Tiede, D. Evaluation of Different Machine Learning Algorithms for Scalable Classification of Tree Types and Tree Species Based on Sentinel-2 Data. Remote Sens. 2018, 10, 1419. [Google Scholar] [CrossRef]
  42. Avtar, R.; Watanabe, T. (Eds.) Unmanned Aerial Vehicle: Applications in Agriculture and Environment; Springer International Publishing: Cham, Switzerland, 2020; ISBN 978-3-030-27156-5. [Google Scholar]
  43. Avtar, R.; Suab, S.A.; Syukur, M.S.; Korom, A.; Umarhadi, D.A.; Yunus, A.P. Assessing the Influence of UAV Altitude on Extracted Biophysical Parameters of Young Oil Palm. Remote Sens. 2020, 12, 3030. [Google Scholar] [CrossRef]
Figure 1. (A) Map showing a part of Japan, and Hokkaido, Japan’s northernmost prefecture. (B) The location of the study site, to the northeast of Lake Shumarinai. (C) Aerial imagery of the study site in Jinjiayama, Uryu Experimental Forest.
Figure 2. Workflow of the presented method. List of abbreviations: NDVI (Normalized Difference Vegetation Index); NDRE (Normalized Difference Red-Edge Index); GCI (Green Chlorophyll Index); NDEGE (Normalized Difference Red-Edge Green Index); GLCM (Gray-Level Co-occurrence Matrix); DSM (Digital Surface Model); SNIC (Simple Non-Iterative Clustering algorithm); CART (Classification And Regression Tree); RF (Random Forest); SVM (Support Vector Machine).
Figure 3. Field survey map showing the forest stand and locations of field-verified samples of the 7 considered tree species. The point locations are overlaid on an orthomosaic image captured by the UAV survey in autumn (11 October 2021).
Figure 4. Variable importance for the Random Forest model applied to the multi-seasonal dataset. The point shape and color distinguish the season in which the data were collected. The variables shown are as follows: Normalized Difference Red-Edge Green Index (NDEGE), Normalized Difference Red-Edge Index (NDRE), Green Chlorophyll Index (GCI), Normalized Difference Vegetation Index (NDVI), Digital Elevation Model (DEM), blue reflectance (B), green reflectance (G), and two image texture metrics (Correlation, Entropy).
Figure 5. Confusion matrices for the (a) RF and (b) SVM models trained on the multi-seasonal data. Columns show the reference classes at the 181 field-verified validation points, and rows show the predicted classes. Row and column sums are also reported. The PA of each class is reported as a percentage below the class count, and the class UA in vertical text to the right of the class count.
Figure 6. Overlaid map of field-verified tree locations (colored circles) with the forest species classification result based on the Random Forest classifier using multi-seasonal data. The small inset map in the legend shows an overview of the zoomed area in the main map.
Table 1. UAV sensors’ characteristics.

| UAV | Sensor | Focal Length | Image Resolution | LiDAR: Single Returns | Central Band and Bandwidth |
|---|---|---|---|---|---|
| DJI Phantom 4 Multispectral | Multispectral | 5.74 mm | 1600 × 1300 (2.08 MP) | n.a. | R: 650 ± 16 nm; G: 560 ± 16 nm; B: 450 ± 16 nm; REdge: 730 ± 16 nm; NIR: 840 ± 26 nm |
| DJI Phantom 4 Multispectral | RGB | 5.74 mm | 1600 × 1300 (2.08 MP) | n.a. | n.a. |
| DJI Matrice 300 RTK | Zenmuse L1 LiDAR | n.a. | n.a. | 240,000 points/s | n.a. |

MP = megapixels; R = red; G = green; B = blue; REdge = red-edge; NIR = near-infrared; n.a. = not applicable.
Table 2. Tree species and number of field samples confirmed by high-precision GNSS.

| Class (Species) | Common Name | Class Label | Number |
|---|---|---|---|
| Abies sachalinensis | Sakhalin fir | As | 55 |
| Quercus crispula | Japanese oak | Qr | 46 |
| Kalopanax septemlobus | Castor aralia | Ks | 16 |
| Sorbus commixta | Japanese rowan | Sc | 15 |
| Betula ermanii | Russian rock birch | Be | 12 |
| Acer mono | Painted maple | Ac | 29 |
| Picea jezoensis | Sakhalin spruce | Pj | 8 |
| Total | | | 181 |
Table 3. Vegetation indices (VIs) adopted in this work and computed in Google Earth Engine (GEE).

| Vegetation Index (VI) | Formula | Reference |
|---|---|---|
| Normalized difference vegetation index (NDVI) | (NIR − R) / (NIR + R) | [26] |
| Normalized Difference Red-Edge Index (NDRE) | (NIR − RE) / (NIR + RE) | [27] |
| Green Chlorophyll Index (GCI) | NIR / G − 1 | [28] |
| Normalized Difference Red-Edge Green Index (NDEGE) | (RE − G) / (RE + G) | [29] |
Table 4. Hyperparameter combinations for each machine learning model, determined by grid search over reasonable values followed by 5-fold cross-validation for each combination.

| Model | Parameter | Autumn (October) | Winter (April) | Spring (May) | Summer (June) | Multi-Seasonal |
|---|---|---|---|---|---|---|
| SVM | Cost | 1 | 3 | 20 | 17 | 88 |
| SVM | Gamma | 30 | 25 | 30 | 27 | 12 |
| CART | Max leaf nodes | 75 | 75 | 75 | 75 | 50 |
| CART | Min. leaf per node | 1 | 1 | 1 | 1 | 1 |
| RF | Number of trees | 500 | 500 | 500 | 500 | 500 |
| RF | Variables per split | 3 | 3 | 3 | 3 | 6 |
Table 5. Overall accuracy (%) of tree species classifications for the Support Vector Machine (SVM), Classification And Regression Tree (CART), and Random Forest (RF) models, trained on individual seasons’ input data and on multi-seasonal data. The best result for each model is highlighted in bold.

| Model | Autumn (October) | Winter (April) | Spring (May) | Summer (June) | Multi-Seasonal (All) |
|---|---|---|---|---|---|
| SVM | 71.27 | 61.32 | 69.06 | 63.53 | **83.42** |
| CART | 62.43 | 63.53 | 67.95 | 56.90 | **75.13** |
| RF | 64.64 | 70.17 | 77.90 | 71.27 | **82.32** |
Table 6. Producer’s Accuracy (PA) and User’s Accuracy (UA) for all considered tree species, reported for each input data combination (i.e., data collection season) and the two model frameworks: Random Forest (RF) and Support Vector Machine (SVM). Values are reported as PA / UA in percent.

| Model | Classified Tree Species | Autumn (October 2021) | Winter (April 2022) | Spring (May 2022) | Summer (June 2022) | Multi-Seasonal |
|---|---|---|---|---|---|---|
| RF | Sakhalin fir | 78.18 / 92.59 | 78.18 / 95.56 | 78.18 / 86.00 | 50.91 / 90.32 | 83.64 / 93.88 |
| RF | Japanese oak | 84.78 / 69.64 | 80.43 / 59.68 | 84.78 / 79.59 | 82.61 / 74.51 | 91.30 / 84.00 |
| RF | Castor aralia | 75.00 / 63.16 | 50.00 / 80.00 | 68.75 / 84.62 | 81.25 / 65.00 | 75.00 / 63.16 |
| RF | Japanese rowan | 73.33 / 61.11 | 46.67 / 63.64 | 80.00 / 70.59 | 80.00 / 85.71 | 66.67 / 83.33 |
| RF | Russian rock birch | 33.33 / 57.14 | 83.33 / 76.92 | 66.67 / 47.06 | 75.00 / 69.23 | 75.00 / 69.23 |
| RF | Painted maple | 72.41 / 77.78 | 55.17 / 69.57 | 79.31 / 88.46 | 82.76 / 80.00 | 86.21 / 89.29 |
| RF | Sakhalin spruce | 62.50 / 71.43 | 75.00 / 54.55 | 62.50 / 62.50 | 62.50 / 55.56 | 62.50 / 71.43 |
| SVM | Sakhalin fir | 85.45 / 100.00 | 85.45 / 97.92 | 67.27 / 84.09 | 50.91 / 93.33 | 78.18 / 93.48 |
| SVM | Japanese oak | 82.61 / 67.86 | 76.09 / 45.45 | 80.43 / 68.52 | 78.26 / 65.45 | 91.30 / 73.68 |
| SVM | Castor aralia | 87.50 / 50.00 | 18.75 / 100.00 | 56.25 / 60.00 | 62.50 / 43.48 | 87.50 / 77.78 |
| SVM | Japanese rowan | 73.33 / 84.62 | 33.33 / 45.45 | 66.67 / 62.50 | 80.00 / 80.00 | 80.00 / 92.31 |
| SVM | Russian rock birch | 50.00 / 46.15 | 50.00 / 75.00 | 41.67 / 25.00 | 50.00 / 54.55 | 83.33 / 71.43 |
| SVM | Painted maple | 62.07 / 75.00 | 34.48 / 50.00 | 75.86 / 84.62 | 62.07 / 62.07 | 86.21 / 92.59 |
| SVM | Sakhalin spruce | 62.50 / 100.00 | 62.50 / 100.00 | 62.50 / 83.33 | 62.50 / 83.33 | 62.50 / 100.00 |
