Article

A Novel Method for Detecting and Delineating Coppice Trees in UAV Images to Monitor Tree Decline

1 Department of Photogrammetry and Remote Sensing, Faculty of Geodesy and Geomatics Engineering, K. N. Toosi University of Technology, Tehran P.O. Box 15433-19967, Iran
2 Department of Remote Sensing, University of Würzburg, Oswald-Külpe-Weg 86, 97074 Würzburg, Germany
3 Forest Research Division, Research Institute of Forests and Rangelands (RIFR), Tehran P.O. Box 14968-13111, Iran
* Author to whom correspondence should be addressed.
Remote Sens. 2022, 14(23), 5910; https://doi.org/10.3390/rs14235910
Submission received: 4 October 2022 / Revised: 16 November 2022 / Accepted: 21 November 2022 / Published: 22 November 2022

Abstract

Monitoring tree decline in arid and semi-arid zones requires methods that can provide up-to-date and accurate information on the health status of trees at the single-tree and sample plot levels. Unmanned Aerial Vehicles (UAVs) are considered cost-effective and efficient tools for studying tree structure and health at small scales, for which detecting and delineating tree crowns is the first step toward extracting subsequent information. However, detecting and delineating tree crowns in images of broadleaved tree cover remains a major challenge, which the frequent dominance of coppice structure in degraded semi-arid vegetation further exacerbates. Here, we present a new edge-detection-based method for delineating tree crowns, tailored to the features of oak trees in semi-arid coppice structures. The decline severity of individual stands can then be analyzed by extracting relevant information, such as texture, from the crown area. Although the method presented in this study is not fully automated, it returned high performances, including an F-score of 0.91. Associating the texture indices calculated within the canopy area with the phenotypic decline index revealed high correlations of the GLCM texture indices with tree decline at the tree level and hence a high potential for subsequent remote-sensing-assisted tree decline studies.

1. Introduction

Semi-arid tree covers play an essential role in protecting water and soil resources and provide multiple ecosystem services across fragile ecosystems [1]. However, tree decline, a multifactorial phenomenon that can eventually lead to complete tree dieback [2], is a serious threat to semi-arid tree cover. In semi-arid Iran, the oak forests of the Zagros Mountains have been facing massive tree decline during the last two decades. The structure of the Zagros forests has historically shifted from the seed to the coppice form due to ca. 50 centuries of extensive and continuous exploitation. Due to the presence of clusters of branches, crowns are commonly elongated and oval-shaped, as opposed to the original form of seed trees with circular crowns [3].
The role of habitat factors in the occurrence and expansion of tree decline can be quantified by establishing and monitoring fixed sample plots over different timespans, which can lead to decisions on auxiliary treatments to revitalize the declining trees [4]. These sample plots also provide reference data for modelling the decline phenomenon with satellite data via extrapolations from the tree to higher spatial levels. This calls for using remote sensing methods to monitor crown decline severity, since the first symptoms of tree decline commonly appear in the tree crowns [5]. When a severe decline occurs, obvious signs (such as low foliage density and numerous dead branches) gradually become apparent in the crown [6].
In field measurement, in addition to determining the position and degree of tree decline by examining the condition of the crown, structural measurements are also conducted for each tree in difficult and time-consuming field surveys. The lack of visibility on the crown surface is one of the main obstacles in field measurements, especially in broad-leaved trees with generally spherical or irregular crowns or multiple stems. One solution to overcome this problem is to use high-resolution remote sensing data, which can provide a full view of the crown’s upper surface.
Collecting data from the top surface of the crowns using UAV imagery can be considered a cost-effective method compared to field data collection. UAVs fly at lower altitudes than satellites or airplanes and can collect data with very high spatial resolution in the range of 2 to 10 cm pixel size [7]. They can offer a valuable tool in small-scale forest inventories to meet the growing need for more accurate geospatial data on demand [8]. UAVs are known to be an effective complement to other common remote sensing platforms due to their cost-effectiveness, safety, maneuverability, positioning accuracy, and high spatial resolution [9]. Among various UAV payloads, RGB cameras are inexpensive and allow for rapid and direct data interpretation [10]. These sensors collect data that are applied to extract information on tree height, canopy area, degree of canopy closure, and shape of the canopy, which are used for purposes such as analyzing forest and tree health status, assessing forest growth, creating maps of invasive alien species, studying the response of forests to climate change, and assessing their ecosystem services [11]. The rugged topography and difficulty of fieldwork in remote and mountainous areas like the Zagros forests have seriously hampered timely studies on tree decline. This can be solved by using UAV-based methods and combining the obtained information with photogrammetric image modelling and processing methods at a local level, where the results could provide inputs for further spatial extrapolations.
The first step in studying tree decline in UAV imagery is to identify individual trees and delineate their partly overlapping crowns to extract structural information for subsequent decline studies. Tree canopy delineation has been amongst the main challenges addressed since the beginning of the use of drones in forest inventory and analysis [12]. Due to the importance of tree-level forest variables, accurate detection of single trees and their delineation in UAV images is of particular importance. Tree crown delineation is generally a fundamental task in remote sensing for forestry and ecology [13], which is complicated in UAV imagery, as the very high spatial resolution of these images results in a high level of detail of the canopy structure and makes individual tree identification and canopy delineation difficult [14]. Although many studies have been conducted and various algorithms presented, their results have mostly been feasible for application to specific species or regions [13]; thus, using methods that are commonly available in image processing does not enable the delineation of crowns in other target areas, in particular, across coppice and multi-stem structures. Therefore, a method that fits the characteristics of the coppice structures across semi-arid zones is lacking. The next step in tree-based decline studies is to find information that can be extracted from the UAV imagery which can fully or partially describe oak decline and is compatible with that suggested by recent field-based silvicultural literature, e.g., [6].
We briefly review methods to detect and delineate tree crowns in UAV data, which are broadly divided into the two groups of conventional and machine learning methods. Conventional methods gained popularity due to their convenient data processing. The previously used conventional individual tree detection (ITD) algorithms mainly include two-step procedures involving tree detection and delineation [15]. Some of these methods use the canopy height model (CHM), while some are directly applied to point clouds [16]. Examples of algorithms used to find individual trees are local maxima filtering [17,18,19,20,21], image binarization [22], multiscale analysis [23], and template matching [24]. Furthermore, methods for delineating the crowns fall into three groups including valley-following [25], region-growing [26], and watershed segmentation [27]. Usually, the process of tree crown segmentation in these algorithms consists of initially determining points as the locations of the trees and then determining the boundaries of the tree crowns corresponding to these points [28]. In some cases, however, segmentation methods that do not need initial points have also been used [16].
Among the conventional approaches, the local maxima (LM) and marker-controlled watershed segmentation (MCWS) algorithms are the most common detection methods [12]. The LM algorithm is appropriate for trees like different coniferous species in which a point is visible as the brightest pixel in UAV images or as the highest point in CHM [29,30]. The highest points can be identified by using a moving window to filter the image [13]. This moving window can have fixed [31,32] or variable [33,34,35] sizes. Marker-controlled watershed segmentation (MCWS), which is an extension of the valley-following algorithm [12], requires determining the tree top [36]. The use of the MCWS algorithm for tree crown delineation has been described in numerous studies [37,38,39,40].
Nevertheless, structural differences amongst conifers and broad-leaved trees affect the performance of individual tree detection algorithms [41]. Many algorithms have been specifically developed for conifers [15,42], which often include completely isolated monopods that have a local maximum as the highest point of the tree. In such cases, the use of algorithms to find the local maximum has been reported with high accuracies [43], while typically several local maxima can be identified in a broad-leaf tree crown. One of the suggested solutions is CHM smoothing [44,45], which reduces height changes in the tree crown and increases the accuracy of tree identification. High-pass filters [46], Gaussian filters [47] and average filters [29] are examples of filters used to smooth CHM. Although high performances have been reported with simple filters in planted forests, one may note that strong or large filters will result in removing small trees or shrubs [16]. Various studies show that these methods are strongly influenced by the inherent characteristics of a forest stand such as stand density, species heterogeneity, and stand age [16].
The application of conventional tree delineation methods to broad-leaved trees with overlapping crowns has resulted in practically unusable accuracies [48]. Therefore, studies gradually shifted to methods based on machine learning [12]. With the development of machine learning methods, effective solutions have been found for many problems related to machine vision [49,50], with comparably higher performances than classical approaches [51]. Convolutional neural networks (CNNs) are considered amongst the most progressive deep learning approaches, especially for remote sensing applications in vegetation [52,53,54]. In recent years, many studies have used neural networks to detect broad-leaved trees. One of the main advantages of CNNs as compared with classical methods (e.g., the LM and MCWS algorithms) is the ability to extract information from multi-band images [12]. Although the use of neural networks is increasing, a review of the literature indicates that most of the research based on CNNs has been conducted in planted forests or orchards [55,56,57].
The latest development of CNNs is the Mask R-CNN [12]. Because of its high accuracy [58,59] and ability to simultaneously detect other tree features like tree height [60], it is reported to have the potential to become one of the most widely used tree canopy detection and mapping algorithms in the future [12]. However, it requires large volumes of very accurate, manually or semi-automatically specified training samples [61]. This is the main problem in complex forest structures, in which manually delineating tree canopies, even using high-spatial-resolution images, is a challenge. In many cases, it is infeasible to fully design and train a new neural network [62], since data labelling is time-consuming and model training is computationally intensive [63]. These methods are highly dependent on training data and show diverse relationships depending on the plot and tree species. A typical neural network has millions of parameters and is therefore at risk of being over-fitted when using the small data sets that are typically available for a given location [64]. In addition to CNNs, other machine learning methods such as clustering [14,65,66,67,68] and object-based image analysis (OBIA) classifications have also been used for ITD [69]. These methods are mainly used in planted forests, in which the segmentation stage is considered to be a complicated task [62].
Here, we deal with the problem of broad-leaved trees that mostly occur in coppice and multi-stem structures, i.e., the case in which the specific shape of crowns cannot be predicted and the crowns overlap. Therefore, it is neither possible to fit them to a special geometric shape nor to visually detect and delineate the coppice canopies. These problems create difficulties in providing training data as a serious requirement in machine learning algorithms and make these methods inefficient in mountainous semi-arid tree cover.
In short, neither classical methods nor machine learning methods are effective in delineating broad-leaved trees with coppice form and overlapping crowns. Conventional methods are infeasible due to their dependence on the structural features of coniferous trees, such as the existence of a local maximum in the crowns, while machine learning methods commonly fail due to their need for enormous amounts of training data. Therefore, this calls for a new perspective on solving this problem. Defining the problem of tree crown delineation as the determination of the edge of tree crowns entails the use of edge detection techniques in a pixel-based context. The high resolution of UAV imagery results in a vast amount of detail and thereby introduces challenges for common pixel-based methods [69], which is presumably responsible for the dearth of pixel-based methods for delineating trees in the relevant literature. Nevertheless, the details displayed in UAV images contain useful information, like differences amongst crown pixels located inside and at the edge of crowns. To our knowledge, there is no comparable study in which edge detection was exclusively integrated to delineate trees.
Edge detection can be defined as a method of classifying image pixels into edge and non-edge categories. When several coppices are merged in a tree stand, their separation requires particular attention to details inside the crowns. Therefore, methods based on pixel classification, i.e., edge detection methods, can be effective in delineating pixels that form the edge of a canopy area. Dollár and Zitnick (2014) applied an edge detection method called structured forest to complex natural image scenes. The structured forest edge detector has been frequently used due to its high detection and localization accuracy as well as its robustness to image noise [70]. In the structured forest algorithm, multiscale search mechanisms and edge sharpening are also used in addition to a set of decision trees for edge detection [71]. Running this algorithm on UAV imagery to detect edges associated with tree canopies showed that, despite its robustness to noise, structured forest edge detection classifies numerous details within the canopy as edges.
In our study, we present a method for tree canopy detection that replaces the classifier within the structured forest edge detection method (i.e., the decision trees) with a support vector machine (SVM) and uses additional information layers. Using an SVM offers the possibility to rank the detected edges based on the score parameter obtained from the classifier [72]. Here, the details detected by the algorithm can be ranked and, in turn, stronger edges can be considered as boundaries of the main crown, while weaker edges represent details within the crown group. Converting the generated raster edge map to a vector structure, followed by applying topology rules that correspond to a tree crown [73], completes the process. Our suggested method for crown delineation comprises two novel features: first, it delineates tree crowns using edge detection as a pixel-based approach; second, it uses the score parameter to distinguish between stronger and weaker edges, which is particularly applicable to broad-leaved tree elements with coppice structures.
Following crown delineation, information on tree decline status can be obtained, in addition to structural parameters (e.g., height and crown area) which can be directly extracted from UAV imagery. Texture is one of the key elements of human visual perception and is used in several machine vision systems. Multiple studies have demonstrated the potential of texture to characterize very high spatial resolution canopy images [74]. Furthermore, UAV data enable a detailed texture analysis due to their high spatial resolution [74]. Different methods for extracting texture features have been developed so far [75], amongst which two have been more frequently used in ecological remote sensing: the gray-level co-occurrence matrix, also known as the spatial gray-level dependence matrix (GLCM) [76], and Fourier transform textural ordination (FOTO) [74]. In forest monitoring, these traits have been applied to identify individual trees [15], study forest structure [77], and monitor tree health and growth [74,78,79]. Here, we explore the potential of texture indicators to describe oak decline at the canopy level. To this end, we look at the relationship between the texture indices extracted from the UAV data and the phenotypic decline index (PDI), which describes decline as a continuous variable. To our knowledge, this is the first study to treat the decline phenomenon as a continuous entity in remote sensing, whereas hitherto remote sensing studies utilized classes with crisply defined limits to represent tree decline severity. Consequently, the following objectives were explored in this study:
  • Delineating broad-leaved oak tree crowns, mostly in coppice form, using a new edge detection method.
  • Retrieving the height and area of the delineated canopies.
  • Assessing the correlation of textural information with tree decline severity.
We first present our suggested crown delineation algorithm, followed by discussing its accuracy. We then examine the correlation between multiple texture indices and the tree decline severity. The results of this study are expected to provide multiple lessons and implications for semi-arid tree cover monitoring, especially across coppice-dominated stand structures, and most particularly to serve subsequent studies on tree decline.

2. Materials and Methods

2.1. Study Sites and Data

The study sites comprised 16 plots of ca. 1 ha each located along the latitudinal gradient of the Zagros Mountains in Iran, including 4 plots in the northern zone, 6 plots in the central zone, and 6 plots in the southern zone. The Zagros Mountains stretch over 5 million ha of sparse and semi-sparse woody vegetation [80], accounting for approx. 54% of Iran's total forest cover [81]. A semi-arid temperate climate dominates, with severe winters and extreme summer aridity. The mean annual precipitation ranges from 400 to 800 mm, while the minimum temperature often drops below −25 °C in winter. The stands are dominated by coppice structures as a result of degradation caused by the long-term presence of human activities.
These forests are known as rich sources of biodiversity in fauna and flora and of various non-wood forest products. The main woody species include Quercus brantii Lindl., Q. infectoria G. Olivier., Q. libani Olivier., Pistacia atlantica Desf., Crataegus sp., and Pyrus sp. [82], with a high dominance of up to ca. 70% for Q. brantii across the central and southern Zagros sub-zones [83]. The woody species may occur in either seed or coppice forms depending on multiple factors (topography, altitude, traditional logging, cattle grazing, and understory farming).

2.1.1. Field Data

The field data were collected during 2019–2020 within the framework of provincial sub-projects of “the National Zagros Forest Monitoring Plan” coordinated by the Research Institute of Forests and Rangelands (RIFR) of Iran. Data were divided into two categories: (1) control points required to process UAV images and (2) tree structural attributes and tree decline data. The control points included GPS-measured latitudes, longitudes, and altitudes. The position of each of these points was targeted on the ground and their coordinates were measured using the real-time kinematic (RTK) method for the northern and central plots and the post-processing kinematic (PPK) method for the southern plots. We used two GPS receivers, a Trimble 5700 and 5800, and a Zephyr Geodetic antenna. For the PPK method, the recorded points were post-processed using geodetic fixed reference points within the Shamim National Platform (https://shamim.ssaa.ir/sbc, accessed on 20 November 2020). The locations of the control points in each plot were selected to cover the surroundings and the centroid of each plot.
The tree decline data consisted of single-tree-based positions and the degrees of decline. The decline degrees were categorized during fieldwork by discrete numbers ranging from 1 to 4 as well as a category of no visible decline according to the amount of crown dieback. Using these discrete classes and the structural information collected during fieldwork for the reference trees, the PDI was calculated, inspired by the method provided by Finch et al. (2021). The method is available via the R package pdi [84]. PDI is a continuous measure of overall decline severity by scoring trees between 0 and 1. More severely declining oak trees have a score closer to 1 [6]. Details on plot-based data are shown in Table 1. Furthermore, varying imaging hours in each plot inherently resulted in varying shadow conditions depending on the plot (Table 1). Each plot’s level of shadow was visually categorized into the four groups of no shadow (25), low shadow (50), high shadow (75), and very high shadow (100).

2.1.2. UAV-RGB Data

The RGB images were acquired using a consumer-grade DJI Phantom-4 Pro quadcopter carrying an onboard RGB camera and a fixed gimbal (DJI 2016). The device included a 3-axis stabilization gimbal, a 1’’ CMOS sensor camera, and an FOV 84° 8.8 mm/24 mm lens. The full list of technical specifications can be found at https://www.dji.com/phantom-4-pro/info (accessed on 20 November 2022). Flights were designed as double-grid networks with 80% horizontal/vertical overlaps within an iOS version of Pix4D Capture (https://support.pix4d.com/hc/en-us/articles/204010419-iOS-Pix4Dcapture-Manual-and-Settings, accessed on 20 November 2022) installed on a 2018 iPad tablet and were conducted during September 2019 for the northern and central plots and November 2019 for the southern plots with a resolution of 5472 × 3648 pixels at 100 m flight altitude. This resulted in ca. 125 image tiles per plot, which were further used for subsequent analysis.
The images were processed using the structure-from-motion software Agisoft Metashape Pro 1.7.0 [85] to derive an ortho-mosaic of each plot with the original 7 mm spatial resolution. In addition to the ortho-mosaics, canopy height models (CHMs) were derived as the differences between the digital surface models (DSMs) and digital terrain models (DTMs) calculated from processing the UAV images in each plot. The CHMs were applied to calculate the region occupied by the trees.
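As a point of reference for this step, a minimal sketch of the CHM derivation is given below, assuming the DSM and DTM were exported as co-registered GeoTIFFs on the same grid; the file names are illustrative.

```python
# Minimal sketch: CHM = DSM - DTM, assuming co-registered GeoTIFF rasters
# exported from the photogrammetric processing; file names are illustrative.
import numpy as np
import rasterio

with rasterio.open("plot_dsm.tif") as dsm_src, rasterio.open("plot_dtm.tif") as dtm_src:
    dsm = dsm_src.read(1).astype("float32")
    dtm = dtm_src.read(1).astype("float32")
    profile = dsm_src.profile

chm = dsm - dtm
chm[chm < 0] = 0  # clamp small negative residuals from interpolation noise

profile.update(dtype="float32", count=1)
with rasterio.open("plot_chm.tif", "w", **profile) as dst:
    dst.write(chm, 1)
```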
A set of possible RGB-based vegetation indices (VIs) (Table 2) were computed using the spectral red (R), green (G), and blue (B) bands from the ortho-mosaics on a pixel basis. These included the green leaf index (GLI), visible atmospherically resistant index (VARI), normalized green–red difference index (NGRDI), red–green–blue vegetation index (RGBVI), and excess green index (EXG), which were applied together with the original red, green, and blue bands.
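For illustration, the sketch below computes these indices per pixel with numpy; the band math follows the commonly used formulations of these indices, which may differ in detail from the exact definitions in Table 2.

```python
# Hedged sketch of per-pixel RGB vegetation indices; formulations are the
# commonly used ones and may differ slightly from Table 2.
import numpy as np

def rgb_indices(R, G, B, eps=1e-6):
    """R, G, B: float arrays scaled to [0, 1]; eps avoids division by zero."""
    return {
        "GLI":   (2 * G - R - B) / (2 * G + R + B + eps),  # green leaf index
        "VARI":  (G - R) / (G + R - B + eps),              # visible atmospherically resistant index
        "NGRDI": (G - R) / (G + R + eps),                  # normalized green-red difference index
        "RGBVI": (G**2 - R * B) / (G**2 + R * B + eps),    # red-green-blue vegetation index
        "EXG":   2 * G - R - B,                            # excess green index
    }
```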
In addition to the RGB indices, a series of texture indices including mean, variance, contrast, dissimilarity, entropy, homogeneity, and second-order moment were calculated using the glcm library in R (Table 3). In addition, two Fourier-transform-based indicators were calculated (see Section 2.2.8).

2.2. Methods

We suggest a novel approach based on edge detection inspired by the structured forest algorithm to delineate crowns in which we alternatively used the SVM classifier instead of the decision trees in the structured forest algorithm. In this method, multiscale search and edge sharpening were used to improve the edges specified by the classifier, following which the raster image containing the edges was converted to polygons defining the crowns.

2.2.1. Algorithm Inputs

As previously mentioned in Section 2.1.2, the information extracted from the UAV image included the CHM, orthophoto, RGB VIs, and texture indices. The best feature sets were selected using the stepwise feature selection method based on the Akaike information criterion (AIC) [93]. These features included the red, green, and blue bands of the orthophoto image, the single-band CHM raster, and the EXG index raster. Converting the CHM to a binary raster and then to a polygon layer restricts the search to the area in which crowns are expected. To achieve the exact boundaries of the canopy, it is necessary to search a larger area; thus, a 0.2 m buffer was created around the canopy polygon layer, and the orthophoto and EXG images were cropped to it to reduce the search space. This also helped reduce shadows. Then, the edges were computed on the two cropped images by the Sobel and Canny methods. In the resulting rasters, the pixel values representing the edges differed more from other pixels. Eventually, the feature vector of each pixel included the normalized values of the orthophoto and EXG rasters and the Sobel and Canny masks together with the CHM, which formed the inputs for the subsequent SVM classifier.
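A minimal sketch of this feature assembly is given below, assuming OpenCV for the Sobel and Canny operators; the Canny thresholds and array names are illustrative rather than the exact values used in the study.

```python
# Sketch of the per-pixel feature stack: orthophoto bands, EXG, Sobel and
# Canny edge responses, and the CHM, min-max normalized per layer.
import cv2
import numpy as np

def build_feature_stack(rgb, exg, chm):
    """rgb: HxWx3 uint8 orthophoto crop; exg, chm: HxW float rasters."""
    gray = cv2.cvtColor(rgb, cv2.COLOR_RGB2GRAY)
    sobel = cv2.magnitude(
        cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3),
        cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3),
    )
    canny = cv2.Canny(gray, 50, 150).astype("float32")  # thresholds are illustrative

    def norm(a):
        a = a.astype("float32")
        return (a - a.min()) / (a.max() - a.min() + 1e-6)

    layers = [norm(rgb[..., i]) for i in range(3)]
    layers += [norm(exg), norm(sobel), norm(canny), norm(chm)]
    return np.stack(layers, axis=-1).reshape(-1, len(layers))  # (H*W, 7)
```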

2.2.2. Classification

We used an SVM with a radial basis function (RBF) kernel, which was already used by Gomez-Moreno [94] for edge detection. The kernel takes the following form:
$$k(x, y) = \exp\left(-\frac{(x - y)^2}{2\sigma^2}\right),$$
where the parameter σ should be chosen to determine the level of simplification applied to the input data. The more training data there are, the less simplification is required in the SVM. A large σ indicates greater simplification, while a small σ indicates less simplification. Another parameter to be adjusted is C, the cost of misclassification errors due to improper data separation; the smaller this parameter, the more misclassification errors are tolerated. The optimal parameters C and σ were determined with the scikit-learn 1.0.2 module in Python. To train the algorithm, 4000 pixels with edge labels and 4000 non-edge pixels were manually selected from different forest sample plots. Negative samples were selected both inside and outside the crowns. The trained algorithm was separately applied to each plot. Because the amount of shadow and coppice in each plot depends on the lighting conditions, different thresholds were applied to the score parameter to separate strong and weak edges. The score values generally ranged between 0 and 2, but the thresholds for each plot varied from 1.34 to 1.51. Then, 500 pixels from each plot were considered for evaluation. To evaluate the SVM performance, the receiver operating characteristic (ROC) curve was separately drawn for each plot. Applying the SVM to each pixel resulted in a value for the label and a value for the score. As with the structured forest method, a multiscale detection process and edge sharpening were performed to improve the results, after which we added a new step based on the score parameter.
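The sketch below illustrates this stage with scikit-learn, assuming the per-pixel feature matrix from the previous step; the parameter grids and synthetic training arrays are placeholders, and the classifier's decision_function stands in for the score parameter described above.

```python
# Hedged sketch of the RBF-kernel SVM stage with scikit-learn; the grids and
# random arrays are placeholders for the manually labelled pixel features.
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_train = rng.normal(size=(8000, 7))            # stand-in for 4000 edge + 4000 non-edge pixels
y_train = (rng.random(8000) > 0.5).astype(int)  # stand-in edge / non-edge labels

grid = GridSearchCV(
    SVC(kernel="rbf"),
    param_grid={"C": [0.1, 1, 10, 100], "gamma": [0.01, 0.1, 1]},  # gamma = 1 / (2 * sigma^2)
    cv=5,
)
grid.fit(X_train, y_train)
svm = grid.best_estimator_

X_plot = rng.normal(size=(1000, 7))     # stand-in for the pixels of one plot
labels = svm.predict(X_plot)            # edge / non-edge label per pixel
scores = svm.decision_function(X_plot)  # signed distance to the class boundary
```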

2.2.3. Multiscale Detection

As the first step to improve the results, a multiscale version of the edge detection algorithm [95] was implemented. Given the input image I, the edge detection algorithm was run at three levels of spatial resolution, including the original image and two coarser levels derived from the original image by factors of 2. The two coarser levels of the image pyramid were then resampled back to the size of the original image. The pixels that received the edge label in all three images were then considered as edges. As shown in the structured forest process [71], this improved the edge quality.
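A sketch of this pyramid logic is shown below, assuming the two coarser levels are successive halvings of the original resolution; detect_edges() stands in for the trained SVM pipeline, and OpenCV handles the resampling.

```python
# Sketch of multiscale detection: detect edges at the original resolution and
# at two coarser pyramid levels, then keep pixels labelled edge at all scales.
import cv2
import numpy as np

def multiscale_edges(image, detect_edges):
    """detect_edges: callable returning a binary edge mask for an image."""
    h, w = image.shape[:2]
    masks = []
    for factor in (1, 2, 4):  # original plus two coarser levels
        scaled = cv2.resize(image, (w // factor, h // factor))
        mask = detect_edges(scaled).astype("uint8")
        masks.append(cv2.resize(mask, (w, h), interpolation=cv2.INTER_NEAREST))
    return np.logical_and.reduce(masks)  # edge only if detected at every scale
```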

2.2.4. Edge Sharpening

Following the multiscale edge detection, sporadic, scattered, and irregular edges were still observed. At this stage, edge sharpening according to the structured forest method was performed. First, the points that were far from other points were removed. Then, wide edges were narrowed through morphological opening and closing operations [71]. This was performed using the open-source code provided at https://github.com/TArong/Fast-edge-detection-using-structured-forests (accessed on 20 November 2022).
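The sketch below shows a comparable cleanup under the same idea, assuming OpenCV; the neighbour test and the 3 × 3 kernel are illustrative choices, not the exact parameters of the cited implementation.

```python
# Hedged sketch of edge sharpening: drop isolated edge pixels, then narrow
# wide edges with morphological opening and closing.
import cv2
import numpy as np

def sharpen_edges(edge_mask):
    mask = edge_mask.astype("uint8")
    # neighbour count per pixel (3x3 sum minus the centre pixel itself)
    neighbours = cv2.filter2D(mask, -1, np.ones((3, 3), np.float32)) - mask
    mask[neighbours == 0] = 0  # remove isolated points
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    return mask
```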

2.2.5. Score Parameter Thresholding

The score parameter of the SVM classifier gives the distance of a sample from the boundary between the two classes. After implementing the edge detection algorithm and improving the results, we observed that some details within the canopy were identified as edges in addition to the edges around the canopy. When creating an image of the score values for the edges, low score values were observed for the inner details of the crowns. Therefore, we used a novel approach in which a threshold value was experimentally set for each plot and edges with lower scores were removed. This step allows overlapping crowns to be separated, where the corresponding threshold led to a correct determination of the borders of coppice crowns.
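In code, this step reduces to a per-plot comparison against the threshold, as in the minimal sketch below.

```python
# Minimal sketch of the per-plot score thresholding; the threshold varied
# between 1.34 and 1.51 per plot in this study.
def split_edges_by_score(edge_mask, score_map, threshold):
    strong = edge_mask & (score_map >= threshold)  # candidate crown boundaries
    weak = edge_mask & (score_map < threshold)     # inner crown detail, discarded
    return strong, weak
```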

2.2.6. Crown Boundaries

The output of the above steps is a raster comprising the boundaries of the detected crowns as irregular lines. As a set of topology rules is required to convert these lines into polygons that define the boundaries of the crowns, we first applied the local maxima algorithm to the generated CHM. We adjusted the search window size according to the dimensions of the smallest visible closed environment in each plot. Then, the following rules were used to convert edge lines into closed polygons (Figure 1); a geometric sketch of their application follows the list:
  • The polygon must be the smallest convex polygon that can be fitted.
  • The polygon area must exceed a minimum-area threshold, which is experimentally determined for each plot.
  • There must be at least one local maximum within the polygon.
  • The sides should be as smooth as possible.
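The sketch below illustrates how these rules can be applied to one candidate crown, assuming shapely for the geometric tests; the convex hull realizes the first rule, and the simplification tolerance is an illustrative stand-in for the smoothing.

```python
# Hedged sketch of the topology rules for one candidate crown, using shapely.
from shapely.geometry import MultiPoint, Point

def edge_points_to_crown(edge_points, local_maxima, min_area, smooth_tol=0.05):
    """edge_points, local_maxima: lists of (x, y) map coordinates."""
    hull = MultiPoint(edge_points).convex_hull  # rule 1: smallest convex polygon
    if hull.area < min_area:                    # rule 2: plot-specific minimum area
        return None
    if not any(hull.contains(Point(p)) for p in local_maxima):
        return None                             # rule 3: a local maximum inside
    return hull.simplify(smooth_tol)            # rule 4: smooth the sides
```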

2.2.7. Tree Height and Crown Area

The height of each identified tree was taken as the highest point of the CHM within its crown area, and the crown area as the area of the polygon that delineates the tree's edge, both extracted directly from the UAV data.
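A sketch of this extraction is shown below, assuming rasterio and shapely; the masking call and names are illustrative.

```python
# Hedged sketch: tree height as the maximum CHM value inside the crown
# polygon, crown area as the polygon area (m^2 in a projected CRS).
import numpy as np
import rasterio
from rasterio.mask import mask as raster_mask

def tree_attributes(chm_path, crown_polygon):
    """crown_polygon: shapely polygon in the CHM's coordinate system."""
    with rasterio.open(chm_path) as src:
        clipped, _ = raster_mask(src, [crown_polygon], crop=True, nodata=np.nan)
    height = float(np.nanmax(clipped))  # highest CHM point within the crown
    area = crown_polygon.area
    return height, area
```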

2.2.8. Texture Analysis

Whereas UAV-RGB images do not provide a wide range of spectral information, they contain considerable structural crown information such as crown cavities, the visibility of branches, and changes in the color of the leaves (changes in intensity recorded on the surface of the crown) due to their high spatial resolution. We accessed this information via texture indicators.
The GLCM indices are second-order statistical features [96]. The co-occurrence matrix was computed in the four directions of 0, 45, 90, and 135 degrees for a pixel distance of 1 [97]. Then, the mean, variance, contrast, dissimilarity, entropy, homogeneity, and second-order moment were extracted from each matrix. The contrast, the entropy, and the homogeneity indicate the clarity of the texture, the uniformity of the brightness distribution, and the degree of similarity, respectively. The dissimilarity indicates the degree of linear dependence of the neighboring pixels, and, finally, the second-order moment reveals all image edges.
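The sketch below reproduces this computation with scikit-image as a stand-in for the R glcm package used in the study; the entropy is derived directly from the normalized matrix, and the mean and variance can be obtained from it analogously.

```python
# Hedged sketch of the GLCM indices: distance 1 at 0/45/90/135 degrees,
# averaged over the four directions (scikit-image assumed).
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_indices(gray_crown):
    """gray_crown: 2-D uint8 image of a single crown."""
    angles = [0, np.pi / 4, np.pi / 2, 3 * np.pi / 4]
    glcm = graycomatrix(gray_crown, distances=[1], angles=angles,
                        levels=256, symmetric=True, normed=True)
    out = {prop: float(graycoprops(glcm, prop).mean())
           for prop in ("contrast", "dissimilarity", "homogeneity", "ASM")}
    p = glcm.astype("float64")
    logp = np.zeros_like(p)
    np.log2(p, out=logp, where=p > 0)
    out["entropy"] = float(-(p * logp).sum() / p.shape[-1])  # mean over directions
    return out
```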
In addition to the GLCM texture indices, the texture indices were also calculated based on FOTO. Here, a Fourier transform was used to decompose image I of dimensions W × H into its frequency components. This transformation can be defined as a sum of orthogonal basis functions as follows:
$$F(u, v) = \sum_{n} \sum_{m} I(n, m) \exp\left(-j 2\pi \left(\frac{un}{W} + \frac{vm}{H}\right)\right),$$
where F is the Fourier transform, u and v are the horizontal and vertical frequencies, and the real and imaginary parts can be extracted as magnitude and phase [74]. To extract the texture feature, the principle that edges show a low frequency in one direction, represented by a line in Fourier space, is used. For zero frequencies, the Fourier transform shows the average of the image. The result of the Fourier transformation is usually plotted as a spectrum that corresponds to the modulus of the complex amplitude values. The further we get from the center of the spectrum, the higher the observed frequency becomes. Thus, a smooth texture shows high values around the center (low frequencies), while a rough texture shows values scattered around the border (high frequencies). The idea of feature extraction from the image is based on considering the Fourier transform as a weighted combination of sinusoids. We used the foto library in R to calculate the texture based on the Fourier transform. To calculate the indices, the square enclosed in each crown was introduced into the algorithm's mask window, and a Fourier transform was performed on each window to compute a two-dimensional periodogram, from which radial (r-) spectra were extracted to provide simplified texture features (FOTO_PCA1, FOTO_PCA2).
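A sketch of the core FOTO computation is given below, with a plain numpy FFT as a stand-in for the R package; stacking the resulting r-spectra across crowns and applying a PCA (e.g., sklearn.decomposition.PCA) would yield the two ordination axes (FOTO_PCA1, FOTO_PCA2) used in the correlation analysis below.

```python
# Hedged sketch of a FOTO-style radial (r-) spectrum from one crown window.
import numpy as np

def radial_spectrum(window, n_bins=20):
    """window: square 2-D array enclosed in a crown."""
    f = np.fft.fftshift(np.fft.fft2(window - window.mean()))
    power = np.abs(f) ** 2                            # 2-D periodogram
    h, w = power.shape
    y, x = np.indices(power.shape)
    r = np.hypot(x - w // 2, y - h // 2).astype(int)  # radial frequency per pixel
    # average the power over rings of equal radial frequency
    totals = np.bincount(r.ravel(), weights=power.ravel(), minlength=n_bins)
    counts = np.bincount(r.ravel(), minlength=n_bins)
    return (totals / np.maximum(counts, 1))[:n_bins]
```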
Finally, Pearson’s correlation coefficient was considered to examine the linear correlation between each index and the PDI calculated for the crowns.

2.3. Evaluation

To evaluate the presented method, first, the ability of the classifier to distinguish between edge and non-edge pixels was assessed through the ROC curve, for which the area under the curve was calculated. The accuracy of tree detection was determined by comparing the number of isolated crowns with the trees identified in the field data. The indicators were the F-score, the tree detection rate or recall (rc), and the accuracy of detected trees or precision (pr). These criteria used true positive (TP), false negative (FN), and false positive (FP) values representing perfect segmentation, under-segmentation, and over-segmentation, respectively [29,98]. The following equations were used to calculate these criteria [98,99]:
$$pr = \frac{TP}{TP + FP},$$
$$rc = \frac{TP}{TP + FN},$$
$$F\text{-score} = \frac{2 \times rc \times pr}{rc + pr}.$$
To evaluate the accuracy of the tree height and the area of the delineated crowns, the calculated values were compared to reference data, i.e., the field measurements and the crown boundaries manually delineated in the images. The root-mean-square error (RMSE) was used as the diagnostic measure.

3. Results

3.1. Individual Tree Detection and Delineation

The suggested edge detection algorithm was applied separately to each plot. Figure 2 displays the outcome of applying the algorithm. Using 500 points in each plot, we analyzed the classification algorithm's performance prior to applying the topology rules and transforming the raster edges into crown polygons; in other words, we evaluated the classification result (Figure 2b). For each plot, the ROC curve was drawn to evaluate the performance.
As shown in Figure 3, 10 of the 16 plots were associated with an area under the curve > 0.8, while 6 plots showed values between 0.5 and 0.6. This included 3 of the 6 central plots, 2 of the 6 southern plots, and 1 of the 4 northern plots.
The accuracy following the edge sharpening improvements is presented in Table 4. Our suggested method correctly detected 1671 trees (87%) out of a total of 1931 trees. A total of 170 trees were not recognized across all plots; these included groups of two or more trees whose overlapping crowns were falsely recognized as one tree. In addition, 90 trees were falsely placed where there was no tree; these included crowns that were incorrectly split into multiple crowns. The F-score showed values between 0.85 and 0.99 for the different plots, corresponding to 0.93 over all plots. The precision score ranged between 0.88 and 1, except for the sixth central plot (0.77). The mean precision score for all plots was 0.95, while the mean recall value for all plots was lower (0.91).

3.2. The Factors Affecting the Performance of the Algorithm

Here, we looked at how shadows, the severity of the decline, and the quantity of coppice trees affected the algorithm's output. We visually classified each plot's degree of shadow into one of the groups of no shadow (25), low shadow (50), high shadow (75), and very high shadow (100). The severity of the decline in each plot was calculated by averaging the PDI of the trees and expressing it as a percentage. The ratio of coppice trees to all the trees in each plot was used to calculate the percentage of coppice trees. The percentage of trees that were successfully identified in each plot was plotted against these parameters (Figure 4). No particular pattern was observed either between the proportion of coppice forest and the proportion of detected trees or between the severity of decline and the proportion of correctly detected trees. A striking observation was related to the shadow. Although no precise pattern was observed, the percentage of correctly detected trees generally decreased as the amount of shadow in the plots increased. When separately looking at the northern, central, and southern zones, the northern and southern plots maintained the pattern mentioned above, but the central plots were an exception (C3 was more shadowed than C6 and C5, but had a higher percentage of correctly detected trees).

3.3. Comparison of the Suggested Method with the Common Methods of Crown Delineation

Considering the watershed segmentation-based algorithms as the most widely used methods for crown delineation (see Section 1), we compared the performance of our suggested algorithm with MCWS as a conventional method with reported high accuracies for delineating oak trees [13]. Figure 5 displays the results of the comparison.
The results suggested that the precision of both methods was equal at 0.95, but the suggested method performed better in terms of recall, F-score, and correctly detected trees. For the proposed method and MCWS, the recall values were 0.91 and 0.72, respectively. The percentages of correctly detected trees for the proposed method and MCWS were 87% and 68%, respectively, while the F-score values for the proposed method and MCWS were 0.93 and 0.82, respectively.

3.4. Tree Height and Crown Area

The results revealed the lowest R2 value for plot C1 (0.41, Figure 6), with an RMSE of 1.69 (Table 5). The highest R2 was shown for plot C5 (0.97), with an RMSE of 0.3. All in all, only 3 of the 16 plots returned R2 values < 0.5. Due to slightly different measurement accuracies in the field data, the RMSE was also calculated to check the accuracy of the tree height (Table 5). RMSE values < 1 were shown for 7 plots, while values between 1 and 2 were shown for 5 plots, and values > 2 were returned for 4 plots. The mean RMSE was 1.21. The calculated R2 and RMSE values for the area of the crowns ranged from 0.96 to 0.99 and from 0.04 to 0.89, respectively (Figure 7, Table 6).

3.5. Texture Analysis

We investigated the correlation between the GLCM texture indices and the PDI. These indicators included variance, mean, contrast, homogeneity, second moment, dissimilarity, and entropy. Entropy and dissimilarity showed the highest correlations with the PDI (Figure 8 and Figure 9). The correlations between the other indicators and the PDI are summarized in the Supplementary Materials (Figures S2–S6). The correlation coefficient calculated for the dissimilarity index showed values between 0.83 and 0.97 for the different plots. Furthermore, 12 of the 16 plots showed correlations > 0.9 with the PDI. All in all, an overall value of 0.92 was returned across all plots (Figure S1).
The correlation coefficient between the entropy and the PDI for different plots ranged between 0.57 and 0.95. The value was > 0.8 on 14 out of 16 plots, whereas lower values were only observed on two plots. The total correlation between the entropy index and PDI across all plots was 0.81 (Figure S1).
Among the two FOTO indices, FOTO_PCA1 showed the higher correlation with the PDI, as shown in Figure 10. The FOTO index generally showed a lower correlation with the PDI than the GLCM indices, with correlation values ranging between 0.13 and 0.82. The overall correlation between the FOTO index and the PDI was 0.49 (Figure S1), which was much lower than that calculated for the GLCM indices.

4. Discussion

The prevalence of massive oak decline across the fragile and erosion-prone Zagros forests follows a general tendency worldwide across semi-arid and arid vegetation zones and thus calls for intensified studies of both its status quo and its dynamics. Studying tree decline initially entails tree crown delineation in the applied aerial imagery. We presented a new method based on edge detection to delineate the crowns of scattered, mostly coppice-formed broad-leaved trees. The juxtaposition of coppice and seed trees and their overlapping crowns are the main challenges for individual tree detection and delineation in such remote areas. Following this, we examined the ability to interpret oak decline using the texture indices extracted from high-spatial-resolution UAV imagery.

4.1. Comparison of the Suggested Method with the Common Methods of Crown Delineation

Previous studies attempting to delineate broadleaved trees faced multiple challenges related to the canopy structure, which has rendered the conventional methods of identifying the canopy of deciduous trees ineffective [12]. This resulted in a shift to either machine learning or optimizing traditional methods for broadleaves.
Chandrasekaran et al. (2022) recently optimized the MCWS algorithm for oak and walnut trees and reported an F-score of 0.91 for oak and 0.95 for walnut trees, which, to our knowledge, is the highest accuracy reported so far in the available literature. Thus, we applied their algorithm to our data and compared its results to those of our new algorithm (Section 3.3). Although the precision values for both algorithms were almost identical, the recall values were lower for the MCWS algorithm, i.e., it omitted more trees (recorded more FN values). The trees omitted by this algorithm belong to sets of crowns that are placed next to each other, indicating that MCWS has less ability to separate clumped crowns. Considering the F-score as the overall accuracy, our proposed algorithm was generally more accurate than MCWS (F-score = 0.93 for the proposed algorithm vs. 0.82 for MCWS), i.e., more trees were correctly detected by the proposed algorithm. It correctly recognized 87% of the trees, compared to MCWS, which correctly recognized 68% of the trees.

4.2. Investigating the Factors Affecting the Performance of the Algorithm

In our proposed algorithm, a threshold was applied to the classifier's score parameter to separate the edges related to the inner details of the crowns from the actual crown edges. This threshold was experimentally and separately determined for each plot, which prevents the algorithm from being fully automatic. The difference in tree growth form (coppice or seed), the degree of decline, the severity of damage to the crowns, and the level of shadow in each image are the factors affecting this threshold. To examine whether the threshold value could be set automatically, we surveyed the effects of shadows, decline severity, and the number of coppice trees on the performance of the algorithm in Section 3.2. As shown in the results, the analysis revealed no systematic pattern between the proportion of coppice forest and the proportion of detected trees, which was also the case when examining the severity of decline and the proportion of correctly detected trees. The main reason could be the manual determination of the threshold to separate coppice crowns.
However, the shadow effect became more obvious when the points within the plots of the northern, central, and southern zones were separately examined. Various algorithms have been presented to reduce the effects of shadows in aerial photographs [100]. However, an appropriate image acquisition time can still be considered the most appropriate way to reduce the effects of shadows and consequently improve the results. Nevertheless, one may note that the need for a fully automated algorithm that accounts for all factors affecting the accuracy and the required thresholds across semi-arid natural broad-leaved tree groups is quite obvious and should be considered in future studies.

4.3. The Accuracy of Tree Height

Although our study provided fairly accurate estimates of tree height, previous studies have shown higher accuracies in broad-leaved forests [17,101]. The decrease in accuracy in broad-leaved stands using this method may be due to the inherent uncertainty in tree height measurement [102,103]. Field measurements of crowns are laborious and subject to human error, as it is difficult to measure the height of trees with overlapping crowns [13]. In addition, the treetop position estimated in UAV data may not correspond to the measured tree height on the ground [48]. For ground measurements, the highest point along the tree trunk is usually considered for the height measurement, while in the UAV images the highest point of the crown-level CHM is considered as the tree height. Some studies used LiDAR data as a reference [13,104], which is clearly associated with lower uncertainties than ground-measured tree heights, but this was beyond the scope of our study.

4.4. The Potential of Canopy Texture to Assess Tree Decline Severity

In this paper, we showed that canopy texture indices extracted from high-spatial-resolution UAV-RGB data can be used to describe the decline phenomenon and have the potential to assess the effects of canopy degradation. Since it was not possible to compare all methods for calculating texture parameters due to a large number of existing methods [105], we compared the most important methods based on our literature review. Among the calculated texture indices, the dissimilarity index extracted from GLCM showed the highest correlation with the PDI. While a large number of previous studies that used texture indices for forest health and degradation studies derived landscape-level texture indices [74,106,107], we calculated the texture index at the single-tree-canopy level. This method can be very time-consuming in forests dominated by broad-leaved trees, where the crowns are usually very different in size. Finding an optimal window size for calculating texture indices used to describe the decline phenomenon is challenging and is amongst the subjects of our further research.

5. Conclusions

The semi-arid Zagros forests are currently experiencing a continuous decline. The first step in using UAV data to study this issue is delineating tree crowns. Individual tree detection and delineation in aerial imagery is one of the most challenging steps in investigating tree decline at the single-tree level in semi-arid regions and is a prerequisite for any extrapolations to coarser spatial levels using space-borne data. Considering the structure of coppice trees, classical methods are not sufficient, and deep learning methods are not practical due to their need for enormous amounts of training data. To enable the investigation of decline for the detected crowns, a new algorithm for detecting and delineating tree crowns in UAV images was presented here. The presented approach was an edge detection method based on the SVM classifier within the framework of the structured forest. Using the score parameter of the classifier, broadleaf coppice trees with overlapping crowns were separated. The proposed algorithm was compared to MCWS and showed higher performances, including precision = 0.95, recall = 0.91, and F-score = 0.93. Once the trees are detected, texture indicators can be used to assess tree decline. Among the surveyed texture indices, the dissimilarity index correlated more strongly with the PDI than the other indices and thus has the potential to be used to study tree decline across semi-arid tree cover with dominant coppice structure. Finding a method to automatically determine the limits of the crowns according to the factors affecting the accuracy of the algorithm would enable up-to-date and rapid monitoring of tree decline using consumer-grade UAV data.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/rs14235910/s1. Figure S1: Correlation between the (a) FOTO, (b) dissimilarity, and (c) entropy index and PDI for all plots. The line shows the best fit. r denotes the correlation coefficient. Figure S2: Correlation between variance index and PDI. The line shows the best fit. r denotes the correlation coefficient. Figure S3: Correlation between mean index and PDI. The line shows the best fit. r denotes the correlation coefficient. Figure S4: Correlation between contrast index and PDI. The line shows the best fit. r denotes the correlation coefficient. Figure S5: Correlation between homogeneity index and PDI. The line shows the best fit. r denotes the correlation coefficient. Figure S6: Correlation between second moment index and PDI. The line shows the best fit. r denotes the correlation coefficient.

Author Contributions

Conceptualization, H.L., M.G. and M.P.; methodology, M.G. and H.L.; software, M.G.; validation, M.G. and H.L.; formal analysis, M.G. and H.L.; investigation, M.G. and H.L.; data curation, H.L. and M.P.; writing—original draft preparation, M.G. and H.L.; writing—review and editing, M.G., M.P. and H.L.; visualization, M.G.; supervision, H.L. and M.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Zagros Monitoring Project of the Research Institute of Forests and Rangelands (RIFR) (grant number 01-09-09-047-97012) within the sub-project “UAV-assisted Forest Structure Monitoring” (grant number: 013-09-0951-048-97012-970548).

Data Availability Statement

Not applicable.

Acknowledgments

The authors are grateful to diverse field crews in the three provinces of Kermanshah, Chaharmahal-and-Bakhtiari, and Fars who collected the field data on oak decline. We are particularly grateful for the assistance of Yaghoub Iranmanesh, Hassan Jahanbazi, Seyed Kazem Bordbar, Mehrdad Zarafshar, and Habibollah Rahimi at the provincial bureaus of the Iranian Research Institute of Forests and Rangelands (RIFR), as well as our patient driver Qarliqi and our GPS assistants Bahavar and Sabaei. This research was conducted within the Research Lab “Remote Sensing for Ecology and Ecosystem Conservation (RSEEC)” of the KNTU (Link: https://www.researchgate.net/lab/Research-Lab-Remote-Sensing-for-Ecology-and-Ecosystem-Conservation-RSEEC-Hooman-Latifi, accessed on 20 November 2022).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Fakhri, S.A.; Latifi, H. A Consumer Grade UAV-Based Framework to Estimate Structural Attributes of Coppice and High Oak Forest Stands in Semi-Arid Regions. Remote Sens. 2021, 13, 4367.
  2. Pourhashemi, M.; Sadeghi, S.M.M. A review on ecological causes of oak decline phenomenon in forests of Iran. Ecol. Iran. For. 2020, 8, 148–164.
  3. Erfanifard, Y.; Mouselo, M. Assessment of Crown Measurement Techniques in Coppice Trees of Zagros Forests by UltraCam-D Aerial Imagery. For. Wood Prod. 2014, 66, 413–426.
  4. Fallah, A.; Haidari, M. Investigation of Oak decline in diameter classes in Sarab-Kazan forests of Ilam. Iran. J. For. 2018, 9, 499–510.
  5. Haidari, M.; Pourhashemi, M.; Alizadeh, T.; Hedayateypour, S.M.K. The effect of forest structure on some physical and chemical soil properties in the forests stands of Kurdistan province. For. Wood Prod. 2022, 74, 469–483.
  6. Finch, J.P.; Brown, N.; Beckmann, M.; Denman, S.; Draper, J. Index measures for oak decline severity using phenotypic descriptors. For. Ecol. Manag. 2021, 485, 118948.
  7. Javan, F.D.; Samadzadegan, F.; Mehravar, S.; Toosi, A. A review on spatial quality assessment methods for evaluation of pan-sharpened satellite imagery. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2019, 42, 255–261.
  8. Raparelli, E.; Bajocco, S. A bibliometric analysis on the use of unmanned aerial vehicles in agricultural and forestry studies. Int. J. Remote Sens. 2019, 40, 9070–9083.
  9. Xiang, T.-Z.; Xia, G.-S.; Zhang, L. Mini-unmanned aerial vehicle-based remote sensing: Techniques, applications, and prospects. IEEE Geosci. Remote Sens. Mag. 2019, 7, 29–63.
  10. Garza, B.N.; Ancona, V.; Enciso, J.; Perotto-Baldivieso, H.L.; Kunta, M.; Simpson, C. Quantifying Citrus Tree Health Using True Color UAV Images. Remote Sens. 2020, 12, 170.
  11. Ke, Y.; Quackenbush, L.J. A review of methods for automatic individual tree-crown detection and delineation from passive remote sensing. Int. J. Remote Sens. 2011, 32, 4725–4747.
  12. Yu, K.; Hao, Z.; Post, C.J.; Mikhailova, E.A.; Lin, L.; Zhao, G.; Tian, S.; Liu, J. Comparison of Classical Methods and Mask R-CNN for Automatic Tree Detection and Mapping Using UAV Imagery. Remote Sens. 2022, 14, 295.
  13. Chandrasekaran, A.; Shao, G.; Fei, S.; Miller, Z.; Hupy, J. Automated Inventory of Broadleaf Tree Plantations with UAS Imagery. Remote Sens. 2022, 14, 1931.
  14. Chen, Y.; Hou, C.; Tang, Y.; Zhuang, J.; Lin, J.; He, Y.; Guo, Q.; Zhong, Z.; Lei, H.; Luo, S. Citrus Tree Segmentation from UAV Images Based on Monocular Machine Vision in a Natural Orchard Environment. Sensors 2019, 19, 5558.
  15. Harikumar, A.; Bovolo, F.; Bruzzone, L. A Local Projection-Based Approach to Individual Tree Detection and 3-D Crown Delineation in Multistoried Coniferous Forests Using High-Density Airborne LiDAR Data. IEEE Trans. Geosci. Remote Sens. 2018, 57, 1168–1182.
  16. Miraki, M.; Sohrabi, H.; Fatehi, P.; Kneubuehler, M. Individual tree crown delineation from high-resolution UAV images in broadleaf forest. Ecol. Inform. 2021, 61, 101207.
  17. Birdal, A.C.; Avdan, U.; Türk, T. Estimating tree heights with images from an unmanned aerial vehicle. Geomat. Nat. Hazards Risk 2017, 8, 1144–1156.
  18. Caruso, G.; Zarco-Tejada, P.J.; González-Dugo, V.; Moriondo, M.; Tozzini, L.; Palai, G.; Rallo, G.; Hornero, A.; Primicerio, J.; Gucci, R. High-resolution imagery acquired from an unmanned platform to estimate biophysical and geometrical parameters of olive trees under different irrigation regimes. PLoS ONE 2019, 14, e0210804.
  19. Novotný, J.; Hanuš, J.; Lukeš, P.; Kaplan, V. Individual tree crowns delineation using local maxima approach and seeded region growing technique. In Proceedings of the Symposium GIS Ostrava, Ostrava, Czech Republic, 24–26 January 2011.
  20. Korpela, I.; Anttila, P.; Pitkänen, J. The performance of a local maxima method for detecting individual tree tops in aerial photographs. Int. J. Remote Sens. 2006, 27, 1159–1175.
  21. Monnet, J.M.; Mermin, E.; Chanussot, J.; Berger, F. Tree top detection using local maxima filtering: A parameter sensitivity analysis. In Proceedings of the 10th International Conference on LiDAR Applications for Assessing Forest Ecosystems (Silvilaser 2010), Freiburg, Germany, 14–17 September 2010.
  22. Pitkänen, J. Individual tree detection in digital aerial images by combining locally adaptive binarization and local maxima methods. Can. J. For. Res. 2001, 31, 832–844.
  23. Pouliot, D.; King, D. Approaches for optimal automated individual tree crown detection in regenerating coniferous forests. Can. J. Remote Sens. 2005, 31, 255–267.
  24. Pollock, R. The Automatic Recognition of Individual Trees in Aerial Images of Forests Based on a Synthetic Tree Crown Image Model. Ph.D. Thesis, University of British Columbia, Vancouver, BC, Canada, 1996.
  25. Gougeon, F.A. A crown-following approach to the automatic delineation of individual tree crowns in high spatial resolution aerial images. Can. J. Remote Sens. 1995, 21, 274–284.
  26. Bunting, P.; Lucas, R. The delineation of tree crowns in Australian mixed species forests using hyperspectral Compact Airborne Spectrographic Imager (CASI) data. Remote Sens. Environ. 2006, 101, 230–248.
  27. Lamar, W.R.; McGraw, J.B.; Warner, T.A. Multitemporal censusing of a population of eastern hemlock (Tsuga canadensis L.) from remotely sensed imagery using an automated segmentation and reconciliation procedure. Remote Sens. Environ. 2005, 94, 133–143.
  28. Yancho, J.M.M. Individual tree detection in coastal western hemlock forests using remotely piloted aircraft system acquired digital aerial photogrammetric point clouds. Ph.D. Thesis, University of British Columbia, Vancouver, BC, Canada, 2019.
  29. Mohan, M.; Silva, C.A.; Klauberg, C.; Jat, P.; Catts, G.; Cardil, A.; Hudak, A.T.; Dia, M. Individual Tree Detection from Unmanned Aerial Vehicle (UAV) Derived Canopy Height Model in an Open Canopy Mixed Conifer Forest. Forests 2017, 8, 340.
  30. Khosravipour, A.; Skidmore, A.K.; Wang, T.; Isenburg, M.; Khoshelham, K. Effect of slope on treetop detection using a LiDAR Canopy Height Model. ISPRS J. Photogramm. Remote Sens. 2015, 104, 44–52.
  31. Xu, X.; Zhou, Z.; Tang, Y.; Qu, Y. Individual tree crown detection from high spatial resolution imagery using a revised local maximum filtering. Remote Sens. Environ. 2021, 258, 112397.
  32. Pouliot, D.A.; King, D.J.; Bell, F.W.; Pitt, D.G. Automated tree crown detection and delineation in high-resolution digital camera imagery of coniferous forest regeneration. Remote Sens. Environ. 2002, 82, 322–334.
  33. Näsi, R.; Honkavaara, E.; Lyytikäinen-Saarenmaa, P.; Blomqvist, M.; Litkey, P.; Hakala, T.; Viljanen, N.; Kantola, T.; Tanhuanpää, T.; Holopainen, M. Using UAV-Based Photogrammetry and Hyperspectral Imaging for Mapping Bark Beetle Damage at Tree-Level. Remote Sens. 2015, 7, 15467–15493.
  34. Hamraz, H.; Contreras, M.A.; Zhang, J. A robust approach for tree segmentation in deciduous forests using small-footprint airborne LiDAR data. Int. J. Appl. Earth Obs. Geoinf. 2016, 52, 532–541.
  35. Jaskierniak, D.; Lucieer, A.; Kuczera, G.; Turner, D.; Lane, P.; Benyon, R.; Haydon, S. Individual tree detection and crown delineation from Unmanned Aircraft System (UAS) LiDAR in structurally complex mixed species eucalypt forests. ISPRS J. Photogramm. Remote Sens. 2020, 171, 171–187. [Google Scholar] [CrossRef]
  36. Yun, T.; Jiang, K.; Li, G.; Eichhorn, M.P.; Fan, J.; Liu, F.; Chen, B.; An, F.; Cao, L. Individual tree crown segmentation from airborne LiDAR data using a novel Gaussian filter and energy function minimization-based approach. Remote Sens. Environ. 2021, 256, 112307. [Google Scholar] [CrossRef]
  37. Hu, B.; Li, J.; Jing, L.; Judah, A. Improving the efficiency and accuracy of individual tree crown delineation from high-density LiDAR data. Int. J. Appl. Earth Obs. Geoinf. 2014, 26, 145–155. [Google Scholar] [CrossRef]
  38. Jing, L.; Hu, B.; Li, J.; Noland, T.; Guo, H. Automated tree crown delineation from imagery based on morphological techniques. IOP Conf. Ser. Earth Environ. Sci. 2014, 17, 012066. [Google Scholar] [CrossRef]
  39. Wallace, L.; Sun, Q.C.; Hally, B.; Hillman, S.; Both, A.; Hurley, J.; Saldias, D.S.M. Linking urban tree inventories to remote sensing data for individual tree mapping. Urban For. Urban Green. 2021, 61, 127106. [Google Scholar] [CrossRef]
  40. Yin, D.; Wang, L. Individual mangrove tree measurement using UAV-based LiDAR data: Possibilities and challenges. Remote Sens. Environ. 2019, 223, 34–49. [Google Scholar] [CrossRef]
  41. Hastings, J.H.; Ollinger, S.V.; Ouimette, A.P.; Sanders-DeMott, R.; Palace, M.W.; Ducey, M.J.; Sullivan, F.B.; Basler, D.; Orwig, D.A. Tree species traits determine the success of LiDAR-based crown mapping in a mixed temperate forest. Remote Sens. 2020, 12, 309. [Google Scholar] [CrossRef] [Green Version]
  42. Fujimoto, A.; Haga, C.; Matsui, T.; Machimura, T.; Hayashi, K.; Sugita, S.; Takagi, H. An end to end process development for UAV-SfM based forest monitoring: Individual tree detection, species classification and carbon dynamics simulation. Forests 2019, 10, 680. [Google Scholar] [CrossRef] [Green Version]
  43. Zaforemska, A.; Xiao, W.; Gaulton, R. Individual tree detection from UAV LiDAR data in a mixed species woodland. In Proceedings of the ISPRS Geospatial Week 2019, Enschede, The Netherlands, 10–14 June 2019. [Google Scholar]
  44. Koch, B.; Heyder, U.; Weinacker, H. Detection of Individual Tree Crowns in Airborne Lidar Data. Photogramm. Eng. Remote Sens. 2006, 72, 357–363. [Google Scholar] [CrossRef] [Green Version]
  45. Panagiotidis, D.; Abdollahnejad, A.; Surový, P.; Chiteculo, V. Determining tree height and crown diameter from high-resolution UAV imagery. Int. J. Remote Sens. 2017, 38, 2392–2410. [Google Scholar] [CrossRef]
  46. Wang, L.; Gong, P.; Biging, G.S. Individual Tree-Crown Delineation and Treetop Detection in High-Spatial-Resolution Aerial Imagery. Photogramm. Eng. Remote Sens. 2004, 70, 351–357. [Google Scholar] [CrossRef] [Green Version]
  47. Tanhuanpää, T.; Saarinen, N.; Kankare, V.; Nurminen, K.; Vastaranta, M.; Honkavaara, E.; Karjalainen, M.; Yu, X.; Holopainen, M.; Hyyppä, J. Evaluating the Performance of High-Altitude Aerial Image-Based Digital Surface Models in Detecting Individual Tree Crowns in Mature Boreal Forests. Forests 2016, 7, 143. [Google Scholar] [CrossRef]
  48. Nuijten, R.J.G.; Coops, N.C.; Goodbody, T.R.H.; Pelletier, G. Examining the Multi-Seasonal Consistency of Individual Tree Segmentation on Deciduous Stands Using Digital Aerial Photogrammetry (DAP) and Unmanned Aerial Systems (UAS). Remote Sens. 2019, 11, 739. [Google Scholar] [CrossRef] [Green Version]
  49. Zheng, X.; Wang, Y.; Gan, M.; Zhang, J.; Teng, L.; Wang, K.; Shen, Z. Discrimination of Settlement and Industrial Area Using Landscape Metrics in Rural Region. Remote Sens. 2016, 8, 845. [Google Scholar] [CrossRef]
  50. Ma, L.; Liu, Y.; Zhang, X.; Ye, Y.; Yin, G.; Johnson, B.A. Deep learning in remote sensing applications: A meta-analysis and review. ISPRS J. Photogramm. Remote Sens. 2019, 152, 166–177. [Google Scholar] [CrossRef]
  51. Kattenborn, T.; Eichel, J.; Fassnacht, F. Convolutional Neural Networks enable efficient, accurate and fine-grained segmentation of plant species and communities from high-resolution UAV imagery. Sci. Rep. 2019, 9, 17656. [Google Scholar] [CrossRef] [Green Version]
  52. Kattenborn, T.; Leitloff, J.; Schiefer, F.; Hinz, S. Review on Convolutional Neural Networks (CNN) in vegetation remote sensing. ISPRS J. Photogramm. Remote Sens. 2021, 173, 24–49. [Google Scholar] [CrossRef]
  53. Diez, Y.; Kentsch, S.; Fukuda, M.; Caceres, M.; Moritake, K.; Cabezas, M. Deep Learning in Forestry Using UAV-Acquired RGB Data: A Practical Review. Remote Sens. 2021, 13, 2837. [Google Scholar] [CrossRef]
  54. Brandt, M.; Tucker, C.J.; Kariryaa, A.; Rasmussen, K.; Abel, C.; Small, J.; Chave, J.; Rasmussen, L.V.; Hiernaux, P.; Diouf, A.A.; et al. An unexpectedly large count of trees in the West African Sahara and Sahel. Nature 2020, 587, 78–82. [Google Scholar] [CrossRef]
  55. Csillik, O.; Cherbini, J.; Johnson, R.; Lyons, A.; Kelly, M. Identification of Citrus Trees from Unmanned Aerial Vehicle Imagery Using Convolutional Neural Networks. Drones 2018, 2, 39. [Google Scholar] [CrossRef] [Green Version]
  56. Zortea, M.; Macedo, M.M.; Mattos, A.B.; Ruga, B.C.; Gemignani, B.H. Automatic citrus tree detection from UAV images based on convolutional neural networks. In Proceedings of the 31st Sibgrap/WIA—Conference on Graphics, Patterns and Images, SIBGRAPI, Rio de Janeiro, Brazil, 29 October–1 November 2018. [Google Scholar]
  57. Zhao, T.; Yang, Y.; Niu, H.; Wang, D.; Chen, Y. Comparing U-Net convolutional network with mask R-CNN in the performances of pomegranate tree canopy segmentation. In Multispectral, Hyperspectral, and Ultraspectral Remote Sensing Technology, Techniques and Applications VII; SPIE: Bellingham, WA, USA, 2018. [Google Scholar]
  58. Iqbal, M.S.; Ali, H.; Tran, S.N.; Iqbal, T. Coconut trees detection and segmentation in aerial imagery using mask region-based convolution neural network. IET Comput. Vis. 2021, 15, 428–439. [Google Scholar] [CrossRef]
  59. Guirado, E.; Blanco-Sacristán, J.; Rodríguez-Caballero, E.; Tabik, S.; Alcaraz-Segura, D.; Martínez-Valderrama, J.; Cabello, J. Mask R-CNN and OBIA Fusion Improves the Segmentation of Scattered Vegetation in Very High-Resolution Optical Sensors. Sensors 2021, 21, 320. [Google Scholar] [CrossRef] [PubMed]
  60. Hao, Z.; Lin, L.; Post, C.J.; Mikhailova, E.A.; Li, M.; Chen, Y.; Yu, K.; Liu, J. Automated tree-crown and height detection in a young forest plantation using mask region-based convolutional neural network (Mask R-CNN). ISPRS J. Photogramm. Remote. Sens. 2021, 178, 112–123. [Google Scholar] [CrossRef]
  61. Braga, J.R.; Peripato, V.; Dalagnol, R.; Ferreira, M.; Tarabalka, Y.; Aragão, L.E.; de Campos Velho, H.; Shiguemori, E.H.; Wagner, F.H. Tree Crown Delineation Algorithm Based on a Convolutional Neural Network. Remote Sens. 2020, 12, 1288. [Google Scholar] [CrossRef]
  62. Dai, W.; Yang, B.; Dong, Z.; Shaker, A. A new method for 3D individual tree extraction using multispectral airborne LiDAR point clouds. ISPRS J. Photogramm. Remote Sens. 2018, 144, 400–411. [Google Scholar] [CrossRef]
63. Zhu, X.X.; Tuia, D.; Mou, L.; Xia, G.S.; Zhang, L.; Xu, F.; Fraundorfer, F. Deep learning in remote sensing: A review. IEEE Geosci. Remote Sens. Mag. 2017, 5, 8–36. [Google Scholar] [CrossRef]
  64. Weinstein, B.G.; Marconi, S.; Bohlman, S.A.; Zare, A.; White, E.P. Cross-site learning in deep learning RGB tree crown detection. Ecol. Inform. 2020, 56, 101061. [Google Scholar] [CrossRef]
  65. Marques, P.; Pádua, L.; Adão, T.; Hruška, J.; Peres, E.; Sousa, A.; Sousa, J.J. UAV-Based Automatic Detection and Monitoring of Chestnut Trees. Remote Sens. 2019, 11, 855. [Google Scholar] [CrossRef] [Green Version]
  66. Hassaan, O.; Nasir, A.K.; Roth, H.; Khan, M.F. Precision Forestry: Trees Counting in Urban Areas Using Visible Imagery based on an Unmanned Aerial Vehicle. IFAC-PapersOnLine 2016, 49, 16–21. [Google Scholar] [CrossRef]
  67. Lin, Y.; Jiang, M.; Yao, Y.; Zhang, L.; Lin, J. Use of UAV oblique imaging for the detection of individual trees in residential environments. Urban For. Urban Green. 2015, 14, 404–412. [Google Scholar] [CrossRef]
  68. Poblete-Echeverría, C.; Olmedo, G.F.; Ingram, B.; Bardeen, M. Detection and Segmentation of Vine Canopy in Ultra-High Spatial Resolution RGB Imagery Obtained from Unmanned Aerial Vehicle (UAV): A Case Study in a Commercial Vineyard. Remote Sens. 2017, 9, 268. [Google Scholar] [CrossRef] [Green Version]
  69. Yang, K.; Zhang, H.; Wang, F.; Lai, R. Extraction of Broad-Leaved Tree Crown Based on UAV Visible Images and OBIA-RF Model: A Case Study for Chinese Olive Trees. Remote Sens. 2022, 14, 2469. [Google Scholar] [CrossRef]
  70. Jing, J.; Liu, S.; Wang, G.; Zhang, W.; Sun, C. Recent advances on image edge detection: A comprehensive review. Neurocomputing 2022, 503, 259–271. [Google Scholar] [CrossRef]
71. Dollár, P.; Zitnick, C.L. Fast edge detection using structured forests. IEEE Trans. Pattern Anal. Mach. Intell. 2015, 37, 1558–1570. [Google Scholar] [CrossRef]
72. Peryanto, A.; Yudhana, A.; Umar, R. Convolutional Neural Network and Support Vector Machine in Classification of Flower Images. Khazanah Inform. J. Ilmu Komput. dan Inform. 2022, 8. [Google Scholar] [CrossRef]
  73. Argamosa, R.J.L.; Paringit, E.C.; Quinton, K.R.; Tandoc, F.A.M.; Faelga, R.A.G.; Ibañez, C.A.G.; Posilero, M.A.V.; Zaragosa, G.P. Fully automated GIS-based individual tree crown delineation based on curvature values from a lidar derived canopy height model in a coniferous plantation. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, 41, 563–569. [Google Scholar] [CrossRef]
  74. Bourgoin, C.; Betbeder, J.; Couteron, P.; Blanc, L.; Dessard, H.; Oszwald, J.; Le Roux, R.; Cornu, G.; Reymondin, L.; Mazzei, L.; et al. UAV-based canopy textures assess changes in forest structure from long-term degradation. Ecol. Indic. 2020, 115, 106386. [Google Scholar] [CrossRef] [Green Version]
  75. Humeau-Heurtier, A. Texture feature extraction methods: A survey. IEEE Access 2019, 7, 8975–9000. [Google Scholar] [CrossRef]
  76. Douss, R.; Farah, I.R. Extraction of individual trees based on Canopy Height Model to monitor the state of the forest. Trees For. People 2022, 8, 100257. [Google Scholar] [CrossRef]
  77. Nyamgeroh, B.B.; Groen, T.A.; Weir, M.J.; Dimov, P.; Zlatanov, T. Detection of forest canopy gaps from very high resolution aerial images. Ecol. Indic. 2018, 95, 629–636. [Google Scholar] [CrossRef]
  78. Guo, Y.; Chen, S.; Wu, Z.; Wang, S.; Bryant, C.R.; Senthilnath, J.; Cunha, M.; Fu, Y.H. Integrating Spectral and Textural Information for Monitoring the Growth of Pear Trees Using Optical Images from the UAV Platform. Remote Sens. 2021, 13, 1795. [Google Scholar] [CrossRef]
  79. Yuan, W.; Wijewardane, N.; Jenkins, S.; Bai, G.; Ge, Y.; Graef, G.L. Early Prediction of Soybean Traits through Color and Texture Features of Canopy RGB Imagery. Sci. Rep. 2019, 9, 14089. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  80. Sadeghi, M.; Malekian, M.; Khodakarami, L. Forest losses and gains in Kurdistan province, western Iran: Where do we stand? Egypt. J. Remote Sens. Space Sci. 2017, 20, 51–59. [Google Scholar] [CrossRef]
  81. FAO. Guide for Country Reporting for Forest Resource Assessment (FRA) 2015. 2015. Available online: http://www.fao.org/forest-resources-assessment/past-assessments/fra-2015/en/ (accessed on 26 September 2022).
  82. Ghazanfari, H.; Namiranian, M.; Sobhani, H.; Mohajer, R.M. Traditional forest management and its application to encourage public participation for sustainable forest management in the northern Zagros Mountains of Kurdistan Province, Iran. Scand. J. For. Res. 2004, 19 (Suppl. 4), 65–71. [Google Scholar] [CrossRef]
  83. Jazirehi, M.; Ebrahimi Rostaghi, M. Silviculture in Zagros; University of Tehran Press: Tehran, Iran, 2003; 560p. [Google Scholar]
  84. Finch, J. PDI: Phenotypic Index Measures for Oak Decline Severity. R Package Version 0.4.2. 2021. Available online: https://CRAN.R-project.org/package=pdi (accessed on 20 August 2022).
  85. Agisoft. Agisoft Metashape User Manual: Professional Edition; Version 1.7; Agisoft: St. Petersburg, Russia, 2021; Available online: https://www.agisoft.com/downloads/user-manuals (accessed on 23 February 2021).
  86. Louhaichi, M.; Borman, M.M.; Johnson, D.E. Spatially Located Platform and Aerial Photography for Documentation of Grazing Impacts on Wheat. Geocarto Int. 2001, 16, 65–70. [Google Scholar] [CrossRef]
  87. Gitelson, A.A.; Kaufman, Y.J.; Stark, R.; Rundquist, D. Novel algorithms for remote estimation of vegetation fraction. Remote Sens. Environ. 2002, 80, 76–87. [Google Scholar] [CrossRef]
  88. Kawashima, S.; Nakatani, M. An algorithm for estimating chlorophyll content in leaves using a video camera. Ann. Bot. 1998, 81, 49–54. [Google Scholar] [CrossRef] [Green Version]
  89. Bendig, J.; Yu, K.; Aasen, H.; Bolten, A.; Bennertz, S.; Broscheit, J.; Gnyp, M.L.; Bareth, G. Combining UAV-based plant height from crop surface models, visible, and near infrared vegetation indices for biomass monitoring in barley. Int. J. Appl. Earth Obs. Geoinf. 2015, 39, 79–87. [Google Scholar] [CrossRef]
  90. Woebbecke, D.M.; Meyer, G.E.; Von Bargen, K.; Mortensen, D.A. Color Indices for Weed Identification Under Various Soil, Residue, and Lighting Conditions. Trans. ASAE 1995, 38, 259–269. [Google Scholar] [CrossRef]
91. Haralick, R.M.; Shanmugam, K.; Dinstein, I.H. Textural Features for Image Classification. IEEE Trans. Syst. Man Cybern. 1973, SMC-3, 610–621. [Google Scholar] [CrossRef] [Green Version]
  92. Conners, R.W.; Harlow, C.A. A theoretical comparison of texture algorithms. IEEE Trans. Pattern Anal. Mach. Intell. 1980, PAMI-2, 204–222. [Google Scholar] [CrossRef]
  93. Akaike, H. A new look at the statistical model identification. IEEE Trans. Autom. Control 1974, 19, 716–723. [Google Scholar] [CrossRef]
  94. Gómez-Moreno, H.; Maldonado-Bascón, S.; López-Ferreras, F.; Acevedo-Rodríguez, F.J.; Martín-Martín, P. Edge detection by using the support vector machines. Training 2001, 2, 3. [Google Scholar]
  95. Ren, X. Multi-scale improves boundary detection in natural images. In European Conference on Computer Vision; Springer: Berlin/Heidelberg, Germany, 2008. [Google Scholar]
  96. Lan, Z.; Liu, Y. Study on Multi-Scale Window Determination for GLCM Texture Description in High-Resolution Remote Sensing Image Geo-Analysis Supported by GIS and Domain Knowledge. ISPRS Int. J. Geo-Inf. 2018, 7, 175. [Google Scholar] [CrossRef] [Green Version]
  97. Tan, X.; Triggs, B. Enhanced Local Texture Feature Sets for Face Recognition Under Difficult Lighting Conditions. IEEE Trans. Image Process. 2010, 19, 1635–1650. [Google Scholar] [PubMed] [Green Version]
  98. Goutte, C.; Gaussier, E. A probabilistic interpretation of precision, recall and F-score, with implication for evaluation. In European Conference on Information Retrieval; Springer: Berlin/Heidelberg, Germany, 2005. [Google Scholar]
  99. Sokolova, M.; Japkowicz, N.; Szpakowicz, S. Beyond accuracy, F-score and ROC: A family of discriminant measures for performance evaluation. In Australasian Joint Conference on Artificial Intelligence; Springer: Berlin/Heidelberg, Germany, 2006. [Google Scholar]
  100. Pons, X.; Padró, J.-C. An Operational Radiometric Correction Technique for Shadow Reduction in Multispectral UAV Imagery. Remote Sens. 2021, 13, 3808. [Google Scholar] [CrossRef]
  101. Krause, S.; Sanders, T.G.M.; Mund, J.-P.; Greve, K. UAV-Based Photogrammetric Tree Height Measurement for Intensive Forest Monitoring. Remote Sens. 2019, 11, 758. [Google Scholar] [CrossRef]
  102. Ganz, S.; Käber, Y.; Adler, P. Measuring Tree Height with Remote Sensing—A Comparison of Photogrammetric and LiDAR Data with Different Field Measurements. Forests 2019, 10, 694. [Google Scholar] [CrossRef] [Green Version]
  103. Luoma, V.; Saarinen, N.; Wulder, M.A.; White, J.C.; Vastaranta, M.; Holopainen, M.; Hyyppä, J. Assessing Precision in Conventional Field Measurements of Individual Tree Attributes. Forests 2017, 8, 38. [Google Scholar] [CrossRef] [Green Version]
  104. Sullivan, F.B.; Ducey, M.J.; Orwig, D.A.; Cook, B.; Palace, M.W. Comparison of lidar- and allometry-derived canopy height models in an eastern deciduous forest. For. Ecol. Manag. 2017, 406, 83–94. [Google Scholar] [CrossRef]
  105. Chazdon, R.L.; Brancalion, P.H.S.; Laestadius, L.; Bennett-Curry, A.; Buckingham, K.; Kumar, C.; Moll-Rocek, J.; Vieira, I.C.G.; Wilson, S.J. When is a forest a forest? Forest concepts and definitions in the era of forest and landscape restoration. AMBIO 2016, 45, 538–550. [Google Scholar] [CrossRef] [Green Version]
  106. Asner, G.; Knapp, D.E.; Balaji, A.; Paez-Acosta, G. Automated mapping of tropical deforestation and forest degradation: CLASlite. J. Appl. Remote Sens. 2009, 3, 033543. [Google Scholar] [CrossRef]
  107. Ecke, S.; Dempewolf, J.; Frey, J.; Schwaller, A.; Endres, E.; Klemmt, H.-J.; Tiede, D.; Seifert, T. UAV-Based Forest Health Monitoring: A Systematic Review. Remote Sens. 2022, 14, 3205. [Google Scholar] [CrossRef]
Figure 1. Workflow for tree delineation.
Figure 2. An example of tree crown delineation with the proposed algorithm for plot N3: (a) orthophoto, (b) SVM output, (c) tree crown polygons, and (d) the final result after enhancement.
Figure 3. ROC curve of the SVM applied to the 16 plots. AUC: area under the curve.
Figure 4. The performance of the proposed method as affected by (A) coppice rate, (B) decline rate, and (C) shadow.
Figure 5. Comparison of different accuracy parameters for the proposed method (SVM) and MWCS.
Figure 6. Comparison of field- and UAV-derived heights at plot level. The dashed line shows y = x, and the solid line shows the best fit. The gray area represents the confidence interval.
Figure 7. Comparison of manually delineated and estimated crown areas at plot level. The dashed line shows y = x, and the solid line shows the best fit.
Figure 8. Correlation between dissimilarity index and PDI. The line shows the best fit. r denotes the correlation coefficient.
Figure 9. Correlation between entropy index and PDI. The line shows the best fit. r denotes the correlation coefficient.
Figure 10. Correlation between the Foto index and PDI. The line shows the best fit. r denotes the correlation coefficient.
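Figures 8–10 relate per-tree GLCM texture values to the field-based phenotypic decline index (PDI). As a minimal sketch of that correlation analysis, assuming the per-tree index values have already been extracted into arrays (the numbers below are hypothetical placeholders, not data from this study):

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical per-tree values; in practice these would come from the
# delineated crowns (e.g., GLCM dissimilarity) and the field PDI assessment.
dissimilarity = np.array([0.21, 0.35, 0.48, 0.52, 0.67, 0.73])
pdi = np.array([0.10, 0.22, 0.41, 0.45, 0.60, 0.71])

r, p_value = pearsonr(dissimilarity, pdi)  # Pearson correlation coefficient
print(f"r = {r:.2f}, p = {p_value:.3f}")
```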
Table 1. Summary of field-measured tree data. The canopy cover was calculated using the UAV ortho-mosaics. N denotes northern plots, C denotes central plots, and S denotes southern plots.
Plot | Date and Time of Flight | Center Latitude | Center Longitude | Number of Reference Trees | Average Canopy Cover (%) | Amount of Shadow (%)
N1 | 2 September 2019, 2:16:12 PM | 34°15′42.00″ | 46°29′27.64″ | 346 | 32.31 | 25
N2 | 2 September 2019, 4:04:58 PM | 34°13′27.83″ | 46°27′39.09″ | 79 | 22.44 | 75
N3 | 3 September 2019, 9:50:20 AM | 34°13′30.55″ | 46°39′21.08″ | 302 | 33.44 | 75
N4 | 3 September 2019, 12:02:00 PM | 34°21′52.87″ | 46°21′36.99″ | 99 | 35.00 | 50
C1 | 4 September 2019, 12:40:04 PM | 32°08′26.17″ | 50°08′30.88″ | 195 | 37.78 | 50
C2 | 4 September 2019, 4:33:58 PM | 32°09′18.02″ | 50°07′50.16″ | 62 | 35.66 | 50
C3 | 5 September 2019, 9:47:04 AM | 31°54′25.17″ | 50°37′03.18″ | 97 | 30.31 | 100
C4 | 5 September 2019, 12:39:34 PM | 31°52′58.34″ | 50°34′32.10″ | 42 | 28.57 | 50
C5 | 5 September 2019, 5:02:32 PM | 31°35′34.22″ | 50°36′5.56″ | 83 | 38.16 | 75
C6 | 5 September 2019, 7:02:58 PM | 31°36′59.24″ | 50°42′55.79″ | 97 | 35.72 | 75
S1 | 7 November 2019, 9:11:34 PM | 29°51′16.03″ | 51°58′36.37″ | 156 | 38.70 | 75
S2 | 8 November 2019, 12:25:46 AM | 29°35′28.56″ | 51°56′20.74″ | 133 | 33.48 | 75
S3 | 8 November 2019, 3:44:44 AM | 29°24′37.79″ | 52°10′08.47″ | 100 | 15.48 | 75
S4 | 8 November 2019, 9:06:24 PM | 29°24′50.62″ | 52°10′19.16″ | 100 | 14.26 | 50
S5 | 8 November 2019, 10:58:42 PM | 29°30′16.51″ | 52°09′58.04″ | 191 | 20.46 | 50
S6 | 8 November 2019, 11:39:50 PM | 29°29′11.79″ | 52°11′04.26″ | 195 | 25.05 | 25
Table 2. RGB VIs calculated from UAV images where R, G, and B are normalized RGB values.
VI | Equation | Name | Reference
GLI | (2G − R − B)/(2G + R + B) | Green leaf index | [86]
VARI | (G − R)/(G + R − B) | Visible atmospherically resistant index | [87]
NDTI | (R − G)/(R + G) | Normalized difference turbidity index | [88]
RGBVI | (G² − R·B)/(G² + R·B) | Red–green–blue vegetation index | [89]
EXG | 2G − R − B | Excess green | [90]
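As an illustration of how the indices in Table 2 can be derived in practice, the following sketch computes them from normalized RGB bands held as NumPy arrays; the small epsilon guard against zero denominators is our addition, not part of the cited formulations:

```python
import numpy as np

def rgb_vegetation_indices(R, G, B, eps=1e-6):
    """Compute the Table 2 VIs from normalized RGB bands (arrays in [0, 1])."""
    return {
        "GLI":   (2 * G - R - B) / (2 * G + R + B + eps),
        "VARI":  (G - R) / (G + R - B + eps),
        "NDTI":  (R - G) / (R + G + eps),
        "RGBVI": (G**2 - R * B) / (G**2 + R * B + eps),
        "EXG":   2 * G - R - B,  # no denominator in the original formulation
    }
```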
Table 3. GLCM texture indices calculated from high-resolution UAV images. P(i,j) denotes the (i,j)th entry of the normalized gray-level co-occurrence matrix; μi is the GLCM mean.
Texture Feature | Equation | Reference
Mean | μi = Σi Σj i · P(i,j) | [91]
Variance | Σi Σj (i − μi)² · P(i,j) | [91]
Homogeneity | Σi Σj P(i,j)/(1 + (i − j)²) | [92]
Contrast | Σi Σj (i − j)² · P(i,j) | [91]
Dissimilarity | Σi Σj |i − j| · P(i,j) | [92]
Entropy | −Σi Σj P(i,j) · log P(i,j) | [91]
Second Moment | Σi Σj P(i,j)² | [92]
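These features can be computed from a delineated crown patch with scikit-image; the sketch below quantizes the image to 32 gray levels and uses a single distance/angle pair, both illustrative choices rather than settings reported in the paper:

```python
import numpy as np
from skimage.feature import graycomatrix

def glcm_features(gray, levels=32, distance=1, angle=0.0):
    """GLCM features of Table 3 for one crown patch (8-bit grayscale array)."""
    # Quantize 0-255 values to a reduced number of gray levels (illustrative).
    q = (gray.astype(np.float64) / 256.0 * levels).astype(np.uint8)
    # Normalized, symmetric co-occurrence matrix for one distance/angle pair.
    P = graycomatrix(q, [distance], [angle], levels=levels,
                     symmetric=True, normed=True)[:, :, 0, 0]
    i, j = np.indices(P.shape)
    mu = np.sum(i * P)          # GLCM mean
    nz = P > 0                  # avoid log(0) in the entropy term
    return {
        "mean": mu,
        "variance": np.sum((i - mu) ** 2 * P),
        "homogeneity": np.sum(P / (1.0 + (i - j) ** 2)),
        "contrast": np.sum((i - j) ** 2 * P),
        "dissimilarity": np.sum(np.abs(i - j) * P),
        "entropy": -np.sum(P[nz] * np.log(P[nz])),
        "second_moment": np.sum(P ** 2),
    }
```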
Table 4. Accuracy parameters for the proposed method. Recall (rc), precision (pr), and detection rates were calculated from the numbers of false negative (FN), false positive (FP), and true positive (TP) tree detections relative to the reference data.
Site | Tree Count | FN | FP | TP | pr | rc | F-Score | Omission Error (%) * | Commission Error (%) ** | Correctly Detected Trees ***
N1 | 346 | 2 | 4 | 340 | 0.99 | 0.99 | 0.99 | 0.58 | 1.16 | 0.98
N2 | 79 | 4 | 6 | 69 | 0.92 | 0.95 | 0.93 | 5.48 | 8.00 | 0.87
N3 | 302 | 41 | 3 | 258 | 0.99 | 0.86 | 0.92 | 13.71 | 1.15 | 0.85
N4 | 99 | 3 | 3 | 93 | 0.97 | 0.97 | 0.97 | 3.13 | 3.13 | 0.94
C1 | 195 | 9 | 17 | 169 | 0.91 | 0.95 | 0.93 | 5.06 | 9.14 | 0.87
C2 | 62 | 9 | 2 | 51 | 0.96 | 0.85 | 0.90 | 15.00 | 3.77 | 0.82
C3 | 97 | 19 | 0 | 78 | 1.00 | 0.80 | 0.89 | 19.59 | 0.00 | 0.80
C4 | 42 | 2 | 2 | 38 | 0.95 | 0.95 | 0.95 | 5.00 | 5.00 | 0.90
C5 | 83 | 17 | 0 | 66 | 1.00 | 0.80 | 0.89 | 20.48 | 0.00 | 0.80
C6 | 97 | 0 | 26 | 71 | 0.73 | 1.00 | 0.85 | 0.00 | 26.80 | 0.73
S1 | 156 | 18 | 1 | 137 | 0.99 | 0.88 | 0.94 | 11.61 | 0.72 | 0.88
S2 | 133 | 17 | 0 | 116 | 1.00 | 0.87 | 0.93 | 12.78 | 0.00 | 0.87
S3 | 100 | 11 | 8 | 81 | 0.91 | 0.88 | 0.90 | 11.96 | 8.99 | 0.81
S4 | 100 | 3 | 12 | 85 | 0.88 | 0.97 | 0.92 | 3.41 | 12.37 | 0.85
S5 | 191 | 12 | 4 | 175 | 0.98 | 0.94 | 0.96 | 6.42 | 2.23 | 0.92
S6 | 195 | 5 | 6 | 184 | 0.97 | 0.97 | 0.97 | 2.65 | 3.16 | 0.94
Total | 1931 | 170 | 90 | 1671 | 0.95 | 0.91 | 0.93 | 9.23 | 5.11 | 0.87
* Omission error was calculated from the recall as 100 × (1 − rc). ** Commission error was calculated from the precision as 100 × (1 − pr). *** The rate of correctly detected trees is the ratio of trees correctly detected by the proposed method to the trees manually identified in the orthomosaic.
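The footnote definitions translate directly into code. The sketch below recomputes the accuracy figures for one plot, using the N2 row of Table 4 as a check:

```python
def detection_scores(tp, fp, fn, n_reference):
    """Precision, recall, F-score, and error rates as defined under Table 4."""
    pr = tp / (tp + fp)
    rc = tp / (tp + fn)
    return {
        "precision": pr,
        "recall": rc,
        "f_score": 2 * pr * rc / (pr + rc),
        "omission_error_pct": 100 * (1 - rc),
        "commission_error_pct": 100 * (1 - pr),
        "correctly_detected": tp / n_reference,
    }

# Plot N2 from Table 4: TP = 69, FP = 6, FN = 4, 79 reference trees.
print(detection_scores(69, 6, 4, 79))
# matches Table 4: pr 0.92, rc 0.95, F-score 0.93,
# omission 5.48 %, commission 8.00 %, correctly detected 0.87
```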
Table 5. RMSE of tree height calculated from UAV data for each plot.
Northern Plots | RMSE | Central Plots | RMSE | Southern Plots | RMSE
N1 | 0.84 | C1 | 1.69 | S1 | 1.23
N2 | 2.77 | C2 | 0.74 | S2 | 1.28
N3 | 0.56 | C3 | 0.30 | S3 | 2.15
N4 | 0.56 | C4 | 0.95 | S4 | 1.18
– | – | C5 | 2.72 | S5 | 0.84
– | – | C6 | 1.89 | S6 | 3.14
Table 6. RMSE of crown area for each plot.
Northern Plots | RMSE | Central Plots | RMSE | Southern Plots | RMSE
N1 | 0.04 | C1 | 0.75 | S1 | 0.47
N2 | 0.14 | C2 | 0.67 | S2 | 0.86
N3 | 0.36 | C3 | 0.81 | S3 | 0.74
N4 | 0.81 | C4 | 0.89 | S4 | 0.34
– | – | C5 | 0.71 | S5 | 0.58
– | – | C6 | 0.12 | S6 | 0.34
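Both tables report plot-level root-mean-square errors. Assuming matched per-tree estimated and reference values for a plot, the computation is simply (the example numbers below are hypothetical):

```python
import numpy as np

def rmse(estimated, reference):
    """Root-mean-square error between estimated and reference values."""
    d = np.asarray(estimated, dtype=float) - np.asarray(reference, dtype=float)
    return float(np.sqrt(np.mean(d**2)))

# e.g., UAV-derived vs. field-measured tree heights for one plot
print(rmse([7.1, 5.4, 6.8], [7.5, 5.0, 7.2]))
```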
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
