Article

Individual Tree Species Classification by Illuminated—Shaded Area Separation

Department of Remote Sensing and Photogrammetry, Finnish Geodetic Institute, Geodeetinrinne 2, 02431 Kirkkonummi, Finland
* Author to whom correspondence should be addressed.
Remote Sens. 2010, 2(1), 19-35; https://doi.org/10.3390/rs2010019
Submission received: 10 October 2009 / Revised: 11 December 2009 / Accepted: 16 December 2009 / Published: 28 December 2009
(This article belongs to the Special Issue Ecological Status and Change by Remote Sensing)

Abstract

A new method, called Illumination Dependent Colour Channels (IDCC), is presented to improve individual tree species classification. The method is based on dividing a tree crown into illuminated and shaded parts on a digital aerial image. Colour values of both sides of the tree crown are then used in species classification. The tree crown division is achieved by comparing the projected location of an aerial image pixel with its neighbours on a Canopy Height Model (CHM), which is calculated from a synchronized LIDAR point cloud. The positions of the sun and the mapping aircraft are also utilised in detecting the illumination status. The new method was tested on a dataset of 295 trees, and the classification results were compared with those obtained using two other feature extraction methods. The developed method gave a clear improvement in overall tree species classification accuracy.


1. Introduction

1.1. Use of Species Classification in Forest Management

Tree species classification has traditionally required an expert inspector working manually with aerial imagery. However, a demand for more precise and automatic forest attribute estimation has arisen. One of the keys to more precise estimates is more accurate knowledge of the species involved. The forest management field faces a growing number of demands from both the industrial and non-industrial sectors. The industrial sector needs better information on the quality and quantity of raw wood material. The non-industrial sector needs more focused knowledge for objectives such as preservation of biodiversity, sequestration of carbon, creation of recreational opportunities, and hunting considerations. To meet these requirements, more precise forest inventory information is needed. Thus, airborne laser scanning (ALS) is increasingly used for operative, stand-wise inventory in Scandinavia. The two main approaches used to derive forest information from ALS data have been based on laser canopy height distribution, e.g., [1,2], and individual tree detection, e.g., [3]. In both approaches, the two main development areas are (1) a practical solution for tree species classification and (2) improvements in the accuracy and quality of the reference sample plots.
Knowledge of tree species is needed in forest management planning. Biological studies on forest habitat mapping benefit from species-specific forest information since, for example, the trees preferred by some endangered species could be located using remote sensing [4]. Knowledge of tree species is also needed in the forest industry, as the species determines the usability of the wood material. Both tree growth and timber volume estimates are species dependent. Very fine-grained forest information is especially important in wood procurement planning and in forest protection surveys [5].
Species-specific reference measurements can be made with stand-wise precision in the field, but this requires a massive amount of work. The number of assessments per stand is also low, which lowers the estimation precision of the species-specific timber range [5].
Species classification from aerial images is usually achieved using one of two approaches: object-based or pixel-based. In object-based species classification, trees or groups of trees are first detected, delineated, and extracted from the data. Features of a single tree object are then computed for classification. In pixel-based classification, either image pixels or integrated data raster cells are classified. This approach is closely related to the methods used in land cover classification and is mainly used in determining the forest type or the main species in large forested areas [6,7].

1.2. LIDAR and Aerial Image Based Methods

One of the main reasons to use LIDAR intensity data is that there is no shadowing in the LIDAR measurement. Range-corrected LIDAR intensity data have recently been used for species classification [8,9]. In Donoghue et al. [8], range-corrected intensity measures were computed for different height quantiles. These quantiles were used to quantify the volume of spruce in even-aged, mixed spruce and pine stands. Ørka et al. [9] used structural features together with range-corrected, first-return pulse intensity data; the overall accuracy for classifying spruce and birch was 88%. Korpela et al. noted that LIDAR intensity data were not sufficient to separate the three main species of forest trees in Finland [10].
Different approaches to integrating aerial images and LIDAR data have been developed in several studies [10,11,12]. Persson et al. [11] integrated LIDAR data and aerial colour-infrared (CIR) imagery to classify tree species into three classes: spruce, pine, and deciduous. LIDAR data were used to segment trees, and the segments were mapped onto the corresponding aerial image. The classification was done using the 10% brightest pixels of each tree crown. Each chosen pixel was represented by two angle values, which were calculated from the green, red, and infrared components of the pixel. A sample tree was represented by the mean of the pixel angle values within the tree segment. Spectral band ratio filtering was suggested for reducing shadowing effects. An overall classification accuracy of 90% was reported for the training set. A spectral ratioing algorithm and the formation of a hybrid colour composite image have also been used to reduce shadow effects in other studies, e.g., in Bork et al. [13].
LIDAR data were used to delineate tree crowns in a study where five tree species were classified from aerial images taken with ADS40 and RC30 digital cameras [14]. The training set consisted of CIR images, and an overall classification accuracy of 86% was reached. LIDAR-collected digital surface model (DSM) data were also used to delineate trees in Heinzel et al. [12]. Histogram-linearized CIR true orthophotos were transformed into hue (H), saturation (S), and intensity (I) channels. The detected tree polygon was fitted to the spectral data, and shaded areas with very low intensity values were removed. The classification was done in two steps: first using the hue channel histogram and second using the NIR band. The overall classification accuracy for the tree classes of oak/hornbeam, beech, and conifer was 84%.
Korpela et al. integrated LIDAR data with aerial images and used them to classify seedling stand vegetation in a raster cell setup with an approximate resolution of 0.5 m [10]. Conifers, deciduous broad-leaved trees, other low vegetation, and abiotic surfaces were used as reference classes. The achieved classification accuracy varied between the study stands, from a minimum of 61.1% to a maximum of 77.8%. The respective minimum and maximum accuracies rose to 61.6% and 78.9% when the tree samples were limited to those in direct sunlight.
Temporal LIDAR intensity data have also been applied in tree species classification [15]. Both leaf-off and leaf-on datasets were measured from the same forest site, located in the Washington Park Arboretum, Seattle, Washington, USA. Eight deciduous and seven coniferous tree species were included in the study. The resulting classification accuracies between deciduous and coniferous trees were 73.1% for the leaf-on dataset, 83.4% for the leaf-off dataset, and 90.6% when both datasets were used.
Successful tree species classification results have also been reported using aerial images alone [16]. Healthy and damaged spruce, pine, fir, and beech trees were classified semi-automatically using CIR images of 0.5 m resolution. The achieved average classification accuracy was 80%.
According to the recent studies reviewed above, the most typical approach to object-based tree species classification is to use LIDAR data for tree crown delineation and for the selection of the corresponding image pixels. The species classification is typically done either with features based on image colour channels alone or by combining them with LIDAR-based structural features. Shadowing problems are handled using filtering and pixel selection.

1.3. Methodology Comparison Studies

New forest type and tree species classification methods are usually developed using individual datasets. This makes it difficult to compare their results and performance with each other. However, some comprehensive comparison projects have been carried out during the last few years.
A EuroSDR tree extraction project, in which different extraction methods were tested on freely available datasets, took place in 2008 [17]. Twelve groups participated in the project, but only two classified tree species. The tree species classification results were 78% correctly classified trees using airborne photographs (57% of the trees were classified) and 54% correctly classified trees using laser data (64% of the trees were classified). These results are of interest because of the great variation between their classification percentages and those published in articles presenting classification methods with over 80% accuracy. We assume that the good previous results have been obtained under controlled conditions. The EuroSDR test showed that previously published accuracies (e.g., in tree finding) did not match the results obtained in the joint test. Thus, methods that work in nonoptimal forest conditions are still needed, and more research should be focused on method comparison.

1.4. Tree Canopy Division into Illuminated and Shaded Parts

We assume that there could be features with measurable differences between tree species when the illuminated and shaded parts of the tree canopies are first separated and then compared with each other. This assumption is based on the fact that the foliage of different tree species has different light scattering properties due to its general shape and leaf properties, e.g., [18]. We anticipate that the transmittance of the tree canopy affects the image brightness on the shadowed side of the tree. Separating the illuminated and shaded parts of a tree canopy should also allow better utilisation of the available dataset, as heavy filtering is not needed for shadow removal. Different viewing geometries are also taken into account as long as the locations of both the camera and the sun are known. The separation of an individual tree canopy into illuminated and shaded parts has been done before in Leaf Area Index (LAI), Normalized Difference Vegetation Index (NDVI), and canopy radiative transfer studies [19,20,21]. The approach has also been utilised in single tree species classification [22,23]. Sharp contrast differences between the shaded and illuminated parts of trees were already used with good results in the 1990s for automatic individual tree crown delineation and treetop finding from aerial photographs [24,25,26].
In this study, we test whether classification-aiding features can be found by separating single-tree data into illuminated and shaded parts. The dataset and the methods used in data extraction are introduced in Section 2. The tested features and the classification methods are presented in Section 3. The classification results are given in Section 4, and the discussion of the results and possible further studies follows in Section 5.

2. Data Acquisition

2.1. Test Area

The test area was located in the city of Espoo, southern Finland (N60° 9.0′, E24° 39.4′). Vegetation in the test area was a mix of common city lawn, planted deciduous trees (for example, linden and alder), and mixed natural growing stock consisting of native trees such as birches, pines, and spruces. The test area had varying topography, with elevations between 0 and 30 metres above sea level.
Figure 1. Espoonlahti test area in the city of Espoo, southern Finland. Map: Wikipedia, created by user Care.

2.2. ALS Data

The ALS dataset used in this research was acquired on 12 July 2005. The sensor used was an Optech ALTM 3100 (Optech Incorporated, Vaughan, ON, Canada). The main flight lines were flown 1,000 metres above the test area. The average point density on these lines was 2–4 points/m². In overlapping areas of flight lines, the laser point density rose to 10–12 points/m².
All scanned laser points were used to form rasterized digital surface and terrain models (DSM and DTM) of the test area. Both were created with TerraScan (Terrasolid, Jyväskylä, Finland). The highest point within a raster cell determined its elevation in the DSM. In the DTM, the height of a raster cell was the average of the points classified as ground within it. The height value of a raster cell was interpolated if there were no laser points in the cell area. The raster cell size was set to 30 cm, and each cell was georeferenced in the national coordinate system (EUREF-FIN). A canopy height model (CHM) was calculated as the difference between the DSM and the DTM.
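To make the rasterization step concrete, the following sketch shows how DSM, DTM, and CHM rasters could be formed from a classified point cloud. It is a minimal illustration under our own assumptions (an N × 3 coordinate array plus a ground-class mask); the actual processing was done with TerraScan, and the interpolation of empty cells is omitted.

```python
import numpy as np

def rasterize_models(xyz, is_ground, cell=0.3):
    """Form simple DSM, DTM, and CHM rasters from a classified point cloud."""
    x, y, z = xyz[:, 0], xyz[:, 1], xyz[:, 2]
    cols = ((x - x.min()) / cell).astype(int)
    rows = ((y - y.min()) / cell).astype(int)
    shape = (rows.max() + 1, cols.max() + 1)

    dsm = np.full(shape, np.nan)          # highest point per cell
    dtm_sum = np.zeros(shape)             # sum of ground points per cell
    dtm_cnt = np.zeros(shape)             # count of ground points per cell

    for r, c, h, g in zip(rows, cols, z, is_ground):
        if np.isnan(dsm[r, c]) or h > dsm[r, c]:
            dsm[r, c] = h                 # DSM: keep the highest point
        if g:                             # DTM: average of ground points
            dtm_sum[r, c] += h
            dtm_cnt[r, c] += 1

    dtm = np.where(dtm_cnt > 0, dtm_sum / np.maximum(dtm_cnt, 1), np.nan)
    # Cells without laser points would be interpolated here; omitted for brevity.
    chm = dsm - dtm                       # canopy height model
    return dsm, dtm, chm
```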

2.3. Digital Aerial Images

Digital aerial images were taken on 1 September 2005. The sensor used was Intergraph's Digital Mapping Camera (DMC) (Intergraph Corporation, Huntsville, AL, USA). Only data from the four parallel multispectral colour cameras of the DMC were utilised to form composite images. Each multispectral camera had a resolution of 3,072 × 2,048 pixels, a pixel size of 12 µm, and a focal length of 25 mm. The multispectral cameras were sensitive in the following spectral bands: blue (400–580 nm), green (500–650 nm), red (590–675 nm), and NIR (675–850 nm). The footprint of one pixel on the ground was approximately 0.25 × 0.25 m².

2.4. Tree Sample Data

The dataset used for tree species classification consisted of 295 sample trees. The location and species of each sample tree were verified in the field. In some cases, a sample consisted of more than one tree of the same species; in such cases, the trees grew so close to each other that they had a common canopy. Tree samples were chosen from the three most common tree species in Finnish forests (birch, pine, and spruce). Both individual trees and trees growing in copses of various sizes were included in the dataset. The total number of tree samples was 151 birches, 99 pines, and 45 spruces. Samples were chosen to represent different ages and sizes of their species.

3. Methods

3.1. Tree Crown Delineation

Data for each tree sample were manually extracted and registered from ALS data and a digital aerial image. The sample extraction was done using an interactive interface built with MATLAB (Mathworks, Natick, MA, USA). Manual delineation was chosen as it was considered the most accurate method for a limited number of tree samples.
A dataset of a tree sample consisted of data cells. The position coordinates, elevation, canopy height, RGBNIR colour values, visibility to the camera, and shading status of each data cell were saved. The number of data cells for each tree sample varied from tens to several hundreds, depending on the size of the crown. Metadata describing surroundings and the used extraction parameters were also saved for each tree sample in addition to the data stored in data cells.
Height and position values for the data cells in each selected sample canopy were extracted from both the CHM and the DSM. A 3 × 3 median filter was applied to all CHM and DSM raster cells within the selected canopy. Height values were smoothed to avoid cases where the laser beam had penetrated the canopy, giving a raster cell a low height value compared to its neighbours. Unit vectors pointing towards the sun and the camera were also calculated for each data cell after the height and position extraction.
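The smoothing and geometry computations could look roughly as follows; this is an illustrative sketch, not the MATLAB implementation used in the study, and the coordinates in the usage comment are hypothetical example values.

```python
import numpy as np
from scipy.ndimage import median_filter

def smooth_heights(raster):
    """3 x 3 median filter to suppress single low cells caused by canopy penetration."""
    return median_filter(raster, size=3)

def unit_vector(cell_xyz, target_xyz):
    """Unit vector from a data cell towards a target (the sun or the camera)."""
    v = np.asarray(target_xyz, dtype=float) - np.asarray(cell_xyz, dtype=float)
    return v / np.linalg.norm(v)

# Example with made-up values: a cell at 15 m height, camera at 1,000 m altitude.
# c = unit_vector((385200.0, 6670100.0, 15.0), (385000.0, 6670000.0, 1000.0))
```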

3.2. Data Cell Visibility and Shading Determination

Different viewing angle geometries had to be considered after the height value extraction. Both the DSM and the CHM were created as if the viewer were always directly above each data cell. In an aerial image, each pixel is viewed from a different angle, depending on the locations of the sensor and of the pixel's footprint on the ground. This meant that a varying number of data cells on each side of the selected tree sample were not seen by the sensor. The situation is presented in Figure 2.
Figure 2. Visibility inspection of a cell. Tree crowns were delineated from the DSM raster, which views canopies from straight above. The sensor sees each tree crown from a varying angle depending on the location of the tree. Light grey lines in the figure depict the sensor's field of view for the tree. To match the DSM pixels with the visible ones, the height value of the closest visible pixel between the sensor and the original pixel was picked from the DSM raster. The meaning of x0 and xnew is explained in the text.
The following procedure was carried out to check whether an extracted data cell was visible to the sensor. A vector pointing towards the sensor was drawn from the data cell location. The height component of the vector was then compared with the height values of all data cells it crossed on the xy-plane. If the vector's height component was smaller than the height value of any of the crossed data cells at the same location, the original data cell was taken as occluded. The occluded data cell was discarded after the visibility determination and replaced with a new one: the outermost visible cell of the occluding ones was taken as the new data cell. This data cell shifting preserved the number of data points. The procedure can be written as follows:
$$\mathbf{x} = \mathbf{x}_0 + n\,\mathbf{c} \qquad (1)$$

$$\sum_{i,j}\Big[\,z(i,j) < z_{DSM}(i,j)\,\Big] \;=\; \sum_{i,j}\Big[\,z_0 + n\,c_z(i,j) < z_{DSM}(i,j)\,\Big] \;\begin{cases}\le L, & \text{cell is visible and } \mathbf{x} = \mathbf{x}_0\\[2pt] > L, & \text{cell is occluded and } \mathbf{x} = \mathbf{x}_{new}\end{cases} \qquad (2)$$
where x is the vector pointing towards the camera, x0 is the original data cell location in ground coordinates, xnew is the closest visible data cell in case the original was excluded, c is the unit vector towards the camera, and n is the preset number of iteration steps taken along c during the visibility determination; n thus works as an effective cutoff range. The length of a step was set to 30 cm for the test dataset. z(i,j), cz(i,j), z0, and zDSM(i,j) are the corresponding height components at the raster cell (i,j). L is a preset number of occluding pixels allowed to block the line of sight between the sun and the data cell. It serves as an estimate of the transmittance of direct sunlight through the canopy, and it is used because the rasterized elevation model cannot distinguish between full and partial occlusion. Completely opaque materials are described by L = 0, which means that even one blocking data cell in (2) causes a complete occlusion of the original data cell. The same type of visibility determination is also used in true orthophoto generation from aerial images, for example, in Bang et al. [27].
Individual data cell shading was inspected after the data cell visibility verification. The shading inspection was analogous to the visibility verification, but this time the vector drawn from the inspected data cell pointed towards the sun. If any of the other data cells along the vector's line was higher than the vector's height component at any point, the data cell was marked as shaded; otherwise, it was marked as illuminated. This situation is shown in Figure 3. The location of the sun was calculated using the flight time records and a MATLAB routine by V. Roy [28], which uses the algorithms presented in Reda et al. [29]. The shadowing inspection covered direct shading only; transmittance properties of different species and possible diffuse effects were not considered. In this study, trees were approximated as opaque objects in both the visibility and shadow detection, and thus the value L = 0 was used in both calculations.
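The following sketch illustrates the ray-marching test of Equations (1) and (2); the same routine serves both the visibility and the shading checks, only the target direction differs. The array indexing and the raster origin are simplified assumptions of this sketch, and L = 0 treats crowns as opaque, as in this study.

```python
import numpy as np

def is_occluded(dsm, x0, direction, cell=0.3, step=0.3, n_steps=200, L=0):
    """March from x0 along 'direction'; return True if more than L DSM cells block the ray.

    dsm       -- 2-D height raster; cell (0, 0) is assumed to sit at the ground origin
    x0        -- (x, y, z) of the data cell in raster-local ground coordinates
    direction -- vector towards the camera (visibility) or the sun (shading)
    """
    c = np.asarray(direction, dtype=float)
    c /= np.linalg.norm(c)                     # unit vector c of Equation (1)
    p = np.asarray(x0, dtype=float)
    blocked = 0
    for _ in range(n_steps):                   # x = x0 + n * c
        p = p + step * c
        i, j = int(p[1] / cell), int(p[0] / cell)
        if not (0 <= i < dsm.shape[0] and 0 <= j < dsm.shape[1]):
            break                              # left the raster: nothing blocks
        if p[2] < dsm[i, j]:                   # ray passes below the surface model
            blocked += 1
            if blocked > L:
                return True
    return False
```

A data cell would be marked shaded when the routine returns True for the sun direction; for the camera direction, a True result triggers the replacement of the cell with the outermost visible blocking cell, which is not shown here.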

3.3. Colour Value Extraction

Colour values were linked to each visible data cell by registering and extracting them from an original digital aerial image. All colour values for one tree sample were taken from a single aerial image. Registration and extraction were done by calculating the projected location of each data cell on the original image using the collinearity equations [30], given in (3).
$$x = x_0 - f\,\frac{r_{11}(X - X_0) + r_{12}(Y - Y_0) + r_{13}(Z - Z_0)}{r_{31}(X - X_0) + r_{32}(Y - Y_0) + r_{33}(Z - Z_0)}, \qquad y = y_0 - f\,\frac{r_{21}(X - X_0) + r_{22}(Y - Y_0) + r_{23}(Z - Z_0)}{r_{31}(X - X_0) + r_{32}(Y - Y_0) + r_{33}(Z - Z_0)} \qquad (3)$$
where (x,y) are the image coordinates of the pixel corresponding to the projected cell, (X,Y,Z) is the cell location on the DSM, and (X0,Y0,Z0) is the sensor location. f is the focal length of the sensor, and rij are the elements of the (ω, φ, κ) rotation matrix.
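A compact sketch of the projection in Equation (3) is given below. The rotation matrix R is assumed to be precomputed from the (ω, φ, κ) angles, and the principal point offset defaults to zero; in practice all numerical inputs would come from the image orientation data.

```python
import numpy as np

def project_to_image(P, P0, R, f, pp=(0.0, 0.0)):
    """Collinearity projection of ground point P = (X, Y, Z) to image coordinates (x, y).

    P0 -- sensor location (X0, Y0, Z0)
    R  -- 3 x 3 rotation matrix built from the (omega, phi, kappa) angles
    f  -- focal length; pp -- principal point (x0, y0)
    """
    d = np.asarray(P, dtype=float) - np.asarray(P0, dtype=float)
    u, v, w = R @ d                  # rotated camera-frame coordinates
    x = pp[0] - f * u / w            # numerator rows r1.d, r2.d over r3.d
    y = pp[1] - f * v / w
    return x, y
```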
Figure 3. Illumination status inspection of a data cell. Height values of the chosen data cell and the ones along a line towards the sun are compared with each other.

3.4. Feature Set Selection

3.4.1. Illumination Dependent Colour Channels (IDCC)

A new feature set, Illumination Dependent Colour Channels (IDCC), was formed using the tree crown delineation, data cell visibility, and shading inspection procedures described above. These procedures were used to divide a tree sample into an illuminated and a shaded part using the height information derived from the surface and canopy models. Information from this division was then used in tree species classification.
The IDCC feature set consisted of different combinations of colour channel values and their illumination status, which was either illuminated or shaded. An average of each colour channel was calculated separately for both the lit and the shaded sections of every tree sample. The intensity ratio between the shaded and the illuminated part of the canopy was also calculated for every colour channel. These separations provided a total of 12 features (four values for illuminated parts, four for shaded parts, and four intensity ratios) to be used in tree species classification. Different combinations of these features were then tested for the best possible classification result.
A tree sample was removed from the dataset if it was seen as completely illuminated (or completely shaded), because the intensity ratio between the illuminated and shaded parts could not then be calculated. In total, 11 tree samples were removed from the dataset.
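A minimal sketch of the 12-feature extraction could look as follows, assuming each tree sample provides an (N, 4) RGBNIR array of data cell colours and a boolean illumination mask (both names are ours, not from the original implementation):

```python
import numpy as np

def idcc_features(colours, lit):
    """Return the 12 IDCC features, or None for fully lit or fully shaded samples."""
    lit_cells, shaded_cells = colours[lit], colours[~lit]
    if len(lit_cells) == 0 or len(shaded_cells) == 0:
        return None                          # such samples were removed (11 in total)
    mean_lit = lit_cells.mean(axis=0)        # 4 channel means, illuminated part
    mean_shaded = shaded_cells.mean(axis=0)  # 4 channel means, shaded part
    ratio = mean_shaded / mean_lit           # 4 shaded-to-illuminated ratios
    return np.concatenate([mean_lit, mean_shaded, ratio])
```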

3.4.2. Reference Feature Set

Another feature set was used as a reference. Its extraction procedure has been presented in Persson et al. [11]. Features for this set were collected by filtering the brightest 10% of the pixels in tree sample data. Then a normalized (colour space) unit vector was formed for every tree sample by using the averaged intensities of the green, red and near-IR colour channels. Two descriptive angles, azimuth and elevation, were calculated between the colour vector components after the normalization. These two angles were then used as classification features.
We used this feature set as a reference because its extraction is straightforward. Unlike the referenced article [11], we did not use pan-sharpened images; instead, the colour channel analysis was done with the original multispectral aerial images to preserve the extracted colour values as well as possible. The suggested feature extraction procedure was otherwise followed.
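Under these assumptions, the reference feature extraction could be sketched as below. The brightness criterion (channel sum) and the exact angle conventions are our assumptions; the reference article defines two angles between the components of the normalized (green, red, NIR) vector.

```python
import numpy as np

def persson_features(grn):
    """Two colour-space angles from the brightest 10% of (green, red, NIR) pixels."""
    brightness = grn.sum(axis=1)
    keep = brightness >= np.percentile(brightness, 90)   # brightest 10% of pixels
    v = grn[keep].mean(axis=0)
    v = v / np.linalg.norm(v)                            # colour-space unit vector
    azimuth = np.arctan2(v[1], v[0])                     # angle in the green-red plane
    elevation = np.arcsin(np.clip(v[2], -1.0, 1.0))      # angle towards the NIR axis
    return azimuth, elevation
```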

3.4.3. Classification Methods

Supervised parametric classification was used in this work with a cross-validation setup. In cross-validation, n groups of m samples were formed. Then, n − 1 groups were used as the training set and all trees in the one remaining group were classified. The classification was repeated in a loop of n rounds, using each sample group as the test set in turn. The division into training and test sets was the same for each of the tested feature spaces and algorithms. Cross-validation was implemented as a leave-one-out setup, where the sample size m = 1. The leave-one-out setup was chosen because the total number of sample trees was low. Some tests were also run using different k-fold cross-validations; their results varied considerably due to the small training set sizes and the heterogeneous tree composition.
The classification algorithms used were quadratic, linear, and Mahalanobis distance based discriminant functions. The classification was done using the MATLAB Statistics Toolbox. Only the best discriminant function result is presented for each feature set, because the optimal decision surfaces depend on the features used.
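The cross-validation loop can be sketched as follows; scikit-learn discriminant classifiers stand in here for the MATLAB Statistics Toolbox functions actually used, so this is an approximation of the setup rather than a reproduction of it.

```python
import numpy as np
from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                           QuadraticDiscriminantAnalysis)
from sklearn.model_selection import LeaveOneOut

def loo_accuracy(X, y, clf):
    """Leave-one-out accuracy: train on n - 1 samples, classify the remaining one."""
    hits = 0
    for train, test in LeaveOneOut().split(X):
        clf.fit(X[train], y[train])
        hits += int(clf.predict(X[test])[0] == y[test][0])
    return hits / len(y)

# X: (n_trees, n_features) feature matrix, y: species labels (numpy arrays).
# acc_linear = loo_accuracy(X, y, LinearDiscriminantAnalysis())
# acc_quadratic = loo_accuracy(X, y, QuadraticDiscriminantAnalysis())
```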

4. Results

4.1. Classification Results

The error matrices for the tree species classification with the proposed and the reference feature sets are presented in Table 1. Only the best case is shown for each feature set. In all studied cases, the best results were achieved using all four colour channels (RGBNIR). The proposed feature set, IDCC, gave its best classification result when a combination of the illuminated and shaded colour channel values and their ratios was used. The overall classification accuracy with this feature set was 70.8%, and coniferous and deciduous trees were separated from each other with an overall accuracy of 78.0%. These results were achieved with the quadratic discriminant function. The two angular features of the reference (Persson et al. [11]) resulted in an overall tree species classification accuracy of 64.4%; the accuracy between conifers and deciduous trees was 75.6%. The best results were achieved using the linear discriminant function; both the quadratic and the Mahalanobis distance based discriminant functions gave notably lower classification results.
Table 1. Error matrices of the best classification results obtained with the used methods. The classification type that gave the best result is shown in parentheses after each method name. Columns give the reference data; rows give the classification results.

Aerial images, no filtering (linear):

| Classification results | Birch | Pine | Spruce |
|---|---|---|---|
| Birch | 104 | 13 | 8 |
| Pine | 34 | 73 | 14 |
| Spruce | 13 | 13 | 23 |
| Correctly classified | 104 | 73 | 23 |
| Total | 151 | 99 | 45 |
| Completeness | 68.9% | 73.7% | 51.1% |

Overall accuracy: 67.8%. Coniferous vs. deciduous: 76.9%.

Reference (Persson [11]) (linear):

| Classification results | Birch | Pine | Spruce |
|---|---|---|---|
| Birch | 123 | 36 | 8 |
| Pine | 17 | 45 | 15 |
| Spruce | 11 | 18 | 22 |
| Correctly classified | 123 | 45 | 22 |
| Total | 151 | 99 | 45 |
| Completeness | 81.5% | 45.5% | 48.9% |

Overall accuracy: 64.4%. Coniferous vs. deciduous: 75.6%.

IDCC (quadratic):

| Classification results | Birch | Pine | Spruce |
|---|---|---|---|
| Birch | 106 | 12 | 8 |
| Pine | 32 | 78 | 12 |
| Spruce | 13 | 9 | 25 |
| Correctly classified | 106 | 78 | 25 |
| Total | 151 | 99 | 45 |
| Completeness | 70.2% | 78.8% | 55.6% |

Overall accuracy: 70.8%. Coniferous vs. deciduous: 78.0%.

Five-step classifier (linear & quadratic):

| Classification results | Birch | Pine | Spruce |
|---|---|---|---|
| Birch | 114 | 10 | 4 |
| Pine | 25 | 77 | 12 |
| Spruce | 12 | 12 | 29 |
| Correctly classified | 114 | 77 | 29 |
| Total | 151 | 99 | 45 |
| Completeness | 75.5% | 77.8% | 64.4% |

Overall accuracy: 74.5%. Coniferous vs. deciduous: 82.7%.
We also tested the validity of the proposed feature set by classifying the extracted tree samples using only the colour values of the original aerial images, with no extra processing. The DSM and CHM data were used to delineate tree crowns, but the height information was not otherwise used in the classification. The overall classification accuracy was relatively high, 67.8%, and the accuracy of the deciduous versus coniferous classification was 76.9%. These results were obtained with the linear discriminant function.
The classification results showed three interesting issues. Firstly, the reference feature set gave the lowest overall recognition percentage of the three feature sets, most likely as a result of the strong filtering applied to the tree data. Secondly, the strengths of the features were divided: the raw colour channel data and the features derived from IDCC recognized pine trees with good accuracy, while the angular features were best suited for birch classification, by a wide margin over the other two feature sets. None of the feature sets gave a definite classification result for spruces, which were mostly confused with pines. Thirdly, the classification accuracies between the deciduous and coniferous tree species were relatively close to each other for all the feature sets.
The feature sets thus each classified different tree species with good accuracy. This result was utilised to develop a more accurate classification tree consisting of five steps, all based on the posterior probabilities given by the MATLAB algorithm used. The posterior probabilities gave an estimate of the classification's quality for each possible class. Before taking any steps in the decision tree, all trees were classified separately with both the IDCC and the Persson feature sets.
In the first step, all trees classified identically by both feature sets were accepted as correct classifications. Birches were then searched for among the remaining trees using the following condition on the posterior probabilities derived from the Persson feature set: pBirch,Per > 0.5. This condition was based on the good performance of the Persson feature set in birch classification. The third and fourth steps were performed in a similar fashion, but with the conditions pPine,IDCC > 0.5 and pSpruce,IDCC > 0.5. Trees not recognized after the first four steps were classified according to the largest posterior probability from the Persson feature set.
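The decision rule can be summarised in code; the structure below follows the five steps described above, with the posterior dictionaries and label names as assumed placeholders.

```python
def five_step_classify(label_idcc, label_per, p_idcc, p_per):
    """Combine IDCC and Persson classifications via their posterior probabilities."""
    # Step 1: accept trees classified identically by both feature sets.
    if label_idcc == label_per:
        return label_idcc
    # Step 2: confident birches from the Persson feature set.
    if p_per['birch'] > 0.5:
        return 'birch'
    # Steps 3 and 4: confident pines and spruces from the IDCC feature set.
    if p_idcc['pine'] > 0.5:
        return 'pine'
    if p_idcc['spruce'] > 0.5:
        return 'spruce'
    # Step 5: fall back to the largest Persson posterior.
    return max(p_per, key=p_per.get)
```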
The combined five-step classifier gave a clear improvement both in the overall classification and in the separation of deciduous and coniferous trees. Table 1 shows that the combined classification tree classified both birches and pines almost as well as the best individual feature sets, while at the same time giving the best classification for spruces.

4.2. Factors Affecting the Quality of Results

The aerial image data were taken during an afternoon hour in late summer. This means that the sun was already relatively close to the horizon, which is normal at the measurement latitude. The low position of the sun caused more shadowing, both between the trees and on their surroundings. The sun's position also changes most rapidly at this time of year; the solar zenith angle changed by 4.6° during the flight over the test area. The aerial images were not radiometrically corrected, so the sun's movement affected the colour channel values between different images.
The flight time was not optimal from the spectral point of view. The colour of deciduous leaves turns dark green as the leaves age during the summer, due to the increase in their chlorophyll concentration, e.g., Rautiainen et al. [31]. The darkening of a leaf may not be a notable issue in well illuminated conditions, but a shadowed dark leaf might be confused with illuminated needles in the colour channels used.
Data synchronization between the aerial images and the elevation models was not exact, as there was a month and a half between the flights. Tree shapes had changed as the canopies had grown. Canopy shapes could also have been affected by the local weather conditions, mainly the wind; different wind directions during the flights cause notable changes in canopy shape, e.g., Litkey et al. [32]. For optimal results, the aerial images and the laser scanning for the elevation models should be acquired at the same time.
The classification algorithms used may not have performed optimally. The sample tree dataset was relatively small, only 295 trees in total, and the trees were heterogeneous; thus, the feature variation within each tree species was large. The spatial resolution of each data cell was quite low, which means that the data in each cell represent a general average of the area they cover. The low resolution could also explain why the results of the reference method were similar to those obtained from the nonfiltered, original aerial image data. Heavy colour channel filtering seems better suited to aerial images with high resolution, like the pan-sharpened images used in the reference article [11].
Most of the extracted height values in the data cells were interpolated. The normal laser point coverage in the DSM region was 2–4 laser hits/m², which meant that approximately 70% of the rasterized height values were interpolated when the raster cell size was 30 × 30 cm². In more densely covered areas (10–12 laser hits/m²), most of the extracted data cells contained a measured value. However, these point densities seemed to be sufficient for the shading detection: the classification results showed a clear improvement over the cases where height data were not used in feature extraction. It would be worthwhile to study how the classification accuracy scales with the ALS point density in this type of measurement.
Some additional uncertainty was caused by the median filtering of the height data, which was done to remove possible single gaps within the tree crowns. The filtering lowers the extracted height values within the tree crowns. It is also known that ALS data tend to underestimate the actual heights of the scanned objects [17,33]. These height uncertainties affected the projections onto the original aerial images. Thus, some of the pixels on the edge between the illuminated and shaded sides might have been falsely flagged, adding noise to the classification process.

4.3. Further Development

In this study, the ALS derived surface models were utilised only to determine the shadowing of each tree sample. It could be possible, however, to derive other canopy-describing features from them, such as the general inclination of the canopy structure.
Tree crowns were delineated manually in this study. There should, however, be no obstacle to implementing an automated delineation method, such as those in [34,35,36], for single tree extraction. Automated delineation allows large datasets to be processed efficiently.
Errors in the calculation of the DSM (and DTM/CHM) and in registration affect the determination of the shadowed area. The extent of the shadowed area depends on the original point density, the process that removes the penetrated hits, and the filtering method applied to the DSM. In practice, the laser always sees the trees as smaller in size and height than they are in reality, due to the penetration of the laser hits into the crown. The back projection of the tree position and crown segment onto the images therefore has several error sources. These and other registration error effects should be studied further in the future. The approach used in the five-step classification process could be further developed and optimized by adding or removing steps or by changing their order. However, a general classification tree might not be achievable; instead, the process may need to be calibrated for each new dataset.

5. Conclusions

A new feature set extraction method for tree species classification, named Illumination Dependent Colour Channels (IDCC), was presented in this research. The method is based on dividing a single tree crown into illuminated and shaded parts and then using the averaged colour values (RGBNIR) of both parts, and their ratios, as classification features. The division of a tree crown is done using height information collected from synchronized ALS data. The performance of the IDCC method was compared with another feature extraction method [11], which used filtering of the colour image data but no ALS data, and with the original, unprocessed aerial image colour data. The comparison was performed with a leave-one-out classification on a test dataset comprising 295 common Finnish trees.
The results showed that the IDCC gave a small but notable improvement of a few percentage points in overall classification accuracy over the two references. It was also noticed that the species-wise classification results varied between the methods: the IDCC performed well in coniferous tree classification, while Persson's extraction method gave the best results in birch classification. This observation led to the development of a combined five-step classification tree, which utilised the strengths of both the IDCC and Persson's reference method. The developed classification tree enhanced the overall classification accuracy of all tree species by a further three percentage points.
The results from the developed feature set extraction method, IDCC, and the five-step classification tree are notably better than the reference ones. Feature variations were large for all tree species within the used dataset: the tree samples of each species were of different ages and sizes, and they were located in different growing places with varying surroundings. Even though earlier studies have reported high classification accuracies for boreal forest tree species, the accuracy obtained in practice has been from 50% to 70% at the individual tree level. The proposed method should work especially well at high latitudes, where aerial photography must often be done with the sun close to the horizon.
Generalisation of the proposed feature set extraction method to other types of cameras and to hyperspectral sensors should be straightforward. The use of hyperspectral data, especially in the NIR region, is expected to yield more species-dependent spectral features to improve the classification procedure. The effects caused by shadow movements during different times of the day are accounted for, as the sun location is known at all times. Radiometric changes due to the sun's movement, as well as atmospheric changes, need to be handled separately, and improvements in the radiometry of the images are expected to improve the classification results.

Acknowledgements

The authors would like to thank the Academy of Finland and Tekes for the financial support of the projects "Towards improved characterization of map objects", "Improving forest supply chain by means of advanced laser measurements", and "Development of Automatic, Detailed 3D Model Algorithms for Forests and Built Environment". The authors would also like to thank Leena Matikainen for her help and comments on the surface model generation and the writing process.

References

  1. Næsset, E. Determination of mean tree height of forest stands using airborne laser scanner data. ISPRS J. Photogramm. Remote Sens. 1997, 52, 49–56. [Google Scholar] [CrossRef]
  2. Næsset, E. Predicting forest stand characteristics with airborne scanning laser using a practical two-stage procedure and field data. Remote Sens. Environ. 2002, 80, 88–99. [Google Scholar] [CrossRef]
  3. Hyyppä, J.; Inkinen, M. Detecting and estimating attributes for single trees using laser scanner. Photogramm. J. Finl. 1999, 16, 27–42. [Google Scholar]
  4. Goetz, S.; Steinberg, D.; Dubayah, R.; Blair, B. Laser remote sensing of canopy habitat heterogeneity as a predictor of bird species richness in an eastern temperate forest, USA. Remote Sens. Environ. 2007, 108, 254–263. [Google Scholar] [CrossRef]
  5. Maltamo, M.; Packalén, P.; Peuhkurinen, J.; Suvanto, A.; Pesonen, A.; Hyyppä, J. Experiences and possibilities of ALS based forest inventory in Finland. Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci. 2007, 36, 270–279. [Google Scholar]
  6. Franklin, J.; Phinn, S.R.; Woodcock, C.E.; Rogan, J. Rationale and Conceptual Framework for Classification Approaches to Assess Forest Resources and Properties; Kluwer Academic Publishers: Boston, MA, USA, 2003. [Google Scholar]
  7. Falkowski, M.J.; Evans, J.S.; Martinuzzi, S.; Gessler, P.E.; Hudak, A.T. Characterizing forest succession with LIDAR data: an evaluation for the Inland Northwest, USA. Remote Sens. Environ. 2009, 113, 946–956. [Google Scholar] [CrossRef]
  8. Donoghue, D.N.M.; Watt, P.J.; Cox, N.J.; Wilson, J. Remote sensing of species mixtures in conifer plantations using LiDAR height and intensity data. Remote Sens. Environ. 2007, 110, 509–522. [Google Scholar] [CrossRef]
  9. Ørka, H.O.; Næsset, E.; Bollandsås, O.M. Classifying species of individual trees by intensity and structure features derived from airborne laser scanner data. Remote Sens. Environ. 2009, 113, 1163–1174. [Google Scholar] [CrossRef]
  10. Korpela, I.; Tuomola, T.; Tokola, T.; Dahlin, B. Appraisal of seedling stand vegetation with airborne imagery and discrete-return LiDAR – an exploratory analysis. Silva Fenn. 2008, 42, 753–772. [Google Scholar] [CrossRef]
  11. Persson, Å.; Holmgren, J.; Söderman, U.; Olsson, H. Tree species classification of individual trees in Sweden by combining high resolution laser data with high resolution near-infrared digital images. Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci. 2004, 36, 204–207. [Google Scholar]
  12. Heinzel, J.N.; Weinacker, H.; Koch, B. Full automatic detection of tree species based on delineated single tree crowns – a data fusion approach for airborne laser scanning data and aerial photographs. In Proceedings of Silvilaser 2008: 8th International Conference on LiDAR Applications in Forest Assortment and Inventory, Edinburgh, UK, September 17–19, 2008.
  13. Bork, E.W.; Su, J.G. Integrating LIDAR data and multispectral imagery for enhanced classification of rangeland vegetation: a meta-analysis. Remote Sens. Environ. 2007, 111, 11–24. [Google Scholar] [CrossRef]
  14. Waser, L.T.; Ginzler, C.; Kuechler, M.; Baltsavias, E. Potential and limits of extraction of forest attributes by fusion of medium point density LiDAR data with ADS40 and RC30 images. In Proceedings of Silvilaser 2008: 8th International Conference on LiDAR Applications in Forest Assortment and Inventory, Edinburgh, UK, September 17–19, 2008.
  15. Kim, S.; McGaughey, R.J.; Andersen, H.E.; Schreuder, G. Tree species differentiation using intensity data derived from leaf-on and leaf-off airborne laser scanner data. Remote Sens. Environ. 2009, 113, 1575–1586. [Google Scholar] [CrossRef]
  16. Meyer, P.; Staenz, K.; Itten, K.I. Semi-automated procedures for tree species identification in high spatial resolution data from digitized colour infrared aerial photography. ISPRS J. Photogramm. Remote Sens. 1996, 51, 5–16. [Google Scholar] [CrossRef]
  17. Kaartinen, H.; Hyyppä, J. Tree Extraction. In EuroSDR Official Publication; Report of EuroSDR project, 53; EuroSDR: Frankfurt, A.M., Germany, 2008; p. 60. [Google Scholar]
  18. Kaasalainen, S.; Rautiainen, M. Backscattering measurements from individual Scots pine needles. Appl. Opt. 2007, 46, 4916–4922. [Google Scholar] [CrossRef] [PubMed]
  19. Chen, J.M.; Leblanc, S.G. Multiple-scattering scheme useful for geometric optical modeling. IEEE Trans. Geosci. Remote Sens. 2001, 39, 1061–1071. [Google Scholar] [CrossRef]
  20. Hall, F.G.; Hilker, T.; Coops, N.; Lyapustin, A.; Huemmrich, K.F.; Middleton, E.; Margolis, H.; Drolet, G.; Black, T.A. Multi-angle remote sensing of forest light use efficiency by observing PRI variation with canopy shadow fraction. Remote Sens. Environ. 2008, 112, 3201–3211. [Google Scholar] [CrossRef]
  21. Hilker, T.; Coops, N.; Hall, F.G.; Black, T.A.; Wulder, M.A.; Nesic, Z.; Krishnan, P. Separating physiologically and directionally induced changes in PRI using BRDF models. Remote Sens. Environ. 2008, 112, 2777–2788. [Google Scholar] [CrossRef]
  22. Korpela, I. Individual Tree Measurements by Means of Digital Aerial Photogrammetry. Ph.D. Thesis, University of Helsinki, Helsinki, Finland, 2004. [Google Scholar]
  23. Larsen, M. Single tree species classification with a hypothetical multi-spectral satellite. Remote Sens. Environ. 2007, 110, 523–532. [Google Scholar] [CrossRef]
  24. Dralle, K.; Rudemo, M. Stem number estimation by kernel smoothing of aerial photos. Can. J. For. Res. 1996, 26, 1228–1236. [Google Scholar] [CrossRef]
  25. Gougeon, F.A. A crown-following approach to the automatic delineation of individual tree crowns in high spatial resolution aerial images. Can. J. Remote Sens. 1995, 21, 274–284. [Google Scholar] [CrossRef]
  26. Larsen, M.; Rudemo, M. Optimizing templates for finding trees in aerial photographs. Pattern Recognit. Lett. 1998, 19, 1153–1162. [Google Scholar] [CrossRef]
  27. Bang, K.; Habib, A.; Shin, S.; Kim, K. Comparative analysis of alternative methodologies for true ortho-photo generation from high resolution satellite imagery. In Proceedings of the Am. Soc. Photogramm. Remote Sens. (ASPRS) Annual Conference: Identifying Geo-Spatial Solutions, Tampa, FL, USA, May 7–11, 2007.
  28. Roy, V. Sun position calculator for MATLAB. 2004. Available online: http://www.mathworks.com/matlabcentral/fileexchange/4605 (accessed on August 5, 2008).
  29. Reda, I.; Andreas, A. Solar Position Algorithm for Solar Radiation Application; Technical Report NREL/TP-560-34302; NREL: Golden, CO, USA, 2003. Available online: http://www.nrel.gov/docs/fy08osti/34302.pdf (accessed on August 5, 2008).
  30. Kraus, K. Photogrammetry: Fundamentals and Standard Processes, 4th ed.; Kraus, K., Ed.; Dümmler: Bonn, Germany, 1993. [Google Scholar]
  31. Rautiainen, M.; Nilson, T.; Lükk, T. Seasonal reflectance trends of hemiboreal birch forests. Remote Sens. Environ. 2009, 113, 805–815. [Google Scholar] [CrossRef]
  32. Litkey, P.; Rönnholm, P.; Lumme, J.; Liang, X. Waveform features for tree identification. Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci. 2007, 36, 258–263. [Google Scholar]
  33. Hyyppä, J.; Hyyppä, H.; Litkey, P.; Yu, X.; Haggrén, H.; Rönnholm, P.; Pyysalo, U.; Pitkänen, J.; Maltamo, M. Algorithms and methods of airborne laser scanner for forest measurements. Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci. 2004, 36, 82–89. [Google Scholar]
  34. Koch, B.; Heyder, U.; Weinacker, H. Detection of individual tree crowns in airborne LIDAR data. Photogramm. Eng. Remote Sens. 2006, 72, 357–363. [Google Scholar] [CrossRef]
  35. Pouliot, D.A.; King, D.J.; Pitt, D.G. Development and evaluation of an automated tree detection-delineation algorithm for monitoring regenerating coniferous forests. Can. J. For. Res. 2005, 35, 2332–2345. [Google Scholar] [CrossRef]
  36. Erikson, M.; Olofsson, K. Comparison of three individual tree crown detection methods. Mach. Vis. Appl. 2005, 16, 258–265. [Google Scholar] [CrossRef]
