Article

Mapping Tree Species Composition Using OHS-1 Hyperspectral Data and Deep Learning Algorithms in Changbai Mountains, Northeast China

1 Key Laboratory of Wetland Ecology and Environment, Northeast Institute of Geography and Agroecology, Chinese Academy of Sciences, Changchun 130102, China
2 School of Remote Sensing and Information Engineering, Wuhan University, Wuhan 430079, China
3 School of Nuclear Resource Engineering, University of South China, Hengyang 421001, China
* Author to whom correspondence should be addressed.
Forests 2019, 10(9), 818; https://doi.org/10.3390/f10090818
Submission received: 9 August 2019 / Revised: 9 September 2019 / Accepted: 16 September 2019 / Published: 19 September 2019
(This article belongs to the Section Forest Inventory, Modeling and Remote Sensing)

Abstract
The accurate characterization of tree species distribution in forest areas can significantly reduce uncertainty in the estimation of ecosystem parameters and forest resources. Deep learning algorithms have become a research hotspot in recent years, but they have so far not been applied to tree species classification. In this study, a one-dimensional convolutional neural network (Conv1D), a popular deep learning algorithm, was proposed to automatically identify tree species using OHS-1 hyperspectral images. In addition, a random forest (RF) classifier was applied for comparison with the deep learning algorithm. Based on our experiments, we drew three main conclusions. First, the OHS-1 hyperspectral images used in this study have a high spatial resolution (10 m), which reduces the influence of the mixed-pixel effect and greatly improves classification accuracy. Second, given the limited amount of sample data, the Conv1D-based classifier does not need many layers to achieve high classification accuracy; moreover, the size of the convolution kernel strongly influences classification accuracy. Finally, the accuracy of Conv1D (85.04%) is higher than that of the RF model (80.61%). Especially for broadleaf species with similar spectral characteristics, such as Manchurian walnut and aspen, the accuracy of the Conv1D-based classifier is significantly higher than that of the RF classifier (87.15% and 71.77%, respectively). Thus, the Conv1D-based deep learning framework combined with hyperspectral imagery can efficiently improve the accuracy of tree species classification and has great application prospects in the future.

1. Introduction

Tree species identification is important for the effective management of forests as a natural resource and influences the accuracy of timber volume estimation [1]. The number and type of tree species in a forest stand are also related to ecosystem parameters such as biodiversity and habitat quality and are, therefore, important indicators for describing the ecological value of a forest [2]. Effective environmental management of ecosystems requires accurate and spatially detailed assessments of tree species numbers and distributions [3].
Remote sensing is complementary to traditional field surveys for obtaining species information, particularly within large and inaccessible areas [4]. Over the last four decades, advances in remote sensing technology have enabled the classification of tree species using several image types [5]. Fassnacht et al. [1] concluded that most previous studies were concentrated in temperate regions and applied images of different resolutions from different sensors to continuously improve the classification accuracy of tree species. The type of imagery is a major factor in tree species classification, as the spatial and spectral resolution can influence the accuracy of a classification [6]. High spatial resolution satellite images, such as those acquired by the Système Probatoire d'Observation de la Terre (SPOT), QuickBird [7], IKONOS, and WorldView-3 [8] satellites, can capture detailed color and texture features. However, given the limited spectral information of high spatial resolution images, it is difficult to distinguish tree species with the same texture features, such as white birch and aspen [4,6]. Although medium spatial resolution images, such as Landsat 8 and Sentinel-2 imagery [9], provide more band information, their lower spatial resolution limits tree classification accuracy. Light detection and ranging (LiDAR) data can provide a range of features related mainly to the structure of trees [5]. However, LiDAR sensors are currently mounted mainly on airborne platforms, which makes it difficult to cover large regions [1], and although satellite LiDAR sensors exist, they are limited by the spacing of the pulse footprints, so fully detailed coverage cannot be obtained [10]. Hyperspectral images contain multiple continuous narrow bands, providing significant levels of detail that allow the distinction of fine spectral variations among tree species [2,11]. Where multispectral classification fails to capture the slight spectral differences between tree species, data-rich hyperspectral imagery can improve classifications by providing sufficient information to discriminate between spectrally similar targets [12]. This has resulted in the extensive use of hyperspectral imagery for tree species classification [6,13,14]. Compared with previous hyperspectral data, such as Hyperion and HJ-1 images, the Orbita Hyperspectral Satellite (OHS-1) images used in this study have a higher spatial resolution (10 m) and thus greater potential for classifying tree species.
In terms of classification methods, the commonly used approaches include traditional machine learning methods such as support vector machines (SVM), random forests (RF), and fuzzy mathematics, as well as object-oriented methods [8,9,15]. Although many of these methods have achieved highly accurate results, there is still a lack of highly automated classification algorithms that can learn the features relevant to a given classification task without pre-defined feature crafting, which is difficult to design under varying environmental conditions such as temperature, precipitation, and terrain. Thus, advanced data-driven approaches that learn forest species classification automatically through high-level feature representations are highly desirable [16]. Recently, deep learning has been widely used in land use classification [17,18,19], cloud detection [20], building extraction [21,22], and crop classification [23,24]. Deep learning models, or deep artificial neural networks (ANNs) with more than two hidden layers, provide sufficient model complexity to learn feature representations from data in an end-to-end regime instead of through manual feature engineering based on human experience and prior knowledge. Benefiting from deep convolutional features, methods based on deep learning have achieved high accuracies in image classification tasks [18,19,21], and this accuracy continues to improve with the development of new techniques. However, to date, tree species classification using deep learning algorithms has not been reported.
In this study, we aim to explore the potential of deep learning methods for tree species classification combined with OHS-1 hyperspectral imagery. The specific objectives of this research were to: (1) develop an effective algorithm for tree species classification using hyperspectral data and deep learning; and (2) compare the performance of the deep learning model with that of the random forest model for tree species classification. The method and technical route of this paper can also provide an important reference for future applications of deep learning in tree species classification research.

2. Materials and Methods

2.1. Study Area

This study was conducted in a sample area with an average elevation of 690.4 m (127°49′–127°53′ E, 42°58′–43°01′ N), spanning 1561 ha. It is located in the southeast of Jilin Province, Northeast China, near the Changbai Mountains (Figure 1). The climate of the region is temperate [25], with total annual precipitation between 700 mm and 1400 mm and an average annual temperature between 3 °C and 7 °C [26,27]. In the study area, the dominant tree species include Amur linden (Tilia amurensis), Chinese pine (Pinus tabuliformis), Dahurian larch (Larix gmelinii), aspen (Populus tremula), and Manchurian walnut (Juglans mandshurica). Besides these common species, less abundant species such as white birch (Betula platyphylla) and Manchurian ash (Fraxinus mandshurica) are also present.

2.2. Data and Pre-Processing

2.2.1. OHS-1 Imagery

OHS-1 is the first satellite constellation built and operated by a privately listed company in China. The full constellation consists of 34 satellites, including video, hyperspectral, high-resolution optical, and infrared satellites; the hyperspectral imagery of the constellation was used in this study. The Orbita hyperspectral satellites (OHS) were successfully launched on 26 April 2018 into a sun-synchronous orbit at an altitude of about 500 km. Each hyperspectral image contains 5056 × 5056 pixels, and the imaging spectrum ranges from 400 to 1000 nm, divided into 32 spectral bands by a filter. The images have a spatial resolution of 10 m and a spectral resolution of 2.5 nm. The OHS-1 data have the potential to become an important data source for tree species mapping, as they are publicly available at the official website of the Orbita Hyperspectral Satellite (www.obtdata.com; Zhuhai Orbita Aerospace Science & Technology Inc., Zhuhai, China).
The OHS-1 image was recorded under cloudless conditions over the site in the middle of the growing season, on 19 September 2018 (orbit altitude: 520 km, solar elevation angle: 50.53°, lateral angle: −2.194°). The study site was subset to the forest farm center and its surrounding region, with a spatial extent of 387 × 404 pixels. Pre-processing of the hyperspectral image was performed with the image processing system ENVI (Environment for Visualizing Images; Exelis Visual Information Solutions Inc., Boulder, CO, USA). First, orthorectification was carried out using a digital elevation model (DEM) [28]. A relative radiometric normalization based on the mean-standard deviation normalization algorithm was then applied [4]. Finally, the FLAASH model was used for atmospheric correction, with its parameters selected according to the acquisition time and location of the imagery and ancillary data provided by Zhuhai Orbita Aerospace Science & Technology Inc., Zhuhai, China. The DN values of the OHS-1 image were converted to top-of-atmosphere (TOA) radiance with the given spectral response functions in the ENVI software [27].

2.2.2. Forest Survey Data

The field surveys were conducted from June to July 2017. The sampling plots were randomly distributed, and a total of 117 plots of 50 × 50 m were sampled. The location of each plot was measured by Global Positioning System (GPS) with real-time kinematic (RTK) correction, accurate to within 1 m. The recorded parameters were species names, tree heights, crown cover areas, basal area (for trees with diameters over 10 cm), and Differential GPS (DGPS) coordinates in the Universal Transverse Mercator (UTM) system. Based on the basal area factor (BAF) [29], any tree species with a basal area frequency above 50% in a plot was selected as the dominant species of that plot [30]. According to the field surveys, seven dominant tree species classes were identified: Amur linden, Chinese pine, Dahurian larch, aspen, white birch, Manchurian walnut, and Manchurian ash.

2.2.3. Dataset Partition

To maximize the number of pixels collected from the sampling plots, if a tree species had a basal area frequency above 80% in a plot, we extended the sample data by creating a 100 m buffer around the plot boundary [30]. From these land parcels, we chose pixels of the different tree species from the OHS-1 hyperspectral imagery as the dataset. The samples for the whole study area were then split into two datasets, a training set and a validation set [23], with all samples randomly assigned to one of the two sets at a ratio of approximately 70%:30% (Table 1). In addition, the training set was further subset for fitting the individual classification algorithms and for model optimization (80% and 20%, respectively), as sketched below. The final classification results were evaluated on the validation set. The dataset partition follows two principles: (1) the sets are independent of each other, and (2) the class distributions in all sets are similar [23].
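A minimal sketch of this two-stage split with scikit-learn is given below; X and y stand for the per-pixel feature vectors and species labels, the stratify option enforces the second principle (similar class distributions), and the variable names and fixed random seed are ours.

```python
from sklearn.model_selection import train_test_split

# 70%/30% split into training and validation sets, stratified by class.
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.30, stratify=y, random_state=0)

# The training set is further split 80%/20% for model fitting and optimization.
X_fit, X_tune, y_fit, y_tune = train_test_split(
    X_train, y_train, test_size=0.20, stratify=y_train, random_state=0)
```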

2.3. Classification

Convolutional neural networks (CNNs), a well-established and popular deep learning method, have brought considerable improvements to image analysis [31,32]. In particular, 2D CNNs have been widely used to extract spatial features across the width and height dimensions for object detection and semantic segmentation of high-resolution images [24,33,34]. Another major application of CNNs is hyperspectral image classification, in which CNNs extract spatial-spectral features through 1D convolution across the spectral dimension [35], 2D convolution across the spatial dimensions [33], or 3D convolution across the spectral and spatial dimensions simultaneously [36]. Guidici et al. [35] concatenated hyperspectral images from three seasons and applied 1D convolution in the spectral domain for land cover classification, and a one-dimensional convolutional neural network (Conv1D) has also been used to effectively identify 13 crops from time series images [23]. In this study, we adopted the Conv1D for the classification of tree species, as it performs well on continuous sequence data such as speech, multi-temporal data, and text [37,38]. We selected the random forest model as a representative non-deep-learning classifier to compare with the Conv1D algorithm, since the random forest classifier is renowned for high performance and is often used as the baseline model in classification tasks [39]. The main steps of the data processing workflow adopted in this study are presented in Figure 2.

2.3.1. One-Dimensional Convolutional Neural Network Classifiers

Deep learning in neural networks is the approach of composing networks of multiple layers of processing with the aim of learning multiple levels of abstraction [40]. In doing so, the network can adaptively learn low-level features from raw data and higher-level features from low-level ones in a hierarchical manner, reducing the over-dependence of shallow networks on feature engineering [23]. The building blocks of the Conv1D used in this study are illustrated in Figure 3: an input layer, two convolution layers with the rectified linear unit (ReLU) as the nonlinear activation function, a max-pooling layer, a flatten layer, and two fully connected layers. The convolutional and pooling layers act as hierarchical feature extractors [41], while the fully connected layers act as a classifier that produces the predictive probabilities of all the object categories in the input data [20,41].
Three groups of features from the hyperspectral image were selected as inputs to the one-dimensional convolutional neural network (Conv1D): the spectral bands and two crown texture features (entropy and mean). Crown texture information is mainly related to crown-internal shadows, foliage properties (size, density, and reflectivity), and branching [1], and it has been exploited to improve tree species classification [28]. Combining spectral and texture features often improves the accuracy of tree species classifications [1]; according to Franklin et al. [42], texture layers can improve classification accuracy by 10%–15%.

These feature data, comprising 10,269 pixel samples, are convolved in the first layer of the model with 96 filters and a kernel size of 5 to produce feature maps of size 96 × 28 (Figure 3). The convolution layer is then paired with a pooling layer, which can be thought of as a spectral down-sampling of the convolutional feature map. A max-pooling operation with a window size of 2 was used: this layer accepts the convolutional feature map, evaluates pairs of data elements across the spectral dimension, and passes the maximum value on to the next layer, reducing the size of the feature map while preserving the features observed within it. The second convolution layer of the model has 128 filters and generates new feature maps of size 28 × 128 from the output of the previous layer.

The last stage of a convolutional neural network (CNN) is the classifier: a dense (fully connected) layer that acts as an artificial neural network (ANN) classifier. To convert the output of the convolutional part of the CNN into a feature vector usable by the ANN part, a flatten layer connects the convolutional layers to the fully connected layers; it takes the output of the convolutional layers and flattens the whole structure into a single long feature vector for the dense layer to use in the final classification [40]. The network was further refined by adding dropout and regularization to the fully connected layer. Dropout is a technique for improving neural networks by reducing overfitting. Hinton et al. [43] reported that dropping out 50% of the hidden units and 20% of the input units significantly improved classification for a variety of network architectures, but there is still no universal formula for setting the dropout rate, which depends on both the number of input variables and the number of hidden units [44]. In this study, because the number of input units is not large, we did not apply dropout to the input layer; in the hidden layer, testing showed that a dropout rate of 40% clearly improved classification accuracy, so 40% was selected. The output of the hidden layer is connected to a final Softmax layer that produces a probabilistic output per class, i.e., a vector whose length equals the number of classes, with each value representing the probability that the input belongs to a specific class. The last layer therefore contains 7 neurons, corresponding to the probabilities of the 7 classes.
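To make the layer sequence concrete, the following minimal Keras sketch mirrors the architecture described above (two Conv1D layers with ReLU, max pooling with a window of 2, a flatten layer, and two fully connected layers with 40% dropout). It is not the authors' released code: the input shape of 32 bands × 3 feature channels and the width of the hidden dense layer are our assumptions, and the exact feature-map sizes depend on padding choices not stated in the text.

```python
from tensorflow.keras import layers, models

def build_conv1d(n_bands=32, n_channels=3, n_classes=7, kernel_size=5):
    """Two-layer Conv1D tree species classifier (sketch of the described model)."""
    return models.Sequential([
        # First convolution: 96 filters applied along the spectral dimension.
        layers.Conv1D(96, kernel_size, activation="relu",
                      input_shape=(n_bands, n_channels)),
        # Max pooling with a window of 2: spectral down-sampling of the feature map.
        layers.MaxPooling1D(pool_size=2),
        # Second convolution: 128 filters.
        layers.Conv1D(128, kernel_size, activation="relu"),
        # Flatten the feature maps into a single long feature vector.
        layers.Flatten(),
        # Fully connected hidden layer; its width (128) is an assumption.
        layers.Dense(128, activation="relu"),
        # 40% dropout in the hidden layer, as reported above.
        layers.Dropout(0.40),
        # Softmax output: one probability per tree species class.
        layers.Dense(n_classes, activation="softmax"),
    ])
```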
The number of potential network architectures is extremely large, and it is impossible to try them all. We therefore started with a relatively simple model with only one convolutional layer and generated new models by adding a layer, re-ordering layers, or replacing part of the network with a more complex component. In this way, the tested model grew in size and complexity until the classification results no longer improved.
The model was trained with the stochastic gradient descent (SGD) optimizer [45], with parameters fixed as decay = 1 × 10−6, momentum = 0.9, and a learning rate of 0.01; the number of epochs was set to 20. As the training set is unbalanced, we used a weighted cross-entropy loss function with weights inversely proportional to class abundance. The classification models were built and evaluated using the Keras library on top of TensorFlow.
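A hedged sketch of this training setup follows; X_fit/y_fit and X_tune/y_tune are the 80%/20% training and optimization subsets from Section 2.2.3 (the names are ours, with labels one-hot encoded), and the decay argument follows the Keras SGD interface of the time.

```python
from tensorflow.keras.optimizers import SGD

model = build_conv1d()
model.compile(optimizer=SGD(learning_rate=0.01, momentum=0.9, decay=1e-6),
              loss="categorical_crossentropy", metrics=["accuracy"])

# Class weights inversely proportional to class abundance, to counter the
# unbalanced training set (y_fit is a one-hot label matrix).
counts = y_fit.sum(axis=0)
class_weight = {i: counts.sum() / (len(counts) * c) for i, c in enumerate(counts)}

model.fit(X_fit, y_fit, epochs=20,
          validation_data=(X_tune, y_tune), class_weight=class_weight)
```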

2.3.2. Random Forest Classifier

The main idea of random forest is to use bootstrap re-sampling to draw multiple sub-samples from the original samples and to build a decision tree on each [46]. After each tree classifies a sample separately, the final classification is obtained by voting. Random forests do not need to estimate the distribution of the data, which is very useful for input variables of different types or scales, and their results are interpretable [39]. These properties make random forest well suited to forest-type recognition from remote-sensing imagery [47].
In this study, we adopted object-oriented random forest classification [47,48]. Object-oriented classification involves two main processes: image segmentation and feature extraction. First, the image is segmented at multiple scales and the optimal segmentation scale is selected; the algorithm is bottom-up, gradually merging upward from single pixels until a threshold is reached [49]. Then, because an object is composed of multiple pixels, features beyond the spectral ones can be added to the classification to achieve better accuracy. In this study, the minimum noise fraction (MNF) [33,48] and several vegetation index features, such as the normalized difference vegetation index (NDVI) [33,50], enhanced vegetation index (EVI) [27], and ratio vegetation index (RVI) [33,51], were selected to participate in the classification. However, when all features (including spectral features) are involved, problems such as high dimensionality [2,52], slow computation, and data overload arise [49,53]; feature extraction is therefore of great significance for processing hyperspectral data. Finally, the extracted features are input into the random forest classifier for training, as sketched below, and the classification results are obtained.
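The final training step might look as follows, assuming the per-object features (spectral statistics, MNF components, NDVI, EVI, and RVI) have been exported from the segmentation software into arrays; the number of trees is our assumption, as it is not reported here.

```python
from sklearn.ensemble import RandomForestClassifier

# X_obj_train: one row per image object holding the extracted features;
# y_obj_train: dominant species label of each training object.
rf = RandomForestClassifier(n_estimators=500, random_state=0)  # tree count assumed
rf.fit(X_obj_train, y_obj_train)
pred = rf.predict(X_obj_val)  # final class = majority vote over the trees
```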

2.4. Accuracy Assessment

As the accuracy assessment of the two classification approaches is essential, a confusion matrix with the overall accuracy, user's accuracy, producer's accuracy, and kappa coefficient of the tree species classification results was computed, with each pixel of the validation set treated as a sample. The overall accuracy is the percentage of pixels correctly identified [9]. The user's accuracy is the proportion of pixels assigned to a class that actually belong to that class [27], and the producer's accuracy is the proportion of reference pixels of a class that were correctly classified [27,54]. The accuracy assessment was carried out with Python 3.6 and the scikit-learn library.
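A short sketch of these metrics with scikit-learn, assuming y_true and y_pred hold the per-pixel validation labels and predictions:

```python
import numpy as np
from sklearn.metrics import confusion_matrix, accuracy_score, cohen_kappa_score

cm = confusion_matrix(y_true, y_pred)       # rows: reference, columns: prediction
oa = accuracy_score(y_true, y_pred)         # overall accuracy
kappa = cohen_kappa_score(y_true, y_pred)   # kappa coefficient
producers = np.diag(cm) / cm.sum(axis=1)    # producer's accuracy per class
users = np.diag(cm) / cm.sum(axis=0)        # user's accuracy per class
```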

3. Results

3.1. Impacts of Kernel Size and Layer Numbers on the Conv1D Model

In the one-dimensional convolutional neural network model, the size of the convolution kernel has a great influence on classification accuracy [34,41]. The smaller the kernel, the more detailed the extracted features, but relevant input information may be lost [35]; conversely, large kernels retain relevant information but lose detail [35,41]. We tried several kernel sizes and selected the one with the highest classification accuracy. As shown in Figure 4a, as the number of epochs increases, the accuracy stabilizes for kernel sizes of 5, 7, 9, and 11. Based on the accuracy of the last five epochs, the accuracies for these four kernel sizes are 0.999, 0.996, 0.997, and 0.993, respectively. We therefore selected a kernel size of 5 as the optimal parameter for classification.
Besides changing the kernel size, we also tried increasing the number of layers to improve classification accuracy and training efficiency. Starting from a single layer, we increased the number of layers and evaluated accuracy and efficiency. As the number of layers increases, training time also increases, but classification accuracy does not improve, as shown in Figure 4b. With two layers, good accuracy can be achieved without consuming too much training time; Zhong et al. [23] and Fayek et al. [37] likewise indicated that when the amount of sample data is not particularly large, many convolution layers are not needed. Therefore, a one-dimensional convolutional neural network with two layers was selected for tree species classification, and the kernel-size search is sketched below.
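The kernel-size part of this search can be expressed as a simple loop; the sketch below reuses build_conv1d from the sketch in Section 2.3.1 and scores each candidate by the mean validation accuracy of the last five epochs, as described above.

```python
results = {}
for k in (5, 7, 9, 11):                      # candidate kernel sizes (Figure 4a)
    m = build_conv1d(kernel_size=k)
    m.compile(optimizer="sgd", loss="categorical_crossentropy",
              metrics=["accuracy"])
    h = m.fit(X_fit, y_fit, epochs=20,
              validation_data=(X_tune, y_tune), verbose=0)
    results[k] = sum(h.history["val_accuracy"][-5:]) / 5   # last-five-epoch mean
best_kernel = max(results, key=results.get)                # 5 in our experiments
```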

3.2. Segmentation and Feature Selection for Object-Oriented RF Model

Image segmentation parameters include scale, compactness, and shape [55], which have a great influence on classification accuracy. The scale parameter determines the maximum size of the created objects and is expressed in pixels [27]. When the segmentation scale is too small, the number of objects and the amount of computation increase greatly; conversely, if the scale is too large, the number of objects decreases, and different ground objects are easily merged into the same object, reducing classification accuracy [49]. In addition, users apply weights in the range 0–1 to the compactness and shape parameters to adjust the smoothness and spectral homogeneity of an object [47]. By testing several parameter combinations, we found that a scale of 30 with compactness and shape set to 0.1 and 0.6, respectively, achieved better segmentation performance. Ren et al. [27] also point out that, for forest objects, a scale of 30 and a low shape weight are conducive to classification. In this study, the image-classification software eCognition Developer 8.64 was used to perform the segmentation.
For feature extraction, a minimum noise fraction (MNF) transformation was applied to the hyperspectral image, and the first three principal components, with signal-to-noise ratios (SNR) of 87.69%, 75.65%, and 20.47%, respectively, were retained. Then, according to the spectral features of the tree species [1], we chose different bands for the calculation of vegetation indices [1,56]. The selected vegetation indices were the normalized difference vegetation index (NDVI) [33,50,56], ratio vegetation index (RVI) [33,51], and enhanced vegetation index (EVI) [27]. In total, 38 features were selected and extracted, as shown in Table 2.
The extracted features were ranked by their contribution rates computed from the sample data (Figure 5). As shown in Figure 5, the first 10 features alone can achieve good accuracy. We therefore selected the first 10 features for training the random forest model, including EVI, Band 24, Band 26, Band 27, NDVI, Band 30, Band 29, Band 16, and MNF1, and input them into the random forest classifier for training and classification. Although the first three principal components of the minimum noise fraction were attained, only the first component participated in the final classification.
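This selection step can be sketched with scikit-learn's feature_importances_ standing in for the contribution rate of Figure 5; feature_names is an assumed list of the 38 features of Table 2, in the column order of the (NumPy) training matrix.

```python
import numpy as np

order = np.argsort(rf.feature_importances_)[::-1]   # descending contribution rate
top10 = [feature_names[i] for i in order[:10]]      # e.g., EVI, Band 24, NDVI, MNF1, ...
rf.fit(X_obj_train[:, order[:10]], y_obj_train)     # retrain on the top 10 features only
```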

3.3. Classification Results of Conv1D Model and Random Forest Model

We computed confusion matrices, overall accuracy (OA), and kappa coefficients (kappa) of the validation set to evaluate the performance of two classifiers. Table 3 and Table 4 show confusion matrices yielded by RF and Conv1D-based classifiers, respectively. Classification errors are represented by pixel numbers off the diagonal in the confusion matrix.
According to the confusion matrices, the overall accuracies of the Conv1D-based and RF classifiers were 85.04% and 80.61%, and the kappa coefficients were 0.81 and 0.75, respectively. Compared with the RF classifier, the Conv1D-based classifier showed better performance and a better balance between producer's and user's accuracies. For the Conv1D-based classifier, aspen, white birch, Manchurian walnut, and Manchurian ash reached the highest accuracies, with producer's accuracies of 87.15%, 88.65%, 89.10%, and 94.30%, respectively. In the RF results, only white birch and Manchurian ash achieved comparably high accuracies (88.97% and 91.15%). The species with the lowest producer's accuracy was Chinese pine under both the Conv1D-based and RF classifiers (59.58% and 56.78%, respectively). Regarding user's accuracy, except for Manchurian ash, which was slightly lower than in the RF results, all tree species were classified more accurately by the Conv1D-based classifier. The variation among individual species is particularly noticeable: for example, the user's accuracy of aspen with the Conv1D classifier was 87.23%, versus 57.99% with the RF classifier.
Figure 6 shows the classification maps obtained with the Conv1D-based and RF classifiers. Assessed against field observations, a comparative analysis of the two maps shows that, on the whole, both methods can roughly distinguish the different species. However, for species with small proportions, the results differ considerably. In general, the RF map is more speckled than the Conv1D map and has more obvious problems with white birch and Amur linden in its northwest corner. The insets show a typical example of the classifications in the broadleaf zone: compared with the Conv1D map, the RF map clearly over-maps broad-leaved species. In addition, judging from the coordinates of the validation samples, the Conv1D-based results are superior to the RF results for the less abundant tree species (Chinese pine, Dahurian larch, and aspen), although misclassification still occurs; for broadleaf species with similar spectral characteristics (white birch, Manchurian ash, and Manchurian walnut), the Conv1D classifier tended to map these areas correctly. Overall, the RF classifier tends to omit species and assign pixels to other species, while the Conv1D-based classifier seeks a further separation of the seven tree species, which further confirms its higher capability to discriminate tree species relative to the RF classifier.

4. Discussion

4.1. Discrimination of Tree Species With Hyperspectral Data

In tree canopies, the amount of radiation reflected in different wavelength regions is related to the chemical properties of the plant tissue, leaf morphology, canopy structure, and tree size relative to neighboring trees [1]. Hence, the low spectral resolution of very-high-resolution (VHR) imagery does not allow a detailed tree species classification, even though many studies exploit VHR data for forest classification [8,57,58]. By capturing chemical plant components, fine-spectral-resolution hyperspectral sensors can differentiate tree species [4,52]. For example, Oldeland et al. [59] and Vyas et al. [60] used dense sampling and narrow-band measures of the tree spectral signature to relate each portion of the spectrum to specific plant characteristics, which can be exploited for classification as well as for identifying plant diseases or estimating attributes. In this study, we used two classification methods to classify tree species from hyperspectral images. The results showed that both methods perform well, but accuracies differ among tree species, which can be explained by their different spectral reflectances [51]. In some cases, an individual species was misclassified as another class, as demonstrated by white birch commonly being incorrectly classified as Amur linden. As discussed by Heinzel et al. [61], misclassifications within the same tree type (conifer or broadleaf) occur at a higher rate than between tree types. Some broadleaf tree classes (aspen, white birch, Manchurian walnut, Amur linden) were misclassified as other broadleaf species, likely as a result of similar spectral signatures among the broadleaf species. Figure 7 shows boxplots of the values of the 32 hyperspectral bands for each tree species; the spectral reflectance of broadleaf trees in the near-infrared region is markedly higher than that of coniferous trees. It is also clear that the spectral responses of the analyzed species are quite similar in the visible range, while in the infrared channels coniferous and broadleaf species respond very differently. It is worth noting that, in general, the range of spectral values is quite broad across all classes; even broadleaf species with similar spectral reflectance, such as Manchurian walnut and aspen, can be well distinguished. These differences in reflectance levels are the main drivers for discriminating species in the VIS-NIR region.
To assess the performance of the OHS-1 hyperspectral data, we compared classification accuracies obtained in the same or an adjacent area. Based on HJ-1A hyperspectral images, Li [62] applied a classification method based on knowledge of tree species spectral features in the Wangqing forest region of Jilin Province, classifying three tree species (Mongolian oak, white birch, and Dahurian larch) with an overall accuracy of 75%; although the accuracy for a single tree species reached 87.5%, the other tree species in the region were not separated. In this study, we applied all hyperspectral bands to classify tree species. Compared with HJ-1A images (100 m), the 10 m spatial resolution of the OHS-1 image reduces the influence of the mixed-pixel effect and greatly improves classification accuracy. Although some tree species, such as Chinese pine and Amur linden, had lower classification accuracies (59.58% and 70.75%), most (white birch, Dahurian larch, aspen, Manchurian walnut, and Manchurian ash) achieved accuracies above 80%.

4.2. Application of Deep Learning in Tree Species Identification

Many studies have classified tree species from remote sensing data using traditional machine-learning techniques, but so far very few have used deep learning models. In contrast to the popular object-based approach, deep learning eliminates the manual feature extraction step by examining the local spatial arrangements and structural patterns characterized by low-level features. In the classification process, compared with 2D convolution, the hierarchical feature generation of the Conv1D-based model provides a flexible way to formulate and identify complex sequential patterns in hyperspectral data. In this study, we used 10,269 pixels from the hyperspectral image as training samples. For the Conv1D classifier, in addition to all spectral features, we added band-wise texture features as low-level features but did not add vegetation indices: in the convolution process of a one-dimensional convolutional neural network, adjacent nodes have strong correlation and continuity, and a single vegetation index feature cannot improve accuracy and may even interfere [35,41]. Based on the same training samples, we also applied the random forest (RF) classifier as a comparison. The results show that the Conv1D classifier achieved about 5% higher overall accuracy than the RF classifier, similar to the 7% increase in overall accuracy observed for a 1D CNN over RF in a land-cover classification study with multi-seasonal hyperspectral imagery in Northern California, USA [35].
The Chinese pine class, with the lowest number of test samples, was misclassified into several different types and had the lowest classification accuracy. Although the spectral response of Chinese pine in the near-infrared region differs significantly from that of the other tree species (Figure 7), our field survey found that Chinese pine always appeared as single trees surrounded by many broadleaf species. Because it is difficult to delineate samples as Chinese pine in this situation, we obtained very few Chinese pine samples. Thus, a likely reason for the low classification accuracy of Chinese pine is the complexity and heterogeneity of the forest combined with insufficient training samples. Ballanti et al. [6] gave the same explanation when applying support vector machine (SVM) and random forest (RF) classifiers to tree species, and Duro et al. [63] experienced similar issues, highlighting that limited testing samples can result in inaccurate classifications. Although the same training samples were used, comparing the producer's accuracies of the two classifiers revealed significant differences for some tree species, such as Manchurian walnut and aspen: the producer's accuracy of Manchurian walnut with the Conv1D-based classifier was 89.10%, versus 71.77% with the RF classifier. We believe the difference between the two classification algorithms is the main reason the accuracy for the same species differs so greatly. In a deep convolutional neural network, each neuron is no longer connected to all neurons in the previous layer but only to some of them [23,40,64], and the activation function increases the nonlinearity of the network [24]. Compared with other activation functions [65], the ReLU function can effectively alleviate overfitting and increase the accuracy of tree species classification [66], so the convolutional neural network can learn well by keeping as many important parameters as possible while removing a large number of unimportant ones [64,67]. Although random forest has strong generalization ability, it easily overfits training data with large noise [39,54,68]; noise in the Manchurian walnut training data may have had a greater impact on the random forest model, leading to its lower classification accuracy.

4.3. Future Work with DCNN Hyperspectral Image Classification

Deep learning, especially deep CNNs, should have great potential for tree species classification in the future. However, the extrapolation ability of deep learning, which is crucial for automation, remains unsatisfactory in remote-sensing image recognition, especially when a source dataset differs significantly from a target dataset [19]. In this research, although the accuracy on the training set reached 99.99%, the accuracy on the validation set was only 85.04%; besides differences in the sample sets themselves, this indicates insufficient generalization of the deep learning model. Thus, much work remains. For tree species classification, we plan to test different deep CNN frameworks and models, such as U-Net, SegNet, and 3D CNN, to improve classification performance. Moreover, the current work does not consider structural or multi-temporal correlation and concentrates only on spectral signatures; structure-spectral-temporal techniques could further improve CNN-based classification. Future research will therefore mine tree structure information from multispectral and Synthetic Aperture Radar (SAR) data to characterize the spatial distribution and dynamic changes of tree species.

5. Conclusions

The main aim of this study was to implement and explain how a convolutional neural network (CNN) with a one-dimensional architecture can be applied to tree species classification with OHS-1 hyperspectral images. The model has a complete end-to-end architecture and does not require separate feature extraction and classification stages. To explore its potential for tree species classification, the popular random forest (RF) classifier was also applied to the same training samples. The results demonstrated that the model achieved high recognition performance for the different tree species. The overall classification accuracy of the Conv1D-based classifier was 85.04%, higher than that of the RF classifier (80.61%), and the kappa coefficients were 0.81 and 0.75, respectively. Moreover, although broadleaf species such as aspen and Manchurian walnut have similar spectral signatures, the classification accuracies of these two species with the Conv1D-based classifier reached 87.15% and 89.10%, respectively. In general, compared with traditional algorithms, deep learning algorithms perform better in tree species classification. Although this study is an initial exploration of Conv1D-based tree species classification, further work is needed to test more deep learning methods and to improve CNN performance, scalability, and the incorporation of spatial and temporal information.

Author Contributions

Y.X. calculated and analyzed the data in addition to writing the paper. C.R. designed the research project and analyzed the data in addition to writing the paper. Z.W. calculated and analyzed the data in addition to writing the paper. S.W. optimized the algorithm and gave suggestions for the whole study. J.B. processed hyperspectral data in addition to writing the paper. H.X. and B.Z. gave suggestions for the whole study. L.C. contributed to the field survey and gave suggestions for the whole study.

Funding

This study is supported by the National Key Research and Development Project of China (No. 2016YFC0500300), the Jilin Scientific and Technological Development Program (No. 20170301001NY), and a Strategic Planning Project of the Northeast Institute of Geography and Agroecology (IGA), Chinese Academy of Sciences (Y6H2091001).

Acknowledgments

We thank the National Earth System Science Data Center for providing geographic information data.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Fassnacht, F.E.; Latifi, H.; Stereńczak, K.; Modzelewska, A.; Lefsky, M.; Waser, L.T.; Straub, C.; Ghosh, A. Review of studies on tree species classification from remotely sensed data. Remote Sens. Environ. 2016, 186, 64–87. [Google Scholar] [CrossRef]
  2. Fassnacht, F.E.; Neumann, C.; Forster, M.; Buddenbaum, H.; Ghosh, A.; Clasen, A.; Joshi, P.K.; Koch, B. Comparison of Feature Reduction Algorithms for Classifying Tree Species with Hyperspectral Data on Three Central European Test Sites. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 2547–2561. [Google Scholar] [CrossRef]
  3. Turner, W.; Spector, S.; Gardiner, N.; Fladeland, M.; Sterling, E.; Steininger, M. Remote sensing for biodiversity science and conservation. Trends Ecol. Evol. 2003, 18, 306–314. [Google Scholar] [CrossRef]
  4. Dalponte, M.; Bruzzone, L.; Gianelle, D. Tree species classification in the Southern Alps based on the fusion of very high geometrical resolution multispectral/hyperspectral images and LiDAR data. Remote Sens. Environ. 2012, 123, 258–270. [Google Scholar] [CrossRef]
  5. Cho, M.A.; Mathieu, R.; Asner, G.P.; Naidoo, L.; Van Aardt, J.; Ramoelo, A.; Debba, P.; Wessels, K.; Main, R.; Smit, I.P.; et al. Mapping tree species composition in South African savannas using an integrated airborne spectral and LiDAR system. Remote Sens. Environ. 2012, 125, 214–226. [Google Scholar] [CrossRef]
  6. Ballanti, L.; Blesius, L.; Hines, E.; Kruse, B. Tree Species Classification Using Hyperspectral Imagery: A Comparison of Two Classifiers. Remote Sens. 2016, 8, 445. [Google Scholar] [CrossRef]
  7. Ke, Y.; Quackenbush, L.J.; Im, J. Synergistic use of QuickBird multispectral imagery and LIDAR data for object-based forest species classification. Remote Sens. Environ. 2010, 114, 1141–1154. [Google Scholar] [CrossRef]
  8. Li, D.; Ke, Y.; Gong, H.; Li, X. Object-Based Urban Tree Species Classification Using Bi-Temporal WorldView-2 and WorldView-3 Images. Remote Sens. 2015, 7, 16917–16937. [Google Scholar] [CrossRef] [Green Version]
  9. Immitzer, M.; Vuolo, F.; Atzberger, C. First Experience with Sentinel-2 Data for Crop and Tree Species Classifications in Central Europe. Remote Sens. 2016, 8, 166. [Google Scholar] [CrossRef]
  10. Hovi, A.; Korhonen, L.; Vauhkonen, J.; Korpela, I. LiDAR waveform features for tree species classification and their sensitivity to tree- and acquisition related parameters. Remote Sens. Environ. 2016, 173, 224–237. [Google Scholar] [CrossRef]
  11. Shang, X.; Chisholm, L.A. Classification of Australian Native Forest Species Using Hyperspectral Remote Sensing and Machine-Learning Classification Algorithms. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 2481–2489. [Google Scholar] [CrossRef]
  12. Immitzer, M.; Atzberger, C.; Koukal, T. Tree Species Classification with Random Forest Using Very High Spatial Resolution 8-Band WorldView-2 Satellite Data. Remote Sens. 2012, 4, 2661–2693. [Google Scholar] [CrossRef] [Green Version]
  13. Van Aardt, J.A.N.; Wynne, R.H. Examining pine spectral separability using hyperspectral data from an airborne sensor: An extension of field-based results. Int. J. Remote Sens. 2007, 28, 431–436. [Google Scholar] [CrossRef]
  14. Jones, T.G.; Coops, N.C.; Sharma, T. Assessing the utility of airborne hyperspectral and LiDAR data for species distribution mapping in the coastal Pacific Northwest, Canada. Remote Sens. Environ. 2010, 114, 2841–2852. [Google Scholar] [CrossRef]
  15. Wessel, M.; Brandmeier, M.; Tiede, D. Evaluation of Different Machine Learning Algorithms for Scalable Classification of Tree Types and Tree Species Based on Sentinel-2 Data. Remote Sens. 2018, 10, 1419. [Google Scholar] [CrossRef]
  16. Ganivet, E.; Bloomberg, M. Towards rapid assessments of tree species diversity and structure in fragmented tropical forests: A review of perspectives offered by remotely-sensed and field-based data. For. Ecol. Manag. 2019, 432, 40–53. [Google Scholar] [CrossRef]
  17. Zhang, C.; Sargent, I.; Pan, X.; Li, H.; Gardiner, A.; Hare, J.; Atkinson, P.M. Joint Deep Learning for land cover and land use classification. Remote Sens. Environ. 2019, 221, 173–187. [Google Scholar] [CrossRef]
  18. Huang, B.; Zhao, B.; Song, Y. Urban land-use mapping using a deep convolutional neural network with high spatial resolution multispectral remote sensing imagery. Remote Sens. Environ. 2018, 214, 73–86. [Google Scholar] [CrossRef]
  19. Zhang, C.; Sargent, I.; Pan, X.; Li, H.; Gardiner, A.; Hare, J.; Atkinson, P.M. An object-based convolutional neural network (OCNN) for urban land use classification. Remote Sens. Environ. 2018, 216, 57–70. [Google Scholar] [CrossRef] [Green Version]
  20. Li, Z.; Shen, H.; Cheng, Q.; Liu, Y.; You, S.; He, Z. Deep learning based cloud detection for medium and high resolution remote sensing images of different sensors. ISPRS J. Photogramm. Remote Sens. 2019, 150, 197–212. [Google Scholar] [CrossRef] [Green Version]
  21. Ji, S.; Wei, S.; Lu, M. Fully Convolutional Networks for Multisource Building Extraction From an Open Aerial and Satellite Imagery Data Set. IEEE Trans. Geosci. Remote Sens. 2019, 57, 574–586. [Google Scholar] [CrossRef]
  22. Ji, S.; Wei, S.; Lu, M. A scale robust convolutional neural network for automatic building extraction from aerial and satellite imagery. Int. J. Remote Sens. 2018, 40, 3308–3322. [Google Scholar] [CrossRef]
  23. Zhong, L.; Hu, L.; Zhou, H. Deep learning based multi-temporal crop classification. Remote Sens. Environ. 2019, 221, 430–443. [Google Scholar] [CrossRef]
  24. Koirala, A.; Walsh, K.B.; Wang, Z.; McCarthy, C. Deep learning—Method overview and review of use for fruit detection and yield estimation. Comput. Electron. Agric. 2019, 162, 219–234. [Google Scholar] [CrossRef]
  25. Chen, L.; Wang, Y.; Ren, C.; Zhang, B.; Wang, Z. Optimal Combination of Predictors and Algorithms for Forest Above-Ground Biomass Mapping from Sentinel and SRTM Data. Remote Sens. 2019, 11, 414. [Google Scholar] [CrossRef]
  26. Chen, L.; Ren, C.; Zhang, B.; Wang, Z.; Xi, Y. Estimation of Forest Above-Ground Biomass by Geographically Weighted Regression and Machine Learning with Sentinel Imagery. Forests 2018, 9, 582. [Google Scholar] [CrossRef]
  27. Ren, C.; Zhang, B.; Wang, Z.; Li, L.; Jia, M. Mapping Forest Cover in Northeast China from Chinese HJ-1 Satellite Data Using an Object-Based Algorithm. Sensors 2018, 18, 4452. [Google Scholar] [CrossRef] [PubMed]
  28. Dalponte, M.; Ørka, H.O.; Ene, L.T.; Gobakken, T.; Næsset, E. Tree crown delineation and tree species classification in boreal forests using hyperspectral and ALS data. Remote Sens. Environ. 2014, 140, 306–317. [Google Scholar] [CrossRef]
  29. Elledge, J. Basal Area: A Measure Made for Management; Alabama Cooperative Extension System: Mobile, AL, USA, 2010. [Google Scholar]
  30. Abdollahnejad, A.; Panagiotidis, D.; Joybari, S.S.; Surový, P. Prediction of Dominant Forest Tree Species Using QuickBird and Environmental Data. Forests 2017, 8, 42. [Google Scholar] [CrossRef]
  31. Carter, C.; Liang, S. Evaluation of ten machine learning methods for estimating terrestrial evapotranspiration from remote sensing. Int. J. Appl. Earth Obs. Geoinf. 2019, 78, 86–92. [Google Scholar] [CrossRef]
  32. Heydari, S.S.; Mountrakis, G. Meta-analysis of deep neural networks in remote sensing: A comparative study of mono-temporal classification to support vector machines. ISPRS J. Photogramm. Remote Sens. 2019, 152, 192–210. [Google Scholar] [CrossRef]
  33. Wan, X.; Zhao, C.; Wang, Y.; Liu, W. Stacked sparse autoencoder in hyperspectral data classification using spectral-spatial, higher order statistics and multifractal spectrum features. Infrared Phys. Technol. 2017, 86, 77–89. [Google Scholar] [CrossRef]
  34. Chen, X.; Xiang, S.; Liu, C.-L.; Pan, C.-H. Vehicle Detection in Satellite Images by Hybrid Deep Convolutional Neural Networks. IEEE Geosci. Remote Sens. Lett. 2014, 11, 1797–1801. [Google Scholar] [CrossRef]
  35. Guidici, D.; Clark, M.L. One-Dimensional Convolutional Neural Network Land-Cover Classification of Multi-Seasonal Hyperspectral Imagery in the San Francisco Bay Area, California. Remote Sens. 2017, 9, 629. [Google Scholar] [CrossRef]
  36. Paoletti, M.; Haut, J.; Plaza, J.; Plaza, A. A new deep convolutional neural network for fast hyperspectral image classification. ISPRS J. Photogramm. Remote Sens. 2018, 145, 120–147. [Google Scholar] [CrossRef]
  37. Fayek, H.M.; Lech, M.; Cavedon, L. Evaluating deep learning architectures for Speech Emotion Recognition. Neural Netw. 2017, 92, 60–68. [Google Scholar] [CrossRef] [PubMed]
  38. Gan, J.; Wang, W.; Lu, K. A new perspective: Recognizing online handwritten Chinese characters via 1-dimensional CNN. Inf. Sci. 2019, 478, 375–390. [Google Scholar] [CrossRef]
  39. Belgiu, M.; Drăguț, L. Random forest in remote sensing: A review of applications and future directions. ISPRS J. Photogramm. Remote Sens. 2016, 114, 24–31. [Google Scholar] [CrossRef]
  40. Yildirim, O.; Baloglu, U.B.; Acharya, U.R. A Deep Learning Model for Automated Sleep Stages Classification Using PSG Signals. Int. J. Environ. Res. Public Health 2019, 16, 599. [Google Scholar] [CrossRef]
  41. Huang, S.; Tang, J.; Dai, J.; Wang, Y. Signal Status Recognition Based on 1DCNN and Its Feature Extraction Mechanism Analysis. Sensors 2019, 19, 2018. [Google Scholar] [CrossRef]
  42. Franklin, S.E.; Hall, R.J.; Moskal, L.M.; Maudie, A.J.; Lavigne, M.B. Incorporating texture into classification of forest species composition from airborne multispectral images. Int. J. Remote Sens. 2010, 21, 61–79. [Google Scholar] [CrossRef]
  43. Hinton, G.E.; Srivastava, N.; Krizhevsky, A. Improving neural networks by preventing co-adaptation of feature detectors. arXiv 2012, arXiv:1207.0580. [Google Scholar]
  44. Srivastava, N.; Hinton, G.; Krizhevsky, A.; Sutskever, I.; Salakhutdinov, R. Dropout: A Simple Way to Prevent Neural Networks from Overfitting. J. Mach. Learn. Res. 2014, 15, 1929–1958. [Google Scholar]
  45. Sharma, A. Guided Stochastic Gradient Descent Algorithm for inconsistent datasets. Appl. Soft Comput. 2018, 73, 1068–1080. [Google Scholar] [CrossRef]
  46. Breiman, L. Random Forests. Mach. Learn. 2001, 45, 5–32. [Google Scholar] [CrossRef]
  47. Puissant, A.; Rougier, S.; Stumpf, A. Object-oriented mapping of urban trees using Random Forest classifiers. Int. J. Appl. Earth Obs. Geoinformation 2014, 26, 235–245. [Google Scholar] [CrossRef]
  48. Blaschke, T. Object based image analysis for remote sensing. ISPRS J. Photogramm. Remote Sens. 2010, 65, 2–16. [Google Scholar] [CrossRef] [Green Version]
  49. Kavzoglu, T. Object-Oriented Random Forest for High Resolution Land Cover Mapping Using Quickbird-2 Imagery. In Handbook of Neural Computation; Elsevier BV: Amsterdam, The Netherlands, 2017; pp. 607–619. [Google Scholar]
  50. Shi, Y.; Skidmore, A.K.; Wang, T.; Holzwarth, S.; Heiden, U.; Pinnel, N.; Zhu, X.; Heurich, M. Tree species classification using plant functional traits from LiDAR and hyperspectral data. Int. J. Appl. Earth Obs. Geoinf. 2018, 73, 207–219. [Google Scholar] [CrossRef]
  51. Clark, M.L.; Roberts, D.A. Species-Level Differences in Hyperspectral Metrics among Tropical Rainforest Trees as Determined by a Tree-Based Classifier. Remote Sens. 2012, 4, 1820–1855. [Google Scholar] [CrossRef] [Green Version]
  52. Jensen, R.R.; Hardin, P.J.; Hardin, A.J. Classification of urban tree species using hyperspectral imagery. Geocarto Int. 2012, 27, 443–458. [Google Scholar] [CrossRef]
  53. Stumpf, A.; Kerle, N. Object-oriented mapping of landslides using Random Forests. Remote Sens. Environ. 2011, 115, 2564–2577. [Google Scholar] [CrossRef]
  54. Ghosh, A.; Fassnacht, F.E.; Joshi, P.; Koch, B. A framework for mapping tree species combining hyperspectral and LiDAR data: Role of selected classifiers and sensor across three spatial scales. Int. J. Appl. Earth Obs. Geoinf. 2014, 26, 49–63. [Google Scholar] [CrossRef]
  55. Jia, M.; Zhang, Y.; Wang, Z.; Song, K.; Ren, C. Mapping the distribution of mangrove species in the Core Zone of Mai Po Marshes Nature Reserve, Hong Kong, using hyperspectral data and high-resolution data. Int. J. Appl. Earth Obs. Geoinf. 2014, 33, 226–231. [Google Scholar] [CrossRef]
  56. Brantley, S.T.; Zinnert, J.C.; Young, D.R. Application of hyperspectral vegetation indices to detect variations in high leaf area index temperate shrub thicket canopies. Remote Sens. Environ. 2011, 115, 514–523. [Google Scholar] [CrossRef] [Green Version]
  57. Van Lier, O.R.; Fournier, R.A.; Bradley, R.L.; Thiffault, N. A multi-resolution satellite imagery approach for large area mapping of ericaceous shrubs in Northern Quebec, Canada. Int. J. Appl. Earth Obs. Geoinf. 2009, 11, 334–343. [Google Scholar] [CrossRef]
  58. Kim, S.-R.; Lee, W.-K.; Kwak, D.-A.; Biging, G.S.; Gong, P.; Lee, J.-H.; Cho, H.-K. Forest Cover Classification by Optimal Segmentation of High Resolution Satellite Imagery. Sensors 2011, 11, 1943–1958. [Google Scholar] [CrossRef] [Green Version]
  59. Oldeland, J.; Dorigo, W.; Wesuls, D.; Jürgens, N. Mapping Bush Encroaching Species by Seasonal Differences in Hyperspectral Imagery. Remote Sens. 2010, 2, 1416–1438. [Google Scholar] [CrossRef] [Green Version]
  60. Vyas, D.; Krishnayya, N.; Manjunath, K.; Ray, S.; Panigrahy, S. Evaluation of classifiers for processing Hyperion (EO-1) data of tropical vegetation. Int. J. Appl. Earth Obs. Geoinf. 2011, 13, 228–235. [Google Scholar] [CrossRef]
  61. Heinzel, J.; Koch, B. Investigating multiple data sources for tree species classification in temperate forest and use for single tree delineation. Int. J. Appl. Earth Obs. Geoinf. 2012, 18, 101–110. [Google Scholar] [CrossRef]
  62. Li, J. Research on HJ-1A Hyperspectral Remote Sensing Tree Species Recognition. Master's Thesis, Northeast Forestry University, Harbin, China, 2013. [Google Scholar]
  63. Duro, D.C.; Franklin, S.E.; Dubé, M.G. A comparison of pixel-based and object-based image analysis with selected machine learning algorithms for the classification of agricultural landscapes using SPOT-5 HRG imagery. Remote Sens. Environ. 2012, 118, 259–272. [Google Scholar] [CrossRef]
  64. Liu, B.; Li, Y.; Li, G.; Liu, A. A Spectral Feature Based Convolutional Neural Network for Classification of Sea Surface Oil Spill. ISPRS Int. J. Geo-Inf. 2019, 8, 160. [Google Scholar] [CrossRef]
  65. Hu, W.; Huang, Y.; Wei, L.; Zhang, F.; Li, H. Deep Convolutional Neural Networks for Hyperspectral Image Classification. J. Sensors 2015, 2015, 1–12. [Google Scholar] [CrossRef] [Green Version]
  66. Eckle, K.; Schmidt-Hieber, J. A comparison of deep networks with ReLU activation function and linear spline-type methods. Neural Netw. 2019, 110, 232–242. [Google Scholar] [CrossRef] [PubMed]
  67. Wang, H.; Wang, Y.; Zhang, Q.; Xiang, S.; Pan, C. Gated Convolutional Neural Network for Semantic Segmentation in High-Resolution Images. Remote Sens. 2017, 9, 446. [Google Scholar] [CrossRef]
  68. Zhao, C.; Gao, B.; Zhang, L.; Wan, X. Classification of Hyperspectral Imagery based on spectral gradient, SVM and spatial random forest. Infrared Phys. Technol. 2018, 95, 61–69. [Google Scholar] [CrossRef]
Figure 1. The location of the study site in the Changbai Mountains, the RGB composite (670 nm, 566 nm, and 480 nm) of the hyperspectral image, and the distribution of the sample data.
Figure 2. Framework of the tree species classification based on the one-dimensional convolutional neural network (Conv1D) and the random forest classifier.
Figure 3. Architecture of the optimal Conv1D-based model.
Figure 4. (a) Impact of convolution kernel size on classification accuracy (note that the y-axis scales of the six panels differ); (b) impact of the number of layers on classification accuracy and efficiency.
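Figures 3 and 4 summarize, respectively, the optimal architecture and the sensitivity of accuracy to kernel size and network depth. For reference, the following is a minimal sketch of how such a shallow spectral Conv1D classifier can be assembled in Keras; the specific settings used here (two convolutional blocks, 64 filters, kernel size 3, a 128-unit dense layer) are placeholder assumptions, not the configuration reported in Figure 3.

    import numpy as np
    from tensorflow import keras
    from tensorflow.keras import layers

    N_FEATURES = 38  # 3 MNF components + 32 band reflectances + 3 vegetation indices (Table 2)
    N_CLASSES = 7    # the seven tree species of Table 1

    def build_conv1d(n_layers=2, kernel_size=3, filters=64):
        # Stack n_layers convolution + pooling blocks over the spectral axis,
        # then flatten and classify with a softmax over the seven species.
        model = keras.Sequential([keras.Input(shape=(N_FEATURES, 1))])
        for _ in range(n_layers):
            model.add(layers.Conv1D(filters, kernel_size, padding="same", activation="relu"))
            model.add(layers.MaxPooling1D(pool_size=2))
        model.add(layers.Flatten())
        model.add(layers.Dense(128, activation="relu"))
        model.add(layers.Dense(N_CLASSES, activation="softmax"))
        model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        return model

    # Each pixel enters as a one-dimensional spectral vector: (samples, features, channels).
    X = np.random.rand(1000, N_FEATURES, 1).astype("float32")  # placeholder spectra
    y = np.random.randint(0, N_CLASSES, size=1000)             # placeholder species labels
    model = build_conv1d()
    model.fit(X, y, epochs=5, batch_size=32, verbose=0)

Rebuilding the model while varying kernel_size and n_layers reproduces the style of the sensitivity experiment summarized in Figure 4.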
Figure 5. Feature contribution rate and its impact on accuracy.
Figure 6. (a) Classification map by Conv1D-based classifier; (b) classification map by random forest (RF) classifier.
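Figure 6 contrasts the maps produced by the two classifiers. A minimal sketch of a random forest baseline in scikit-learn follows; the number of trees and the split ratio are illustrative assumptions rather than the settings used in this study.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    X = np.random.rand(1000, 38)            # placeholder: 38 features per pixel (Table 2)
    y = np.random.randint(0, 7, size=1000)  # placeholder: 7 species labels (Table 1)

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
    rf = RandomForestClassifier(n_estimators=500, random_state=0)
    rf.fit(X_train, y_train)
    print("Overall accuracy:", accuracy_score(y_test, rf.predict(X_test)))

    # Per-feature contribution rates of the kind summarized in Figure 5:
    print(rf.feature_importances_)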
Figure 7. Boxplots of the reflectance of the 32 hyperspectral bands selected for the seven tree species analyzed. The midline of the box plot represents the median, and the hinges (end of the boxes) represent the 25th and 75th quartiles. The lines are drawn from each hinge to 1.5 times the spread (75th–25th quartile) or to the most extreme value (if smaller). Any point outside these values is represented as a circular point.
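The plotting convention described in the Figure 7 caption (median line, quartile hinges, whiskers at 1.5 times the interquartile range, outliers as points) is the default Tukey-style box plot; a minimal matplotlib sketch with placeholder reflectance values:

    import matplotlib.pyplot as plt
    import numpy as np

    rng = np.random.default_rng(0)
    bands = [f"B{i}" for i in range(1, 33)]
    reflectance = [rng.normal(0.3, 0.05, 200) for _ in bands]  # placeholder samples per band

    fig, ax = plt.subplots(figsize=(10, 4))
    ax.boxplot(reflectance, whis=1.5)  # whiskers drawn to 1.5 times the interquartile range
    ax.set_xticklabels(bands, rotation=90)
    ax.set_xlabel("OHS-1 band")
    ax.set_ylabel("Reflectance")
    plt.show()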
Table 1. Ground reference data in units of pixels for each of the classes analyzed.

Category Code    Description          Total Number of Parcels    Number of Pixels
AL               Amur linden          22                         4827
CP               Chinese pine         13                         2853
DL               Dahurian larch       7                          1536
AP               Aspen                11                         2414
WB               White birch          13                         2853
MW               Manchurian walnut    26                         5705
MA               Manchurian ash       25                         5486
Table 2. Remote sensing indices from OHS-1 hyperspectral data for feature extraction.

Feature Types         Indices    Description
MNF                   MNF1       The first principal component of the minimum noise fraction
                      MNF2       The second principal component of the minimum noise fraction
                      MNF3       The third principal component of the minimum noise fraction
Band reflectance      B1–B3      Blue: B1 466 nm, B2 480 nm, B3 500 nm
                      B4–B8      Green: B4 520 nm, B5 536 nm, B6 550 nm, B7 566 nm, B8 580 nm
                      B9–B16     Red: B9 596 nm, B10 610 nm, B11 626 nm, B12 640 nm, B13 656 nm, B14 670 nm, B15 689 nm, B16 700 nm
                      B17–B22    Red edge: B17 716 nm, B18 730 nm, B19 746 nm, B20 760 nm, B21 776 nm, B22 790 nm
                      B23–B32    Near infrared: B23 806 nm, B24 820 nm, B25 836 nm, B26 850 nm, B27 866 nm, B28 880 nm, B29 896 nm, B30 910 nm, B31 926 nm, B32 940 nm
Vegetation indices    RVI        Ratio vegetation index, B28/B14
                      NDVI       Normalized difference vegetation index, (B23 − B14)/(B23 + B14)
                      EVI        Enhanced vegetation index, 2.5 × (B23 − B14)/(B23 + 6.0 × B14 − 7.5 × B2 + 1)
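The index definitions in Table 2 translate directly into per-pixel array arithmetic. Below is a minimal sketch, assuming the band reflectances are available as NumPy arrays and taking RVI as the conventional near-infrared/red band ratio (B28/B14):

    import numpy as np

    def rvi(b28, b14):
        # Ratio vegetation index: near infrared / red.
        return b28 / b14

    def ndvi(b23, b14):
        # Normalized difference vegetation index.
        return (b23 - b14) / (b23 + b14)

    def evi(b23, b14, b2):
        # Enhanced vegetation index with the coefficients given in Table 2.
        return 2.5 * (b23 - b14) / (b23 + 6.0 * b14 - 7.5 * b2 + 1)

    # Placeholder reflectances for a 4 x 4 pixel window:
    rng = np.random.default_rng(0)
    b2, b14, b23, b28 = (rng.uniform(0.05, 0.6, (4, 4)) for _ in range(4))
    print(ndvi(b23, b14))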
Table 3. Summary of the test-set confusion matrix for the Conv1D-based classifier: per-class reference and classified totals, with producer's and user's accuracies.

Reference Class           Reference Total    Producer's Accuracy (%)    Classified Total    User's Accuracy (%)
Amur linden (AL)          1623               70.75                      1247                92.07
Chinese pine (CP)         311                59.58                      230                 80.50
Dahurian larch (DL)       276                80.52                      295                 75.45
Aspen (AP)                288                87.15                      287                 87.23
White birch (WB)          1114               88.65                      1333                74.06
Manchurian walnut (MW)    1949               89.10                      1945                89.25
Manchurian ash (MA)       2141               94.30                      2364                85.43
Total                     7702                                          7702
Table 4. Summary of the test-set confusion matrix for the RF classifier: per-class reference and classified totals, with producer's and user's accuracies.

Reference Class           Reference Total    Producer's Accuracy (%)    Classified Total    User's Accuracy (%)
Amur linden (AL)          1623               77.18                      1448                86.52
Chinese pine (CP)         311                56.78                      224                 78.72
Dahurian larch (DL)       276                78.02                      310                 69.44
Aspen (AP)                288                77.28                      383                 57.99
White birch (WB)          1114               88.97                      1448                68.43
Manchurian walnut (MW)    1949               71.77                      1663                84.08
Manchurian ash (MA)       2141               91.15                      2225                87.73
Total                     7702                                          7702
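The producer's and user's accuracies reported in Tables 3 and 4 follow directly from a confusion matrix whose rows are reference classes and whose columns are classified classes. A minimal sketch with a made-up 3 × 3 matrix (not values from this study):

    import numpy as np

    # Rows: reference classes; columns: classified classes.
    cm = np.array([[50,  3,  2],
                   [ 4, 60,  6],
                   [ 1,  5, 70]])

    diag = np.diag(cm)
    producers_accuracy = diag / cm.sum(axis=1)  # correctly classified / reference total
    users_accuracy = diag / cm.sum(axis=0)      # correctly classified / classified total
    overall_accuracy = diag.sum() / cm.sum()

    print("PA (%):", np.round(100 * producers_accuracy, 2))
    print("UA (%):", np.round(100 * users_accuracy, 2))
    print("OA (%):", round(100 * overall_accuracy, 2))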
