Article

A Novel LiDAR Data Classification Algorithm Combined CapsNet with ResNet

1 The Higher Educational Key Laboratory for Measuring & Control Technology and Instrumentations of Heilongjiang, Harbin University of Science and Technology, Harbin 150080, China
2 Department of Computer Science, Chubu University, Aichi 487-8501, Japan
* Author to whom correspondence should be addressed.
Sensors 2020, 20(4), 1151; https://doi.org/10.3390/s20041151
Submission received: 24 January 2020 / Revised: 12 February 2020 / Accepted: 18 February 2020 / Published: 19 February 2020
(This article belongs to the Special Issue Environmental Sensors and Their Applications)

Abstract: LiDAR data contain feature information such as the height and shape of the ground target and play an important role in land classification. Convolutional neural networks (CNNs) are very effective at extracting features from LiDAR data; however, a CNN cannot adequately resolve the spatial relationships among features. The capsule network (CapsNet) can identify the spatial variations of features and is widely used in supervised learning. In this article, CapsNet is combined with the residual network (ResNet) to design a deep network, ResCapNet, for improving the accuracy of LiDAR classification. The capsule network represents features by vectors, which can account for the direction of the features and the relative positions between features, so more detailed feature information can be extracted. ResNet protects the integrity of information by passing the input information directly to the output, which can, to a certain extent, solve the problem of network degradation caused by information loss in the traditional CNN propagation process. Two different LiDAR data sets and several classic machine learning algorithms are used for comparative experiments. The experimental results show that the ResCapNet proposed in this article improves the performance of LiDAR classification.

1. Introduction

LiDAR emerged in the 1960s and was successfully used to survey the lunar surface during the American Apollo missions. Because of its huge technical potential, many researchers have studied it and continuously promoted the development and progress of its theory and technology, making it an indispensable detection technology in the fields of science and engineering. LiDAR has many advantages, such as high resolution, good concealment, and strong anti-interference ability, and it is widely used in many different fields. For example, it can improve the measurement accuracy of projects that are difficult to measure in construction engineering [1]; it can build 3D models of historical buildings to record cultural-relic information; it can measure underwater distances to provide data for environmental protection programs [2]; and it can be used to detect landslides and other disasters [3]. In recent years, deep learning has developed rapidly and has achieved remarkable results in various fields [4,5,6,7]. Therefore, this article also uses deep learning algorithms for pixel-level classification of LiDAR data.
The data used in this article are LiDAR-derived rasterized Digital Surface Models (LiDAR-DSM), which were obtained by denoising and rasterizing the point cloud data acquired by an airborne LiDAR system [8]. LiDAR-DSM mainly records the terrain variation of the target area and the heights of the target objects in the area, which makes it suitable for classification tasks that distinguish targets of different heights and for measurement planning. It plays an important role in the measurement, planning, and construction of cities [9].
In recent years, the convolutional neural network (CNN) has been introduced into LiDAR data classification [10], which avoids the laborious and difficult parameter tuning caused by the traditional manual extraction of LiDAR-DSM features. Accurate classification of DSM data plays an important role in distinguishing different land-cover categories. The classification of these data is usually performed at the pixel level; that is, it follows the interpretation process of remote sensing images [11].
At present, there are many studies on LiDAR classification. In 2006, Lodha et al. used the Support Vector Machine (SVM) to classify DSM data, which obtained higher accuracy and convincing visual results [12]. In 2012, Sasaki et al. used a decision tree that analyzed the average height of each land category to achieve classification [13]. Naidoo et al. used an automated random forest model to classify eight common savanna tree species [14]. In 2015, Khodadadzadeh et al. developed a new, efficient classification strategy for hyperspectral and DSM fusion, integrating multiple types of features and achieving better classification results [15]. In 2016, Ghamisi et al. proposed a method that uses DSM data as extended attributes for joint classification with a CNN to improve classification accuracy [16]. In 2017, Ghamisi et al. proposed a method to extract the spatial and background information of DSM data in an unsupervised manner to obtain higher classification accuracy [17]. In 2018, Wang et al. combined morphological profiles (MPs) and a CNN to provide more feature information for DSM classification [10]. Subsequently, He et al. used spatial transformer networks (STN) to identify the best input image of the CNN for LiDAR classification [18]. Xia et al. combined hyperspectral images (HSI) and DSM by using ensemble classifiers to process morphological features and classify them [19]. In 2019, Ge et al. proposed a new framework for the fusion of HSI and LiDAR data based on extinction profiles, the local binary pattern (LBP), and kernel collaborative representation classification [20]. Wang et al. combined a spatial transformer network (STN) and a densely connected convolutional network (DenseNet) to form STN-DenseNet, which deforms the input data adaptively according to the needs of the network and makes full use of the information from the front layers of the network [21]. Subsequently, Wang et al. used the Fire modules of SqueezeNet to replace the traditional convolution layers in OctConv to form a new dual neural architecture, OctSqueezeNet, which improved the accuracy and efficiency of the network simultaneously [22].
However, in many image processing fields, a CNN uses scalars to represent information. It is difficult for a CNN to identify features when the spatial locations of the feature information change, and it needs to constantly deepen the network to extract more information [23,24,25,26,27,28,29,30,31]. The capsule network (CapsNet) represents feature information by vectors, which can express the positional relationships between different features and the directions of the feature information. When the same target appears with a changed position or angle, it can still be identified accurately by CapsNet [32].
In recent years, CapsNet has been used in many image application fields. In 2018, Wang et al. proposed a hybrid method based on CapsNet and a triple generative adversarial network (TripleGAN) to avoid overfitting and extract effective features [33]. Ahmad et al. proposed a new architecture for 3D object classification, which extends the capsule network to 3D data [34]. In 2019, Zhu et al. proposed a deep capsule network for HSI classification to improve the performance of CNNs [35]. Paoletti et al. proposed a new CNN architecture based on spectral–spatial capsule networks in order to achieve highly accurate classification of HSI while reducing the network design complexity [36]. Afshar et al. proposed a modified CapsNet architecture for brain tumor classification, which takes the coarse tumor boundaries as extra inputs within its pipeline to increase the CapsNet's focus [37]. Yin et al. proposed an alternative data-driven HSI classification model based on CapsNet [38]. Wang et al. proposed a Caps-TripleGAN framework for sample generation and integrated CapsNet for hyperspectral image classification [39].
In addition, for the traditional CNN, the performance of the network may degrade as the depth increases; that is, when the training accuracy tends to plateau, the training error becomes larger. The residual network (ResNet) [40] was proposed to solve this problem. ResNet establishes a bypass connection and sends the input to the output directly to avoid the loss of information and to mitigate the degradation of the network. ResNet has shown significant benefits in many areas. In 2018, Mou et al. proposed a novel network architecture, the fully Conv–Deconv network, for unsupervised spectral–spatial feature learning of hyperspectral images, which can be trained in an end-to-end manner [41]. In the same year, Zhong et al. designed an end-to-end spectral–spatial residual network (SSRN) that takes raw 3-D cubes as input data without feature engineering for hyperspectral image classification [42]. Qin et al. constructed a leukocyte classifier based on a deep residual neural network, which can imitate the domain expert's cell recognition process and extract salient features robustly and automatically [43]. In 2019, Paoletti et al. presented a new deep CNN architecture specially designed for HSI data; the new model seeks to improve the spectral–spatial features uncovered by the convolutional filters of the network [44]. Zhang et al. proposed an attention residual learning convolutional neural network (ARL-CNN) model for skin lesion classification in dermoscopy images, which is composed of multiple ARL blocks, a global average pooling layer, and a classification layer [45].
We combine the advantages of ResNet and CapsNet to design ResCapNet, which obtains more detailed information from LiDAR data for classification. The main contributions of this article are as follows.
(1)
CapsNet and ResNet are combined to form a new network framework named ResCapNet. The input features are extracted by ResNet, and the outputs of ResNet are sent to CapsNet for further classification.
(2)
The proposed method is tested on two different LiDAR data sets to predict, for each pixel, the land type associated with that pixel when the number of training samples is limited.
The organization of this article is as follows. Section 2 and Section 3 present the CapsNet and ResNet, respectively. Section 4 is dedicated to the details of the proposed classification method in this article and Section 5 reports the experimental results and analysis. Section 6 is the conclusions of the proposed framework.

2. Capsule Network

The CapsNet is made up of capsules rather than neurons. A capsule is a small group of neurons that examines a particular object, such as a rectangle, and learns from a certain area of the feature maps. The output of a capsule is an n-dimensional vector: the length of the vector represents the estimated probability that the object exists, and its direction records the attitude parameters of the object, such as its exact position, rotation, thickness, inclination, and size. If the object changes slightly, for example by moving, rotating, or changing size, the capsule will output a vector of the same length but with a slightly different direction. Therefore, the feature extraction of CapsNet is not affected by spatial changes of the features. Traditional CNNs require additional components to identify each detail of an object automatically, whereas CapsNet can represent the hierarchical structure of the detailed parts directly. CapsNet has two main characteristics: the first is layer-based compression, and the second is dynamic routing.

2.1. Layer-Based Compression

As shown in Figure 1, both the input $u_i$ and the output $v_j$ are vectors. The transformation matrix $W_{ij}$ is multiplied with the output $u_i$ of the previous capsule to turn $u_i$ into the prediction vector $\hat{u}_{j|i}$. Then, as shown in Equations (1) and (2), the weighted sum $s_j$ is calculated according to the weights $c_{ij}$. Here $c_{ij}$ is the coupling coefficient, which is calculated through the iterations of the dynamic routing process and satisfies $\sum_j c_{ij} = 1$; $c_{ij}$ measures how likely capsule $i$ is to activate capsule $j$.
$\hat{u}_{j|i} = W_{ij} u_i$  (1)
$s_j = \sum_i c_{ij} \hat{u}_{j|i}$  (2)
The activation function applied to $s_j$ is the squash function instead of ReLU, so the length of the final output vector $v_j$ of the capsule is between 0 and 1. This function compresses short vectors toward zero and long vectors toward unit vectors. The squash activation function is shown in Equation (3).
$v_j = \frac{\|s_j\|^2}{1 + \|s_j\|^2} \frac{s_j}{\|s_j\|}$  (3)
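To make Equations (1)–(3) concrete, the following minimal NumPy sketch (our own illustration with assumed toy shapes, not the implementation used in the experiments) computes the prediction vectors, the weighted sum, and the squash non-linearity for a single pair of capsule layers.

```python
import numpy as np

def squash(s, eps=1e-8):
    # Equation (3): shrink short vectors toward 0 and long vectors toward unit length.
    norm_sq = np.sum(s ** 2, axis=-1, keepdims=True)
    return (norm_sq / (1.0 + norm_sq)) * s / np.sqrt(norm_sq + eps)

# Assumed toy sizes: 6 input capsules of dimension 8, 7 output capsules of dimension 16.
num_in, dim_in, num_out, dim_out = 6, 8, 7, 16
u = np.random.randn(num_in, dim_in)                    # outputs u_i of the previous capsule layer
W = np.random.randn(num_in, num_out, dim_out, dim_in)  # transformation matrices W_ij

# Equation (1): prediction vectors u_hat[j|i] = W_ij u_i
u_hat = np.einsum('ijkl,il->ijk', W, u)                # shape (num_in, num_out, dim_out)

# Uniform coupling coefficients c_ij (before routing) and Equation (2): s_j = sum_i c_ij u_hat[j|i]
c = np.full((num_in, num_out), 1.0 / num_out)
s = np.einsum('ij,ijk->jk', c, u_hat)                  # shape (num_out, dim_out)

v = squash(s)                                          # capsule outputs, lengths in (0, 1)
print(v.shape, np.linalg.norm(v, axis=-1))
```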

2.2. Dynamic Routing

A capsule calculates its output through iterative dynamic routing, which computes the intermediate values $c_{ij}$. In Equations (1) and (2), the prediction vector $\hat{u}_{j|i}$ is the prediction (vote) from capsule $i$ and has an impact on the output of capsule $j$. If the activation vector has a high similarity with the prediction vector, the two capsules are highly correlated. This similarity is measured by the scalar product of the prediction vector and the activation vector.
Therefore, in Equation (4), the similarity score $b_{ij}$ takes into account both the likelihood that a feature exists and the attributes of the feature, unlike a neuron, which only considers the likelihood of existence. At the same time, if the activation $u_i$ of capsule $i$ is very low, then, because the length of $\hat{u}_{j|i}$ is proportional to $u_i$, $b_{ij}$ will also remain low; that is, if the capsule of a detail feature is not activated, the correlation between that detail feature and the overall feature is very low. The coupling coefficient $c_{ij}$ is calculated by the softmax of $b_{ij}$ in Equation (5):
$b_{ij} \leftarrow \hat{u}_{j|i} \cdot v_j$  (4)
$c_{ij} = \frac{\exp(b_{ij})}{\sum_k \exp(b_{ik})}$  (5)
The process of dynamic routing is shown in Algorithm 1:
Algorithm 1 Dynamic Routing
Routing($\hat{u}_{j|i}$, $r$, $l$)
  for all capsules $i$ in layer $l-1$ and capsules $j$ in layer $l$: $b_{ij} \leftarrow 0$
  for $r$ iterations do
    for all capsules $i$ in layer $l-1$: $c_i \leftarrow \mathrm{softmax}(b_i)$
    for all capsules $j$ in layer $l$: $s_j \leftarrow \sum_i c_{ij} \hat{u}_{j|i}$
    for all capsules $j$ in layer $l$: $v_j \leftarrow \mathrm{squash}(s_j)$
    for all capsules $i$ in layer $l-1$ and capsules $j$ in layer $l$: $b_{ij} \leftarrow b_{ij} + \hat{u}_{j|i} \cdot v_j$
  return $v_j$
Dynamic routing is not a complete replacement for backpropagation. The transformation matrices $W_{ij}$ are still trained by backpropagation, while dynamic routing is only used to calculate the outputs of the capsules. The coefficients $c_{ij}$ are calculated to quantify the connection between a child capsule and its parent capsule, and the routing logits $b_{ij}$ are re-initialized to 0 before each dynamic routing calculation is performed [43].
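The routing loop of Algorithm 1 can be sketched in NumPy as follows; the shapes continue the toy example above, and the function is only an illustration of the iteration, not the authors' implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def squash(s, eps=1e-8):
    norm_sq = np.sum(s ** 2, axis=-1, keepdims=True)
    return (norm_sq / (1.0 + norm_sq)) * s / np.sqrt(norm_sq + eps)

def dynamic_routing(u_hat, r=3):
    """u_hat: prediction vectors of shape (num_in, num_out, dim_out); r: routing iterations."""
    num_in, num_out, _ = u_hat.shape
    b = np.zeros((num_in, num_out))                 # routing logits b_ij initialised to 0
    for _ in range(r):
        c = softmax(b, axis=1)                      # coupling coefficients c_ij over parent capsules j
        s = np.einsum('ij,ijk->jk', c, u_hat)       # s_j = sum_i c_ij u_hat[j|i]
        v = squash(s)                               # v_j = squash(s_j)
        b = b + np.einsum('ijk,jk->ij', u_hat, v)   # agreement update: b_ij += u_hat[j|i] . v_j
    return v

v = dynamic_routing(np.random.randn(6, 7, 16), r=3)
print(v.shape)  # (7, 16)
```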

3. Residual Network

Deep convolutional networks integrate features of different levels, such as global features and detail features, and the levels of features can be enriched by deepening the network. Therefore, a deeper network structure is generally used to obtain more detailed features. However, traditional CNNs suffer from a degradation problem when too many layers are used: when the network reaches a certain depth and becomes too complicated, the accuracy saturates and then drops rapidly.
ResNet was proposed by He et al. in 2015 [40]. Because hierarchical networks contain many redundancies, ResNet is designed to optimize the network layers. The aim of ResNet is to realize identity mappings and ensure that the input and output of an identity layer are the same; which layers act as identity layers is determined automatically through training. ResNet replaces several layers of the original network with a residual block.
The specific structure of the residual block is shown in Figure 2, where $x$ is the input of the residual block and $F(x)$ is the residual. $F(x)$ is the output after the linear transformation and activation of the first layer. After the linear transformation of the second layer, the input $x$ of the block is added to $F(x)$, and the sum is activated by ReLU to produce the output. The path that adds the initial input $x$ to the output of the second layer is called a shortcut connection. Establishing a direct channel between the input and the output allows the parameterized layers to focus on learning the residual between the input and the output.
The residual operation is shown in Equations (6)–(8), where $\sigma$ in Equation (6) represents the non-linear function ReLU. In Equation (7), $y$ is the common output of the shortcut and the second ReLU. In Equation (8), when the input and output dimensions need to be changed, for example when the number of channels changes, a linear transformation $W_s$ can be applied to $x$ in the shortcut.
$F = W_2\, \sigma(W_1 x)$  (6)
$y = F(x, \{W_i\}) + x$  (7)
$y = F(x, \{W_i\}) + W_s x$  (8)
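As a rough NumPy illustration of Equations (6)–(8), with fully connected toy weights W1, W2 and an optional projection Ws that are our own assumptions, the residual block adds its input back to the learned residual before the final activation:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def residual_block(x, W1, W2, Ws=None):
    """Forward pass of a two-layer residual block.
    Equation (6): F = W2 * sigma(W1 x); Equations (7)/(8): y = F + x (or + Ws x)."""
    F = W2 @ relu(W1 @ x)                   # residual mapping F(x, {W_i})
    shortcut = x if Ws is None else Ws @ x  # identity shortcut, or projection when dimensions change
    return relu(F + shortcut)               # add, then apply the second ReLU

d_in, d_out = 16, 32
x = np.random.randn(d_in)
W1 = np.random.randn(d_out, d_in)
W2 = np.random.randn(d_out, d_out)
Ws = np.random.randn(d_out, d_in)           # projection needed because d_out != d_in (Equation (8))
print(residual_block(x, W1, W2, Ws).shape)  # (32,)
```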

4. ResCapNet for LiDAR Classification

The proposed method is shown in Figure 3. The network structure consists of two parts: the upper part is ResNet, which extracts features, and the lower part is CapsNet, which performs the classification.

4.1. Proposed Network Structure

We adopt the structure of ResNet-34 and modify it to fit LiDAR data. ResNet-34 consists of four parts containing three, four, six, and three identity blocks, respectively, and the identity blocks in the four parts have 64, 128, 256, and 512 filters, respectively. In the experiments of this article, because the input size is small, we reduced the kernel size of the first convolution layer from 7 to 3 to ensure that the network can extract useful information. Meanwhile, the numbers of filters used by the identity blocks in the four parts are reduced to 16, 28, 40, and 52, respectively, and no output classification layer is used. Figure 4 shows the identity block used in this article, which consists of two convolutional layers and two batch normalization (BN) layers.
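A possible Keras realization of the identity block in Figure 4 and of the four modified stages is sketched below; the layer arrangement follows the description above, while the tensor names, the 1 × 1 projection used when the channel count changes, and other details are our own assumptions rather than the authors' released code.

```python
from tensorflow.keras import layers, Model

def identity_block(x, filters):
    """Identity block of Figure 4: Conv-BN-ReLU, Conv-BN, shortcut addition, ReLU."""
    shortcut = x
    if x.shape[-1] != filters:
        # Project the shortcut with a 1x1 convolution when the channel count changes (assumption).
        shortcut = layers.Conv2D(filters, (1, 1), padding='same')(shortcut)
    y = layers.Conv2D(filters, (3, 3), padding='same')(x)
    y = layers.BatchNormalization()(y)
    y = layers.Activation('relu')(y)
    y = layers.Conv2D(filters, (3, 3), padding='same')(y)
    y = layers.BatchNormalization()(y)
    y = layers.Add()([y, shortcut])        # shortcut connection keeps the input information intact
    return layers.Activation('relu')(y)

inputs = layers.Input(shape=(38, 38, 1))                   # 38 x 38 DSM patch
x = layers.Conv2D(16, (3, 3), padding='same')(inputs)      # first convolution, kernel reduced to 3
for n_blocks, n_filters in zip([3, 4, 6, 3], [16, 28, 40, 52]):
    for _ in range(n_blocks):
        x = identity_block(x, n_filters)
feature_extractor = Model(inputs, x)                       # these features are then sent to CapsNet
feature_extractor.summary()
```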
The number of dynamic routing iterations in the digit caps is set to 3 for both data sets. The convolution kernel size in the primary caps is 3 × 3 and the number of channels is set to 3. Because there are seven land classes in the Bayview Park data set, the number of vectors in the primary caps and digit caps is set to 7, and the number of capsules in the digit caps is also set to 7. Meanwhile, there are 11 land classes in the Recology data set, so the number of vectors in the primary caps and digit caps and the number of capsules in the digit caps are all set to 11.
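For reference, the capsule-head settings just described can be collected in a small configuration dictionary (our own notation for summarizing the hyperparameters; it is not code from the paper):

```python
# Capsule-head hyperparameters of ResCapNet as described in Section 4.1.
capsule_config = {
    'Bayview Park': {'routing_iterations': 3, 'primary_caps_kernel': (3, 3),
                     'primary_caps_channels': 3, 'digit_caps_vectors': 7,
                     'digit_caps_capsules': 7},    # seven land classes
    'Recology':     {'routing_iterations': 3, 'primary_caps_kernel': (3, 3),
                     'primary_caps_channels': 3, 'digit_caps_vectors': 11,
                     'digit_caps_capsules': 11},   # eleven land classes
}
```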

4.2. Adaptive Learning Optimization Algorithm

In this article, Stochastic Gradient Descent (SGD) with momentum is used to back-propagate and update the network parameters to obtain the optimal ResCapNet, as shown in Equations (9) and (10),
$v \leftarrow \beta \cdot v - \alpha \cdot \nabla f(x)$  (9)
$x \leftarrow x + v$  (10)
where $\alpha$ is the learning rate, $\beta$ is the momentum factor, $v$ is the accumulated velocity, and $\nabla f(x)$ is the gradient, which acts on $v$ directly. When the direction of the negative gradient is the same as the direction of $v$, the update direction is correct and the weights are updated quickly.
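A tiny NumPy sketch of the update rule in Equations (9) and (10), applied to a toy quadratic loss (the loss function, step sizes, and variable names are illustrative only):

```python
import numpy as np

def sgd_momentum_step(x, grad, v, alpha=0.001, beta=0.9):
    """One parameter update with momentum: Equations (9) and (10)."""
    v = beta * v - alpha * grad   # accumulate velocity; the gradient acts on v directly
    x = x + v                     # move the parameters along the velocity
    return x, v

# Toy usage on the quadratic loss f(x) = 0.5 * ||x||^2, whose gradient is x.
x = np.array([1.0, -2.0])
v = np.zeros_like(x)
for _ in range(100):
    x, v = sgd_momentum_step(x, grad=x, v=v, alpha=0.1, beta=0.9)
print(x)  # converges toward the minimum at the origin
```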

4.3. Loss and Activate Function

This article uses the ReLU function as the activation function of the network. As shown in Equation (11), some outputs of the neurons are set to zero, which reduces the dependency between parameters and alleviates the overfitting of the network.
$g(x) = \max(0, x)$  (11)
We adopt the softmax function for classification and use its exponential form in Equation (12).
$a_j^L = \frac{e^{z_j^L}}{\sum_K e^{z_K^L}}$  (12)
Here $z_j^L$ is the input of the $j$-th neuron in the last layer, $a_j^L$ is its output, and $e$ is the natural constant. The denominator $\sum_K e^{z_K^L}$ sums over all neurons in the $L$-th layer. The loss function is therefore the cross-entropy loss in Equation (13).
$\mathrm{Loss}_i = -\log y_i = -\log \frac{e^{z_j^L}}{\sum_K e^{z_K^L}}$  (13)
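A small NumPy illustration of Equations (11)–(13) with made-up logits (not part of the original implementation):

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)               # Equation (11)

def softmax(z):
    e = np.exp(z - z.max())                 # subtract the max for numerical stability
    return e / e.sum()                      # Equation (12): a_j = e^{z_j} / sum_k e^{z_k}

def cross_entropy(z, true_class):
    return -np.log(softmax(z)[true_class])  # Equation (13): loss for the true class

logits = np.array([2.0, 0.5, -1.0, 0.1, 1.2, -0.3, 0.8])  # toy outputs for 7 land classes
print(relu(np.array([-2.0, 3.0])))          # [0. 3.]
print(softmax(logits).sum())                # probabilities sum to 1
print(cross_entropy(logits, true_class=0))
```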

5. Experimental Results and Analysis

5.1. Algorithm Data Description

In this article, two different LiDAR data sets were used to evaluate the proposed method: the Bayview Park data set and the Recology data set. They were obtained from the 2012 IEEE International Remote Sensing Image Fusion Competition. The Bayview Park data set was collected in June 2010 by the sensor WorldView2 in San Francisco, USA, as shown in Figure 5. The data set has a spatial resolution of 1.8 m and contains 300 × 200 pixels. It has seven land classes: building1, building2, building3, road, trees, soil, and seawater.
Figure 6 shows the Recology data set, which was also acquired at an urban location in San Francisco, USA. It contains 200 × 250 pixels and has a spatial resolution of 1.8 m. It has 11 land classes: building1, building2, building3, building4, building5, building6, building7, trees, parking lot, soil, and grass.

5.2. Experimental Setup

The experiments in this article were carried out under the Windows operating system and accelerated with an Nvidia RTX 2060 (Asus, Taiwan, China) graphics card. The code uses TensorFlow as the backend and is implemented with Keras and Python (Anaconda, Austin, TX, USA). The data sets were divided into training sets and test sets: we randomly selected 400, 500, 600, and 700 samples from each data set as training sets and used the remaining samples to test the model. Experiments showed that an input size of 38 × 38 pixels worked best for ResCapNet, so the input size of all comparative experiments was also set to 38 × 38 pixels, and the DSM data were linearly mapped to [−0.5, 0.5]. The training batch size for both data sets was 32. Training was run for at most 150 epochs, and when the classification accuracy of the network no longer increased for more than 20 epochs, training was stopped early. The padding of each layer's feature maps was set to 'same', so that the height and width of each layer's inputs and outputs remain unchanged. The structure of the CNN is shown in Table 1.
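The sample selection and scaling described above could look roughly like the following sketch; the patch-extraction helper, the array names, and the randomly generated placeholder DSM and labels are hypothetical.

```python
import numpy as np

def extract_patch(dsm, row, col, size=38):
    """Cut a size x size neighbourhood around a labelled pixel (zero-padded at the borders)."""
    half = size // 2
    padded = np.pad(dsm, half, mode='constant')
    return padded[row:row + size, col:col + size]

# Placeholders: the rasterized LiDAR-DSM and the per-pixel ground truth (0 = unlabelled, 1-7 = land classes).
dsm = np.random.rand(300, 200)                # stands in for the 300 x 200 Bayview Park DSM
labels = np.random.randint(0, 8, (300, 200))

# Linearly map the DSM values to [-0.5, 0.5].
dsm = (dsm - dsm.min()) / (dsm.max() - dsm.min()) - 0.5

# Randomly select 700 labelled pixels for training; the rest are used for testing.
rows, cols = np.nonzero(labels > 0)
idx = np.random.permutation(len(rows))
train_idx, test_idx = idx[:700], idx[700:]
x_train = np.stack([extract_patch(dsm, r, c) for r, c in zip(rows[train_idx], cols[train_idx])])
y_train = labels[rows[train_idx], cols[train_idx]] - 1
print(x_train.shape, y_train.shape)           # (700, 38, 38), (700,)
```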
We used the SGD algorithm with momentum as the gradient optimizer. The momentum was set to 0.9 and the learning-rate decay was set to 10−6. When training the ResCapNet model, the initial learning rate for the Bayview Park and Recology data sets was set to 0.001, and when training the CNN and ResNet models, the initial learning rate was also set to 0.001. For the Bayview Park data set, the maximum depth of the decision tree was set to 100, and for the Recology data set it was set to 25. The kernel function of the SVM was the radial basis function (RBF), the RBF coefficient was left at its default value 'auto', and the penalty parameter of the error term was set to 100. The value of k for the KNN was set to 1, the leaf_size was set to 30, and the distance metric was the Euclidean distance. The number of estimators of the Random Forest was set to 30 for both data sets.
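Under these settings, the optimizer, batch size, epoch budget, and early-stopping rule could be wired up in Keras as in the following sketch; the tiny stand-in model and random placeholder data are ours, not the actual ResCapNet or the LiDAR patches.

```python
import numpy as np
from tensorflow.keras import layers, models
from tensorflow.keras.callbacks import EarlyStopping
from tensorflow.keras.optimizers import SGD

# Tiny stand-in model; the real network is the ResCapNet of Section 4.
model = models.Sequential([
    layers.Input(shape=(38, 38, 1)),
    layers.Conv2D(16, (3, 3), padding='same', activation='relu'),
    layers.Flatten(),
    layers.Dense(7, activation='softmax'),
])
model.compile(optimizer=SGD(learning_rate=0.001, momentum=0.9),
              loss='sparse_categorical_crossentropy', metrics=['accuracy'])

# Stop training when validation accuracy has not improved for 20 consecutive epochs.
early_stop = EarlyStopping(monitor='val_accuracy', patience=20, restore_best_weights=True)

x_train = np.random.rand(700, 38, 38, 1)      # placeholder patches
y_train = np.random.randint(0, 7, 700)        # placeholder labels
model.fit(x_train, y_train, validation_split=0.1,
          batch_size=32, epochs=150, callbacks=[early_stop])
```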

5.3. Experimental Results and Analysis

We adopted the overall accuracy (OA), average accuracy (AA), kappa coefficient (K), recall, precision, and RGB false-color maps to evaluate the performance of the models. Table 2 and Table 3 provide the classification results of the different methods on the Bayview Park and Recology data sets when 400, 500, 600, and 700 training samples are selected, respectively.
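The three summary indices can be computed from a confusion matrix, for example with scikit-learn; the sketch below uses made-up predictions and is not the evaluation script of this article.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, cohen_kappa_score

y_true = np.random.randint(0, 7, 1000)   # placeholder ground-truth labels (7 classes)
y_pred = np.random.randint(0, 7, 1000)   # placeholder predictions

cm = confusion_matrix(y_true, y_pred)
oa = np.trace(cm) / cm.sum()                   # overall accuracy: fraction of correctly classified pixels
aa = np.mean(np.diag(cm) / cm.sum(axis=1))     # average accuracy: mean of the per-class accuracies
kappa = cohen_kappa_score(y_true, y_pred)      # kappa coefficient

print(f"OA={oa:.4f}, AA={aa:.4f}, K={kappa:.4f}")
```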
We can see that ResCapNet always achieved the highest accuracy; the best OA was 96.12% ± 0.51% for the Bayview Park data set and 96.39% ± 0.79% for the Recology data set. The best OA on the Bayview Park data set was 0.70%, 1.33%, 5.95%, 5.51%, 5.69%, 10.06%, 18.91%, and 19.27% higher than that of OctSqueezeNet, ResNet, CapsNet, CNN, Random Forest, KNN, SVM, and Decision Tree, respectively. The best OA on the Recology data set increased by 0.48%, 0.67%, 6.22%, 3.91%, 4.68%, 8.03%, 19.18%, and 20.09% compared with OctSqueezeNet, ResNet, CapsNet, CNN, Random Forest, KNN, SVM, and Decision Tree, respectively.
Figure 7 compares the test results of the different methods when 700 training samples were selected for the two data sets; it can be intuitively seen that the proposed method had the best classification effect. Table 4 and Table 5 give the precision and recall of each class for 700 samples on the Bayview Park and Recology data sets, and Table 6 and Table 7 give the per-class classification accuracy on the two data sets. According to the per-class results shown in these four tables, when CapsNet was used alone, the classification of land classes with lower heights was good, because CapsNet is sensitive to spatial features, but its overall classification accuracy was not high. When ResNet was used alone, the classification accuracy of land classes with greater heights was high, but it had difficulty identifying land classes with lower heights. The combination of the two greatly reduced the influence of land-class height on the classification results, and the classification accuracy of each category was very high.
Figure 8 and Figure 9 visually show the classification results of each class on the two data sets; it can be clearly seen that the per-class results of ResCapNet were excellent. Figure 10 and Figure 11 provide classification maps for the different classifiers.

6. Conclusions

This article designs a deep learning model, ResCapNet, which combines the advantages of ResNet and CapsNet and improves on the original structures to classify remotely sensed LiDAR data effectively. Two well-known LiDAR data sets are considered in this article, and eight established algorithms are compared with the proposed method. The results show that the proposed method is competitive with state-of-the-art classification methods for LiDAR and achieves better classification results, reaching 96.12% and 96.39% OA on the Bayview Park and Recology data sets, respectively, when 700 training samples are selected.
The shortcut channel of ResNet retains more complete feature information and alleviates the network performance degradation caused by an inappropriate CNN depth, while automatically extracting effective features from the data. This enables the subsequent CapsNet to learn more useful feature information. Meanwhile, because of the sensitivity of CapsNet to spatial transformations of features, it can extract more detailed feature information and retain more valuable information than ordinary CNNs. Thus, the combination of the two structures obtains a very good classification effect.
In addition, the practical performance of this method on other remote sensing data sets still needs to be verified. Meanwhile, we need to further explore how to automatically generate an optimal network model suitable for LiDAR classification.

Author Contributions

This article was completed by all authors. A.W. and M.W. designed and implemented the classification algorithm. H.W. and K.J. made an experimental analysis of the algorithm. Y.I. participated in the writing of the article. All authors have read and agreed to the published version of the manuscript.

Funding

Supported by National Natural Science Foundation of China (NSFC-61671190), the University Nursing Program for Young Scholars with Creative Talents in Heilongjiang Province (UNPYSCT-2017086) and Fundamental Research Foundation for Universities of Heilongjiang Province (LGYC2018JQ014).

Acknowledgments

The authors would like to thank the support of the laboratory and university.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Zhang, L.J.; Li, Q.; Wang, Z.Z.; Liu, H.J.; Li, Z.S.; Gui, Y.; Kletzli, R.; Yang, X.; Chen, S.; Liu, Y. Lidar Application in Selection and Design of Power Line Route. In Proceedings of the 2007 IEEE International Geoscience and Remote Sensing Symposium, Barcelona, Spain, 23–28 July 2007; pp. 3109–3111. [Google Scholar]
  2. Gao, J.; Sun, J.F.; Wei, J.S.; Wang, Q. Research of Underwater Target Detection Using a Slit Streak Tube Imaging Lidar. In Proceedings of the 2011 Academic International Symposium on Optoelectronics and Microelectronics Technology, Harbin, China, 12–16 October 2011; pp. 240–243. [Google Scholar]
  3. Liu, J.K.; Shih, T.Y.; Liao, Z.Y.; Lau, C.C.; Hsu, P.H. The Geomorphometry of Rainfall-Induced Landslides in Alishan Area Obtained by Airborne Lidar and Digital Photography. In Proceedings of the IGARSS 2008—2008 IEEE International Geoscience and Remote Sensing Symposium, Boston, MA, USA, 7–11 July 2008. [Google Scholar]
  4. Zhang, X.Y.; Wang, S.P.; Yun, X.C. Bidirectional Active Learning: A Two-way Exploration into Unlabeled and Labeled Dataset. IEEE Trans. Neural Netw. Learn. Syst. (TNNLS) 2015, 26, 3034–3044. [Google Scholar] [CrossRef]
  5. Zhang, X.Y.; Shi, H.C.; Li, C.S. Learning Transferable Self-Attentive Representations for Action Recognition in Untrimmed Videos with Weak Supervision. In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), Honolulu, HI, USA, 27 January–1 February 2019; pp. 1–8. [Google Scholar]
  6. Zhang, X.Y.; Li, C.S.; Shi, H.C.; Zhu, X.B.; Li, P.; Dong, J. AdapNet: Adaptability Decomposing Encoder-decoder Network for Weakly Supervised Action Recognition and Localization. IEEE Trans. Neural Netw. Learn. Syst. (TNNLS) 2020, 1–12. [Google Scholar] [CrossRef]
  7. Zhang, X.Y.; Shi, H.C.; Zhu, X.B.; Li, P. Active Semi-Supervised Learning based on Self-Expressive Correlation with Generative Adversarial Networks. Neurocomputing 2019, 345, 103–113. [Google Scholar] [CrossRef]
  8. Lo, C.S.; Lin, C. Growth-competition-based Stem Diameter and Volume Modeling for Tree Level Forest Inventory Using Airborne LiDAR Data. IEEE Trans. Geosci. Remote Sens. 2013, 51, 2216–2226. [Google Scholar] [CrossRef]
  9. Qi, C.R.; Yi, L.; Su, H. PointNet++: Deep Hierarchical Feature Learning on Points a Metric Space. In Advances in Neural Information Processing Systems; MIT Press: Cambridge, MA, USA, 2017; pp. 5099–5108. [Google Scholar]
  10. Wang, A.L.; He, X.; Ghamisi, P.; Chen, Y.S. LiDAR Data Classification Using Morphological Profiles and Convolutional Neural Networks. IEEE Geosci. Remote Sens. Lett. 2018, 15, 74–778. [Google Scholar] [CrossRef]
  11. Liu, Y.; Ren, Y.; Hu, L.; Liu, Z. Study on Highway Geological Disasters Knowledge base for Remote Sensing Images Interpretation. In Proceedings of the IEEE International Geoscience and Remote Sensing Symposium, Munich, Germany, 22–27 July 2012. [Google Scholar]
  12. Lodha, S.K.; Kreps, E.J.; Helmbold, D.P.; Fitzpatrick, D.N. Aerial LiDAR data classification using support vector machines (SVM). In Proceedings of the Third International Symposium on 3D Data Processing, Visualization, and Transmission (3DPVT’06), Chapel Hill, NC, USA, 14–16 June 2006. [Google Scholar]
  13. Sasaki, T.; Imanishi, J.; Ioki, K.; Morimoto, Y.; Kitada, K. Object-based Classification of Land Cover and Tree Species by integrating airborne LiDAR and high spatial resolution imagery data. Landsc. Ecol. Eng. 2012, 8, 157–171. [Google Scholar] [CrossRef]
  14. Naidoo, L.; Cho, M.A.; Mathieu, R.; Asner, G. Classification of Savanna Tree Species, in the Greater Kruger National Park Region, by Integrating Hyperspectral and LiDAR Data in a Random Forest Data Mining Environment. ISPRS J. Photogramm. Remote Sens. 2012, 69, 167–179. [Google Scholar] [CrossRef]
  15. Khodadadzadeh, M.; Li, J. Fusion of Hyperspectral and LiDAR Remote Sensing Data Using Multiple Feature Learning. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2015, 8, 2971–2983. [Google Scholar] [CrossRef]
  16. Ghamisi, P.; Höfle, B.; Zhu, X.X. Hyperspectral and LiDAR Data Fusion Using Extinction Profiles and Deep Convolutional Neural Network. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2016, 10, 3011–3024. [Google Scholar] [CrossRef]
  17. Ghamisi, P.; Hofle, B. LiDAR Data Classification Using Extinction Profiles and a Composite Kernel Support Vector Machine. IEEE Geosci. Remote Sens. Lett. 2017, 14, 659–663. [Google Scholar] [CrossRef]
  18. He, X.; Wang, A.L.; Ghamisi, P.; Li, G.; Chen, Y.S. LiDAR Data Classification Using Spatial Transformation and CNN. IEEE Geosci. Remote Sens. Lett. 2018, 16, 125–129. [Google Scholar] [CrossRef]
  19. Xia, J.S.; Yokoya, N.T.; Iwasaki, A. Fusion of Hyperspectral and LiDAR Data with a Novel Ensemble Classifier. IEEE Geosci. Remote Sens. Lett. 2018, 15, 957–961. [Google Scholar] [CrossRef]
  20. Ge, C.; Du, Q.; Li, W.; Li, Y.S.; Sun, W.W. Hyperspectral and LiDAR Data Classification Using Kernel Collaborative Representation Based Residual Fusion. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2019, 12, 1963–1973. [Google Scholar] [CrossRef]
  21. Wang, A.L.; Wang, M.H.; Jiang, K.Y.; Zhao, L.F.; Iwahori, Y.J. A Novel Lidar Data Classification Algorithm Combined Densenet with STN. In Proceedings of the 2019 International Geoscience and Remote Sensing Symposium (IGARSS), Yokohama, Japan, 28 July–2 August 2019; pp. 2483–2486. [Google Scholar]
  22. Wang, A.L.; Wang, M.H.; Jiang, K.Y.; Cao, M.Q.; Iwahori, Y.J. A Dual Neural Architecture Combined SqueezeNet with OctConv for LiDAR Data Classification. Sensors 2019, 19, 4927. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  23. Ito, S.; Hiratsuka, S.; Ohta, M.; Matsubara, H.; Ogawa, M. Small Imaging Depth LIDAR and DCNN-Based Localization for Automated Guided Vehicle. Sensors 2018, 18, 177. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  24. Kwon, S.K.; Jung, H.S.; Baek, W.K.; Kim, D. Classification of Forest Vertical Structure in South Korea from Aerial Orthophoto and Lidar Data Using an Artificial Neural Network. Appl. Sci. 2017, 7, 1046. [Google Scholar] [CrossRef] [Green Version]
  25. Shao, J.; Qu, C.; Li, J.; Peng, S. A Lightweight Convolutional Neural Network Based on Visual Attention for SAR Image Target Classification. Sensors 2018, 18, 3039. [Google Scholar] [CrossRef] [Green Version]
  26. Gao, F.; Huang, T.; Wang, J.; Sun, J.; Hussain, A.; Yang, E. Dual-Branch Deep Convolution Neural Network for Polarimetric SAR Image Classification. Appl. Sci. 2017, 7, 447. [Google Scholar] [CrossRef] [Green Version]
  27. Gao, Q.; Lim, S.; Jia, X. Hyperspectral Image Classification Using Convolutional Neural Networks and Multiple Feature Learning. Remote Sens. 2018, 10, 299. [Google Scholar] [CrossRef] [Green Version]
  28. Zhu, X.B.; Li, Z.Z.; Zhang, X.Y.; Li, P. Deep Convolutional Representations and Kernel Extreme Learning Machines for Image Classification. Multimed. Tools Appl. (MTA) 2018, 78, 29271–29290. [Google Scholar] [CrossRef]
  29. Jiang, Y.G.; Wu, Z.X.; Tang, J.H.; Li, Z.C.; Xue, X.Y.; Chang, S.H. Modeling Multimodal Clues in a Hybrid Deep Learning Framework for Video Classification. IEEE Trans. Multimed. (TMM) 2018, 78, 3137–3147. [Google Scholar] [CrossRef] [Green Version]
  30. Jiang, Y.G.; Wu, Z.X.; Wang, J.; Xue, X.Y.; Chang, S.H. Exploiting Feature and Class Relationships in Video Categorization with Regularized Deep Neural Network. IEEE Trans. Pattern Anal. Mach. Intell. (TPAMI) 2018, 40, 352–364. [Google Scholar] [CrossRef] [PubMed]
  31. Yang, P.; Zhao, P.; Gao, X.; Liu, Y. Robust Cost-sensitive Learning for Recommendation with Implicit Feedback. In Proceedings of the 2018 SIAM International Conference on Data Mining, San Diego, CA, USA, 3–5 May 2018; Society for Industrial and Applied Mathematics: Philadelphia, PA, USA, 2018; pp. 621–629. [Google Scholar]
  32. Sabour, S.; Frosst, N.; Hinton, G.E. Dynamic Routing Between Capsules. In Advances in Neural Information Processing Systems; MIT Press: Cambridge, MA, USA, 2017; pp. 3856–3866. [Google Scholar]
  33. Wang, X.; Tan, K.; Chen, Y. CapsNet and Triple-GANs Towards Hyperspectral Classification. In Proceedings of the 2018 Fifth International Workshop on Earth Observation and Remote Sensing Applications (EORSA), Xi’an, China, 18–20 June 2018. [Google Scholar]
  34. Ahmad, A.; Kakillioglu, B.; Velipasalar, S. 3D Capsule Networks for Object Classification from 3D Model Data. In Proceedings of the 2018 Asilomar Conference on Signals, Systems, and Computers, Pacific Grove, CA, USA, 28–31 October 2018; pp. 2225–2229. [Google Scholar]
  35. Zhu, K.Q.; Chen, Y.S.; Ghamisi, P.; Jia, X.P.; Benediktsson, J.A. Deep Convolutional Capsule Network for Hyperspectral Image Spectral and Spectral-Spatial Classification. Remote Sens. 2019, 11, 223. [Google Scholar] [CrossRef] [Green Version]
  36. Paoletti, M.E.; Haut, J.M.; Beltran, R.F.; Plaza, J.; Plaza, A.; Li, J.; Pla, F. Capsule Networks for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2019, 57, 2145–2160. [Google Scholar] [CrossRef]
  37. Afshar, P.; Plataniotis, K.N.; Mohammadi, A. Capsule Networks for Brain Tumor Classification Based on MRI Images and Coarse Tumor Boundaries. In Proceedings of the 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Brighton, UK, 12–17 May 2019; pp. 1368–1372. [Google Scholar]
  38. Yin, J.H.; Li, S.; Zhu, H.M.; Luo, X.Y. Hyperspectral Image Classification Using CapsNet with Well-Initialized Shallow Layers. IEEE Geosci. Remote Sens. Lett. 2019, 16, 1095–1099. [Google Scholar] [CrossRef]
  39. Wang, X.; Tan, K.; Du, Q.; Chen, Y.; Du, P. Caps-TripleGAN: GAN-Assisted CapsNet for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2019, 57, 7232–7245. [Google Scholar] [CrossRef]
  40. He, K.M.; Zhang, X.Y.; Ren, S.Q.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016. [Google Scholar]
  41. Mou, L.C.; Ghamisi, P.; Zhu, X.X. Unsupervised Spectral–Spatial Feature Learning via Deep Residual Conv–Deconv Network for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2018, 56, 391–406. [Google Scholar] [CrossRef] [Green Version]
  42. Zhong, Z.L.; Li, J.; Luo, Z.M.; Chapman, M. Spectral–Spatial Residual Network for Hyperspectral Image Classification: A 3-D Deep Learning Framework. IEEE Trans. Geosci. Remote Sens. 2018, 56, 847–858. [Google Scholar] [CrossRef]
  43. Qin, F.W.; Gao, N.; Peng, Y.; Wu, Z.Z.; Shen, S.Y.; Grudtsin, A. Fine-grained Leukocyte Classification with Deep Residual Learning for Microscopic Images. Comput. Methods Programs Biomed. 2018, 162, 243–252. [Google Scholar] [CrossRef] [PubMed]
  44. Paoletti, M.E.; Haut, J.M.; Beltran, R.F.; Plaza, J.; Pla, F. Deep Pyramidal Residual Networks for Spectral–Spatial Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2019, 57, 740–754. [Google Scholar] [CrossRef]
  45. Zhang, J.P.; Xie, Y.T.; Xia, Y.; Shen, C.H. Attention Residual Learning for Skin Lesion Classification. IEEE Trans. Med. Imaging 2019, 38, 2092–2103. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Calculation chart of Capsule.
Figure 2. The identity block of ResNet.
Figure 3. Architecture of the proposed method. The proposed architecture is composed of two subnetworks: (1) ResNet and (2) CapsNet. (1) The structure of the ResNet is modified based on ResNet-34 to make it suitable for LiDAR data sets. (2) The outputs of ResNet are sent to CapsNet for LiDAR classification.
Figure 4. The identity block of ResNet used in this article.
Figure 5. Bayview Park data set: (a) DSM map; (b) Ground-truth map.
Figure 6. Recology data set: (a) DSM map; (b) Ground-truth map.
Figure 7. Classification results of different methods: (a) Bayview Park data set; (b) Recology data set.
Figure 8. Classification results of different methods for each class on the Bayview Park data set.
Figure 9. Classification results of different methods for each class on the Recology data set.
Figure 10. Classification results on the Bayview Park data set: (a) Ground-truth map; (b) Decision Tree; (c) SVM; (d) KNN; (e) Random Forest; (f) CNN; (g) CapsNet; (h) ResNet; (i) OctSqueezeNet; (j) ResCapNet.
Figure 11. Classification results on the Recology data set: (a) Ground-truth map; (b) Decision Tree; (c) SVM; (d) KNN; (e) Random Forest; (f) CNN; (g) CapsNet; (h) ResNet; (i) OctSqueezeNet; (j) ResCapNet.
Table 1. Architecture of CNN.
No. | Conv | ReLU | Pool | Stride
1 | 3 × 3 × 1 × 20 | Yes | 2 × 2 | 1
2 | 3 × 3 × 20 × 20 | Yes | 2 × 2 | 1
Table 2. Classification results of different training samples on Bayview Park data set.
Method | Index | 400 samples | 500 samples | 600 samples | 700 samples
Decision Tree | OA% | 76.84 ± 0.51 | 76.46 ± 0.71 | 76.66 ± 1.53 | 76.85 ± 1.55
Decision Tree | AA% | 71.24 ± 1.43 | 71.80 ± 2.31 | 72.04 ± 2.29 | 72.23 ± 3.14
Decision Tree | K×100 | 68.04 ± 1.69 | 68.35 ± 1.21 | 67.71 ± 2.11 | 69.73 ± 0.60
SVM | OA% | 72.48 ± 2.12 | 76.79 ± 0.31 | 76.91 ± 2.01 | 77.21 ± 0.88
SVM | AA% | 76.87 ± 1.42 | 78.59 ± 1.97 | 78.85 ± 1.15 | 81.19 ± 2.31
SVM | K×100 | 67.32 ± 1.69 | 68.39 ± 1.04 | 68.82 ± 1.67 | 69.81 ± 2.33
KNN | OA% | 79.51 ± 0.27 | 81.90 ± 0.38 | 85.25 ± 0.19 | 86.06 ± 0.77
KNN | AA% | 81.35 ± 0.16 | 83.42 ± 0.06 | 84.92 ± 0.82 | 87.47 ± 0.37
KNN | K×100 | 73.80 ± 0.22 | 76.49 ± 0.37 | 79.94 ± 0.35 | 81.95 ± 0.36
Random Forest | OA% | 86.78 ± 0.40 | 87.75 ± 0.31 | 88.16 ± 0.44 | 90.43 ± 0.67
Random Forest | AA% | 88.75 ± 1.74 | 89.20 ± 0.17 | 89.33 ± 0.48 | 89.95 ± 0.95
Random Forest | K×100 | 82.33 ± 0.62 | 83.61 ± 0.38 | 84.06 ± 0.59 | 86.57 ± 0.87
CNN | OA% | 87.35 ± 1.91 | 87.91 ± 1.16 | 88.33 ± 0.73 | 90.61 ± 1.89
CNN | AA% | 88.90 ± 1.03 | 89.63 ± 2.71 | 89.51 ± 2.04 | 90.23 ± 0.68
CNN | K×100 | 82.72 ± 1.67 | 85.02 ± 1.85 | 86.03 ± 1.98 | 86.72 ± 2.34
CapsNet | OA% | 85.01 ± 1.47 | 87.05 ± 1.19 | 90.07 ± 1.18 | 90.11 ± 0.91
CapsNet | AA% | 83.89 ± 2.13 | 87.78 ± 1.70 | 91.34 ± 1.24 | 91.64 ± 1.73
CapsNet | K×100 | 80.21 ± 1.81 | 82.85 ± 0.79 | 86.81 ± 1.45 | 86.92 ± 1.22
ResNet | OA% | 89.91 ± 2.07 | 91.57 ± 1.76 | 93.12 ± 1.51 | 94.79 ± 0.90
ResNet | AA% | 91.03 ± 1.88 | 93.23 ± 0.81 | 94.25 ± 1.06 | 95.78 ± 1.34
ResNet | K×100 | 86.62 ± 1.99 | 88.84 ± 2.39 | 90.91 ± 2.07 | 93.53 ± 1.17
OctSqueezeNet | OA% | 91.99 ± 0.81 | 92.79 ± 0.41 | 94.09 ± 1.23 | 95.42 ± 0.91
OctSqueezeNet | AA% | 93.21 ± 0.43 | 95.02 ± 0.90 | 95.75 ± 1.25 | 96.43 ± 1.37
OctSqueezeNet | K×100 | 89.48 ± 1.00 | 90.48 ± 0.47 | 92.23 ± 1.64 | 93.99 ± 1.97
ResCapNet | OA% | 93.05 ± 0.63 | 94.39 ± 0.57 | 94.87 ± 0.56 | 96.12 ± 0.51
ResCapNet | AA% | 94.36 ± 0.84 | 95.45 ± 0.79 | 96.03 ± 0.76 | 97.01 ± 1.09
ResCapNet | K×100 | 90.77 ± 0.98 | 92.56 ± 0.53 | 93.22 ± 0.77 | 94.89 ± 1.14
Table 3. Classification results of different training samples on Recology data set.
Method | Index | 400 samples | 500 samples | 600 samples | 700 samples
Decision Tree | OA% | 68.73 ± 1.22 | 73.08 ± 0.13 | 74.11 ± 0.28 | 76.30 ± 0.29
Decision Tree | AA% | 60.49 ± 2.02 | 64.28 ± 1.35 | 66.27 ± 0.62 | 68.58 ± 1.37
Decision Tree | K×100 | 63.01 ± 1.40 | 68.10 ± 0.01 | 69.38 ± 0.32 | 70.06 ± 0.33
SVM | OA% | 72.48 ± 2.12 | 76.79 ± 0.31 | 76.91 ± 2.01 | 77.23 ± 0.88
SVM | AA% | 76.87 ± 1.42 | 78.59 ± 1.97 | 78.85 ± 1.15 | 81.19 ± 2.31
SVM | K×100 | 67.32 ± 1.69 | 68.39 ± 1.04 | 68.82 ± 1.67 | 69.81 ± 2.33
KNN | OA% | 77.62 ± 0.82 | 84.73 ± 0.16 | 85.58 ± 0.03 | 88.36 ± 1.24
KNN | AA% | 80.29 ± 0.98 | 85.78 ± 2.98 | 85.31 ± 0.40 | 89.27 ± 1.05
KNN | K×100 | 73.54 ± 0.76 | 80.29 ± 0.12 | 83.08 ± 0.08 | 86.29 ± 1.04
Random Forest | OA% | 85.17 ± 1.35 | 87.22 ± 0.83 | 88.79 ± 2.07 | 91.71 ± 1.02
Random Forest | AA% | 88.19 ± 2.13 | 89.85 ± 3.06 | 90.01 ± 1.45 | 91.15 ± 1.43
Random Forest | K×100 | 82.16 ± 0.76 | 86.26 ± 1.57 | 86.54 ± 2.11 | 89.01 ± 1.22
CNN | OA% | 85.91 ± 1.33 | 88.51 ± 1.22 | 90.47 ± 0.62 | 92.48 ± 1.69
CNN | AA% | 88.46 ± 2.36 | 90.36 ± 0.43 | 90.31 ± 1.04 | 92.07 ± 1.95
CNN | K×100 | 83.03 ± 1.51 | 87.08 ± 0.79 | 86.67 ± 0.77 | 89.96 ± 1.80
CapsNet | OA% | 81.17 ± 1.46 | 85.04 ± 1.73 | 87.02 ± 0.84 | 90.17 ± 1.18
CapsNet | AA% | 82.75 ± 2.34 | 86.82 ± 1.44 | 87.62 ± 1.60 | 91.17 ± 1.87
CapsNet | K×100 | 77.43 ± 1.89 | 82.13 ± 1.02 | 84.56 ± 1.03 | 88.23 ± 1.43
ResNet | OA% | 90.53 ± 1.83 | 93.51 ± 1.39 | 95.43 ± 0.66 | 95.72 ± 0.95
ResNet | AA% | 88.70 ± 2.08 | 94.47 ± 1.13 | 94.28 ± 1.25 | 95.16 ± 1.75
ResNet | K×100 | 88.77 ± 2.33 | 92.94 ± 1.68 | 94.92 ± 0.79 | 95.06 ± 1.14
OctSqueezeNet | OA% | 92.94 ± 0.21 | 93.75 ± 1.23 | 95.07 ± 0.48 | 95.91 ± 0.73
OctSqueezeNet | AA% | 93.63 ± 0.17 | 93.72 ± 0.60 | 95.36 ± 1.15 | 95.89 ± 0.17
OctSqueezeNet | K×100 | 92.79 ± 0.74 | 93.79 ± 0.99 | 94.13 ± 0.63 | 95.13 ± 0.11
ResCapNet | OA% | 93.34 ± 1.22 | 94.21 ± 1.24 | 96.23 ± 0.98 | 96.39 ± 0.79
ResCapNet | AA% | 94.25 ± 0.81 | 95.27 ± 0.42 | 97.16 ± 1.05 | 97.31 ± 1.02
ResCapNet | K×100 | 91.17 ± 0.80 | 93.10 ± 1.03 | 95.51 ± 0.88 | 95.70 ± 0.65
Table 4. Precision and recall of each class for 700 samples on Bayview Park data set (C1–C7 denote the seven land classes).
Precision:
Method | C1 | C2 | C3 | C4 | C5 | C6 | C7
Decision Tree | 0.59 | 0.52 | 0.83 | 0.76 | 0.88 | 0.81 | 0.62
SVM | 0.83 | 0.80 | 0.78 | 0.80 | 0.84 | 0.61 | 0.88
KNN | 0.98 | 0.77 | 0.97 | 0.82 | 0.99 | 0.70 | 0.70
Random Forest | 0.84 | 0.94 | 1.00 | 1.00 | 0.91 | 0.82 | 0.89
CNN | 0.99 | 0.87 | 0.87 | 0.94 | 1.00 | 0.78 | 0.87
CapsNet | 0.93 | 0.98 | 0.86 | 0.98 | 0.92 | 0.85 | 0.79
ResNet | 0.97 | 1.00 | 1.00 | 0.86 | 0.97 | 0.90 | 0.82
OctSqueezeNet | 1.00 | 0.99 | 0.98 | 0.92 | 0.99 | 0.87 | 0.89
ResCapNet | 1.00 | 1.00 | 1.00 | 0.97 | 1.00 | 0.96 | 0.93
Recall:
Method | C1 | C2 | C3 | C4 | C5 | C6 | C7
Decision Tree | 0.70 | 0.74 | 0.78 | 0.66 | 0.81 | 0.79 | 0.72
SVM | 0.79 | 0.73 | 0.90 | 0.46 | 0.77 | 0.92 | 0.52
KNN | 0.95 | 0.96 | 0.96 | 0.87 | 0.76 | 0.94 | 0.74
Random Forest | 0.93 | 0.42 | 0.91 | 0.71 | 0.98 | 0.93 | 0.70
CNN | 0.96 | 0.85 | 0.94 | 0.80 | 0.94 | 0.99 | 0.66
CapsNet | 0.85 | 0.63 | 0.96 | 0.78 | 0.99 | 0.88 | 0.79
ResNet | 0.96 | 0.99 | 0.98 | 0.94 | 0.98 | 0.86 | 0.84
OctSqueezeNet | 0.99 | 0.99 | 1.00 | 0.93 | 0.95 | 0.95 | 0.86
ResCapNet | 0.99 | 1.00 | 1.00 | 0.98 | 0.99 | 0.97 | 0.93
Table 5. Precision and recall of each class for 700 samples on Recology data set (C1–C11 denote the 11 land classes).
Precision:
Method | C1 | C2 | C3 | C4 | C5 | C6 | C7 | C8 | C9 | C10 | C11
Decision Tree | 0.74 | 0.59 | 0.88 | 0.76 | 0.69 | 0.61 | 0.55 | 0.87 | 0.87 | 0.51 | 0.29
SVM | 0.74 | 0.78 | 0.96 | 0.91 | 0.77 | 0.77 | 0.84 | 0.86 | 0.65 | 0.76 | 1.00
KNN | 0.88 | 0.88 | 0.98 | 0.96 | 0.89 | 0.76 | 0.93 | 0.99 | 0.68 | 0.36 | 1.00
Random Forest | 0.98 | 0.92 | 0.88 | 1.00 | 0.97 | 0.98 | 1.00 | 0.86 | 0.86 | 0.81 | 1.00
CNN | 0.99 | 0.99 | 0.97 | 0.92 | 0.94 | 0.89 | 0.84 | 0.96 | 0.83 | 0.86 | 0.88
CapsNet | 0.82 | 0.87 | 0.95 | 0.95 | 0.97 | 0.89 | 0.94 | 0.92 | 0.90 | 0.83 | 0.85
ResNet | 0.98 | 0.99 | 0.98 | 0.99 | 1.00 | 0.98 | 0.95 | 0.98 | 0.91 | 0.90 | 0.95
OctSqueezeNet | 0.99 | 1.00 | 1.00 | 1.00 | 1.00 | 0.98 | 1.00 | 0.99 | 0.88 | 0.90 | 1.00
ResCapNet | 0.99 | 1.00 | 0.97 | 1.00 | 0.99 | 1.00 | 1.00 | 0.98 | 0.93 | 0.98 | 0.96
Recall:
Method | C1 | C2 | C3 | C4 | C5 | C6 | C7 | C8 | C9 | C10 | C11
Decision Tree | 0.63 | 0.76 | 0.84 | 0.51 | 0.79 | 0.56 | 0.93 | 0.84 | 0.84 | 0.58 | 0.33
SVM | 0.83 | 0.69 | 0.96 | 0.89 | 0.71 | 0.65 | 0.60 | 0.87 | 0.92 | 0.11 | 0.17
KNN | 0.97 | 0.89 | 0.94 | 0.86 | 0.94 | 0.91 | 0.96 | 0.80 | 0.68 | 0.72 | 0.54
Random Forest | 0.91 | 0.92 | 0.98 | 0.53 | 0.92 | 0.71 | 0.98 | 1.00 | 1.00 | 0.32 | 0.23
CNN | 0.99 | 0.99 | 0.97 | 0.92 | 0.94 | 0.89 | 0.84 | 0.96 | 0.83 | 0.86 | 0.88
CapsNet | 0.97 | 0.85 | 0.94 | 0.64 | 0.93 | 0.84 | 1.00 | 0.97 | 0.91 | 0.52 | 0.82
ResNet | 0.99 | 1.00 | 0.99 | 0.96 | 1.00 | 0.96 | 0.92 | 0.99 | 0.97 | 0.71 | 0.88
OctSqueezeNet | 0.99 | 1.00 | 1.00 | 0.99 | 0.99 | 0.99 | 0.75 | 1.00 | 0.97 | 0.67 | 0.94
ResCapNet | 0.99 | 1.00 | 1.00 | 0.92 | 0.99 | 0.98 | 0.96 | 1.00 | 0.98 | 0.73 | 0.87
Table 6. Classification results of each class for 700 samples on Bayview Park data set (C1–C7 denote the seven land classes).
Class | Decision Tree | SVM | KNN | Random Forest | CNN | CapsNet | ResNet | OctSqueezeNet | ResCapNet
C1 | 68.08 ± 5.13 | 81.88 ± 3.91 | 99.50 ± 1.06 | 95.18 ± 3.89 | 93.58 ± 1.53 | 94.31 ± 1.47 | 98.25 ± 1.55 | 99.52 ± 0.09 | 99.47 ± 0.53
C2 | 53.69 ± 9.28 | 84.01 ± 3.12 | 80.88 ± 1.89 | 98.81 ± 1.20 | 92.78 ± 1.12 | 95.32 ± 2.13 | 99.62 ± 0.38 | 99.93 ± 0.07 | 99.82 ± 0.18
C3 | 73.01 ± 4.49 | 91.31 ± 5.04 | 100 | 100 | 92.87 ± 1.48 | 93.26 ± 1.81 | 99.60 ± 2.86 | 99.54 ± 0.46 | 100
C4 | 72.56 ± 0.12 | 81.60 ± 4.43 | 90.84 ± 2.66 | 82.55 ± 6.37 | 91.25 ± 1.47 | 94.88 ± 1.19 | 96.43 ± 2.77 | 96.29 ± 2.77 | 98.12 ± 1.22
C5 | 86.68 ± 2.29 | 83.67 ± 1.86 | 98.15 ± 0.31 | 90.48 ± 1.16 | 86.43 ± 1.61 | 92.74 ± 1.70 | 97.72 ± 0.93 | 98.67 ± 0.88 | 98.52 ± 0.60
C6 | 78.43 ± 5.27 | 61.04 ± 3.46 | 70.62 ± 1.09 | 87.02 ± 0.57 | 85.57 ± 1.69 | 83.53 ± 0.79 | 87.87 ± 2.11 | 88.75 ± 3.26 | 89.44 ± 2.63
C7 | 66.10 ± 0.46 | 86.23 ± 2.92 | 72.26 ± 0.43 | 84.03 ± 0.91 | 90.69 ± 2.68 | 85.51 ± 1.22 | 90.99 ± 2.76 | 92.47 ± 2.30 | 93.68 ± 2.45
Table 7. Classification results of each class for 700 samples on Recology data set (C1–C11 denote the 11 land classes).
Class | Decision Tree | SVM | KNN | Random Forest | CNN | CapsNet | ResNet | OctSqueezeNet | ResCapNet
C1 | 71.87 ± 4.84 | 71.87 ± 1.01 | 90.66 ± 4.99 | 91.04 ± 3.59 | 98.34 ± 1.19 | 92.09 ± 1.09 | 98.54 ± 1.46 | 99.06 ± 0.94 | 98.13 ± 1.60
C2 | 67.46 ± 2.29 | 64.97 ± 1.94 | 82.26 ± 4.27 | 95.40 ± 4.71 | 95.40 ± 1.36 | 93.86 ± 1.21 | 98.17 ± 1.83 | 99.56 ± 0.44 | 99.76 ± 0.24
C3 | 83.85 ± 3.04 | 92.74 ± 1.10 | 95.07 ± 1.84 | 93.99 ± 1.49 | 93.99 ± 1.07 | 93.21 ± 1.12 | 98.03 ± 1.97 | 98.12 ± 1.43 | 98.41 ± 1.16
C4 | 61.09 ± 1.44 | 90.05 ± 2.11 | 96.38 ± 0.67 | 97.35 ± 0.35 | 97.35 ± 1.24 | 95.46 ± 1.13 | 95.71 ± 1.86 | 99.55 ± 0.45 | 99.63 ± 0.37
C5 | 66.72 ± 1.12 | 85.98 ± 1.42 | 91.53 ± 3.04 | 96.30 ± 2.77 | 96.30 ± 2.02 | 97.96 ± 1.95 | 98.92 ± 1.08 | 100 | 98.90 ± 1.10
C6 | 48.55 ± 6.93 | 70.04 ± 0.81 | 89.26 ± 1.38 | 94.91 ± 1.24 | 94.91 ± 1.18 | 87.28 ± 1.41 | 96.56 ± 2.29 | 95.35 ± 1.81 | 98.58 ± 1.42
C7 | 70.22 ± 9.40 | 88.09 ± 2.98 | 86.59 ± 2.68 | 96.78 ± 2.88 | 96.78 ± 1.87 | 95.00 ± 1.79 | 92.43 ± 2.53 | 96.94 ± 1.64 | 98.81 ± 1.19
C8 | 87.54 ± 2.85 | 87.05 ± 1.30 | 87.88 ± 1.55 | 95.57 ± 0.18 | 95.57 ± 1.16 | 90.22 ± 0.61 | 97.11 ± 1.62 | 97.54 ± 2.01 | 95.41 ± 1.21
C9 | 80.76 ± 1.41 | 64.26 ± 1.71 | 87.34 ± 1.02 | 76.94 ± 0.06 | 76.94 ± 1.27 | 84.29 ± 0.79 | 89.80 ± 2.03 | 87.48 ± 1.27 | 90.72 ± 1.76
C10 | 52.37 ± 0.93 | 81.03 ± 3.99 | 80.25 ± 1.30 | 73.16 ± 0.32 | 73.16 ± 1.51 | 75.42 ± 1.46 | 88.97 ± 2.67 | 89.68 ± 0.53 | 95.68 ± 2.00
C11 | 54.77 ± 3.34 | 97.94 ± 1.48 | 91.63 ± 1.24 | 98.13 ± 1.33 | 96.43 ± 1.41 | 98.13 ± 1.27 | 92.54 ± 2.46 | 91.68 ± 1.61 | 95.47 ± 2.48
