Article

Hybrid Collaborative Representation for Remote-Sensing Image Scene Classification

1 College of Information and Control Engineering, China University of Petroleum (Huadong), Qingdao 266580, China
2 Shandong Provincial Key Laboratory of Computer Networks, Shandong Computer Science Center (National Supercomputer Center in Jinan), Qilu University of Technology (Shandong Academy of Sciences), Jinan 250101, China
* Author to whom correspondence should be addressed.
Current address: No. 66 Changjiang Road West, Huangdao District, Qingdao 266580, China.
Remote Sens. 2018, 10(12), 1934; https://doi.org/10.3390/rs10121934
Submission received: 14 October 2018 / Revised: 17 November 2018 / Accepted: 27 November 2018 / Published: 1 December 2018

Abstract

In recent years, the collaborative representation-based classification (CRC) method has achieved great success in visual recognition by directly utilizing training images as dictionary bases. However, it describes a test sample with all training samples to extract shared attributes and does not consider representing the test sample with the training samples of a specific class to extract class-specific attributes. For remote-sensing images, both the shared attributes and the class-specific attributes are important for classification. In this paper, we propose a hybrid collaborative representation-based classification approach. The proposed method is capable of improving the performance of classifying remote-sensing images by embedding class-specific collaborative representation into conventional collaborative representation-based classification. Moreover, we extend the proposed method to arbitrary kernel space to explore the nonlinear characteristics hidden in remote-sensing image features and further enhance classification performance. Extensive experiments on several benchmark remote-sensing image datasets clearly demonstrate the superior performance of our proposed algorithm over state-of-the-art approaches.

1. Introduction

Remote-sensing (RS) images are widely used for land-cover classification, target identification, and thematic mapping from local to global scales owing to their technical advantages, such as multiresolution, wide coverage, repeatable observation, and multi/hyperspectral records [1]. During the past few decades, we have witnessed the rapid development of remote-sensing technology. Nowadays, a large volume of heterogeneous RS images with different spatial and spectral resolutions serves a myriad of Earth observation (EO) applications, from road detection and traffic monitoring to visual tracking and weather forecasting. To improve the performance of such intelligent EO applications, RS image scene recognition has attracted widespread attention. Liu et al. [2,3] applied p-Laplacian regularization to scene recognition. Generally, an integrated RS image scene-recognition system includes two components, i.e., a feature-learning approach [4] and a corresponding classifier, and both have a vital effect on the classification result.
As a core problem in image-related applications, image feature representation has shifted from handcrafted to learning-based methods. Specifically, most of the early literature is based on handcrafted features, such as bag-of-visual-words (BoVW) [5], part-based models [6], and models fusing global and local descriptors [7]. However, handcrafted features are limited in their ability to extract robust and transferable feature representations for image scene classification and ignore many effective cues hidden in the image. Later, an unsupervised feature-learning framework [8] was proposed by Zhang et al. for scene classification. After that, Fu et al. [9] proposed an unsupervised feature-learning method for high-resolution-image scene classification. In this method, a set of filter banks is learned from unlabeled image patches, and then each image is encoded by these filter banks, generating a final feature vector that represents the image scene well. An effective geospatial object-detection framework [10] was proposed by Han et al., combining weakly supervised learning (WSL) and high-level feature learning. Li et al. [11] developed a new framework combining multiple features, including both linear and nonlinear features, for the classification of hyperspectral scenes. In 2006, Hinton [12] pointed out that deep neural networks could learn more profound and essential features of objects of interest, which led to tremendous performance enhancements. Since then, many attempts have been made to apply deep-learning methods to feature learning in remote-sensing images. Li et al. [13] presented an unsupervised deep-learning method to learn data-driven features for urban-village detection. As one of the most popular deep-learning models in image processing, convolutional neural networks (CNNs) currently dominate the computer vision literature, achieving state-of-the-art performance in almost every topic to which they are applied. However, for some classification tasks on remote-sensing images, large amounts of labeled training data are not available. Some efforts have been made to combine CNNs with unlabeled data to tackle this problem by performing unsupervised learning. Yu et al. [14] proposed balanced data-driven sparsity to help train CNNs in an unsupervised way. Chaib et al. [15] proposed a new feature-learning method that combines deep features extracted from different pretrained CNN models for very-high-resolution image scenes.
Another core problem is to construct a visual classifier, which is one of the fundamental issues in computer vision. Representation-residual-based classifiers have attracted increasing attention in recent years due to the emerging paradigm of compressed sensing (CS) [16,17,18]. The first representation-based classifier, the sparse representation-based classification (SRC) method, was proposed within the nearest-subspace framework. The SRC [19] method first obtains the sparse representation of the test sample and then measures the residual error with respect to the training samples of each class. After that, Zhang et al. [20] proposed the collaborative representation-based classification (CRC) algorithm by using collaborative representation ($\ell_2$-norm regularizer) instead of sparse representation ($\ell_1$-norm regularizer). Many researchers in the field of remote sensing have been attracted by the superior performance of SRC and CRC. Wu et al. [21] introduced an improved sparse representation-based classification method, which represents test samples with a feature dictionary. A novel sparse representation classification method [22] was then proposed by Tang et al., which adds the local binary pattern (LBP) feature to the SRC model to extract the local texture of the remote-sensing image. For CRC, Li et al. [16] reviewed several representation-based classification approaches for hyperspectral remote-sensing imagery, including representation-based classification with weighted regularization, which measures the similarity between each atom and a testing sample, and representation with a dictionary partition using class-specific labeled samples, among other methods. Li et al. [23] proposed a joint collaborative representation (CR) classification method, which uses several complementary features to represent an image, including spectral value and spectral gradient features, Gabor texture features, and DMP features. Then, Jiang et al. [24] proposed a spatial-aware collaborative representation for hyperspectral image classification, which utilizes both spatial and spectral features to represent images. From the perspective of representing a testing sample with training samples, both shared attributes and class-specific attributes exist in remote-sensing images, and both are vital to the representation of an image. However, the CRC methods previously applied to RS images did not consider these two kinds of representation at the same time. Generally, existing nearest-subspace approaches are mainly categorized into two types: shared representation-based classification (SRC and CRC) and class-specific representation-based classification (LRC). For shared representation-based classification, Deng et al. [25] proposed a superposed linear representation classifier (SLRC) by representing the test image in terms of a superposition of the class centroids and the shared intraclass differences. For class-specific representation-based classification, Liu et al. [26,27] proposed a class-specific representation algorithm that can find the intrinsic relationship between base vectors and the original image features. Wang et al. [28] proposed a label-constrained specific representation approach to preserve the structural information in the feature space.
In this paper, we propose a hybrid collaborative representation-based classification approach. The proposed method is capable of improving the performance of classifying remote-sensing images by embedding class-specific collaborative representation into CRC. Moreover, we extend the proposed method to arbitrary kernel space to explore the nonlinear characteristics hidden in remote-sensing image features and further enhance classification performance. The scheme of our proposed method is shown in Figure 1. Our contributions are threefold:
  • We propose a novel hybrid collaborative representation-based classification method that considers both conventional collaborative representation and class-specific collaborative representation.
  • We extend our proposed hybrid collaborative representation-based classification method to arbitrary kernel space to find the nonlinear structures hidden in the image features.
  • The proposed hybrid collaborative representation-based classification method is evaluated on four benchmark remote-sensing image datasets and achieves state-of-the-art performance.
The rest of the paper is organized as follows. Section 2 overviews several classical visual-recognition algorithms and proposes our hybrid collaborative representation-based classification with kernels. Then, experimental results and analysis are shown in Section 3. Discussion about the experimental results and the proposed method is presented in Section 4. Finally, conclusions are drawn in Section 5.

2. Proposed Method

In this section, we review related work about CRC. Then, we introduce some work about class-specific CRC (CS-CRC). Finally, we focus on introducing our proposed approach.

2.1. Overview of CRC

Zhang et al. [20] proposed CRC. In CRC, all training samples are concatenated together as the base vectors to form a subspace, and the test sample is described in this subspace. To be specific, the training samples are given as $X = [X_1, X_2, \dots, X_C] \in \mathbb{R}^{D \times N}$, where $X_c \in \mathbb{R}^{D \times N_c}$ denotes the training samples from the $c$-th class, $C$ is the number of classes, $N_c$ is the number of training samples in the $c$-th class ($N = \sum_{c=1}^{C} N_c$), and $D$ is the dimension of the samples. Supposing that $y \in \mathbb{R}^{D \times 1}$ is a test sample, the objective function of CRC is as follows:
$\hat{s} = \arg\min_{s} \|y - Xs\|_2^2 + \lambda \|s\|_2^2.$    (1)
Here, $\lambda$ is the regularization parameter that controls the tradeoff between the fitting goodness and the collaborative term (i.e., multiple entries in $X$ participating in representing the test sample). The role of the regularization term is twofold. Firstly, the $\ell_2$ norm makes the least-squares solution stable. Secondly, it introduces a certain amount of "sparsity" into the collaborative representation $\hat{s}$, indicating that it is the collaborative representation, rather than the $\ell_1$-norm sparsity, that makes the representation powerful for classification. Collaborative representation-based classification effectively utilizes all training samples for visual recognition, and the objective function of CRC has an analytic solution.
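For concreteness, the CRC coding step has the closed-form ridge solution $\hat{s} = (X^{T}X + \lambda I)^{-1}X^{T}y$, and classification then compares class-wise residuals. The following NumPy sketch illustrates this; it is only an illustration under our own variable names and an assumed default $\lambda$, not the authors' implementation.

```python
import numpy as np

def crc_classify(X, labels, y, lam=1e-2):
    """Collaborative representation-based classification (CRC), Equation (1).

    X      : D x N matrix whose columns are training samples
    labels : length-N NumPy array with the class label of each column of X
    y      : length-D test sample
    lam    : regularization parameter (lambda), an assumed default
    """
    N = X.shape[1]
    # Closed-form ridge solution: s = (X^T X + lam I)^{-1} X^T y
    s = np.linalg.solve(X.T @ X + lam * np.eye(N), X.T @ y)
    # Assign y to the class with the smallest reconstruction residual
    residuals = {c: np.linalg.norm(y - X[:, labels == c] @ s[labels == c]) ** 2
                 for c in np.unique(labels)}
    return min(residuals, key=residuals.get)
```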

2.2. Class-Specific Collaborative Representation

For class-specific collaborative representation, the training samples in each category are considered a subspace. A test sample is represented with the samples in the specific class. The objective function of class-specific collaborative representation is as follows:
$\hat{s} = \arg\min_{s} \sum_{c=1}^{C} \left( \|y - X_c s_c\|_2^2 + \gamma \|s_c\|_2^2 \right)$    (2)
Here, γ is the regularization parameter to control the tradeoff between fitting goodness and collaborative term. The CS-CRC is capable of describing the sample y in each category.
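The class-specific coding in Equation (2) decouples into one small ridge problem per class; a minimal sketch (again with our own names and an assumed default $\gamma$) is given below.

```python
import numpy as np

def cs_crc_codes(X, labels, y, gamma=1e-2):
    """Class-specific collaborative representation, Equation (2).

    Each class c is coded independently:
        s_c = (X_c^T X_c + gamma I)^{-1} X_c^T y
    """
    codes = {}
    for c in np.unique(labels):
        Xc = X[:, labels == c]              # D x N_c sub-dictionary of class c
        Nc = Xc.shape[1]
        codes[c] = np.linalg.solve(Xc.T @ Xc + gamma * np.eye(Nc), Xc.T @ y)
    return codes
```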

2.3. Hybrid Collaborative Representation

Theorem 1.
The reconstruction error of collaborative representation is bounded by the reconstruction error of class-specific collaborative representation.
$\|y - Xs\|_2^2 \le \sum_{c=1}^{C}\|y - X_c s_c\|_2^2$    (3)
Proof of Theorem 1.
According to the Cauchy inequality,
$\|y - Xs\|_2^2 = \frac{1}{C^2}\left\|Cy - CXs\right\|_2^2 = \frac{1}{C^2}\left\|(y - X_1 s_1) + \cdots + (y - X_c s_c) + \cdots + (y - X_C s_C)\right\|_2^2$
$\le \frac{\left(\|y - X_1 s_1\|_2^2 + \cdots + \|y - X_c s_c\|_2^2 + \cdots + \|y - X_C s_C\|_2^2\right)\left(1^2 + \cdots + 1^2\right)}{C^2}$
$= \frac{1}{C}\sum_{c=1}^{C}\|y - X_c s_c\|_2^2 \le \sum_{c=1}^{C}\|y - X_c s_c\|_2^2$    (4)
 □
Shared collaborative representation is advantageous to reduce the reconstruction error (as shown in Theorem 1), while class-specific collaborative representation is conducive to capturing discriminant information. We propose a hybrid collaborative representation algorithm to combine these two approaches. The objective function is as follows:
$\hat{s} = \arg\min_{s} \|y - Xs\|_2^2 + \lambda\|s\|_2^2 + \tau\sum_{c=1}^{C}\left(\|y - X_c s_c\|_2^2 + \gamma\|s_c\|_2^2\right)$    (5)
In Equation (5), the first two terms are the conventional collaborative representation, and the latter two terms are the class-specific collaborative representation. Conventional collaborative representation keeps the residual error small and provides robustness, while class-specific collaborative representation provides discriminability across classes. Equation (5) can be further arranged as follows:
$\hat{s} = \arg\min_{s} \|y - Xs\|_2^2 + \beta\|s\|_2^2 + \tau\sum_{c=1}^{C}\|y - X_c s_c\|_2^2$    (6)
Here, $\beta = \lambda + \tau\gamma$. A form similar to the latter part of Equation (6) also arises in the estimation of latent-variable graphical models [29].
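The step from Equation (5) to Equation (6) only uses the fact that $s$ is the concatenation of the class-specific blocks $s_c$, so the class-specific $\ell_2$ penalties merge with the shared one:

$\lambda\|s\|_2^2 + \tau\gamma\sum_{c=1}^{C}\|s_c\|_2^2 = \lambda\|s\|_2^2 + \tau\gamma\|s\|_2^2 = (\lambda + \tau\gamma)\|s\|_2^2 = \beta\|s\|_2^2.$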

2.4. Hybrid Collaborative Representation with Kernels

Superior visual-recognition performance is often achieved in a reproducing kernel Hilbert space because image features often contain nonlinear structures. Our proposed hybrid collaborative representation algorithm is easily extended to arbitrary kernel space. Suppose there exists a kernel function $\kappa(x, y) = \phi^{T}(x)\phi(y)$, where $\phi: \mathbb{R}^{D} \to \mathbb{R}^{K}$ ($D < K$) maps the image features to a high-dimensional feature space, and $\phi(X) = [\phi(X_1), \phi(X_2), \dots, \phi(X_C)] \in \mathbb{R}^{K \times N}$. The objective function of our proposed hybrid collaborative representation algorithm with kernels is as follows:
$\hat{s} = \arg\min_{s} \|\phi(y) - \phi(X)s\|_2^2 + \beta\|s\|_2^2 + \tau\sum_{c=1}^{C}\|\phi(y) - \phi(X_c)s_c\|_2^2$    (7)

2.5. Optimization of the Objective Function

The objective function in Equation (7) can be rewritten as follows:
$f(s) = \|\phi(y) - \phi(X)s\|_2^2 + \beta\|s\|_2^2 + \tau\sum_{c=1}^{C}\left\|\phi(y) - \left[0, \dots, \phi(X_c), \dots, 0\right]s\right\|_2^2$    (8)
Here, let $[0, \dots, \phi(X_c), \dots, 0]$ be denoted by $\phi(\hat{X}_c)$. Equation (8) can then be simplified as follows:
$f(s) = \|\phi(y) - \phi(X)s\|_2^2 + \beta\|s\|_2^2 + \tau\sum_{c=1}^{C}\|\phi(y) - \phi(\hat{X}_c)s\|_2^2$
$= \mathrm{trace}\left(\phi(y)^{T}\phi(y) - 2\phi(y)^{T}\phi(X)s + s^{T}\phi(X)^{T}\phi(X)s\right) + \beta\,\mathrm{trace}\left(s^{T}s\right) + \tau\sum_{c=1}^{C}\mathrm{trace}\left(\phi(y)^{T}\phi(y) - 2\phi(y)^{T}\phi(\hat{X}_c)s + s^{T}\phi(\hat{X}_c)^{T}\phi(\hat{X}_c)s\right)$
$= (1 + \tau C)\,\mathrm{trace}\left(\kappa(y, y)\right) - 2(1 + \tau)\,\mathrm{trace}\left(\kappa(y, X)s\right) + \mathrm{trace}\left(s^{T}\left(\kappa(X, X) + \beta I + \tau\sum_{c=1}^{C}\kappa(\hat{X}_c, \hat{X}_c)\right)s\right)$    (9)
It is easy to obtain the optimal $\hat{s}$ for Equation (9), as follows:
$\hat{s} = \left(\kappa(X, X) + \beta I + \tau\sum_{c=1}^{C}\kappa(\hat{X}_c, \hat{X}_c)\right)^{-1}(1 + \tau)\,\kappa(X, y)$    (10)
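Since $\phi(\hat{X}_c) = [0, \dots, \phi(X_c), \dots, 0]$, the matrix $\kappa(\hat{X}_c, \hat{X}_c)$ equals $\kappa(X, X)$ with everything outside the $c$-th diagonal block zeroed out, so $\sum_{c=1}^{C}\kappa(\hat{X}_c, \hat{X}_c)$ is simply the class-block-diagonal part of $\kappa(X, X)$. The following NumPy sketch of Equation (10) uses this observation; the function and variable names are ours, not from the paper.

```python
import numpy as np

def hybrid_kcrc_code(K_XX, k_Xy, labels, beta, tau):
    """Solve Equation (10) for the hybrid collaborative code s.

    K_XX   : N x N kernel matrix kappa(X, X), with columns grouped by class
    k_Xy   : length-N vector kappa(X, y) = [kappa(x_i, y)]_i
    labels : length-N array with the class label of each training sample
    """
    N = K_XX.shape[0]
    # sum_c kappa(X_hat_c, X_hat_c): keep only the class-diagonal blocks of K_XX
    block_diag = np.zeros_like(K_XX)
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        block_diag[np.ix_(idx, idx)] = K_XX[np.ix_(idx, idx)]
    A = K_XX + beta * np.eye(N) + tau * block_diag
    return np.linalg.solve(A, (1.0 + tau) * k_Xy)
```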

2.6. Hybrid Collaborative Representation-Based Classification with Kernels

After obtaining the collaborative code $\hat{s}$, hybrid collaborative representation-based classification finds the class with the minimum residual error:
$id(y) = \arg\min_{c} \|\phi(y) - \phi(X_c)\hat{s}_c\|_2^2.$    (11)
where $id(y)$ is the label of the testing sample; $y$ is assigned to the class with the minimum residual error. The procedure of hybrid collaborative representation-based classification with kernels is summarized in Algorithm 1.
Algorithm 1 Algorithm for hybrid collaborative representation-based classification with kernels
Require: Training samples $X \in \mathbb{R}^{D \times N}$, parameters $\beta$ and $\tau$, and test sample $y$
1: for $c = 1; c \le C; c{+}{+}$ do
2:     Code $y$ with the hybrid collaborative representation algorithm with kernels (Equation (10)).
3:     Compute the residual $e_c(y) = \|\phi(y) - \phi(X_c)\hat{s}_c\|_2^2$
4: end for
5: $id(y) = \arg\min_{c} e_c$
6: return $id(y)$
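Because $\phi$ is never formed explicitly, the residual in step 3 of Algorithm 1 is evaluated through kernel values, $e_c(y) = \kappa(y, y) - 2\,\kappa(y, X_c)\hat{s}_c + \hat{s}_c^{T}\kappa(X_c, X_c)\hat{s}_c$. The sketch below strings this together with the `hybrid_kcrc_code` helper sketched in Section 2.5; it is an illustrative reconstruction of Algorithm 1 under those assumptions, not the authors' code. With the linear kernel, $K_{XX} = X^{T}X$, $k_{Xy} = X^{T}y$, and $k_{yy} = y^{T}y$, which recovers the non-kernel hybrid model.

```python
import numpy as np

def hybrid_kcrc_classify(K_XX, k_Xy, k_yy, labels, beta, tau):
    """Algorithm 1: label a test sample from precomputed kernel values.

    K_XX : N x N kernel matrix kappa(X, X), columns grouped by class
    k_Xy : length-N vector kappa(X, y)
    k_yy : scalar kappa(y, y)
    """
    s = hybrid_kcrc_code(K_XX, k_Xy, labels, beta, tau)   # Equation (10)
    best_label, best_err = None, np.inf
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        s_c = s[idx]
        # ||phi(y) - phi(X_c) s_c||^2 expanded with kernel evaluations
        err = k_yy - 2.0 * k_Xy[idx] @ s_c + s_c @ K_XX[np.ix_(idx, idx)] @ s_c
        if err < best_err:
            best_label, best_err = c, err
    return best_label
```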

3. Experimental Results

In this section, we present our experimental results on four remote-sensing image datasets. To illustrate the significance of our approach, we compared our method with several state-of-the-art methods. In the following subsections, we first introduce the experimental settings and then report the experimental results on each aerial-image dataset.

3.1. Experimental Settings

We tested our method on four datasets: the RSSCN7 dataset [30], the UC Merced Land Use dataset [5], the WHU-RS19 dataset [31], and the AID dataset [32]. For all datasets, we used a standard CNN feature representation, where each image is directly fed into the pretrained VGG model [33] and layer fc6 is used to extract a 4096-dimensional vector for each image. The final feature of each image is $\ell_2$-normalized for better performance [32]. To eliminate randomness, we randomly split each dataset into training and test sets 10 times (with repeatable splits) and report the average accuracy.
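For reference, fc6 features of this kind can be obtained with a pretrained VGG from torchvision. The sketch below is only an illustration of such a pipeline: the choice of VGG19, the input size, the preprocessing constants, and keeping the ReLU after fc6 are our assumptions rather than details stated in the paper.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Pretrained VGG; fc6 is the first fully connected layer of the classifier.
vgg = models.vgg19(pretrained=True).eval()
fc6 = torch.nn.Sequential(
    vgg.features, vgg.avgpool, torch.nn.Flatten(),
    *list(vgg.classifier.children())[:2]   # Linear(25088, 4096) + ReLU
)

preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def extract_feature(path):
    """Return an L2-normalized 4096-dimensional fc6 feature for one image."""
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        f = fc6(img).squeeze(0)
    return (f / f.norm()).numpy()
```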
We also extended our method to kernel space and used four different kernels: the linear kernel ($\kappa(x, y) = x^{T}y$), the polynomial kernel (POLY, $\kappa(x, y) = (p + x^{T}y)^{q}$), the Hellinger kernel ($\kappa(x, y) = \sum_{d=1}^{D}\sqrt{x_d y_d}$), and the radial basis function kernel (RBF, $\kappa(x, y) = \exp(-\gamma\|x - y\|_2^2)$). Here, we set $p = 4$, $q = 3$, and $\gamma = 2^{2}$. For convenience, we denote our proposed hybrid collaborative representation algorithm with kernels as Hybrid-KCRC. The proposed Hybrid-KCRC algorithm is compared with other classification algorithms, including nearest-neighbor classification (NN), LIBLINEAR [34], SOFTMAX, CRC [20], CS-CRC, and SLRC-L2 [25].
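The four kernels can be written directly as NumPy functions operating on column-sample matrices; this is a sketch with defaults mirroring the values quoted above, and the Hellinger kernel assumes non-negative features (which holds for ReLU activations such as fc6).

```python
import numpy as np

def linear_kernel(A, B):
    return A.T @ B                                    # kappa(x, y) = x^T y

def poly_kernel(A, B, p=4, q=3):
    return (p + A.T @ B) ** q                         # (p + x^T y)^q

def hellinger_kernel(A, B):
    return np.sqrt(A).T @ np.sqrt(B)                  # sum_d sqrt(x_d * y_d)

def rbf_kernel(A, B, gamma=4.0):
    # exp(-gamma * ||x - y||^2) between columns of A (D x N) and B (D x M)
    sq = (A ** 2).sum(0)[:, None] + (B ** 2).sum(0)[None, :] - 2.0 * A.T @ B
    return np.exp(-gamma * np.maximum(sq, 0.0))
```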

3.2. Experiment on UC Merced Land Use Dataset

The UC Merced Land Use Dataset [5] was manually extracted from large images in the USGS National Map Urban Area Imagery collection for various urban areas around the United States. The pixel resolution of this public-domain imagery is 1 foot. The UC Merced Land Use Dataset contains 21 categories with 2100 land-use images in total. Each image measures 256 × 256 pixels. There are 100 images for each of the following classes: agricultural, airplane, baseball diamond, beach, buildings, chaparral, dense residential, forest, freeway, golf course, harbor, intersection, medium residential, mobile home park, overpass, parking lot, river, runway, sparse residential, storage tanks, and tennis court. In Figure 2, we list several samples from this dataset.

3.2.1. Parameter Tuning on the UC Merced Land Use Dataset

There are two parameters in the objective function of the Hybrid-KCRC algorithm that need to be specified. $\beta$ adjusts the tradeoff between the reconstruction error and the collaborative representation, while $\tau$ controls the tradeoff between the shared collaborative representation and the class-specific collaborative representation. $\beta$ and $\tau$ are tuned to achieve the best accuracy: $\beta$ is tuned in the range of $2^{-9}$ to $2^{2}$, and $\tau$ in the range of $2^{-10}$ to $2^{4}$. We randomly chose 20 images per category as training samples and 20 as testing samples in this section. Figure 3 shows the classification rate with different $\beta$ and $\tau$ for the four kernels. For the linear, POLY, RBF, and Hellinger kernels, the optimal parameters $(\beta, \tau)$ are $(2^{-4}, 2^{-6})$, $(2^{0}, 2^{-8})$, $(2^{-7}, 2^{-7})$, and $(2^{0}, 2^{-8})$, respectively.
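The tuning procedure amounts to a plain grid search over $(\beta, \tau)$ on a held-out split; a minimal sketch follows, in which the exponent grids and the accuracy-evaluation callback are our assumptions corresponding to the ranges quoted above.

```python
from itertools import product

def grid_search(evaluate, beta_exps=range(-9, 3), tau_exps=range(-10, 5)):
    """Return the (beta, tau) pair with the highest validation accuracy.

    evaluate : callable (beta, tau) -> accuracy on a validation split
    """
    best_b, best_t = max(product(beta_exps, tau_exps),
                         key=lambda bt: evaluate(2.0 ** bt[0], 2.0 ** bt[1]))
    return 2.0 ** best_b, 2.0 ** best_t
```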

3.2.2. Comparison with Several Classical Classifier Methods on UC Merced Land Use Dataset

First, we randomly chose 20 images per category as training samples and 20 as testing samples in this section. Table 1 illustrates the effectiveness of Hybrid-KCRC for classifying images. Among the four kernels, the Hybrid-KCRC algorithm achieves its highest accuracy of 91.43% with the POLY and RBF kernels. This is 1.03% higher than the CRC method and 2.33% higher than the CS-CRC method.
Second, we increased the number of training samples in each category to evaluate the performance of our proposed Hybrid-KCRC method. Figure 4 shows the classification rate on the UC-Merced dataset with 20, 40, 60, 80 training samples in each category. From Figure 4, we can draw the conclusion that our proposed Hybrid-KCRC method achieves superior performance to the liblinear, CRC, and CS-CRC methods.

3.2.3. Confusion Matrix on UC Merced Land Use Dataset

To further illustrate the superior performance of our proposed Hybrid-KCRC method, we evaluated the per-class classification rate of our method on the UC-Merced dataset using a confusion matrix. In this section, we randomly chose 80 images per class as training samples and 20 images per class as testing samples. To eliminate randomness, we again randomly split the dataset into training and test sets 10 times (with repeatable splits). The confusion matrices are shown in Figure 5. From Figure 5, for the Hybrid-KCRC methods, 13, 13, 12, and 12 classes achieved classification accuracy greater than or equal to 0.99 for the linear, polynomial, RBF, and Hellinger kernels, respectively, whereas only 9 and 6 classes reached this level for CRC and CS-CRC, respectively. Compared with the CRC method, the Hybrid-KCRC methods achieved a significant performance boost on the denseresidential class. Compared with the CS-CRC method, the Hybrid-KCRC methods achieved a large performance improvement on the storagetanks and tenniscourt classes. It is worth noting that none of the methods performed well on the denseresidential and mediumresidential classes.

3.2.4. Comparison with State-of-the-Art Approaches

For comparison, we followed previous works in the literature [35,36] and randomly selected 80% of the images from each class as the training set and the remaining 20% as the testing set. Several baseline methods (e.g., LIBLINEAR, CRC, CS-CRC), as well as state-of-the-art remote-sensing image-classification methods, were used as benchmarks.
Table 2 shows the overall classification accuracy of various remote-sensing image-classification methods. First, we compared our proposed Hybrid-KCRC method with the LIBLINEAR, CRC, and CS-CRC methods and found that Hybrid-KCRC achieved superior performance to these three baselines. Note that our proposed Hybrid-KCRC builds upon the CRC and CS-CRC methods. Second, we compared our Hybrid-KCRC with state-of-the-art remote-sensing image-classification results. It is clear that our proposed Hybrid-KCRC achieved the top performance. It should be noted that the features used by CNN-W + VLAD with SVM, CNN-R + VLAD with SVM, and CaffeNet + VLAD are more effective than features extracted directly from a CNN (e.g., the CaffeNet method with 93.42% versus the CaffeNet + VLAD method with 95.39%).

3.3. Experiment on the RSSCN7 Dataset

The RSSCN7 dataset was collected from Google Earth and contains 2800 aerial-scene images divided into seven classes, i.e., grassland, forest, farmland, industry, parking lot, residential, and river and lake region. There are 400 images in each class, and all images have the same size of 400 × 400 pixels. Figure 6 shows several sample images from the dataset.

3.3.1. Comparison with Several Classical Classifier Methods on the RSSCN7 Dataset

First, we randomly selected 100 images as training samples and 100 images as testing samples from each category. For the linear, POLY, RBF, and Hellinger kernels, the optimal parameters $(\beta, \tau)$ were $(2^{-4}, 2^{-7})$, $(2^{-1}, 2^{-6})$, $(2^{-6}, 2^{-6})$, and $(2^{-1}, 2^{-6})$, respectively. Recognition accuracy is shown in Table 3.
From Table 3, we can see that the Hybrid-KCRC algorithm outperformed the other conventional methods, achieving accuracies of 86.39%, 87.34%, 87.29%, and 86.71% for the linear, POLY, RBF, and Hellinger kernels, respectively. Among the four kernels, the Hybrid-KCRC algorithm achieved its highest accuracy, 87.34%, with the POLY kernel. This is 1.57% higher than the CRC method and 3.11% higher than the CS-CRC method.
Second, we increased the number of training samples in each category to evaluate the performance of our proposed Hybrid-KCRC method. Figure 7 shows the classification rate on the RSSCN7 dataset with 100, 200, and 300 training samples in each category. From Figure 7, we find that our proposed Hybrid-KCRC algorithm achieved superior performance to the baseline methods. The CS-CRC method with 300 training samples per category achieved a lower classification rate than with 200; the reason might be that too many training samples cause it to overfit. Moreover, both Hybrid-KCRC (POLY) and Hybrid-KCRC (RBF) achieved the top accuracy.

3.3.2. Confusion Matrix on the RSSCN7 Dataset

To further illustrate the superior performance of our proposed Hybrid-KCRC method, we evaluated the per-class classification rate of our method on the RSSCN7 dataset using a confusion matrix. In this section, we randomly chose 200 images per class as training samples and 100 images per class as testing samples. To eliminate randomness, we again randomly split the dataset into training and test sets 10 times (with repeatable splits). The confusion matrices are shown in Figure 8. From Figure 8, compared with the CS-CRC and CRC methods, the Hybrid-KCRC methods achieved better performance in most categories.

3.4. Experiment on the WHU-RS19 Dataset

The WHU-RS19 dataset was collected from Google Earth imagery and is composed of 1005 aerial images belonging to 19 classes. Figure 9 shows several samples from this dataset.
We randomly selected 20 samples per class for training and 20 for testing. For the linear, POLY, RBF, and Hellinger kernels, the optimal parameters $(\beta, \tau)$ were $(2^{-2}, 2^{-6})$, $(2^{-2}, 2^{-6})$, $(2^{-6}, 2^{-8})$, and $(2^{-1}, 2^{-4})$, respectively. Recognition accuracy is shown in Table 4.
From Table 4, we can see that the Hybrid-KCRC algorithm outperformed the other conventional methods, achieving accuracies of 94.76%, 95.34%, 95.34%, and 95.39% for the linear, POLY, RBF, and Hellinger kernels, respectively. Among the four kernels, the Hybrid-KCRC algorithm achieved its highest accuracy of 95.39% with the Hellinger kernel. This is 0.81% higher than the CRC method and 1.44% higher than the CS-CRC method.
Then, we changed the number of training samples in each category to illustrate the performance of our proposed method. Figure 10 shows the classification accuracy on the WHU-RS19 dataset with 10, 20, and 30 training samples in each category. We can clearly see from Figure 10 that our proposed Hybrid-KCRC algorithm achieved superior performance to classical methods.

3.5. Experiment on the AID Dataset

The AID dataset is a new large-scale aerial-image dataset that collects sample images from Google Earth imagery. The AID dataset is the most challenging dataset for the scene classification of aerial images. The dataset is made up of the following 30 aerial-scene types: airport, bare land, baseball field, beach, bridge, center, church, commercial, dense residential, desert, farmland, forest, industrial, meadow, medium residential, mountain, park, parking, playground, pond, port, railway station, resort, river, school, sparse residential, square, stadium, storage tanks, and viaduct. The size of each aerial image is fixed to 600 × 600 pixels to cover a scene with various resolutions. There are 10,000 images labeled into 30 categories. In Figure 11, we show several images of this dataset.
We randomly selected 20 samples per class for training and 20 for testing. For the linear, POLY, RBF, and Hellinger kernels, the optimal parameters $(\beta, \tau)$ were $(2^{-4}, 2^{-7})$, $(2^{0}, 2^{-8})$, $(2^{-7}, 2^{-9})$, and $(2^{-3}, 2^{-6})$, respectively. Recognition accuracy is shown in Table 5.
From Table 5, we can see that the Hybrid-KCRC algorithm outperformed the other conventional methods, achieving accuracies of 81.07%, 82.07%, 82.05%, and 81.28% for the linear, POLY, RBF, and Hellinger kernels, respectively. Among the four kernels, the Hybrid-KCRC algorithm achieved its highest accuracy of 82.07% with the POLY kernel. This is 1.34% higher than the CRC method and 4.15% higher than the CS-CRC method. We also used different numbers of training samples in each category to evaluate the performance of the Hybrid-KCRC method. The classification rate on the AID dataset using 20, 40, 60, and 80 training samples in each category is shown in Figure 12. From Figure 12, we find that our proposed Hybrid-KCRC algorithm was better than several classical classification methods.

4. Discussion

  • For RS image classification, both shared attributes and class-specific attributes are vital to the representation of testing samples with training samples. Therefore, based on CRC, we propose a hybrid collaborative representation-based classification method that can decrease the reconstruction error and improve the classification rate. Through comparison with several state-of-the-art methods for RS image classification, we can see that our proposed method is capable of effectively improving the performance of classifying remote-sensing images.
  • Because image features contain nonlinear structures, we extended our method into a reproducing kernel Hilbert space to further improve its performance with kernel functions. Comparing with several classical classification methods, the classification rates of the Hybrid-KCRC method on these four datasets are all higher than those of the NN, LIBLINEAR, SOFTMAX, SLRC-L2, CRC, and CS-CRC methods; our proposed Hybrid-KCRC method clearly achieved superior performance to these methods.
  • We took the UC-Merced dataset as an example and evaluated the performance of our proposed Hybrid-KCRC method per class with a confusion matrix. From the confusion matrix, we can see that the Hybrid-KCRC method is better than other methods in most categories.
  • It is true that several pretrained models are available for extracting features, and the ResNet model outperforms the VGG model. In this paper, however, we paid more attention to the design of the classifier than to feature extraction; we only extracted features of remote-sensing images to complete the classification task, and VGG is a very popular candidate model for extracting CNN activations of images. These are the reasons why we chose VGG. As a matter of fact, our method could be further improved by using other, better pretrained feature-extraction models. To demonstrate this, we also extracted CNN features with the pretrained ResNet model [44], using layer pool5, which yields a 2048-dimensional vector for each image. The final feature of each image was $\ell_2$-normalized. The experimental results are shown in Table 6. For a fair comparison on each dataset, we fixed the training ratios of the UC-Merced, RSSCN7, WHU-RS19, and AID datasets to 80%, 50%, 60%, and 50%, respectively. From Table 6, we can see that the features extracted via the pretrained ResNet model performed better than the features extracted via the pretrained VGG model. Our proposed Hybrid-KCRC methods also achieved superior performance to classical methods, such as SVM and CRC.
  • The figures showing classification rates with different numbers of training samples clearly illustrate that the RBF and polynomial kernel functions suit RS images better. Notably, the classification rate of the Hybrid-KCRC method on the AID dataset increased by 1% when moving from the linear kernel to the polynomial kernel. The RBF kernel induces a metric different from that of the linear kernel: two points $x$ and $y$ that are close in the linear kernel space become relatively closer in the RBF kernel space, whereas distant points are pushed further apart. This makes the representation more discriminative and yields higher classification accuracy. The linear kernel is a special case of the polynomial kernel ($p = 0$, $q = 1$). Note that the kernel function $\kappa(x, y)$ can be approximated by an explicit map $\phi^{T}(x)\phi(y)$ [45] to save time for the learning algorithm; we will adopt such a kernel approximation in future work (a sketch of one such approximation is given after this list).
  • In [23], the authors proposed a joint collaborative representation (CR) classification method that uses several complementary features to represent images, including spectral values and spectral gradient features, Gabor texture features, and DMP features. This kind of multifeature fusion can also be implemented via our proposed method. In [24], the authors proposed a spatial-aware CR for hyperspectral image classification that utilizes both spatial and spectral features to represent an image. Such a spatial penalty term can also be added to the objective function of our proposed method.
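As an illustration of approximating $\kappa(x, y)$ with an explicit feature map, the sketch below uses random Fourier features for the RBF kernel, so that $z(x)^{T}z(y) \approx \kappa(x, y)$; this is one standard scheme, not necessarily the approximation analysed in [45].

```python
import numpy as np

def random_fourier_features(X, gamma, n_features=512, seed=0):
    """Explicit map z(x) with z(x)^T z(y) ~ exp(-gamma * ||x - y||^2).

    X : D x N matrix of column samples
    """
    rng = np.random.default_rng(seed)
    D = X.shape[0]
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(n_features, D))
    b = rng.uniform(0.0, 2.0 * np.pi, size=(n_features, 1))
    return np.sqrt(2.0 / n_features) * np.cos(W @ X + b)
```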

5. Conclusions

In this paper, we proposed a hybrid collaborative representation-based algorithm via embedding class-specific collaborative representation into conventional collaborative representation-based classification to improve the performance of classifying remote-sensing images. The proposed method is capable of balancing class-specific collaborative representation and shared collaborative representation. Moreover, we extended the proposed method to arbitrary kernel space to explore the nonlinear characteristics hidden in remote-sensing image features to further enhance classification performance. Extensive experiments on four benchmark remote-sensing image datasets have demonstrated the superiority of our proposed hybrid collaborative representation algorithm.

Author Contributions

B.-D.L., W.X., J.M., and Y.L. conceived and designed the experiments; B.-D.L. and W.X. performed the experiments; Y.-J.W. analyzed the data; W.X. and B.-D.L. wrote the paper; all authors read and approved the final manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (Grant No. 61402535, No. 61271407), the Natural Science Foundation for Youths of Shandong Province, China (Grant No. ZR2014FQ001), the Natural Science Foundation of Shandong Province, China (Grant No. ZR2017MF069), the Qingdao Science and Technology Project (No. 17-1-1-8-jch), the Fundamental Research Funds for the Central Universities, the China University of Petroleum (East China) (Grant No. 16CX02060A), the Open Research Fund from Shandong Provincial Key Laboratory of Computer Network (No. SDKLCN-2018-01), and the Innovation Project for Graduate Students of China University of Petroleum (East China) (No. YCX2018063).

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
CRC     Collaborative Representation-Based Classification
CS-CRC  Class-Specific Collaborative Representation-Based Classification
RS      Remote Sensing
EO      Earth Observation
NS      Nearest Subspace
BoVW    Bag-of-Visual-Words
CNNs    Convolutional Neural Networks
LRC     Linear Regression-Based Classification
POLY    Polynomial Kernel
RBF     Radial Basis Function Kernel
NN      Nearest Neighbor
SLRC    Superposed Linear Representation Classifier

References

1. Navalgund, R.R.; Jayaraman, V.; Roy, P.S. Remote sensing applications: An overview. Curr. Sci. 2007, 93, 1747–1766.
2. Liu, W.; Ma, X.; Zhou, Y.; Tao, D.; Cheng, J. p-Laplacian Regularization for Scene Recognition. IEEE Trans. Cybern. 2018, 1–14.
3. Ma, X.; Liu, W.; Li, S.; Tao, D.; Zhou, Y. Hypergraph p-Laplacian Regularization for Remotely Sensed Image Recognition. IEEE Trans. Geosci. Remote Sens. 2018, PP, 1–11.
4. Yang, X.; Liu, W.; Tao, D.; Cheng, J.; Li, S. Multiview Canonical Correlation Analysis Networks for Remote Sensing Image Recognition. IEEE Geosci. Remote Sens. Lett. 2017, 14, 1855–1859.
5. Yang, Y.; Newsam, S. Bag-of-visual-words and spatial extensions for land-use classification. In Proceedings of the Sigspatial International Conference on Advances in Geographic Information Systems, San Jose, CA, USA, 2–5 November 2010; ACM: New York, NY, USA, 2010; pp. 270–279.
6. Cheng, G.; Han, J.; Guo, L.; Liu, Z.; Bu, S.; Ren, J. Effective and Efficient Midlevel Visual Elements-Oriented Land-Use Classification Using VHR Remote Sensing Images. IEEE Trans. Geosci. Remote Sens. 2015, 53, 4238–4249.
7. Zou, J.; Li, W.; Chen, C.; Du, Q. Scene Classification Using Local and Global Features with Collaborative Representation Fusion. Inf. Sci. 2016, 348, 209–226.
8. Zhang, F.; Du, B.; Zhang, L. Saliency-guided unsupervised feature learning for scene classification. IEEE Trans. Geosci. Remote Sens. 2015, 53, 2175–2184.
9. Fu, M.; Yuan, Y.; Lu, X. Unsupervised feature learning for scene classification of high resolution remote sensing image. In Proceedings of the IEEE China Summit and International Conference on Signal and Information Processing, Chengdu, China, 12–15 July 2015; pp. 206–210.
10. Han, J.; Zhang, D.; Cheng, G.; Guo, L.; Ren, J. Object Detection in Optical Remote Sensing Images Based on Weakly Supervised Learning and High-Level Feature Learning. IEEE Trans. Geosci. Remote Sens. 2015, 53, 3325–3337.
11. Li, J.; Huang, X.; Gamba, P.; Bioucas-Dias, J.M.; Zhang, L.; Benediktsson, J.A.; Plaza, A. Multiple Feature Learning for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2015, 53, 1592–1606.
12. Hinton, G.E.; Salakhutdinov, R.R. Reducing the dimensionality of data with neural networks. Science 2006, 313, 504–507.
13. Li, Y.; Huang, X.; Liu, H. Unsupervised Deep Feature Learning for Urban Village Detection from High-Resolution Remote Sensing Images. Photogramm. Eng. Remote Sens. 2017, 83, 567–579.
14. Yu, Y.; Zhong, P.; Gong, Z. Balanced data driven sparsity for unsupervised deep feature learning in remote sensing images classification. In Proceedings of the IEEE International Geoscience and Remote Sensing Symposium, Fort Worth, TX, USA, 23–28 July 2017; pp. 668–671.
15. Chaib, S.; Yao, H.; Gu, Y.; Amrani, M. Deep feature extraction and combination for remote sensing image classification based on pre-trained CNN models. In Proceedings of the International Conference on Digital Image Processing, Hong Kong, China, 19–22 May 2017; International Society for Optics and Photonics: Bellingham, WA, USA, 2017; p. 104203D.
16. Li, W.; Du, Q. A survey on representation-based classification and detection in hyperspectral remote sensing imagery. Pattern Recognit. Lett. 2016, 83, 115–123.
17. Donoho, D.L. Compressed sensing. IEEE Trans. Inf. Theory 2006, 52, 1289–1306.
18. Baraniuk, R.G. Compressive sensing [lecture notes]. IEEE Signal Process. Mag. 2007, 24, 118–121.
19. Wright, J.; Yang, A.Y.; Ganesh, A.; Sastry, S.S.; Ma, Y. Robust face recognition via sparse representation. IEEE Trans. Pattern Anal. Mach. Intell. 2009, 31, 210–227.
20. Zhang, L.; Yang, M.; Feng, X. Sparse representation or collaborative representation: Which helps face recognition? In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Barcelona, Spain, 6–13 November 2011; pp. 471–478.
21. Wu, S.; Chen, H.; Bai, Y.; Zhu, G. A remote sensing image classification method based on sparse representation. Multimed. Tools Appl. 2016, 75, 12137–12154.
22. Tang, X.; Liu, Y.; Chen, J. Improvement of Remote Sensing Image Classification Method Based on Sparse Representation. Comput. Eng. 2016, 42, 254–258, 265.
23. Li, J.; Zhang, H.; Zhang, L.; Huang, X.; Zhang, L. Joint Collaborative Representation With Multitask Learning for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2014, 52, 5923–5936.
24. Jiang, J.; Chen, C.; Yu, Y.; Jiang, X.; Ma, J. Spatial-Aware Collaborative Representation for Hyperspectral Remote Sensing Image Classification. IEEE Geosci. Remote Sens. Lett. 2017, 14, 404–408.
25. Deng, W.; Hu, J.; Guo, J. Face recognition via collaborative representation: Its discriminant nature and superposed representation. IEEE Trans. Pattern Anal. Mach. Intell. 2018, 40, 2513–2521.
26. Liu, B.D.; Shen, B.; Wang, Y.X. Class specific dictionary learning for face recognition. In Proceedings of the IEEE International Conference on Security, Pattern Analysis, and Cybernetics (ICSPAC), Wuhan, China, 18–19 October 2014; pp. 229–234.
27. Liu, B.D.; Shen, B.; Gui, L.; Wang, Y.X.; Li, X.; Yan, F.; Wang, Y.J. Face recognition using class specific dictionary learning for sparse representation and collaborative representation. Neurocomputing 2016, 204, 198–210.
28. Wang, W.; Yan, Y.; Winkler, S.; Sebe, N. Category specific dictionary learning for attribute specific feature selection. IEEE Trans. Image Process. 2016, 25, 1465–1478.
29. Zorzi, M.; Sepulchre, R. AR Identification of Latent-Variable Graphical Models. IEEE Trans. Autom. Control 2016, 61, 2327–2340.
30. Zou, Q.; Ni, L.; Zhang, T.; Wang, Q. Deep Learning Based Feature Selection for Remote Sensing Scene Classification. IEEE Geosci. Remote Sens. Lett. 2015, 12, 2321–2325.
31. Sheng, G.; Yang, W.; Xu, T.; Sun, H. High-resolution satellite scene classification using a sparse coding based multiple feature combination. Int. J. Remote Sens. 2012, 33, 2395–2412.
32. Xia, G.S.; Hu, J.; Hu, F.; Shi, B.; Bai, X.; Zhong, Y.; Zhang, L.; Lu, X. AID: A benchmark data set for performance evaluation of aerial scene classification. IEEE Trans. Geosci. Remote Sens. 2017, 55, 3965–3981.
33. Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556.
34. Fan, R.E.; Chang, K.W.; Hsieh, C.J.; Wang, X.R.; Lin, C.J. LIBLINEAR: A library for large linear classification. J. Mach. Learn. Res. 2008, 9, 1871–1874.
35. Yu, Y.; Gong, Z.; Wang, C.; Zhong, P. An Unsupervised Convolutional Feature Fusion Network for Deep Representation of Remote Sensing Images. IEEE Geosci. Remote Sens. Lett. 2018, 15, 23–27.
36. Lu, X.; Zheng, X.; Yuan, Y. Remote sensing scene classification by unsupervised representation learning. IEEE Trans. Geosci. Remote Sens. 2017, 55, 5148–5157.
37. Lazebnik, S.; Schmid, C.; Ponce, J. Beyond bags of features: Spatial pyramid matching for recognizing natural scene categories. In Proceedings of the IEEE International Conference on Computer Vision and Pattern Recognition (CVPR), New York, NY, USA, 17–22 June 2006; pp. 2169–2178.
38. Vaduva, C.; Gavat, I.; Datcu, M. Latent Dirichlet allocation for spatial analysis of satellite images. IEEE Trans. Geosci. Remote Sens. 2013, 51, 2770–2786.
39. Cheriyadat, A.M. Unsupervised feature learning for aerial scene classification. IEEE Trans. Geosci. Remote Sens. 2014, 52, 439–451.
40. Penatti, O.A.; Nogueira, K.; dos Santos, J.A. Do deep features generalize from everyday objects to remote sensing and aerial scenes domains? In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, Boston, MA, USA, 7–12 June 2015; pp. 44–51.
41. Hu, F.; Xia, G.S.; Hu, J.; Zhang, L. Transferring deep convolutional neural networks for the scene classification of high-resolution remote sensing imagery. Remote Sens. 2015, 7, 14680–14707.
42. Lin, D.; Fu, K.; Wang, Y.; Xu, G.; Sun, X. MARTA GANs: Unsupervised representation learning for remote sensing image classification. IEEE Geosci. Remote Sens. Lett. 2017, 14, 2092–2096.
43. Li, P.; Ren, P.; Zhang, X.; Wang, Q.; Zhu, X.; Wang, L. Region-Wise Deep Feature Representation for Remote Sensing Images. Remote Sens. 2018, 10, 871.
44. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
45. Zorzi, M.; Chiuso, A. The Harmonic Analysis of Kernel Functions. Automatica 2018, 94, 125–137.
Figure 1. Scheme of our proposed hybrid collaborative representation algorithm. The left part is the collaborative representation to extract the shared attributes while the right part is the class-specific collaborative representation to extract class-specific attributes. Their combination forms our hybrid collaborative representation algorithm.
Figure 2. Samples of UC Merced Land Use dataset.
Figure 3. Parameter tuning on UC Merced Land Use Dataset.
Figure 4. Classification rate on UC-Merced dataset with different number of training samples in each category.
Figure 5. Confusion matrices on UC-Merced dataset.
Figure 6. Samples of RSSCN7 dataset.
Figure 7. Classification rate on the RSSCN7 dataset with a different number of training samples in each category.
Figure 8. Confusion matrices on the RSSCN7 dataset.
Figure 9. Samples of the WHU-RS19 dataset.
Figure 10. Classification rate on the WHU-RS19 dataset with a different number of training samples in each category.
Figure 11. Samples of the AID dataset.
Figure 12. Classification rate on the AID dataset with a different number of training samples in each category.
Table 1. Comparison with several classical classification methods on the UC Merced Land Use Dataset (%).

Methods | UC-Merced
NN | 81.88
LIBLINEAR | 89.57
SOFTMAX | 88.00
SLRC-L2 | 89.79
CRC | 90.40
CS-CRC | 89.10
Hybrid-KCRC (linear) | 90.67
Hybrid-KCRC (POLY) | 91.43
Hybrid-KCRC (RBF) | 91.43
Hybrid-KCRC (Hellinger) | 90.90
Table 2. Experiment on UC-Merced dataset (%).

Methods | Year | Accuracy
SPMK [37] | 2006 | 74%
LDA-SVM [38] | 2013 | 80.33%
SIFT + SC [39] | 2013 | 81.67%
Saliency + SC [8] | 2014 | 82.72%
CaffeNet [40] (without fine-tuning) | 2015 | 93.42%
CaffeNet [41] + VLAD | 2015 | 95.39%
DCGANs [42] (without augmentation) | 2017 | 85.36%
MAGANs [42] (without augmentation) | 2017 | 87.69%
WDM [36] | 2017 | 95.71%
UCFFN [35] | 2018 | 87.83%
CNN-W + VLAD with SVM [43] | 2018 | 95.61%
CNN-R + VLAD with SVM [43] | 2018 | 95.85%
VGG19 + liblinear | - | 95.05%
VGG19 + CRC | - | 94.67%
VGG19 + CS-CRC | - | 95.26%
VGG19 + Hybrid-KCRC (linear) | - | 96.17%
VGG19 + Hybrid-KCRC (POLY) | - | 96.29%
VGG19 + Hybrid-KCRC (RBF) | - | 96.26%
VGG19 + Hybrid-KCRC (Hellinger) | - | 96.33%
Table 3. Comparison with several classical classification methods on the RSSCN7 dataset (%).

Methods | RSSCN7
NN | 76.44
LIBLINEAR | 84.84
SOFTMAX | 82.14
SLRC-L2 | 81.99
CRC | 85.77
CS-CRC | 84.23
Hybrid-KCRC (linear) | 86.39
Hybrid-KCRC (POLY) | 87.34
Hybrid-KCRC (RBF) | 87.29
Hybrid-KCRC (Hellinger) | 86.71
Table 4. Comparison with several classical classification methods on the WHU-RS19 dataset (%).

Methods | WHU-RS19
NN | 87.74
LIBLINEAR | 94.42
SOFTMAX | 93.29
SLRC-L2 | 94.18
CRC | 94.58
CS-CRC | 93.95
Hybrid-KCRC (linear) | 94.76
Hybrid-KCRC (POLY) | 95.34
Hybrid-KCRC (RBF) | 95.34
Hybrid-KCRC (Hellinger) | 95.39
Table 5. Comparison with several classical classification methods on the AID dataset (%).

Methods | AID
NN | 65.32
LIBLINEAR | 79.93
SOFTMAX | 76.13
SLRC-L2 | 79.27
CRC | 80.73
CS-CRC | 77.92
Hybrid-KCRC (linear) | 81.07
Hybrid-KCRC (POLY) | 82.07
Hybrid-KCRC (RBF) | 82.05
Hybrid-KCRC (Hellinger) | 81.28
Table 6. Comparison with different CNN pretrained models (%).

Models | UC-Merced (0.8) | RSSCN7 (0.5) | WHU-RS19 (0.6) | AID (0.5)
CaffeNet + SVM [32] | 95.02 | 96.24 | 88.25 | 89.53
VGG16 + SVM [32] | 95.21 | 96.05 | 87.18 | 89.64
GoogleNet + SVM [32] | 94.31 | 94.71 | 85.84 | 86.39
VGG19 + SVM | 94.67 | 95.42 | 85.99 | 90.35
VGG19 + CRC | 95.05 | 95.63 | 86.97 | 89.58
VGG19 + Hybrid-KCRC (linear) | 96.17 | 95.68 | 88.16 | 89.93
VGG19 + Hybrid-KCRC (POLY) | 96.29 | 96.42 | 89.21 | 91.75
VGG19 + Hybrid-KCRC (RBF) | 96.26 | 96.5 | 89.17 | 91.82
VGG19 + Hybrid-KCRC (Hellinger) | 96.33 | 95.82 | 88.47 | 90.35
Resnet + SVM | 96.90 | 97.74 | 91.5 | 92.97
Resnet + CRC | 97.00 | 98.03 | 92.47 | 92.85
Resnet + Hybrid-KCRC (linear) | 97.29 | 98.05 | 92.89 | 92.87
Resnet + Hybrid-KCRC (POLY) | 97.40 | 98.16 | 93.11 | 93.98
Resnet + Hybrid-KCRC (RBF) | 97.43 | 98.13 | 93.07 | 94.00
Resnet + Hybrid-KCRC (Hellinger) | 97.36 | 98.37 | 92.87 | 93.15


Back to TopTop