Article

Texture Extraction Techniques for the Classification of Vegetation Species in Hyperspectral Imagery: Bag of Words Approach Based on Superpixels

by Sergio R. Blanco 1,*, Dora B. Heras 1 and Francisco Argüello 2
1 Centro Singular de Investigación en Tecnologías Inteligentes, Universidade de Santiago de Compostela, 15782 Santiago de Compostela, Spain
2 Departamento de Electrónica y Computación, Universidade de Santiago de Compostela, 15782 Santiago de Compostela, Spain
* Author to whom correspondence should be addressed.
Remote Sens. 2020, 12(16), 2633; https://doi.org/10.3390/rs12162633
Submission received: 9 July 2020 / Revised: 7 August 2020 / Accepted: 12 August 2020 / Published: 14 August 2020
(This article belongs to the Section Remote Sensing in Agriculture and Vegetation)

Abstract: Texture information allows characterizing the regions of interest in a scene. It refers to the spatial organization of the fundamental microstructures in natural images. Texture extraction has been a challenging problem in the field of image processing for decades. In this paper, different techniques based on the classic Bag of Words (BoW) approach are proposed for solving the texture extraction problem in the case of hyperspectral images of the Earth's surface. In all cases, texture extraction is performed inside regions of the scene called superpixels, and the algorithms profit from the information available in all the bands of the image. The main contribution is the use of superpixel segmentation to obtain irregular patches from the images prior to texture extraction. Texture descriptors are extracted from each superpixel. Three schemes for texture extraction are proposed: codebook-based, descriptor-based, and spectral-enhanced descriptor-based. The first one is based on a codebook generator algorithm, while the other two include additional stages of keypoint detection and description. The evaluation is performed by analyzing the results of a supervised classification using Support Vector Machines (SVM), Random Forest (RF), and Extreme Learning Machines (ELM) after the texture extraction. The results show that the extraction of textures inside superpixels increases the accuracy of the obtained classification maps. The proposed techniques are analyzed over different multi- and hyperspectral datasets, focusing on vegetation species identification. The best classification results for each image in terms of Overall Accuracy (OA) range from 81.07% to 93.77% for images taken in a river area in Galicia (Spain), and from 79.63% to 95.79% for a vast rural region in China, with reasonable computation times.

Graphical Abstract

1. Introduction

Monitoring vegetation species in a natural area is an important task in the context of human intervention planning. Specifically, observing the dynamic behavior of the vegetation provides useful insights for biodiversity conservation and forestry, among other fields. Hyperspectral imagery for remote sensing has proven to be a powerful technique in this field, with applications ranging from land cover change detection [1] to the mapping of vegetation species [2,3]. Although satellite-based remote sensing is a way of obtaining consistent and comparable data, Unmanned Aerial Vehicles (UAVs) provide a more flexible platform with higher spatial resolution. The price of the multi- or hyperspectral sensors carried on board UAVs has decreased during the last few years, making them widely used, even by small companies, for an increasing number of tasks.
In the case of images for land cover analysis, supervised classification addresses the problem of distinguishing the different vegetation species or artificial elements present in a scene, given a hyperspectral image and its reference data. In order to perform this classification, texture features can be extracted from the image [4], thus improving the classification accuracy. These features characterize the visual structures present in the scene. As a powerful visual cue, texture supplies information to identify objects or uniform regions of interest in the images. Texture can be differentiated from color in the sense that it refers to the spatial organization of a set of basic elements or primitives over the image called textons. These can be defined as the fundamental microstructures in natural images and the atoms of preattentive human visual perception [5].
Texture classification deals with designing algorithms or processing schemes for declaring a given texture region as belonging to one out of a set of categories (in a context where training samples have been provided). Research on texture features is mainly focused on three well-established approaches: Bag of Words (BoW)-based [6], Convolutional Neural Network (CNN)-based [7], and attribute-based [8]. The goal of BoW texture feature extraction is the statistical representation of texture images as histograms over a texton dictionary. The approach of CNNs aims to leverage large labeled datasets to learn high quality features, which can then be categorized using a simple classifier. In the case of the attribute-based approach, there are three essential issues: the identification of a universal texture attribute vocabulary, the establishment of an annotated benchmark texture dataset, and the estimation of texture attributes from images based on low level texture representations. One of the first attempts was carried out in [9], where a set of seventeen human comprehensible attributes (seven related to color and ten to structure) for color texture characterization were introduced.
Different papers focused on the classification of vegetation species using texture features in color, multi-, or hyperspectral imagery can be found in the literature. The simplest methods to characterize vegetation using textures are based on color histograms, statistical measures (mean, standard deviation, skewness, kurtosis, or entropy, among others), and clustered centers of filter bank responses. Following this approach, a classification scheme for the canopy cover mapping of spekboom in a large semiarid region in South Africa is presented in [10]. The scheme is based on a set of spectral features and vegetation indices, including several statistical measures in sliding windows of several sizes. A different scheme for natural roadside vegetation classification is presented in [11]. This scheme learns two individual sets of BoW dictionaries from color and filter-bank texture features.
Two simple methods for texture extraction, based on the analysis of patterns in the neighborhood of a pixel, are Local Binary Pattern (LBP) and Gray-Level Co-occurrence Matrix (GLCM). LBP is used in [12] for the classification of tree species using hyperspectral data and an aerial stereo camera system, with feature extraction performed following a patch-based approach. On the other hand, a large number of publications on the classification of vegetation are based on the GLCM texture method. Vegetation mapping in complex urban landscapes using a hybrid method combining Random Forest and GLCM texture analysis at nine different window sizes is presented in [13]. The classification is performed using ultra-high resolution imagery acquired at low altitudes. A crop classification method for hyperspectral images combining spectral indices and GLCM texture information is proposed in [14]. An object-based GLCM texture extraction method for the classification of man-planted forests in mountainous areas using satellite data is presented in [15]. As a preprocessing step, the texture features of the segmented image objects are enhanced using a 2D Gabor filter. Using very high resolution images acquired by UAVs, a study to identify the most relevant image parameters for tree species discrimination is conducted in [16]. Specifically, classification of savannah tree species is carried out by using chromatic coordinates, spectral indices, the canopy height model, and GLCM texture measures at different window sizes. Similarly, the potential of combining spectral measures and GLCM texture information for crop classification in time-series UAV images is investigated in [17].
More elaborate texture methods based on local invariant descriptors such as SURF and SIFT can also be used for characterizing vegetation species. For example, a methodology for vegetation segmentation in cornfield images obtained by UAVs is presented in [18]. Specifically, it focuses on finding an appropriate set of different color vegetation indices and local descriptors for vegetation characterization. The classification of weeds growing among crops using a BoW model based on SIFT or SURF features is presented in [19]. Finally, a study on the application of SIFT to cropland mapping in the Brazilian Amazon based on vegetation index time series is conducted in [20].
A classification scheme based on textures can be combined with other types of features obtained by UAVs to improve the classification results [10,16]. Among them are spectral features, vegetation indices, and morphological measures. For the detection of the extent of trees and shrubs, the canopy height model (CHM) is the feature most commonly used. LiDAR sensors have been widely used to collect high resolution information on forest structure. Surface reconstruction by image matching, which exploits the redundancy of multiple overlapping aerial images, can also be used to estimate the CHM [21,22]. The CHM is not used in this paper since the available datasets in many cases do not provide multiple images of the same area.
For the classification of an image using textures, it is necessary to delimit the regions over which the texture features are computed. Most of the vegetation classification methods proposed in the literature use regular patches [6,8,10,12,13,15,16,19]. In other cases, segmentation or object detection algorithms are used to divide the image into regions [14,17,18]. A technique commonly used for the extraction of uniform regions in images is segmentation based on superpixels [23,24]. A superpixel is a set of neighboring pixels (a segment) which are similar in terms of low-level properties (such as spatial proximity, color, intensity, or other criteria). Superpixels differ from other segmentation methods in that their size and regularity are similar throughout the image. They provide a convenient and compact representation of images that allows the computational cost of the processing algorithms to be reduced [25]. In the schemes presented in this paper, a texture feature vector is computed for each superpixel.
In this paper, different techniques for vegetation classification in multi- and hyperspectral images based on texture extraction and BoW are proposed. The techniques are grouped into three categories: codebook-based, descriptor-based, and spectral-enhanced descriptor-based schemes. The main contribution of this work is that in all the presented schemes the texture algorithms are computed inside superpixels, in contrast to most of the methods previously published in the literature, in which the vegetation textures are extracted from patches or objects. Moreover, some of the descriptor-based methods have not been applied before to multi- and hyperspectral images. Finally, a detailed comparison of the different techniques is carried out in terms of classification accuracy for several land cover remote sensing datasets.
The rest of the paper is organized into four sections. Section 2 presents a description of the proposed schemes involving superpixel computation and texture extraction. The experimental results for the evaluation in terms of classification performance and computational cost are presented in Section 3. The discussion is carried out in Section 4. Finally, Section 5 summarizes the main conclusions.

2. Methods

Three different schemes for texture extraction were proposed in order to obtain superpixel descriptors (i.e., a vector describing the texture or visual properties of each superpixel in the scene). The main novelty is that the texture features were computed inside these irregular patches of the images called superpixels and that the schemes were adapted to profit from the information available in all the bands of the hyperspectral images. Different texture extraction techniques can be derived from the proposed schemes depending on the algorithms selected for their stages, as will be explained throughout the paper.
The different stages of the proposed schemes, shown in Figure 1, are the following.
Superpixel extraction. This is a particular type of segmentation stage. As previously mentioned, a superpixel is a set of pixels which are similar in terms of spatial proximity, color, intensity, or other properties. There is a relationship between these superpixels and the objects present in the scene. In this stage, a set of S superpixels was extracted from the image. The computed superpixels are irregular, which distinguishes this process from other similar ones (e.g., the creation of a grid of square patches). The differences in size and shape among superpixels are due to the adaptation of each superpixel to the objects appearing in the scene.
In our case, the algorithm used for superpixel extraction was Simple Linear Iterative Clustering (SLIC) [26], although other options such as those based on watershed [27] or Efficient Topology Preserving Segmentation (ETPS) [28] would obtain similar results. SLIC clusters pixels into superpixels taking into account their relative positions and spectral values, so both spatial and spectral information are considered. This algorithm is an adaptation of k-means for superpixel generation that begins by defining initial cluster centers. Each pixel is associated with the nearest cluster center. Then, the cluster centers are adjusted to be the mean of all pixels belonging to the cluster. The assignment and update steps are repeated iteratively until the convergence criterion (a maximum number of iterations or an error value) is met. Finally, a postprocessing step enforces connectivity by reassigning disjoint pixels to nearby superpixels. It offers good results for segmenting hyperspectral images [24].
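As an illustration, the following Python sketch reproduces this stage with the SLIC implementation available in scikit-image. It is not the code used in the experiments (which relied on a C/C++ implementation), and the image array, the average superpixel size, and the regularity value below are hypothetical placeholders.

```python
# Illustrative SLIC superpixel extraction on a hyperspectral cube (H x W x B).
# scikit-image stands in for the C/C++ implementation used in the paper.
import numpy as np
from skimage.segmentation import slic

H, W, B = 256, 256, 103                          # hypothetical image dimensions
hsi = np.random.rand(H, W, B).astype(np.float32)

avg_size = 1100                                  # desired average superpixel area (pixels)
n_segments = (H * W) // avg_size                 # roughly S superpixels
labels = slic(hsi, n_segments=n_segments, compactness=20.0,
              channel_axis=-1, enforce_connectivity=True)

# 'labels' assigns every pixel to a superpixel id; later stages work per superpixel.
print(len(np.unique(labels)), "superpixels extracted")
```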
After the segmentation was computed, most of the subsequent stages in the proposed schemes were calculated at the superpixel level instead of at the pixel level. In particular, only one label from the reference data was considered for each superpixel, namely the one associated with the central pixel of the superpixel. Moreover, texture extraction was performed inside each superpixel.
Keypoint detection and description. A set of points of interest or keypoints was extracted from the image for each band and each superpixel. This stage was used in two of the schemes, as shown in Figure 1b,c. These keypoints may be extracted at the positions given by a keypoint detector [29] or densely at each pixel position over a fixed grid. In addition, they should be distinctive and robust to image transformations.
Given a keypoint and its neighboring pixels, a set of features was computed, obtaining a local texture descriptor. In our case, the algorithms used to create texture descriptors were Scale-Invariant Feature Transform (SIFT) [30], Histogram of Oriented Gradients (HOG) [31], Dense SIFT (DSIFT) [32], and Local Intensity Order Pattern (LIOP) [33]. The SIFT and HOG algorithms include both a descriptor and a keypoint detector. For LIOP a fixed grid was created, because this technique only includes a descriptor algorithm, not a keypoint detector. DSIFT also uses a similar dense approach, but the detector is built into the technique.
This process was applied to each spectral band, and then the descriptors from all the bands were grouped for each superpixel according to their location in the XY plane. At the end of the process, a variable number of keypoints (with their corresponding descriptors) was assigned to each superpixel. The dimension of each descriptor is denoted as D and their number as N, as shown in Figure 1b,c.
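A possible implementation of this stage is sketched below in Python with the OpenCV SIFT detector and descriptor; the experiments in this paper used the VLFeat library instead, and the function and variable names are hypothetical.

```python
# Sketch of per-band keypoint detection and description with OpenCV SIFT.
# Each D-dimensional descriptor is assigned to the superpixel containing its keypoint.
import cv2
import numpy as np

def band_descriptors(hsi, labels):
    """Return {superpixel_id: list of descriptors} pooled over all bands."""
    sift = cv2.SIFT_create()
    pooled = {sp: [] for sp in np.unique(labels)}
    for b in range(hsi.shape[-1]):
        # SIFT expects 8-bit images, so each band is rescaled independently.
        band8 = cv2.normalize(hsi[..., b], None, 0, 255,
                              cv2.NORM_MINMAX).astype(np.uint8)
        keypoints, descriptors = sift.detectAndCompute(band8, None)
        if descriptors is None:
            continue                              # no keypoints found in this band
        for kp, desc in zip(keypoints, descriptors):
            x, y = int(round(kp.pt[0])), int(round(kp.pt[1]))
            pooled[labels[y, x]].append(desc)     # D = 128 for SIFT
    return pooled
```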
Codebook generation. The objective of this stage was to create a texton dictionary with K codewords based on all the bands of the input image. This codebook can be learned [34,35] or predefined [36]. In this paper the codebook was learned using the k-means [6] or Gaussian Mixture Modeling (GMM) [37] algorithms. The size and nature of the codebook greatly affect the performance of the classification; the key is to generate a compact yet discriminative codebook.
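A minimal sketch of the codebook learning step is given below, with scikit-learn as a stand-in for the VLFeat routines used in the experiments; the value of K, the descriptor stack, and the random seed are hypothetical.

```python
# Learn a K-word codebook from the pooled descriptors with k-means, or fit a GMM
# when Fisher Vector encoding is to be used later.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

def learn_codebook(all_descriptors, K=64, use_gmm=False, seed=0):
    X = np.vstack(all_descriptors)                # stack of N descriptors of dimension D
    if use_gmm:
        return GaussianMixture(n_components=K, covariance_type="diag",
                               random_state=seed).fit(X)
    # KMeans.cluster_centers_ holds the K codewords (textons).
    return KMeans(n_clusters=K, random_state=seed).fit(X)
```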
Feature encoding. Given the codebook and the computed local features (i.e., the vector descriptors corresponding to each superpixel), this encoding process mapped the latter to one or a variable number of codewords, producing one feature coding vector per superpixel. This is a core component of the scheme, influencing texture classification in terms of both accuracy and speed. The feature encoding algorithms employed in this paper were the Vector of Locally Aggregated Descriptors (VLAD) [38] and Fisher Vectors (FV) [37]. Once the desired vector representation of each superpixel was obtained, it was used as the representative of the superpixel in the later stages.
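As an example, a sketch of VLAD encoding for one superpixel is shown below. It follows the standard VLAD formulation (hard assignment of descriptors to their nearest codeword, aggregation of residuals, and power and L2 normalization) and is not taken from the authors' code.

```python
# VLAD encoding of the descriptors belonging to one superpixel.
import numpy as np

def vlad_encode(descriptors, codewords):
    """descriptors: N x D array for one superpixel; codewords: K x D array."""
    K, D = codewords.shape
    v = np.zeros((K, D), dtype=np.float64)
    descriptors = np.asarray(descriptors, dtype=np.float64)
    if descriptors.size:
        # Hard-assign each descriptor to its nearest codeword (Euclidean distance).
        d2 = ((descriptors[:, None, :] - codewords[None, :, :]) ** 2).sum(-1)
        nearest = d2.argmin(axis=1)
        for k in range(K):
            members = descriptors[nearest == k]
            if len(members):
                v[k] = (members - codewords[k]).sum(axis=0)   # residual aggregation
    v = v.ravel()
    v = np.sign(v) * np.sqrt(np.abs(v))           # power normalization
    norm = np.linalg.norm(v)
    return v / norm if norm > 0 else v            # L2 normalization; zeros if no descriptors
```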
Dimensionality reduction. In this stage, the set of vectors obtained in a previous stage (e.g., descriptors or coding vectors) was reduced. This reduction was performed if the number of bands B of the image was higher than the dimension D of the descriptors or coding vectors, in which case the image was reduced to D bands (see Figure 1 for details). This step was also used in Figure 1c in order to transform the descriptors from dimension D to D_red. The techniques used for the reduction could be any of the traditional aggregation functions, such as the sum or the mean, or any other feature extraction algorithm such as, for example, Principal Component Analysis (PCA). PCA is a popular method for feature extraction [39,40] that estimates projections of the original data so that most of the variance is concentrated in a few components.
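A brief sketch of the PCA-based reduction of the spectral dimension is shown below, using scikit-learn; the target dimension and array shapes are hypothetical.

```python
# Reduce the B spectral bands of a hyperspectral cube to target_dim components with PCA.
import numpy as np
from sklearn.decomposition import PCA

def reduce_bands(hsi, target_dim):
    H, W, B = hsi.shape
    if B <= target_dim:
        return hsi                                # conditional step: no reduction needed
    flat = hsi.reshape(-1, B)                     # one row per pixel
    reduced = PCA(n_components=target_dim).fit_transform(flat)
    return reduced.reshape(H, W, target_dim)
```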
Feature classification. This last stage was not part of the texture extraction schemes; it performed a classification of the images based on the features produced by the preceding texture extraction technique. Texture features were the inputs to a superpixel-level classification, i.e., the training and testing sets consisted of superpixels described by their texture features. Once the classification finished, the same class was assigned to all the pixels in each superpixel. SVMs were selected as classifiers. They are usually presented as standard non-contextual classifiers for remote sensing classification [41] and can handle scenarios with a low number of training samples [42]. Results were also obtained with two other classifiers that are standard in remote sensing, RF and ELM [43].
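The following sketch illustrates the superpixel-level classification with a linear SVM, using scikit-learn instead of the LIBSVM, OpenCV, and ELM implementations employed in the experiments; it assumes that superpixel ids run from 0 to S-1 and that the feature matrix is aligned with those ids.

```python
# Superpixel-level classification: one feature vector and one label per superpixel;
# the predicted label is broadcast to every pixel of the superpixel.
import numpy as np
from sklearn.svm import SVC

def classify_superpixels(features, train_ids, train_labels, labels_map):
    """features: S x F matrix (row i describes superpixel i);
    labels_map: H x W map of superpixel ids produced by the segmentation."""
    clf = SVC(kernel="linear", C=0.02)            # linear kernel, C value from Section 2.5
    clf.fit(features[train_ids], train_labels)
    sp_pred = clf.predict(features)               # one predicted class per superpixel
    return sp_pred[labels_map]                    # pixel-level classification map
```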
Figure 1 illustrates the three different proposed texture extraction schemes showing the different stages according to the previous description. All of them have the image as input and one feature vector per superpixel as output.

2.1. Codebook-Based Scheme

The first scheme (named codebook-based from now on and shown in Figure 1a) began by performing two tasks in parallel: segmenting the image and creating a codebook. In the codebook generation, a texton dictionary with K codewords was created. The final set of codewords obtained was of size K × B (where B is the number of bands of the input image).
Given the generated codebook and the computed superpixel segmentation, the next stage was the feature encoding. A vector representation of each superpixel, a texture vector, was obtained by mapping each superpixel to one or more codewords. In the case of k-means, this assignment can be done using the centroid with the shortest Euclidean distance to the superpixel.
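Under one possible reading of this scheme, the encoding amounts to a Bag of Words histogram over the pixel-vectors of each superpixel, as sketched below; the function and variable names are hypothetical.

```python
# Codebook-based encoding: assign each pixel-vector of a superpixel to its nearest
# codeword and describe the superpixel by the normalized histogram of assignments.
import numpy as np

def bow_histogram(superpixel_pixels, codewords):
    """superpixel_pixels: M x B spectral vectors; codewords: K x B centroids from k-means."""
    d2 = ((superpixel_pixels[:, None, :] - codewords[None, :, :]) ** 2).sum(-1)
    assignments = d2.argmin(axis=1)               # nearest codeword per pixel (Euclidean)
    hist = np.bincount(assignments, minlength=len(codewords)).astype(np.float64)
    return hist / hist.sum()                      # K-dimensional texture vector
```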

2.2. Descriptor-Based Scheme

The second scheme in Figure 1 (named descriptor-based) represents an increase in complexity with respect to the first one. In parallel to the superpixel generation, a conditional dimensionality reduction step was performed: the image was reduced to D spectral bands only if its number of bands B was larger than the dimension D of the descriptors to be created. The next stage was keypoint detection and description. An algorithm such as SIFT or HOG was applied over each one of the bands of the image. The algorithm carried out two sequential steps: first, N keypoints (points of interest) were detected and then, using their neighboring pixels, a local texture descriptor algorithm was applied to obtain a set or pool of texture features of dimension D. Each one of these vectors was assigned to a superpixel according to the location of its keypoint. The number of keypoints per superpixel is variable, and it is even possible to obtain zero keypoints for a particular superpixel. The implications of this variable number of descriptors per superpixel are not important in this scheme because they are used only when computing the codebook, where they are stacked together. Further implications will be pointed out when describing the feature encoding stage of the next scheme.
After keypoint detection and description, the codebook generator was applied to the stacked descriptor vectors from all bands, obtaining K codewords of size D each. Another conditional dimensionality reduction stage was then performed in case the number of bands of the input image is lower than the descriptor dimension. After the dimensionality reduction (to the dimension of the input image in the first step or to the dimension of the descriptors later), the dimensions of both the codewords and the image pixel-vectors were equal (which means that now B is equal to D). The output obtained (a texture vector describing each superpixel) is equivalent to the one obtained by the codebook-based scheme in Figure 1a.

2.3. Spectral-Enhanced Descriptor-Based Scheme

The last scheme (named spectral-enhanced descriptor-based), shown in Figure 1c, differs slightly from the previous one. The main novelty is that the feature coding stage operates at the superpixel level and that spectral information is concatenated to the texture descriptors at the end of the texture extraction process.
As can be observed in the figure, the input to the encoding stage, which in the previous schemes was the hyperspectral image, is replaced here by the descriptors obtained for the different superpixels. If a superpixel has no associated texture descriptor, the resulting vector is zero; if it has one or more descriptors, all of them are compared to each codeword in order to obtain the resulting vector.
As the feature encoding process took the texture descriptors as input (unlike the previous schemes), some kind of spectral information needed to be added. With this objective, a new stage called central pixel extraction was executed. It searched for the central pixel of each superpixel in the spatial coordinates of the image and extracted the corresponding spectral values (the central pixel-vector). The resulting data structure, once the central pixels were extracted, consisted of S vectors (as many as superpixels in the segmentation), each one of dimension B (the number of bands). Finally, a concatenation was performed: a new vector per superpixel was created by stacking the texture vector from the feature encoding stage with the pixel-vector from the central pixel extraction stage. The output is equivalent to the one obtained by the previous schemes but differs in the dimension of the superpixel feature vector: B + D_red.
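The following fragment sketches the central pixel extraction and concatenation for one superpixel; taking the pixel closest to the superpixel centroid as the "central pixel" is an assumption made only for illustration purposes.

```python
# Stack the encoded texture vector of a superpixel with the spectral values of its
# central pixel, yielding a (B + D_red)-dimensional feature vector.
import numpy as np

def spectral_enhanced_feature(texture_vec, hsi, labels, sp_id):
    ys, xs = np.where(labels == sp_id)            # pixel coordinates of the superpixel
    cy, cx = ys.mean(), xs.mean()
    idx = np.argmin((ys - cy) ** 2 + (xs - cx) ** 2)   # pixel closest to the centroid
    central_pixel = hsi[ys[idx], xs[idx], :]      # B spectral values
    return np.concatenate([central_pixel, texture_vec])
```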

2.4. Dataset Description

Three datasets were used to evaluate the proposed schemes: a set of standard hyperspectral images (from now on, the standard dataset), a set of multispectral scenes from river basins for which only vegetation classes are taken into account (the Galicia dataset), and a large set of multispectral images from a vast region in China (the Gaofen dataset). The standard dataset was used for comparison purposes, as its scenes commonly appear in land cover classification papers. Both the Galicia dataset and the Gaofen dataset were used because they contain large images with a wide range of vegetation species (both forests and crops). For the Galicia dataset only vegetation classes are classified, although the images also contain other materials, while for the remaining two datasets all classes, including non-vegetation ones, are classified.
The previously mentioned standard dataset corresponds to two images commonly used in the remote sensing literature: Pavia University (Pavia) and Salinas Valley (Salinas) [44]. Pavia was obtained by the ROSIS-03 (Reflective Optics System Imaging Spectrometer) sensor over the city of Pavia, Italy, with a spatial resolution of 2.6 m/pixel and covering the spectral range from 430 to 860 nm. Its dimensions are 610 × 340 pixels and 103 bands, and its latitude and longitude are 45°11′23.66″N and 09°08′57.06″E, respectively. Salinas was obtained by the AVIRIS (Airborne Visible Infrared Imaging Spectrometer) sensor with a spectral range from 400 to 2500 nm. The main properties of this image are a resolution of 3.7 m/pixel, dimensions of 512 × 217 pixels, and 224 spectral bands. Its latitude and longitude are 36°39′33.8″N and 121°39′58.7″W. Figure 2 shows the false color composite images and the reference data corresponding to this dataset, while Table 1 displays the classes available in the reference data and the number of disjoint superpixels used for classification in training (15%) and testing (85%). Fifteen percent of the superpixels corresponds to between 14% and 15% of the pixels in the image.
The Galicia dataset is made up of four multispectral images whose objective was to monitor the interaction of masses of native vegetation with artificial structures and river beds. Four locations in the Galician provinces of A Coruña and Pontevedra were selected in an area comprised between the Eiras Dam and the River Mestas, with a distance of approximately 145.6 km end-to-end. The images were captured by a MicaSense RedEdge multispectral camera [45] mounted on a custom UAV. Its five discrete sensors provide spectral channels at wavelengths of 475 nm (blue), 560 nm (green), 668 nm (red), 717 nm (red edge), and 840 nm (near infrared). The spatial resolution is 8.2 cm/pixel at a flight height of 120 m.
The four images in the dataset are the following: River Oitavén (Oitavén from now on), of size 6689 × 6722 pixels and located at 42°22′27.8″N 8°25′30.3″W; River Mestas (Mestas from now on), of size 4915 × 9040 pixels and located at 43°38′38.5″N 7°59′04.3″W; River Ferreiras (Ferreiras from now on), of size 9335 × 9219 pixels and located at 43°32′58.8″N 7°57′33.2″W; and Eiras dam (Eiras from now on), whose dimensions are 5176 × 18,224 pixels and which is located at 42°22′26.0″N 8°25′41.3″W.
Figure 3 shows the false color composite images and their reference data (constructed in a long-term process involving forestry experts and the authors of the paper) corresponding to each one of the scenes, while Table 2 shows the classes available in the reference data for classification and the number of superpixels used for training (15%) and testing (85%). Moreover, as the objective is the identification of plant species, only vegetation classes are considered.
Finally, the Gaofen dataset was used [47]. It is a large-scale land use collection of images containing 160 annotated Gaofen-2 (GF-2) satellite scenes. GF-2 is the second satellite of the High-definition Earth Observation System promoted by the China National Space Administration. The spectral range goes from 0.45 to 0.89 μm (blue to near-infrared) and the spatial dimension of the images is 6908 × 7300 pixels. Some of the advantages of the dataset are its large coverage, wide distribution, and high spatial resolution. It is remarkable that this dataset has high intra-class and low inter-class differences. The images cover an area of more than 50,000 km² in China. Specifically, the dataset can be divided into two sets: a large-scale classification set made up of 150 high-resolution images acquired from more than 60 different cities in China, in which 5 major categories are annotated (from now on named GID5, Gaofen Image Dataset 5 classes), and a fine land-cover classification set composed of 30,000 multi-scale image patches coupled with 10 pixel-level annotated images and made up of 15 sub-categories (from now on GID15, Gaofen Image Dataset 15 classes). Figure 4 shows the false color composite images and the reference data of two images from GID5 and two others from GID15. Table 3 shows the classes available in the reference data and the number of superpixels used for training (15%) and testing (85%).

2.5. Accuracy Assessment and Set-Up Description

The classification accuracy obtained by classifying the features provided by the proposed schemes is reported in terms of the usual measures in remote sensing. The first measure, Overall Accuracy (OA), is the most widely used [48]. It provides the percentage of correctly classified pixels and is presented for every experiment. In addition, Quantity Disagreement (QD) and Allocation Disagreement (AD), which measure the disagreement between the classification map and the reference data in terms of the proportion and the spatial allocation of the classes, respectively, are also provided [49].
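A compact sketch of how these three measures can be computed from a confusion matrix is given below; it assumes the standard Pontius and Millones formulation, in which the total disagreement (1 − OA) splits into QD and AD.

```python
# OA, QD, and AD from a confusion matrix (rows: reference classes, columns: predictions).
import numpy as np

def oa_qd_ad(confusion):
    p = confusion / confusion.sum()               # cell proportions
    oa = np.trace(p)                              # overall accuracy
    ref_prop, pred_prop = p.sum(axis=1), p.sum(axis=0)
    qd = 0.5 * np.abs(ref_prop - pred_prop).sum() # disagreement in class proportions
    ad = (1.0 - oa) - qd                          # remaining disagreement: spatial allocation
    return oa, qd, ad
```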
The input data for the experiments were standardized (a mean of 0 and a standard deviation of 1). In addition, with the aim of evaluating the computational cost associated with each method, execution time measurements were performed. All the results presented in the paper are the mean of 3 independent runs for each scenario, each one obtained under identical experimental conditions.
Regarding the configuration parameters, as mentioned above, SLIC was used as the superpixel extraction algorithm. SLIC has two parameters: superpixel size and superpixel regularity. The superpixel size is the desired average size of each superpixel (in terms of area in pixel units). Values of 10 and 1100 were selected for the standard dataset (as its images are small) and for the other two datasets, respectively. For the superpixel regularity, the larger the value, the more regular the superpixels obtained; a value of 20 was selected for all datasets. These values were decided experimentally and depend mainly on the resolution of the images and on the size of the structures present in them, with larger superpixels being in general more adequate for higher-resolution images.
The classification was performed using SVM, Random Forest, and ELM. More precisely, for SVM the LIBSVM implementation version 3.24 in C/C++ was chosen [50], selecting a linear kernel. The parameter C was tuned for each SVM, and a value of 0.02 (the same for all datasets) gave the best results. In the case of Random Forest, the OpenCV implementation was used [51]. The only parameter set for this algorithm is the number of trees; after a search over the considered datasets, a value of 200 trees was chosen. Finally, the ELM implementation in [52] was selected. The number of neurons in the hidden layer was 250 for the small images and 500 for the larger ones. These are standard values for the datasets considered [43].
The classification is performed at the superpixel level, i.e., all the pixels in a superpixel are assigned the same label. As far as the training and testing features are concerned, two disjoint sets were set up for each image. Specifically, after segmenting the images using SLIC, 15% of the superpixels from each class were randomly taken for training and the remaining 85% for testing in the general scenario. For training, only one label per superpixel, the label of the spatially central pixel, is considered. Selecting 15% of the superpixels is equivalent to choosing approximately 13% of the pixels of each class. This percentage of training samples is reasonable, as shown in [43]. Results for 10% and 20% of training samples were also obtained for all the images in the Galicia dataset, as this is the most representative dataset in the study, containing large images and including only vegetation classes.
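The per-class superpixel split could be implemented as in the following sketch; the random seed and the exact rounding are assumptions, not details taken from the paper.

```python
# Stratified superpixel split: 15% of each class's superpixels for training, 85% for testing.
import numpy as np

def split_superpixels(sp_labels, train_fraction=0.15, seed=0):
    """sp_labels: one reference label per superpixel (label of its central pixel)."""
    rng = np.random.default_rng(seed)
    train_ids, test_ids = [], []
    for c in np.unique(sp_labels):
        ids = np.where(sp_labels == c)[0]
        rng.shuffle(ids)
        n_train = max(1, int(round(train_fraction * len(ids))))
        train_ids.extend(ids[:n_train])
        test_ids.extend(ids[n_train:])
    return np.array(train_ids), np.array(test_ids)
```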
Regarding other set-up details, the VLFeat library version 0.9.21 was used [32]. Specifically, the implementations of the texture extraction related algorithms SIFT, DSIFT, LIOP, HOG, GMM, VLAD, and FV available in the library were used. All the experiments were carried out using C/C++ compiled with gcc 7.5.0. Additionally, an Intel Core i7-8700K CPU at 3.70 GHz and a 64-bit Ubuntu 18.04 system were used for all the experiments, including the execution time evaluation.
The classification accuracy results were obtained for each of the three datasets described in Section 2.4, considering different texture extraction techniques. The selection of specific algorithms for each stage of the proposed texture extraction schemes resulted in 16 different techniques for the experiments. The mapping between these techniques, the schemes they correspond to, and the specific algorithms used is presented in Table 4. The row Without Texture Features shows the configuration in which no feature extraction is performed at all and the central pixel of each superpixel is used as input for the classification.

3. Experimental Results

The aim of this section is to analyze how the use of the different schemes influences the results of the classification. Classification results and computational efficiency in terms of Overall Accuracy (OA), Quantity Disagreement (QD), Allocation Disagreement (AD), and execution time are presented. The results are obtained for the three hyperspectral and multispectral datasets described above and for the texture extraction techniques that were described in Table 4. Three classifiers are considered: SVM, RF, and ELM.
The classification accuracy results for the standard dataset are shown in Table 5 for the SVM classifier, Table 6 when RF is used for classification, and Table 7 for the experiments with the ELM classifier. The listed techniques are grouped according to the texture scheme followed: codebook-based, descriptor-based, and spectral-enhanced descriptor-based. For each technique, 15% of the superpixels were randomly selected for training and the remaining 85% for testing. The best results for each image in terms of OA are highlighted with a gray background.
The results in Table 5, Table 6 and Table 7 show the same trends. As the images in the standard dataset are very small, they do not benefit from a superpixel-level classification regardless of the classifier considered, so for each classifier all techniques offer similar results and, in general, the OA values are low. The techniques from the descriptor-based and spectral-enhanced descriptor-based schemes present a slightly higher OA, with the SIFT-based ones, specifically SIFT + GMM + FV and SIFT + k-means + VLAD, being the best methods for the Salinas and Pavia images, respectively. The QD and AD values are low for all the experiments and lower for the higher OA values, as expected. As the standard dataset does not focus on vegetation, experiments with the Galicia dataset considering only vegetation classes were performed.
The results for the Galicia dataset are detailed in Table 8, Table 9 and Table 10 for an SVM, an RF, and an ELM classifier, respectively. Unlike the standard dataset, the Galicia dataset is made up of larger images, so the execution time is very relevant; this is the reason for displaying execution times in the tables. The best results for each image in terms of OA are highlighted with a gray background.
It can be observed that, in this case, two techniques based on k-means as codebook generator, k-means + BoW and LIOP + k-means + VLAD, offer the best results for all the classifiers. The best result is achieved for only one of the images, Mestas, and the RF classifier by a different technique: SIFT + GMM + FV + Spec. The AD and QD values are lower for higher OA values, as expected. Regarding execution times, the techniques with the highest ones correspond to those using a SIFT-based keypoint detection and description algorithm. On the contrary, the methods with the lowest computational cost are those based on LIOP or HOG as keypoint detection and description algorithms, while those based on the simpler codebook-based scheme present reasonable computational costs. Focusing on the best two techniques, LIOP + k-means + VLAD displays lower execution times than k-means + BoW.
In order to determine whether the previous results for the Galicia dataset are statistically reliable, Table 11 shows results obtained by varying the percentage of superpixels considered in the training set. For each percentage (10%, 15%, and 20% of the superpixels from each of the five vegetation classes), the superpixels were randomly picked. The four best techniques from the previous tables were chosen to perform this comparison: k-means + BoW, GMM + FV, LIOP + k-means + VLAD, and SIFT + k-means + VLAD + Spec. It can be observed that the standard deviation decreases as the size of the training set increases. The highlighted best results show that the k-means + BoW technique outperforms the other methods in 7 out of 16 cases and presents the lowest standard deviation values. LIOP + k-means + VLAD obtains the best results in 2 out of 16 cases and competitive results in the remaining experiments.
It can be concluded from Table 11 that k-means + BoW offers the most consistent results, since it outperforms the other techniques in terms of OA and obtains reasonable execution times. LIOP + k-means + VLAD is also an interesting technique, although it was experimentally verified that its results are highly dependent on the tuning of its input parameters, which is a resource-intensive process.
Finally, Table 12 shows the results obtained for the Gaofen dataset, which contains a large number of scenes. It is divided into GID5 (150 scenes) and GID15 (10 scenes) [47], as detailed in the dataset description. The trends are similar to those of the Galicia dataset, with the codebook-based scheme performing best, followed by the descriptor-based and the spectral-enhanced descriptor-based schemes. Specifically, the best techniques are again those based on k-means as codebook generator algorithm; in this case, they are k-means + BoW and k-means + VLAD.

4. Discussion

In this work, different texture schemes based on BoW for vegetation classification using a superpixel approach were studied. We considered multi- and hyperspectral remote sensing images taken by UAVs and satellites. In all the presented schemes, the texture algorithms were computed inside superpixels, in contrast to most of the methods previously published in the literature, in which the vegetation textures are extracted from patches or objects. A detailed comparison of the different techniques was carried out in terms of classification accuracy for several land cover remote sensing datasets. In particular, the Galicia dataset contained five classes of vegetation (oak, meadows, autochthonous vegetation, eucalyptus, and pines), while in GID5 five classes were considered, three of them corresponding to vegetation, and in GID15 fifteen classes were considered, eight of them corresponding to vegetation. The best classification results for each image ranged from 81.07% to 93.77% for the Galicia dataset, and from 79.63% to 95.79% for the Gaofen dataset. The techniques and algorithms used in this work included several keypoint detectors and descriptors (HOG, LIOP, SIFT, and DSIFT), algorithms for codebook generation (k-means and GMM), algorithms for feature encoding (histogram-based, VLAD, and FV), and, finally, algorithms for feature classification (SVM, RF, and ELM). Additionally, SLIC was used for superpixel generation and PCA for dimensionality reduction.
In previous works, studies were carried out to determine the most suitable set of parameters, including textures, for the classification of vegetation in remote sensing images. However, in those works the only texture method considered was GLCM. Specifically, the authors of [10] present a classification scheme for the canopy cover mapping of spekboom in a large semiarid region in South Africa using multispectral imagery (red, green, blue, and near-infrared bands). Three classes were considered (spekboom, tree, and background) and the classification scheme is a decision tree with 47 features grouped into two broad categories: per-pixel features (spectral information) and sliding window features (statistics of the pixels inside a small local neighborhood). The decision tree obtained a mean absolute canopy cover error of 5.85%. The authors of [14] present a crop classification method for hyperspectral images combining 40 spectral indices, spectral features (several class-pair distances), and GLCM texture information in an object-oriented approach. Eight classes were considered (Chinese cabbage, Japanese cabbage, lettuce, radish, pasture, pole bean, and forest) and the classification accuracy obtained was 97.84%. Finally, the authors of [16] performed the classification of savannah tree species from very high resolution images acquired by UAVs. Two flights capturing multispectral imagery (red, green, blue, and near-infrared bands) were made to obtain image mosaics with longitudinal and lateral overlap. The method uses chromatic coordinates, spectral indices, the canopy height model, and GLCM texture measures at different window sizes. Nine classes of trees and shrubs (with an abundance of more than ten individuals within the samples) were considered and an outline of each single-stem individual was drawn onto the image. The overall accuracy obtained was 77% on average.
For the detection of the extent of trees and shrubs, the canopy height model (CHM) is the feature most commonly used. Information on height is obtained from different sources, in some cases through sensor fusion with LiDAR information and in other cases through surface reconstruction from aerial images [21,22]. In this paper, information on height was not considered, as only single images are available in the dataset for the areas under study.
Other works use simple texture methods for vegetation classification, specifically LBP and GLCM. For example, the authors of [12] applied LBP textures for the classification of tree species using hyperspectral data and an aerial stereo camera system. In the classification step, a pixel-based approach and a patch-based BoW approach were used. Four classes were considered (spruce, beech, mixed, and non-tree) and the classification accuracy obtained was approximately 60%. In [13], a UAV performs vegetation mapping in complex urban landscapes using ultra-high resolution color imagery acquired at low altitudes. A hybrid method combining Random Forest and GLCM texture analysis at nine different window sizes was used. Six typical land covers (three of them vegetated) were considered (grass, trees, shrubs, bare soil, impervious surface, and water) and the classification accuracies ranged from 86.2% to 91.8%. The authors of [15] propose an object-based GLCM texture extraction method for the classification of man-planted forests in mountainous areas using high resolution satellite data, including panchromatic and multispectral bands. The method uses a multi-resolution segmentation algorithm to generate image objects and enhances the texture features of the objects using a 2D Gabor filter. Four classes were considered (non-vegetation, natural forest, rubber trees, and crops) and the classification accuracy obtained was 91.4%. The authors of [17] propose a method combining spectral measures and GLCM texture information for crop classification in time-series UAV images composed of three bands (green, red, and near-infrared). The object-oriented approach extracted meaningful objects via multi-resolution segmentation, and classification was carried out on object units. Four vegetation classes (highland kimchi cabbage, cabbage, potato, and fallow) were considered. In six multi-temporal images, combining texture features with spectral information led to an increase of 7.72% in OA compared to the classification result with spectral information only (from 83.13% to 90.85%).
For the classification of an image using textures it is necessary to delimit the regions on which the texture features are computed. None of the works cited above uses superpixels to obtain the textures; instead they use patches, segments, or objects. Superpixels were used in [11], which proposes a scheme for natural roadside vegetation classification with color cameras at ground level (not remote sensing). Six classes were considered (brown grass, green, road, soil, tree, and soil) and the scheme learns two individual sets of BoW dictionaries from color and filter-bank texture features using the nearest Euclidean distance, which were aggregated into class probabilities for each superpixel. Experimental evaluations on a natural image dataset obtained 75.5% accuracy for classifying six objects.
For keypoint detection and description, we considered four algorithms: HOG, LIOP, SIFT, and DSIFT. Few works in the literature have proposed descriptor-based methods for the classification of vegetation in images, although this approach is common in scene classification. For example, the authors of [18] present a methodology for vegetation segmentation in cornfield images obtained by autonomous agricultural vehicles. A collection of outdoor color images acquired under different illumination conditions and different plant growth states was selected. The method focuses on finding an appropriate set of color vegetation indices and local descriptors for vegetation characterization. Three different classes were considered (vegetation, light-brown soil, and dark-brown soil), and an accuracy of 95.3% was achieved. In [19], the classification of weeds growing among crops using a BoW model based on SIFT or SURF features is presented. In that work, a small-sized robot was developed for vision-based precision control of volunteer potatoes (weed) in a sugar beet field. The highest classification accuracy (96.5%) was obtained using SIFT, the Out-of-Row Regional Index (ORRI), and SVM. Finally, the authors of [20] study the application of SIFT to cropland mapping in the Brazilian Amazon based on vegetation index time series. They used a dense temporal SIFT BoW algorithm, which is able to capture the temporal locality of the data. The dataset was made up of 46 MODIS images acquired over two years. Five crop classes were considered (soybean, soybean + millet, soybean + maize, soybean + cotton, and cotton), with accurate detection of around 70% of the agricultural areas.
Based on the presented information, it can be concluded that the number of works in the literature that use descriptors to characterize textures for vegetation classification in images is very limited. To the best of our knowledge, no superpixel-based descriptors have been previously proposed for the classification of vegetation in multi- and hyperspectral images. On the other hand, our classification results are comparable to other studies in the literature, although an exact numerical comparison is difficult, as it depends on the nature of the datasets, the number of classes, and the number of samples used for training. Comparable results in the literature are only available for GID5 [47]. In particular, the accuracy obtained by the CNN-based technique used in that reference was 95.74%, which is similar to the 95.79% obtained in the experiments shown in Table 12. However, the experimental conditions in [47] were different, as disjoint sets of images were taken for training and testing, whereas in our approach the same percentage of training and testing samples was selected over each one of the images.

5. Conclusions

In this paper, different texture extraction schemes at the superpixel level for the classification of vegetation species using multi- and hyperspectral imagery are proposed. These schemes, based on the classical BoW approach, are called codebook-based, descriptor-based, and spectral-enhanced descriptor-based. Some of the following stages are considered in each one: superpixel extraction, keypoint detection and description, codebook generation, feature encoding, and dimensionality reduction. The relevant contributions of this paper are the use of a superpixel segmentation algorithm as a way of dividing an image into homogeneous regions prior to texture extraction, and the adequate exploitation of the spectral information available in all the bands of the image. Superpixels are used in the keypoint detection and description, codebook generation, feature encoding, and classification stages. Sixteen different texture extraction techniques derived from the three proposed schemes are analyzed in detail in the paper and compared in terms of classification accuracy and execution time, considering SVM, RF, and ELM as supervised classification algorithms.
Three datasets consisting of real multi- and hyperspectral images containing vegetation classes were employed to test the proposed schemes. As the standard dataset does not focus on vegetation, the Galicia and Gaofen datasets were also considered. The best classification results for each image range from 81.07% to 93.77% for the Galicia dataset and from 79.63% to 95.79% for the Gaofen dataset. The techniques and algorithms used in this work include several keypoint detectors and descriptors (HOG, LIOP, SIFT, and DSIFT), algorithms for codebook generation (k-means and GMM), algorithms for feature encoding (histogram-based, VLAD, and FV), and, finally, algorithms for feature classification (SVM, RF, and ELM). Additionally, SLIC was used for superpixel generation and PCA for dimensionality reduction. The experimental results show that the best techniques are those based on k-means as codebook generator. In particular, the highest OA values are offered by k-means + BoW, a representative of the codebook-based scheme that uses BoW for feature encoding. The second best results on average are provided by LIOP + k-means + VLAD, a representative of the descriptor-based scheme that uses LIOP for keypoint detection and description and VLAD for feature encoding. These techniques also present reasonable computational costs according to our experiments.
As future work, we plan to analyze the performance of the best techniques with new multispectral images corresponding to vegetation; the desired properties of these images are an abundance of vegetation and a high spatial resolution. Several future research lines that would benefit from the current proposal have also been considered, such as testing different algorithms for keypoint detection and description, for instance, robust and powerful techniques like SURF and KAZE. Moreover, the creation of schemes with a structure different from the three described is also planned as future work.

Author Contributions

Conceptualization, D.B.H. and F.A.; Experiments, S.R.B.; Project administration, D.B.H. and F.A.; Software, S.R.B.; Supervision, F.A.; Writing—original draft, S.R.B.; Writing—review and editing, D.B.H. and F.A. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the Civil Program UAVs Initiative, promoted by the Xunta de Galicia and developed in partnership with the Babcock Company to promote the use of unmanned technologies in civil services. We also have to acknowledge the support by Ministerio de Ciencia e Innovación, Government of Spain (grant number PID2019-104834GB-I00), and Consellería de Educación, Universidade e Formación Profesional (ED431C 2018/19, and accreditation 2019-2022 ED431G-2019/04). All are cofunded by the European Regional Development Fund (ERDF).

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Abbreviations

The following abbreviations are used in this manuscript.
UAV  Unmanned Aerial Vehicle
BoW  Bag of Words
LBP  Local Binary Pattern
GLCM  Gray-Level Co-occurrence Matrix
CHM  Canopy Height Model
SVM  Support Vector Machines
RF  Random Forest
ELM  Extreme Learning Machine
CNN  Convolutional Neural Network
FNEA  Fractal Net Evolution Approach
NDVI  Normalized Difference Vegetation Index
DEM  Digital Elevation Model
SLIC  Simple Linear Iterative Clustering
ETPS  Efficient Topology Preserving Segmentation
SIFT  Scale-Invariant Feature Transform
HOG  Histogram of Oriented Gradients
DSIFT  Dense Scale-Invariant Feature Transform
LIOP  Local Intensity Order Pattern
GMM  Gaussian Mixture Modeling
VLAD  Vector of Locally Aggregated Descriptors
FV  Fisher Vectors
PCA  Principal Component Analysis
GID  Gaofen Image Dataset
ROSIS  Reflective Optics System Imaging Spectrometer
AVIRIS  Airborne Visible Infrared Imaging Spectrometer
OA  Overall Accuracy
QD  Quantity Disagreement
AD  Allocation Disagreement

References

  1. Ghamisi, P.; Maggiori, E.; Li, S.; Souza, R.; Tarabalka, Y.; Moser, G.; De Giorgi, A.; Fang, L.; Chen, Y.; Chi, M.; et al. New frontiers in spectral-spatial classification of hyperspectral images. IEEE Geosci. Remote Sens. Mag. 2018, 6, 10–43. [Google Scholar] [CrossRef]
  2. Wagner, F.; Sanchez, A.; Tarabalka, Y.; Lotte, R.; Ferreira, M.; Aidar, M.; Gloor, M.; Phillips, O.; Aragão, L. Using convolutional network to identify tree species related to forest disturbance in a neotropical Forest with very high resolution multispectral images. AGUFM 2018, 2018, B33N–2861. [Google Scholar]
  3. Zeng, Y.; Zhao, Y.; Zhao, D.; Wu, B. Forest biodiversity mapping using airborne LiDAR and hyperspectral data. In Proceedings of the 2016 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Beijing, China, 10–15 July 2016; pp. 3561–3562. [Google Scholar]
  4. Liu, L.; Chen, J.; Fieguth, P.; Zhao, G.; Chellappa, R.; Pietikäinen, M. From BoW to CNN: Two decades of texture representation for texture classification. Int. J. Comput. Vis. 2019, 127, 74–109. [Google Scholar] [CrossRef] [Green Version]
  5. Julesz, B. Textons, the elements of texture perception, and their interactions. Nature 1981, 290, 91–97. [Google Scholar] [CrossRef]
  6. Csurka, G.; Dance, C.; Fan, L.; Willamowski, J.; Bray, C. Visual categorization with bags of keypoints. In Proceedings of the 8th European Conference on Computer Vision-ECCV 2004, Prague, Czech Republic, 11–14 May 2004; Volume 1, pp. 1–2. [Google Scholar]
  7. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems; Curran Associates, Inc.: Red Hook, NY, USA, 2012; pp. 1097–1105. [Google Scholar]
  8. Cimpoi, M.; Maji, S.; Kokkinos, I.; Mohamed, S.; Vedaldi, A. Describing textures in the wild. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 24–27 June 2014; pp. 3606–3613. [Google Scholar]
  9. Bormann, R.; Esslinger, D.; Hundsdoerfer, D.; Haegele, M.; Vincze, M. Texture characterization with semantic attributes: Database and algorithm. In Proceedings of the ISR 2016: 47th International Symposium on Robotics, VDE, Munich, Germany, 21–22 June 2016; pp. 1–8. [Google Scholar]
  10. Harris, D.; Vlok, J.; van Niekerk, A. Regional mapping of spekboom canopy cover using very high resolution aerial imagery. J. Appl. Remote Sens. 2018, 12, 046022. [Google Scholar] [CrossRef] [Green Version]
  11. Zhang, L.; Verma, B. Class-Semantic Textons with Superpixel Neighborhoods for Natural Roadside Vegetation Classification. In Proceedings of the IEEE 2015 International Conference on Digital Image Computing: Techniques and Applications (DICTA), Adelaide, Australia, 23–25 November 2015; pp. 1–8. [Google Scholar]
  12. Yuan, X.; Tian, J.; Cerra, D.; Meynberg, O.; Kempf, C.; Reinartz, P. Tree Species Classification by Fusing of Very High Resolution Hyperspectral Images and 3K-DSM. In Proceedings of the IEEE 2018 9th Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (WHISPERS), Amsterdam, The Netherlands, 23–26 September 2018; pp. 1–5. [Google Scholar]
  13. Feng, Q.; Liu, J.; Gong, J. UAV remote sensing for urban vegetation mapping using random forest and texture analysis. Remote Sens. 2015, 7, 1074–1094. [Google Scholar] [CrossRef] [Green Version]
  14. Zhang, X.; Sun, Y.; Shang, K.; Zhang, L.; Wang, S. Crop classification based on feature band set construction and object-oriented approach using hyperspectral images. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2016, 9, 4117–4128. [Google Scholar] [CrossRef]
  15. Yang, P.; Hou, Z.; Liu, X.; Shi, Z. Texture feature extraction of mountain economic forest using high spatial resolution remote sensing images. In Proceedings of the 2016 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Beijing, China, 10–15 July 2016; pp. 3156–3159. [Google Scholar]
  16. Oldeland, J.; Große-Stoltenberg, A.; Naftal, L.; Strohbach, B. The potential of UAV derived image features for discriminating savannah tree species. In The Roles of Remote Sensing in Nature Conservation; Springer: Berlin/Heidelberg, Germany, 2017; pp. 183–201. [Google Scholar]
  17. Kwak, G.H.; Park, N.W. Impact of texture information on crop classification with machine learning and UAV images. Appl. Sci. 2019, 9, 643. [Google Scholar] [CrossRef] [Green Version]
  18. Campos, Y.; Rodner, E.; Denzler, J.; Sossa, H.; Pajares, G. Vegetation segmentation in cornfield images using Bag of Words. In International Conference on Advanced Concepts for Intelligent Vision Systems; Springer: Berlin/Heidelberg, Germany, 2016; pp. 193–204. [Google Scholar]
  19. Suh, H.K.; Hofstee, J.W.; IJsselmuiden, J.; van Henten, E.J. Sugar beet and volunteer potato classification using Bag-of-Visual-Words model, Scale-Invariant Feature Transform, or Speeded Up Robust Feature descriptors and crop row information. Biosyst. Eng. 2018, 166, 210–226. [Google Scholar] [CrossRef] [Green Version]
  20. Bailly, A.; Arvor, D.; Chapel, L.; Tavenard, R. Classification of MODIS time series with dense bag-of-temporal-SIFT-words: Application to cropland mapping in the Brazilian Amazon. In Proceedings of the 2016 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Beijing, China, 10–15 July 2016; pp. 2300–2303. [Google Scholar]
  21. Dominik, W.A. Exploiting the redundancy of multiple overlapping aerial images for dense image matching based digital surface model generation. Remote Sens. 2017, 9, 490. [Google Scholar] [CrossRef] [Green Version]
  22. Osińska-Skotak, K.; Bakuła, K.; Jełowicki, Ł.; Podkowa, A. Using Canopy Height Model Obtained with Dense Image Matching of Archival Photogrammetric Datasets in Area Analysis of Secondary Succession. Remote Sens. 2019, 11, 2182. [Google Scholar] [CrossRef] [Green Version]
  23. Fang, L.; Li, S.; Kang, X.; Benediktsson, J.A. Spectral–spatial classification of hyperspectral images with a superpixel-based discriminative sparse model. IEEE Trans. Geosci. Remote Sens. 2015, 53, 4186–4201. [Google Scholar] [CrossRef]
  24. Zhang, X.; Chew, S.E.; Xu, Z.; Cahill, N.D. SLIC superpixels for efficient graph-based dimensionality reduction of hyperspectral imagery. In Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XXI; International Society for Optics and Photonics: Bellingham, WA, USA, 2015; Volume 9472, p. 947209. [Google Scholar]
  25. Li, J.; Zhang, H.; Zhang, L. Efficient superpixel-level multitask joint sparse representation for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2015, 53, 5338–5351. [Google Scholar]
  26. Achanta, R.; Shaji, A.; Smith, K.; Lucchi, A.; Fua, P.; Süsstrunk, S. SLIC superpixels compared to state-of-the-art superpixel methods. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 34, 2274–2282. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  27. Beucher, S. Use of watersheds in contour detection. In International Workshop on Image Processing; CCETT: Rennes, France, 1979. [Google Scholar]
  28. Yao, J.; Boben, M.; Fidler, S.; Urtasun, R. Real-time coarse-to-fine topologically preserving segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 2947–2955. [Google Scholar]
  29. Lazebnik, S.; Schmid, C.; Ponce, J. A sparse texture representation using local affine regions. IEEE Trans. Pattern Anal. Mach. Intell. 2005, 27, 1265–1278. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  30. Lowe, D.G. Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 2004, 60, 91–110. [Google Scholar] [CrossRef]
  31. Dalal, N.; Triggs, B. Histograms of oriented gradients for human detection. In Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05), San Diego, CA, USA, 20–25 June 2005; Volume 1, pp. 886–893. [Google Scholar]
  32. Vedaldi, A.; Fulkerson, B. VLFeat: An Open and Portable Library of Computer Vision Algorithms. 2008. Available online: http://www.vlfeat.org/ (accessed on 13 August 2020).
  33. Wang, Z.; Fan, B.; Wu, F. Local intensity order pattern for feature description. In Proceedings of the IEEE 2011 International Conference on Computer Vision, Barcelona, Spain, 6–13 November 2011; pp. 603–610. [Google Scholar]
  34. Lazebnik, S.; Schmid, C.; Ponce, J. A sparse texture representation using affine-invariant regions. In Proceedings of the 2003 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Madison, WI, USA, 18–20 June 2003; Volume 2, p. II. [Google Scholar]
  35. Varma, M.; Zisserman, A. A statistical approach to texture classification from single images. Int. J. Comput. Vis. 2005, 62, 61–81. [Google Scholar] [CrossRef]
  36. Ojala, T.; Pietikainen, M.; Maenpaa, T. Multiresolution gray-scale and rotation invariant texture classification with local binary patterns. IEEE Trans. Pattern Anal. Mach. Intell. 2002, 24, 971–987. [Google Scholar] [CrossRef]
  37. Perronnin, F.; Sánchez, J.; Mensink, T. Improving the fisher kernel for large-scale image classification. In European Conference on Computer Vision; Springer: Berlin/Heidelberg, Germany, 2010; pp. 143–156. [Google Scholar]
  38. Jegou, H.; Perronnin, F.; Douze, M.; Sánchez, J.; Perez, P.; Schmid, C. Aggregating local image descriptors into compact codes. IEEE Trans. Pattern Anal. Mach. Intell. 2011, 34, 1704–1716. [Google Scholar] [CrossRef] [Green Version]
  39. Tong, X.; Xie, H.; Weng, Q. Urban land cover classification with airborne hyperspectral data: What features to use? IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2013, 7, 3998–4009. [Google Scholar] [CrossRef]
  40. Ghamisi, P.; Yokoya, N.; Li, J.; Liao, W.; Liu, S.; Plaza, J.; Rasti, B.; Plaza, A. Advances in hyperspectral image and signal processing: A comprehensive overview of the state of the art. IEEE Geosci. Remote Sens. Mag. 2017, 5, 37–78. [Google Scholar] [CrossRef] [Green Version]
  41. Ghamisi, P.; Maggiori, E.; Li, S.; Souza, R.; Tarablaka, Y.; Moser, G.; De Giorgi, A.; Fang, L.; Chen, Y.; Chi, M.; et al. New frontiers in spectral-spatial hyperspectral image classification: The latest advances based on mathematical morphology, Markov random fields, segmentation, sparse representation, and deep learning. IEEE Geosci. Remote Sens. Mag. 2018, 6, 10–43. [Google Scholar] [CrossRef]
  42. Plaza, A.; Benediktsson, J.A.; Boardman, J.W.; Brazile, J.; Bruzzone, L.; Camps-Valls, G.; Chanussot, J.; Fauvel, M.; Gamba, P.; Gualtieri, A.; et al. Recent advances in techniques for hyperspectral image processing. Remote Sens. Environ. 2009, 113, S110–S122. [Google Scholar] [CrossRef]
  43. Ghamisi, P.; Plaza, J.; Chen, Y.; Li, J.; Plaza, A.J. Advanced spectral classifiers for hyperspectral images: A review. IEEE Geosci. Remote Sens. Mag. 2017, 5, 8–32. [Google Scholar] [CrossRef] [Green Version]
  44. ROSIS. Hyperspectral Remote Sensing Scenes. 2013. Available online: http://www.ehu.eus/ccwintco/index.php/Hyperspectral_Remote_Sensing_Scenes (accessed on 13 August 2020).
  45. Micasense RedEdge Multispectral Camera. Available online: https://micasense.com/rededge-mx/ (accessed on 13 August 2020).
  46. Bascoy, P.G.; Garea, A.S.; Heras, D.B.; Argüello, F.; Ordóñez, A. Texture-based analysis of hydrographical basins with multispectral imagery. In Remote Sensing for Agriculture, Ecosystems, and Hydrology XXI; International Society for Optics and Photonics: Bellingham, WA, USA, 2019; Volume 11149, p. 111490Q. [Google Scholar]
  47. Tong, X.Y.; Xia, G.S.; Lu, Q.; Shen, H.; Li, S.; You, S.; Zhang, L. Learning transferable deep models for land-use classification with high-resolution remote sensing images. arXiv 2018, arXiv:1807.05713. [Google Scholar]
  48. He, L.; Li, J.; Liu, C.; Li, S. Recent advances on spectral–spatial hyperspectral image classification: An overview and new guidelines. IEEE Trans. Geosci. Remote Sens. 2017, 56, 1579–1597. [Google Scholar] [CrossRef]
  49. Pontius, R.G., Jr.; Millones, M. Death to Kappa: Birth of quantity disagreement and allocation disagreement for accuracy assessment. Int. J. Remote Sens. 2011, 32, 4407–4429. [Google Scholar] [CrossRef]
  50. Chang, C.C.; Lin, C.J. LIBSVM: A library for support vector machines. ACM Trans. Intell. Syst. Technol. 2011, 2, 27:1–27:27. Available online: http://www.csie.ntu.edu.tw/~cjlin/libsvm (accessed on 13 August 2020). [CrossRef]
  51. Bradski, G. The OpenCV Library. Dr. Dobb’s J. Softw. Tools 2000, 25, 120–125. [Google Scholar]
  52. López-Fandiño, J.; Quesada-Barriuso, P.; Heras, D.B.; Argüello, F. Efficient ELM-based techniques for the classification of hyperspectral remote sensing images on commodity GPUs. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2015, 8, 2884–2893. [Google Scholar] [CrossRef]
Figure 1. Texture extraction schemes proposed for hyperspectral imagery using the superpixel-based Bag of Words (BoW) approach: (a) codebook-based, (b) descriptor-based, and (c) spectral-enhanced descriptor-based.
Figure 2. Standard dataset hyperspectral images and their corresponding reference maps for classification: (a,b) image and reference map for Salinas, respectively, and (c,d) image and reference map for Pavia, respectively. Each color in a reference map corresponds to a particular class for the image. No reference information is available for the regions marked in black.
Figure 3. Galicia dataset multispectral images and their corresponding reference maps for classification: (a,b) image and reference map for Oitavén, respectively; (c,d) image and reference map for Mestas, respectively; (e,f) image and reference map for Ferreiras, respectively; and (g,h) image and reference map for Eiras, respectively. Each color in a reference map corresponds to a different class for the image. Only vegetation classes are represented and considered for the experiments. No reference information is available for the regions marked in black.
Figure 4. Gaofen dataset multispectral images and their corresponding reference maps for classification: (a,b) image and reference map for GF1 (original name GF2_PMS2__L1A0001708261-MSS2), respectively; (c,d) image and reference map for GF2 (original name GF2_PMS1__L1A0001798942-MSS1), respectively; (e,f) image and reference map for GF3 (original name GF2_PMS2__L1A0001821754-MSS2), respectively; and (g,h) image and reference map for GF4 (original name GF2_PMS1__L1A0001680858-MSS1), respectively. The different colors in the reference data indicate different classes for the images. No reference information is available for the regions marked in black.
Table 1. Standard dataset. Classes available in the reference data and number of superpixels used for training (15%) and testing (85%).
# | Salinas: Classes | Superpixels Train (15%) | Superpixels Test (85%) | Pavia: Classes | Superpixels Train (15%) | Superpixels Test (85%)
1. | Broccoli gr. weeds 1 | 6 | 38 | Asphalt | 14 | 82
2. | Broccoli gr. weeds 2 | 7 | 42 | Meadows | 59 | 336
3. | Fallow | 5 | 32 | Gravel | 8 | 50
4. | Fallow rough plow | 6 | 36 | Trees | 1 | 7
5. | Fallow smooth | 10 | 62 | Metal | 5 | 29
6. | Stubble | 9 | 57 | Bare soil | 15 | 85
7. | Celery | 13 | 77 | Bitumen | 7 | 41
8. | Grapes untrained | 30 | 174 | Bricks | 10 | 57
9. | Soil vineyard dev. | 12 | 68 | Shadows | 1 | 9
10. | Corn gr. weeds | 9 | 56 | | |
11. | Lettuce rom 4 weeks | 4 | 28 | | |
12. | Lettuce rom 5 weeks | 6 | 39 | | |
13. | Lettuce rom 6 weeks | 3 | 20 | | |
14. | Lettuce rom 7 weeks | 3 | 23 | | |
15. | Vineyard untrained | 22 | 126 | | |
16. | Vineyard ver. trellis | 4 | 26 | | |
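The superpixel counts in Tables 1–3 result from randomly assigning 15% of the labelled superpixels of each image to the training set and the remaining 85% to the test set. The following minimal sketch illustrates one way such a split can be produced; the majority-vote labelling, the function name, and all parameters are illustrative assumptions rather than the exact procedure used in this work.

```python
# Illustrative superpixel-level train/test split (assumed procedure, not the
# authors' exact code): each labelled superpixel receives the majority class
# of its pixels in the reference map, and 15% of them are drawn for training.
import numpy as np

def split_superpixels(segments, reference, train_frac=0.15, seed=0):
    """segments: (H, W) superpixel ids; reference: (H, W) class map, 0 = unlabelled."""
    rng = np.random.default_rng(seed)
    labels = {}
    for sp in np.unique(segments):
        ref = reference[segments == sp]
        ref = ref[ref > 0]                          # ignore unlabelled pixels
        if ref.size:                                # keep labelled superpixels only
            labels[sp] = np.bincount(ref).argmax()  # majority vote inside the superpixel
    ids = np.array(list(labels))
    rng.shuffle(ids)
    n_train = int(round(train_frac * len(ids)))
    return ids[:n_train], ids[n_train:], labels     # train ids, test ids, labels
```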
Table 2. Galicia dataset. Classes available in the reference data and number of superpixels in the disjoint training (15%) and testing (85%) sets. Only vegetation classes are considered in the experiments; the remaining classes are marked with “-”. NA indicates that the image does not contain samples for a specific vegetation class [46].
OitavénMestasFerreirasEiras
Superpixels Superpixels Superpixels Superpixels
# and Color Classes Train (15%) Test (85%) Train (15%) Test (85%) Train (15%) Test (85%) Train (15%) Test (85%)
1.Water--------
2.Oak2681523NANA183952240
3.Tiles--------
4.Meadows342194366137485573159114646
5.Asphalt--------
6.Bare Soil--------
7.Rock--------
8.Concrete--------
9.Authoctonous vegetation905141374846851
10.Eucalyptus208118268338739575427110
11.Pines1481NANA141691
Table 3. Gaofen dataset. Classes available in the reference data and number of superpixels used for training (15%) and testing (85%). NA indicates that an image contains no samples of a specific vegetation class, while “-” indicates no samples of a specific non-vegetation class [47].
GF1GF2 GF3GF4
Superpixels Superpixels Superpixels Superpixels
# and Color Classes Train (15%) Test (85%) Train (15%) Test (85%) # and Color Classes Train (15%) Test (85%) Train (15%) Test (85%)
1.Built-up123698341941.Industrial land5333024--
2.Farmland232313,16979645112.urban residential9525401634
3.Forest348197818710663.Rural residential291165176433
4.Meadow66837911146484.Traffic land496281238221
5.Water----5.Paddy fieldNANANANA
6.Irrigated land225812,801215912,235
7.Dry cropland118675--
8.Garden plot1377NANA
9.Arbor woodlandNANA7074010
10.Shrub land1480NANA
12.Natural grasslandNANA8034551
13.Artificial grasslandNANANANA
13.River----
14.Lake----
15.Pond1271426
Table 4. Texture extraction techniques considered for the schemes proposed in Figure 1, detailing the configuration of the different stages.
Scheme | Technique | Superpixel Extraction | Keypoint Detection and Description | Codebook Generation | Feature Encoding | Dimensionality Reduction | Feature Classification
- | Without Texture Features | SLIC | - | - | - | - | SVM
Codebook-based scheme | k-means + VLAD | SLIC | - | k-means | VLAD | - | SVM
Codebook-based scheme | k-means + BoW | SLIC | - | k-means | BoW | - | SVM
Codebook-based scheme | GMM + FV | SLIC | - | GMM | FV | - | SVM
Descriptor-based scheme | SIFT + k-means + VLAD | SLIC | SIFT | k-means | VLAD | PCA | SVM
Descriptor-based scheme | SIFT + GMM + FV | SLIC | SIFT | GMM | FV | PCA | SVM
Descriptor-based scheme | DSIFT + k-means + VLAD | SLIC | DSIFT | k-means | VLAD | PCA | SVM
Descriptor-based scheme | DSIFT + GMM + FV | SLIC | DSIFT | GMM | FV | PCA | SVM
Descriptor-based scheme | LIOP + k-means + VLAD | SLIC | LIOP | k-means | VLAD | PCA | SVM
Descriptor-based scheme | LIOP + GMM + FV | SLIC | LIOP | GMM | FV | PCA | SVM
Descriptor-based scheme | HOG + k-means + VLAD | SLIC | HOG | k-means | VLAD | PCA | SVM
Descriptor-based scheme | HOG + GMM + FV | SLIC | HOG | GMM | FV | PCA | SVM
Spectral-enhanced descriptor-based scheme | SIFT + k-means + VLAD + Spec | SLIC | SIFT | k-means | VLAD | PCA | SVM
Spectral-enhanced descriptor-based scheme | SIFT + GMM + FV + Spec | SLIC | SIFT | GMM | FV | PCA | SVM
Spectral-enhanced descriptor-based scheme | DSIFT + k-means + VLAD + Spec | SLIC | DSIFT | k-means | VLAD | PCA | SVM
Spectral-enhanced descriptor-based scheme | DSIFT + GMM + FV + Spec | SLIC | DSIFT | GMM | FV | PCA | SVM
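As a complement to Table 4, the sketch below outlines one of the listed configurations (SLIC superpixels, SIFT descriptors restricted to each superpixel, a k-means codebook, and an SVM classifier). It is a simplified illustration rather than the implementation used in this work: it applies SIFT to a single band instead of all spectral bands, replaces the VLAD/FV encodings with a plain BoW histogram, and assumes scikit-image (≥ 0.19), OpenCV (≥ 4.4), and scikit-learn; every parameter value is illustrative.

```python
# Simplified sketch of a superpixel-based BoW texture pipeline in the spirit of
# Table 4 (SLIC -> SIFT per superpixel -> k-means codebook -> BoW histogram -> SVM).
# Assumptions: SIFT is computed on a single band and a plain BoW histogram is
# used instead of VLAD/FV; all parameter values are illustrative only.
import numpy as np
import cv2
from skimage.segmentation import slic
from sklearn.cluster import MiniBatchKMeans
from sklearn.svm import SVC

def bow_texture_features(cube, n_superpixels=500, codebook_size=64):
    """cube: (H, W, B) reflectance array; returns per-superpixel features and the segmentation."""
    # Superpixel segmentation over all spectral bands (SLIC stage).
    segments = slic(cube, n_segments=n_superpixels, compactness=10, channel_axis=-1)
    # One band rescaled to 8 bits for the SIFT detector (simplification).
    band = cube[..., cube.shape[-1] // 2]
    gray = cv2.normalize(band, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    sift = cv2.SIFT_create()
    per_sp_desc = {}
    for sp in np.unique(segments):
        mask = (segments == sp).astype(np.uint8)
        _, desc = sift.detectAndCompute(gray, mask)   # keypoints restricted to the superpixel
        per_sp_desc[sp] = desc if desc is not None else np.empty((0, 128), np.float32)
    # Codebook from all descriptors, then one normalized BoW histogram per superpixel.
    all_desc = np.vstack([d for d in per_sp_desc.values() if len(d)])
    codebook = MiniBatchKMeans(n_clusters=codebook_size, random_state=0).fit(all_desc)
    feats = np.zeros((len(per_sp_desc), codebook_size))
    for i, (sp, desc) in enumerate(sorted(per_sp_desc.items())):
        if len(desc):
            words = codebook.predict(desc)
            feats[i] = np.bincount(words, minlength=codebook_size)
            feats[i] /= feats[i].sum()
    return feats, segments

# Classification stage (last column of Table 4): an SVM trained on the features of
# the training superpixels, e.g. clf = SVC(kernel="rbf").fit(X_train, y_train).
```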
Table 5. Classification results in terms of OA (%), QD (%), and AD (%) obtained by the techniques detailed in Table 4 for the images of the standard dataset using an SVM classifier. Fifteen percent of the superpixels are used for training. The best OA results are shown on a gray background.
Scheme | Technique | Salinas: OA | QD | AD | Pavia: OA | QD | AD
-Without Texture Features76.48 ± 0.4615.19 ± 2.723.12 ± 1.3665.77 ± 0.9721.48 ± 9.676.75 ± 1.94
k-means + VLAD75.07 ± 1.5515.88 ± 2.953.74 ± 1.4665.92 ± 2.9720.28 ± 8.326.55 ± 1.01
Codebook-based schemek-means + BOW77.93 ± 1.0814.97 ± 2.563.04 ± 1.2760.93 ± 1.3227.98 ± 10.177.95 ± 2.34
GMM + FV74.86 ± 1.6616.64 ± 3.524.07 ± 1.8966.84 ± 3.5221.28 ± 9.376.05 ± 1.52
SIFT + k-means + VLAD82.11 ± 0.149.41 ± 0.601.48 ± 0.3671.41 ± 3.6016.88 ± 7.614.57 ± 0.99
SIFT + GMM + FV83.38 ± 1.549.12 ± 0.571.18 ± 0.2270.12 ± 1.3617.83 ± 6.395.77 ± 1.24
DSIFT + k-means + VLAD79.90 ± 0.9813.41 ± 2.292.88 ± 1.1168.71 ± 0.2919.88 ± 8.135.69 ± 1.84
Descriptor-basedDSIFT + GMM + FV82.70 ± 1.129.36 ± 0.671.54 ± 0.4670.36 ± 0.3019.28 ± 8.675.42 ± 1.94
schemeLIOP + k-means + VLAD76.90 ± 1.2815.89 ± 2.783.43 ± 1.5167.89 ± 3.4821.93 ± 9.256.73 ± 2.95
LIOP + GMM + FV78.09 ± 3.5315.96 ± 2.143.12 ± 1.2068.96 ± 3.5420.03 ± 8.875.93 ± 2.15
HOG + k-means + VLAD78.66 ± 3.4615.81 ± 2.263.31 ± 1.2569.35 ± 3.3620.36 ± 8.545.83 ± 1.65
HOG + GMM + FV79.19 ± 3.4016.06 ± 2.583.10 ± 1.4969.76 ± 3.2020.17 ± 8.835.13 ± 1.35
SIFT + k-means + VLAD + Spec79.28 ± 3.1616.32 ± 2.183.90 ± 1.8269.66 ± 2.9920.03 ± 8.524.87 ± 1.75
Spectral-enhancedSIFT + GMM + FV + Spec79.61 ± 3.1116.12 ± 2.343.15 ± 1.2969.42 ± 2.9120.94 ± 8.515.02 ± 1.85
descriptor-based schemeDSIFT + k-means + VLAD + Spec79.66 ± 2.9916.96 ± 2.883.76 ± 1.5269.40 ± 2.7519.74 ± 8.124.83 ± 1.72
DSIFT + GMM + FV + Spec79.89 ± 2.9416.66 ± 2.833.14 ± 1.7969.38 ± 2.6220.97 ± 7.964.33 ± 1.63
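For reference, the OA, QD, and AD values reported in Tables 5–12 can be obtained from a confusion matrix of test superpixels following Pontius and Millones [49]. The snippet below is a hedged sketch of these measures; the function name and the example matrix are illustrative assumptions, not data from this work.

```python
# Sketch of the accuracy measures reported in Tables 5-12: Overall Accuracy (OA)
# and the Quantity/Allocation Disagreement (QD/AD) of Pontius and Millones [49].
import numpy as np

def oa_qd_ad(confusion):
    """confusion[i, j]: number of test superpixels of reference class i predicted as class j."""
    p = confusion / confusion.sum()                # proportions
    oa = np.trace(p)                               # overall accuracy
    row, col = p.sum(axis=1), p.sum(axis=0)        # reference and predicted marginals
    qd = 0.5 * np.abs(row - col).sum()             # quantity disagreement
    diag = np.diag(p)
    ad = np.minimum(row - diag, col - diag).sum()  # allocation disagreement (0.5 * sum of 2*min)
    return 100 * oa, 100 * qd, 100 * ad            # percentages; OA + QD + AD = 100

# Example with an illustrative 3-class confusion matrix.
cm = np.array([[50, 3, 2],
               [4, 40, 6],
               [1, 5, 39]])
print(oa_qd_ad(cm))
```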
Table 6. Classification results in terms of OA (%), QD (%), and AD (%) obtained by the techniques detailed in Table 4 for the images of the standard dataset using an RF classifier. Fifteen percent of the superpixels are used for training. The best OA results are shown on a gray background.
Scheme | Technique | Salinas: OA | QD | AD | Pavia: OA | QD | AD
-Without Texture Features77.32 ± 0.5613.27 ± 2.863.01 ± 1.1666.69 ± 1.2420.28 ± 9.375.91 ± 1.72
k-means + VLAD75.77 ± 1.1314.68 ± 2.353.20 ± 0.6866.82 ± 2.0519.88 ± 7.765.10 ± 1.42
Codebook-based schemek-means + BOW78.53 ± 1.6814.21 ± 2.063.14 ± 1.5861.93 ± 1.0224.18 ± 9.727.55 ± 2.04
GMM + FV75.26 ± 1.6015.64 ± 2.633.87 ± 1.7467.38 ± 2.2220.08 ± 8.275.73 ± 1.22
SIFT + k-means + VLAD83.20 ± 0.548.31 ± 1.821.18 ± 0.25 72.83 ± 2.85 15.92 ± 6.814.37 ± 0.79
SIFT + GMM + FV 84.26 ± 1.43 8.82 ± 0.371.04 ± 0.3671.58 ± 1.3816.94 ± 5.975.03 ± 1.73
DSIFT + k-means + VLAD80.63 ± 0.8512.82 ± 2.192.38 ± 1.1069.82 ± 0.2518.71 ± 7.935.09 ± 1.51
Descriptor-basedDSIFT + GMM + FV83.67 ± 1.028.56 ± 0.971.29 ± 0.3671.76 ± 0.2418.84 ± 8.074.86 ± 1.32
schemeLIOP + k-means + VLAD77.13 ± 1.4415.19 ± 2.283.34 ± 1.2568.69 ± 3.0420.33 ± 9.036.42 ± 2.30
LIOP + GMM + FV79.49 ± 3.2315.09 ± 2.282.87 ± 1.1669.45 ± 3.1919.07 ± 8.345.27 ± 1.77
HOG + k-means + VLAD79.36 ± 3.0614.91 ± 1.932.24 ± 1.2370.95 ± 3.0619.46 ± 7.744.32 ± 1.15
HOG + GMM + FV80.91 ± 2.3115.63 ± 2.062.88 ± 0.9770.96 ± 2.1719.67 ± 6.934.46 ± 1.55
SIFT + k-means + VLAD + Spec80.38 ± 2.8615.87 ± 2.983.56 ± 1.6270.86 ± 2.7419.43 ± 8.023.74 ± 1.05
Spectral-enhancedSIFT + GMM + FV + Spec80.62 ± 3.1015.63 ± 2.142.75 ± 1.3070.12 ± 1.9119.84 ± 8.114.92 ± 1.45
descriptor-based schemeDSIFT + k-means + VLAD + Spec80.63 ± 2.9316.26 ± 2.283.36 ± 1.5470.41 ± 2.2518.94 ± 8.024.33 ± 1.68
DSIFT + GMM + FV + Spec80.35 ± 2.7515.36 ± 2.332.67 ± 1.8270.31 ± 2.4120.07 ± 7.164.56 ± 1.57
Table 7. Classification results in terms of OA (%), QD (%), and AD (%) obtained by the techniques detailed in Table 4 for the images of the standard dataset using an ELM classifier. Fifteen percent of the superpixels are used for training. The best OA results are shown on a gray background.
Scheme | Technique | Salinas: OA | QD | AD | Pavia: OA | QD | AD
-Without Texture Features75.64 ± 0.8716.30 ± 2.383.33 ± 1.3064.49 ± 0.1522.69 ± 8.866.65 ± 1.47
k-means + VLAD74.58 ± 1.1416.48 ± 2.503.96 ± 1.7266.01 ± 2.1121.53 ± 8.876.72 ± 1.10
Codebook-based schemek-means + BOW76.08 ± 1.5015.12 ± 2.383.63 ± 1.5559.04 ± 1.8928.77 ± 11.037.67 ± 2.94
GMM + FV73.01 ± 1.8217.52 ± 3.105.24 ± 1.2865.94 ± 3.0923.82 ± 5.916.66 ± 1.69
SIFT + k-means + VLAD80.07 ± 0.9311.24 ± 0.011.23 ± 0.31 72.07 ± 3.88 18.93 ± 7.585.31 ± 0.84
SIFT + GMM + FV82.58 ± 1.8211.49 ± 0.131.38 ± 0.3668.92 ± 1.6018.13 ± 6.327.76 ± 1.42
DSIFT + k-means + VLAD79.47 ± 0.2915.48 ± 1.653.49 ± 1.1770.34 ± 0.5120.01 ± 8.966.77 ± 1.02
Descriptor-basedDSIFT + GMM + FV 83.93 ± 1.60 8.30 ± 0.571.75 ± 0.8869.81 ± 0.9019.28 ± 8.105.43 ± 1.49
schemeLIOP + k-means + VLAD74.25 ± 1.4216.05 ± 2.365.83 ± 1.1069.29 ± 3.2023.44 ± 9.418.10 ± 2.83
LIOP + GMM + FV76.72 ± 3.7115.02 ± 2.564.07 ± 1.2267.86 ± 3.4622.22 ± 8.517.39 ± 2.01
HOG + k-means + VLAD76.83 ± 3.8116.24 ± 2.354.82 ± 1.8767.44 ± 3.5022.10 ± 8.626.11 ± 1.17
HOG + GMM + FV77.82 ± 3.7218.70 ± 2.733.90 ± 1.1867.76 ± 3.3823.23 ± 8.537.36 ± 1.07
SIFT + k-means + VLAD + Spec78.06 ± 3.3118.92 ± 2.335.07 ± 1.9368.11 ± 2.3921.16 ± 8.174.74 ± 1.02
Spectral-enhancedSIFT + GMM + FV + Spec78.60 ± 3.6318.47 ± 2.723.48 ± 1.8567.24 ± 2.0821.57 ± 8.265.62 ± 1.45
descriptor-based schemeDSIFT + k-means + VLAD + Spec77.07 ± 2.5218.08 ± 2.965.59 ± 1.0767.45 ± 2.1720.89 ± 8.295.75 ± 1.07
DSIFT + GMM + FV + Spec77.11 ± 2.3518.90 ± 2.393.60 ± 1.0968.98 ± 2.2621.22 ± 7.735.14 ± 1.41
Table 8. Classification results in terms of OA (%), QD (%), AD (%), and execution times (seconds) obtained by the techniques detailed in Table 4 for the Galicia dataset images using an SVM classifier. Fifteen percent of the superpixels are used for training. The best OA results are shown on a gray background.
Technique | Oitavén: OA | QD | AD | Time (s) | Mestas: OA | QD | AD | Time (s)
Without Texture Features72.71 ± 0.533.23 ± 1.882.22 ± 1.126 ± 086.10 ± 0.361.75 ± 0.481.06 ± 0.095 ± 0
k-means + VLAD76.68 ± 1.062.08 ± 0.111.81 ± 0.41239 ± 689.44 ± 0.130.73 ± 0.050.19 ± 0.09161 ± 1
k-means + BoW81.94 ± 0.542.71 ± 0.481.39 ± 0.38239 ± 6 92.67 ± 0.41 0.50 ± 0.010.54 ± 0.04160 ± 0
GMM + FV79.62 ± 0.571.87 ± 0.791.15 ± 0.11360 ± 790.27 ± 0.250.61 ± 0.070.99 ± 0.40399 ± 1
SIFT + k-means + VLAD79.52 ± 0.432.26 ± 0.791.15 ± 0.07280 ± 286.92 ± 0.332.75 ± 0.661.45 ± 0.47479 ± 2
SIFT + GMM + FV64.58 ± 0.505.26 ± 1.563.58 ± 0.34268 ± 290.08 ± 0.241.51 ± 0.250.98 ± 0.08264 ± 0
DSIFT + k-means + VLAD75.22 ± 0.483.43 ± 0.712.13 ± 1.101279 ± 686.09 ± 0.361.77 ± 0.600.86 ± 0.091369 ± 5
DSIFT + GMM + FV77.32 ± 0.333.51 ± 0.731.61 ± 0.581136 ± 391.85 ± 0.671.01 ± 0.500.75 ± 0.082257 ± 4
LIOP + k-means + VLAD 82.74 ± 0.47 2.52 ± 0.432.48 ± 0.6868 ± 190.24 ± 3.671.79 ± 0.031.78 ± 0.0426 ± 0
LIOP + GMM + FV71.72 ± 0.496.02 ± 1.424.51 ± 0.7321 ± 090.33 ± 0.021.64 ± 0.731.12 ± 0.0214 ± 0
HOG + k-means + VLAD73.14 ± 0.265.65 ± 0.893.13 ± 0.88111 ± 090.00 ± 0.111.34 ± 0.670.99 ± 0.1849 ± 0
HOG + GMM + FV60.43 ± 0.5810.98 ± 1.476.16 ± 1.13111 ± 090.33 ± 0.021.71 ± 0.020.30 ± 0.1849 ± 0
SIFT + k-means + VLAD + Spec77.86 ± 0.285.28 ± 1.243.39 ± 1.441068 ± 392.48 ± 0.020.95 ± 0.140.40 ± 0.041076 ± 2
SIFT + GMM + FV + Spec77.97 ± 0.035.81 ± 1.663.78 ± 1.831068 ± 392.53 ± 0.020.25 ± 0.040.18 ± 0.021076 ± 2
DSIFT + k-means + VLAD + Spec77.64 ± 0.335.63 ± 1.253.49 ± 1.551135 ± 492.51 ± 0.030.18 ± 0.070.36 ± 0.011501 ± 5
DSIFT + GMM + FV + Spec78.03 ± 0.275.56 ± 1.283.14 ± 1.471135 ± 492.47 ± 0.030.50 ± 0.090.42 ± 0.091988 ± 4
Technique | Ferreiras: OA | QD | AD | Time (s) | Eiras: OA | QD | AD | Time (s)
Without Texture Features76.84 ± 0.717.86 ± 1.493.70 ± 1.089 ± 078.40 ± 0.796.05 ± 1.864.37 ± 1.2811 ± 0
k-means + VLAD83.58 ± 0.352.43 ± 1.072.30 ± 0.49230 ± 186.58 ± 1.092.24 ± 0.751.78 ± 0.79245 ± 2
k-means + BoW 84.93 ± 0.47 3.13 ± 0.432.77 ± 0.57228 ± 085.19 ± 0.812.30 ± 1.161.57 ± 0.28243 ± 0
GMM + FV82.77 ± 0.343.95 ± 0.992.28 ± 1.12702 ± 085.61 ± 0.412.83 ± 0.611.79 ± 0.69887 ± 0
SIFT + k-means + VLAD80.75 ± 0.083.41 ± 0.582.23 ± 1.07682 ± 684.32 ± 0.243.83 ± 0.451.64 ± 0.81503 ± 9
SIFT + GMM + FV83.99 ± 0.283.87 ± 1.302.05 ± 0.84600 ± 681.42 ± 0.484.97 ± 1.592.89 ± 0.76493 ± 7
DSIFT + k-means + VLAD83.29 ± 0.133.15 ± 0.731.49 ± 0.321951 ± 1973.35 ± 4.565.76 ± 1.793.74 ± 0.652081 ± 1.89
DSIFT + GMM + FV84.38 ± 1.093.29 ± 1.172.67 ± 0.962547 ± 13181.17 ± 0.174.26 ± 1.172.28 ± 0.393953 ± 30
LIOP + k-means + VLAD83.08 ± 0.413.40 ± 1.342.05 ± 0.43118 ± 0 88.21 ± 0.37 3.99 ± 1.612.93 ± 0.4899 ± 0
LIOP + GMM + FV84.82 ± 0.195.34 ± 1.583.64 ± 0.6152 ± 069.24 ± 0.356.64 ± 1.193.16 ± 1.1573 ± 0
HOG + k-means + VLAD82.98 ± 0.014.51 ± 1.492.43 ± 0.25216 ± 084.41 ± 0.174.13 ± 1.042.79 ± 0.36308 ± 0
HOG + GMM + FV83.50 ± 0.033.77 ± 1.162.82 ± 0.44216 ± 069.01 ± 0.266.22 ± 1.073.66 ± 0.81308 ± 0
SIFT + k-means + VLAD + Spec84.08 ± 0.103.96 ± 1.042.90 ± 0.861477 ± 784.96 ± 0.782.91 ± 0.971.94 ± 0.821707 ± 7
SIFT + GMM + FV + Spec84.12 ± 0.163.72 ± 1.402.23 ± 0.021477 ± 085.14 ± 0.512.15 ± 0.981.17 ± 0.821707 ± 7
DSIFT + k-means + VLAD + Spec84.06 ± 0.133.09 ± 1.052.31 ± 0.512195 ± 2685.21 ± 0.592.84 ± 0.421.47 ± 0.282385 ± 14
DSIFT + GMM + FV + Spec84.05 ± 0.133.42 ± 1.432.97 ± 0.872064 ± 47685.11 ± 0.572.01 ± 0.261.34 ± 0.674037 ± 148
Table 9. Classification results in terms of OA (%), QD (%), AD (%), and execution times (seconds) obtained by the techniques detailed in Table 4 for the Galicia dataset images using an RF classifier. Fifteen percent of the superpixels are used for training. The best OA results are shown on a gray background.
Technique | Oitavén: OA | QD | AD | Time (s) | Mestas: OA | QD | AD | Time (s)
Without Texture Features73.66 ± 0.272.38 ± 0.751.69 ± 1.128 ± 087.65 ± 0.321.85 ± 0.101.56 ± 0.937 ± 0
k-means + VLAD78.75 ± 1.832.56 ± 0.501.41 ± 0.94243 ± 591.21 ± 0.120.92 ± 0.060.16 ± 0.02164 ± 2
k-means + BoW82.86 ± 0.832.13 ± 0.641.80 ± 0.79242 ± 692.44 ± 0.680.57 ± 0.020.65 ± 0.05165 ± 2
GMM + FV80.35 ± 0.691.13 ± 0.511.74 ± 0.05362 ± 491.31 ± 0.150.48 ± 0.060.25 ± 0.08402 ± 2
SIFT + k-means + VLAD81.38 ± 0.482.35 ± 0.591.33 ± 0.04285 ± 287.85 ± 0.392.90 ± 0.601.45 ± 0.27477 ± 4
SIFT + GMM + FV66.93 ± 0.126.98 ± 1.233.26 ± 0.77264 ± 190.34 ± 0.391.13 ± 0.570.46 ± 0.06272 ± 2
DSIFT + k-means + VLAD76.60 ± 0.333.21 ± 0.342.83 ± 1.311291 ± 988.63 ± 0.111.47 ± 0.220.55 ± 0.071345 ± 6
DSIFT + GMM + FV78.78 ± 0.892.94 ± 0.351.33 ± 0.251139 ± 291.37 ± 0.901.94 ± 0.290.49 ± 0.012263 ± 5
LIOP + k-means + VLAD 83.29 ± 0.80 2.86 ± 0.852.36 ± 0.2369 ± 191.24 ± 3.991.09 ± 0.031.16 ± 0.0229 ± 0
LIOP + GMM + FV73.89 ± 0.417.61 ± 1.294.90 ± 0.3722 ± 191.44 ± 0.041.66 ± 0.831.77 ± 0.0315 ± 0
HOG + k-means + VLAD74.24 ± 0.105.45 ± 0.623.87 ± 0.06113 ± 091.08 ± 0.181.84 ± 0.450.24 ± 0.0152 ± 0
HOG + GMM + FV62.04 ± 0.3810.59 ± 1.986.29 ± 1.51115 ± 391.33 ± 0.091.56 ± 0.080.28 ± 0.0151 ± 0
SIFT + k-means + VLAD + Spec78.26 ± 0.284.21 ± 0.872.51 ± 0.461076 ± 393.67 ± 0.620.95 ± 0.030.64 ± 0.011089 ± 1
SIFT + GMM + FV + Spec78.92 ± 0.554.04 ± 0.312.08 ± 0.321078 ± 293.77 ± 0.790.45 ± 0.070.74 ± 0.021086 ± 1
DSIFT + k-means + VLAD + Spec78.85 ± 0.694.97 ± 0.802.88 ± 0.251097 ± 393.34 ± 0.420.99 ± 0.030.59 ± 0.031189 ± 1
DSIFT + GMM + FV + Spec78.04 ± 0.594.14 ± 0.252.46 ± 0.921089 ± 393.45 ± 0.910.93 ± 0.040.33 ± 0.021146 ± 3
Technique | Ferreiras: OA | QD | AD | Time (s) | Eiras: OA | QD | AD | Time (s)
Without Texture Features78.64 ± 0.997.19 ± 1.623.64 ± 0.3811 ± 081.70 ± 0.786.60 ± 1.954.68 ± 1.4314 ± 1
k-means + VLAD84.19 ± 0.402.36 ± 0.441.07 ± 0.05240 ± 287.38 ± 1.192.61 ± 0.541.58 ± 0.17263 ± 2
k-means + BoW 85.99 ± 0.44 3.36 ± 0.602.01 ± 0.26235 ± 086.46 ± 0.592.54 ± 1.741.82 ± 0.89267 ± 4
GMM + FV83.47 ± 0.963.48 ± 0.142.10 ± 1.40732 ± 185.41 ± 0.852.39 ± 0.091.66 ± 0.53896 ± 1
SIFT + k-means + VLAD81.82 ± 0.313.22 ± 0.612.16 ± 0.58701 ± 385.57 ± 0.243.71 ± 0.571.93 ± 0.64533 ± 12
SIFT + GMM + FV84.09 ± 0.823.18 ± 1.072.27 ± 0.12608 ± 481.84 ± 0.494.96 ± 1.412.01 ± 0.27515 ± 5
DSIFT + k-means + VLAD83.24 ± 0.913.41 ± 0.861.04 ± 0.061962 ± 1473.54 ± 4.135.11 ± 1.253.74 ± 0.802099 ± 2
DSIFT + GMM + FV85.76 ± 1.923.35 ± 0.802.49 ± 0.462588 ± 5682.56 ± 0.764.40 ± 1.592.81 ± 0.223943 ± 4
LIOP + k-means + VLAD84.56 ± 0.653.68 ± 0.272.94 ± 0.10123 ± 3 89.20 ± 0.96 3.39 ± 1.542.25 ± 0.56117 ± 2
LIOP + GMM + FV85.96 ± 0.845.19 ± 1.163.27 ± 0.5458 ± 071.03 ± 0.606.65 ± 1.323.16 ± 0.2587 ± 0
HOG + k-means + VLAD82.74 ± 0.014.91 ± 1.322.43 ± 0.11228 ± 185.92 ± 0.814.29 ± 0.492.72 ± 0.52315 ± 1
HOG + GMM + FV84.63 ± 0.093.86 ± 1.282.14 ± 0.16239 ± 069.01 ± 0.546.59 ± 1.073.97 ± 0.07308 ± 0
SIFT + k-means + VLAD + Spec84.93 ± 0.093.22 ± 1.042.02 ± 0.451497 ± 584.99 ± 0.542.41 ± 0.751.06 ± 0.871712 ± 4
SIFT + GMM + FV + Spec84.64 ± 0.373.10 ± 1.302.53 ± 0.491481 ± 085.36 ± 0.532.40 ± 0.921.47 ± 0.591757 ± 5
DSIFT + k-means + VLAD + Spec84.38 ± 0.653.16 ± 1.812.69 ± 0.292208 ± 2085.87 ± 0.592.94 ± 0.011.89 ± 0.742399 ± 3
DSIFT + GMM + FV + Spec84.62 ± 0.943.17 ± 1.902.15 ± 0.612084 ± 21785.18 ± 0.532.46 ± 0.081.45 ± 0.084056 ± 176
Table 10. Classification results in terms of OA (%), QD (%), AD (%), and execution times (seconds) obtained by the techniques detailed in Table 4 for the Galicia dataset images using an ELM classifier. Fifteen percent of the superpixels are used for training. The best OA results are shown on a gray background.
Technique | Oitavén: OA | QD | AD | Time (s) | Mestas: OA | QD | AD | Time (s)
Without Texture Features71.90 ± 0.724.23 ± 1.192.90 ± 0.648 ± 084.66 ± 0.231.97 ± 0.630.54 ± 0.076 ± 0
k-means + VLAD74.32 ± 0.573.19 ± 0.802.66 ± 0.89248 ± 387.31 ± 0.150.74 ± 0.040.34 ± 0.02178 ± 1
k-means + BoW80.01 ± 0.862.54 ± 0.331.22 ± 0.92248 ± 4 91.95 ± 0.16 1.38 ± 0.080.94 ± 0.06179 ± 1
GMM + FV78.16 ± 0.751.33 ± 0.521.98 ± 0.82383 ± 489.46 ± 0.570.83 ± 0.040.74 ± 0.90408 ± 2
SIFT + k-means + VLAD78.66 ± 0.702.67 ± 0.581.21 ± 0.01289 ± 284.71 ± 0.522.94 ± 0.571.35 ± 0.78503 ± 1
SIFT + GMM + FV63.55 ± 0.765.08 ± 0.323.71 ± 0.22275 ± 289.18 ± 0.411.43 ± 0.670.02 ± 0.13271 ± 1
DSIFT + k-means + VLAD73.39 ± 0.363.85 ± 0.082.58 ± 0.131287 ± 785.09 ± 0.991.70 ± 0.810.86 ± 0.111469 ± 7
DSIFT + GMM + FV76.84 ± 0.853.93 ± 0.681.62 ± 0.881153 ± 491.04 ± 0.591.36 ± 0.900.37 ± 0.072274 ± 3
LIOP + k-means + VLAD 81.07 ± 0.57 2.64 ± 0.292.36 ± 0.6175 ± 189.43 ± 1.691.13 ± 0.081.26 ± 0.0636 ± 1
LIOP + GMM + FV70.16 ± 0.925.88 ± 0.614.09 ± 0.1131 ± 089.64 ± 0.041.21 ± 0.131.73 ± 0.0528 ± 1
HOG + k-means + VLAD72.91 ± 0.405.01 ± 0.703.57 ± 0.73121 ± 289.35 ± 0.311.67 ± 0.490.29 ± 0.0956 ± 1
HOG + GMM + FV60.55 ± 0.2612.02 ± 1.667.79 ± 1.16150 ± 289.69 ± 0.051.56 ± 0.040.64 ± 0.2754 ± 0
SIFT + k-means + VLAD + Spec76.79 ± 0.695.86 ± 0.993.10 ± 1.251076 ± 390.20 ± 0.020.96 ± 0.040.39 ± 0.011176 ± 3
SIFT + GMM + FV + Spec75.68 ± 0.025.42 ± 1.823.46 ± 1.181142 ± 391.49 ± 0.010.16 ± 0.060.63 ± 0.071256 ± 2
DSIFT + k-means + VLAD + Spec76.10 ± 0.765.96 ± 1.653.79 ± 1.841155 ± 491.85 ± 0.040.35 ± 0.040.36 ± 0.491701 ± 5
DSIFT + GMM + FV + Spec77.42 ± 0.825.60 ± 0.863.19 ± 1.051835 ± 490.14 ± 0.030.47 ± 0.050.65 ± 0.062410 ± 4
Technique | Ferreiras: OA | QD | AD | Time (s) | Eiras: OA | QD | AD | Time (s)
Without Texture Features73.48 ± 0.827.68 ± 0.603.35 ± 0.4912 ± 176.05 ± 0.606.15 ± 1.234.02 ± 1.3319 ± 0
k-means + VLAD80.93 ± 0.552.27 ± 1.972.75 ± 0.90252 ± 385.98 ± 0.762.40 ± 0.521.80 ± 0.60269 ± 3
k-means + BoW 83.27 ± 0.66 3.94 ± 0.972.72 ± 0.61235 ± 184.68 ± 0.813.29 ± 0.971.60 ± 0.47282 ± 2
GMM + FV81.83 ± 0.663.33 ± 0.912.47 ± 0.75722 ± 184.66 ± 0.422.03 ± 0.751.73 ± 0.52925 ± 2
SIFT + k-means + VLAD78.46 ± 0.283.40 ± 0.503.14 ± 0.25701 ± 482.84 ± 0.173.91 ± 0.761.57 ± 0.47573 ± 1
SIFT + GMM + FV81.71 ± 0.913.93 ± 0.962.29 ± 0.47630 ± 680.81 ± 0.514.82 ± 0.713.55 ± 0.52536 ± 2
DSIFT + k-means + VLAD82.82 ± 0.073.27 ± 0.441.41 ± 0.872254 ± 2372.30 ± 1.486.20 ± 0.584.94 ± 0.422191 ± 1.70
DSIFT + GMM + FV83.14 ± 0.644.21 ± 0.243.54 ± 0.882747 ± 5280.29 ± 0.724.01 ± 0.443.34 ± 0.454201 ± 41
LIOP + k-means + VLAD82.02 ± 0.553.29 ± 0.123.88 ± 0.53238 ± 1 87.58 ± 0.37 3.39 ± 0.421.37 ± 0.48106 ± 2
LIOP + GMM + FV82.77 ± 0.235.01 ± 0.833.92 ± 0.6372 ± 167.63 ± 0.657.70 ± 1.254.05 ± 0.0496 ± 1
HOG + k-means + VLAD81.12 ± 0.094.40 ± 1.092.29 ± 0.04266 ± 183.70 ± 0.934.28 ± 1.072.78 ± 0.40358 ± 1
HOG + GMM + FV82.75 ± 0.183.67 ± 1.812.85 ± 0.50296 ± 067.99 ± 0.476.99 ± 0.863.43 ± 0.84358 ± 1
SIFT + k-means + VLAD + Spec82.56 ± 0.484.68 ± 1.252.38 ± 0.311682 ± 782.33 ± 0.073.50 ± 0.291.77 ± 0.081857 ± 3
SIFT + GMM + FV + Spec82.93 ± 0.283.40 ± 1.042.27 ± 0.091688 ± 084.08 ± 0.032.73 ± 0.951.54 ± 0.091835 ± 7
DSIFT + k-means + VLAD + Spec83.88 ± 0.963.01 ± 1.102.30 ± 0.262674 ± 2685.90 ± 0.662.80 ± 0.051.70 ± 0.592412 ± 14
DSIFT + GMM + FV + Spec82.95 ± 0.783.11 ± 1.882.95 ± 0.392580 ± 47683.48 ± 0.592.49 ± 0.601.41 ± 0.374345 ± 148
Table 11. Classification results for the Galicia dataset images using an SVM classifier and varying the percentage of superpixels randomly selected for training. Accuracy values are expressed in terms of OA (%) with standard deviations (±). The best results are shown on a gray background.
% | Technique | Oitavén | Mestas | Ferreiras | Eiras
10 | k-means + BoW | 76.01 ± 0.30 | 87.6 ± 0.20 | 79.52 ± 0.27 | 84.48 ± 0.21
10 | GMM + FV | 75.24 ± 1.01 | 84.01 ± 3.14 | 80.23 ± 1.59 | 82.51 ± 2.00
10 | LIOP + k-means + VLAD | 75.23 ± 0.82 | 86.24 ± 3.01 | 81.02 ± 1.01 | 83.31 ± 2.00
10 | SIFT + k-means + VLAD + Spec | 75.83 ± 1.44 | 86.93 ± 3.08 | 81.22 ± 1.57 | 83.93 ± 2.00
15 | k-means + BoW | 81.94 ± 0.54 | 92.67 ± 0.41 | 84.93 ± 0.47 | 85.19 ± 0.81
15 | GMM + FV | 79.62 ± 0.67 | 90.27 ± 0.25 | 82.77 ± 0.34 | 85.61 ± 0.41
15 | LIOP + k-means + VLAD | 82.74 ± 0.47 | 90.24 ± 3.67 | 83.08 ± 0.41 | 88.21 ± 0.37
15 | SIFT + k-means + VLAD + Spec | 77.86 ± 0.28 | 92.48 ± 0.02 | 84.08 ± 0.10 | 84.96 ± 0.78
20 | k-means + BoW | 83.12 ± 0.14 | 92.14 ± 0.19 | 85.07 ± 0.35 | 86.62 ± 0.27
20 | GMM + FV | 81.93 ± 1.21 | 91.65 ± 0.66 | 84.42 ± 0.71 | 87.32 ± 1.22
20 | LIOP + k-means + VLAD | 81.38 ± 2.04 | 91.22 ± 1.93 | 83.98 ± 0.75 | 86.77 ± 1.43
20 | SIFT + k-means + VLAD + Spec | 82.39 ± 1.19 | 90.76 ± 2.03 | 83.99 ± 0.86 | 86.66 ± 0.88
Table 12. Classification accuracy results in terms of OA (%) for the GID5 and GID15 datasets.
Scheme | Technique | GID5 | GID15
- | Without Texture Features | 91.28 | 69.93
Codebook-based scheme | k-means + VLAD | 93.84 | 79.63
Codebook-based scheme | k-means + BoW | 95.79 | 79.61
Codebook-based scheme | GMM + FV | 94.66 | 79.61
Descriptor-based scheme | SIFT + k-means + VLAD | 92.46 | 76.21
Descriptor-based scheme | SIFT + GMM + FV | 93.63 | 75.97
Descriptor-based scheme | DSIFT + k-means + VLAD | 93.02 | 76.47
Descriptor-based scheme | DSIFT + GMM + FV | 93.04 | 75.81
Descriptor-based scheme | LIOP + k-means + VLAD | 94.85 | 77.32
Descriptor-based scheme | LIOP + GMM + FV | 92.19 | 71.24
Descriptor-based scheme | HOG + k-means + VLAD | 94.53 | 76.76
Descriptor-based scheme | HOG + GMM + FV | 92.18 | 70.98
Spectral-enhanced descriptor-based scheme | SIFT + k-means + VLAD + Spec | 93.99 | 75.49
Spectral-enhanced descriptor-based scheme | SIFT + GMM + FV + Spec | 94.00 | 75.50
Spectral-enhanced descriptor-based scheme | DSIFT + k-means + VLAD + Spec | 93.91 | 75.47
Spectral-enhanced descriptor-based scheme | DSIFT + GMM + FV + Spec | 93.92 | 75.51
