Article

A Comparative Review of Manifold Learning Techniques for Hyperspectral and Polarimetric SAR Image Fusion

1 Remote Sensing Technology Institute (IMF), German Aerospace Center (DLR), 82234 Wessling, Germany
2 Signal Processing in Earth Observation (SiPEO), Technical University of Munich (TUM), 80333 Munich, Germany
* Authors to whom correspondence should be addressed.
Remote Sens. 2019, 11(6), 681; https://doi.org/10.3390/rs11060681
Submission received: 14 February 2019 / Revised: 15 March 2019 / Accepted: 18 March 2019 / Published: 21 March 2019
(This article belongs to the Special Issue Multi-Modality Data Classification: Algorithms and Applications)

Abstract

In remote sensing, hyperspectral and polarimetric synthetic aperture radar (PolSAR) images are two of the most versatile data sources for a wide range of applications, such as land use land cover classification. However, the fusion of these two data sources has received less attention than many others, because of the scarce data availability and the relatively challenging fusion task caused by their distinct imaging geometries. Among the existing fusion methods, including manifold learning-based, kernel-based, ensemble-based, and matrix factorization approaches, manifold learning is one of the most celebrated techniques for the fusion of heterogeneous data. This paper therefore aims to promote research in hyperspectral and PolSAR data fusion by providing a comprehensive comparison of existing manifold learning-based fusion algorithms. We conducted experiments on 16 state-of-the-art manifold learning algorithms that embrace two important research questions in manifold learning-based fusion of hyperspectral and PolSAR data: (1) in which domain should the data be aligned (the data domain or the manifold domain); and (2) how should existing labeled data be used when formulating a graph to represent a manifold (supervised, semi-supervised, or unsupervised). The performance of the algorithms was evaluated via multiple accuracy metrics of land use land cover classification over two data sets. The results show that algorithms based on manifold alignment generally outperform those based on data alignment (data concatenation). Semi-supervised manifold alignment fusion algorithms perform best among all. Experiments using multiple classifiers show that they outperform the benchmark data alignment-based algorithms by ca. 3% in terms of overall classification accuracy.

1. Introduction

1.1. Related Work

Multi-modal data fusion [1,2,3,4,5,6,7] continuously draws attention in the remote sensing community. The fusion of optical and synthetic aperture radar (SAR) data, two important yet intrinsically different data sources, has also begun to appear frequently in the context of multi-modal data fusion [8,9,10,11,12,13,14]. With the rapid development of Earth observation missions, such as Sentinel-1 [15], Sentinel-2 [16], and the upcoming EnMAP [17], the availability of both data sources creates a huge potential for Earth-oriented information retrieval. Among all optical data [18,19,20], hyperspectral data are well known for the distinguishing power that originates from their rich spectral information [21,22,23,24]. Similarly, polarimetric SAR (PolSAR) data are a popular choice for classification tasks in the field of SAR because they reflect the geometric and dielectric properties of the scatterers [25,26,27,28,29]. It is therefore of great interest to investigate the fusion of hyperspectral and PolSAR images, especially with application to land use land cover (LULC) classification.
Only a few studies have attempted to address the challenge of fusing hyperspectral and PolSAR data. Jouan and Allard [30] proposed a hierarchical fusion strategy for land cover classification using PolSAR and hyperspectral images. In their work, hyperspectral images are first used to distinguish vegetation from non-vegetation areas. The PolSAR data are then used to classify the non-vegetation areas into man-made objects, water, or bare soil. Li et al. [31] applied feature-level and decision-level fusion to hyperspectral and PolSAR data. They combined the scattering-mechanism parameters of the PolSAR data and the features of the hyperspectral image into a concatenated feature vector, and the classification results of multiple classifiers were then merged using decision fusion. An application to oil spill detection was studied by Dabbiru et al. [32] using hyperspectral and PolSAR data; they applied pixel-level concatenation to the data and employed a support vector machine (SVM) as the classifier. Hu et al. [33] proposed a framework for fusing hyperspectral and PolSAR data based on segmented objects that provide spatial information. A two-stream convolutional neural network (CNN) was introduced in [34] that takes advantage of the feature extraction power of CNNs.
Among the existing fusion methods, including manifold learning-based [35,36], kernel-based [37], ensemble-based [38,39], tensor-based [40,41], and matrix factorization [42] approaches, manifold learning is one of the most celebrated techniques. However, although it has proven to be a powerful technique in the field of data fusion, it has barely been studied in the scope of fusing hyperspectral and PolSAR data. Generally, manifold-based fusion techniques attempt to find a shared latent space where the original data sets can be fused or aligned. Wang and Mahadevan [35,43,44,45] proposed several manifold-based techniques to find, via the latent space, the correspondence between data sets that describe the same object from different aspects. Kernel-based manifold alignment [46] searches for the latent space in a kernel space of the original data, because the kernel space represents the data better than the original feature space. In remote sensing, it was shown in [36,47] that the manifold latent space is able to align multiple optical data sets and improve LULC classification. A manifold-based data alignment technique was introduced in [48] for the fusion of hyperspectral and LiDAR data with application to classification. Besides data fusion, various manifold techniques can be found in the remote sensing field for detection [49], visualization [50], and dimension reduction [51].

1.2. Scope of This Paper

When fusing data with manifold techniques, one technical question is: in which domain should the fusion be carried out? We categorize the existing techniques into two types: (1) the data alignment-based approach and (2) the manifold alignment-based approach. As shown on the left of Figure 1, the data alignment approach carries out the fusion in the data domain. In the simplest example, it fuses the data by concatenation and then carries out a manifold-based dimension reduction. Essentially, this approach assumes that an intrinsic manifold exists in the concatenated data. Representatives of this approach are the locality preservation projection (LPP) [52] and the generalized graph fusion (GGF) [48]. On the contrary, the manifold alignment-based approach carries out the fusion on manifolds that are separately derived from the different data sources, as demonstrated on the right of Figure 1. The assumption of this approach is that a different manifold exists in each data source, and that those manifolds can be aligned in a latent space. Representative algorithms are the manifold alignment (MA) [36] and the MAPPER-induced manifold alignment (MIMA) [53].
The other essential research question of manifold-based fusion is: how should the manifold be extracted? We categorize the existing techniques into three learning strategies in terms of the usage of labeled data: unsupervised, semi-supervised, and supervised. When modeling a manifold, a general assumption is that, hidden in the data representation, there exists an underlying lower-dimensional manifold where the data truly distribute [54]. Early studies [54,55,56,57] model manifolds by following the geometric assumption that a Riemannian manifold can be locally approximated by Euclidean measures. The geometric assumption models the manifold in an unsupervised manner by using k-nearest-neighbor (kNN) graphs. With the presence of labeled data, the manifold can be jointly modeled by the Riemannian manifold and the labeled data. For example, one can construct the manifold in a semi-supervised fashion by using both kNN and the labeled data [35,36]. The manifold can also be modeled in a supervised manner by using only the labeled data. One of the main goals of this paper is to investigate the impact of these learning strategies on the classification performance of the fused data.

1.3. Contribution of This Paper

This paper investigates the performance of manifold learning techniques for the fusion of hyperspectral and PolSAR data, based on four state-of-the-art algorithms: locality preservation projection (LPP) [52], generalized graph fusion (GGF) [48], manifold alignment (MA) [36,44], and MAPPER-induced manifold alignment (MIMA) [53]. We implemented 16 variants of the four algorithms, which involve the two alignment approaches and the three manifold learning strategies mentioned above. These algorithms were tested on two study areas for an LULC classification task with five classifiers: one nearest neighbor (1NN) [58], linear SVM (LSVM) [59,60], Gaussian kernel SVM (KSVM) [59,60], random forest (RF) [61], and canonical correlation forest (CCF) [62]. We avoided any deep network classifiers, because the goal of this article is solely to evaluate the performance of multi-sensory data fusion. In total, 80 classification maps were produced for each study area, based on which a comprehensive discussion is carried out. The main contributions of this paper are as follows:
  • An exhaustive investigation of existing manifold learning techniques. A sufficient number of manifold techniques and classifiers were tested on the fusion of hyperspectral and PolSAR data in terms of classification. This provides a reliable demonstration of the performance of manifold techniques regarding hyperspectral and PolSAR data fusion.
  • An objective comparison of the performance of different manifold data fusion algorithms. To avoid any fortuity, five classifiers were applied for the classification. A grid search was applied to all tunable hyperparameters of those algorithms. The best classification accuracies are compared.
  • A comprehensive analysis of the results. The experiment results were analyzed in regard to two fusion approaches, three manifold learning strategies, four basic algorithms, and five classifiers.

1.4. Structure of This Paper

The second section recalls the theory of manifold techniques and the four selected state-of-the-art algorithms, and describes the data sets and the experiment setting. The third section presents the experiment results. The fourth section discusses the findings, and the fifth section concludes the paper. Table 1 also lists the symbols used in this article for a better understanding of its content.

2. Materials and Methods

In this section, the general concept of the manifold technique is introduced with the help of the necessary mathematical notation. Meanwhile, the theoretical impact of the different learning strategies on the fusion result is discussed. The subsequent subsections then recall the principles of the four selected state-of-the-art manifold fusion techniques, namely LPP [52], GGF [48], MA [36,44], and MIMA [53]. Pseudo-codes of these four algorithms, which provide the technical details, are listed in Appendix A, Appendix B, Appendix C, and Appendix D. Finally, the data sets and the experiment settings are introduced in detail.

2.1. Manifold Technique, Learning Strategy, and Notations

Let $\mathbf{X}_i = [\mathbf{x}_i^1, \ldots, \mathbf{x}_i^p, \ldots, \mathbf{x}_i^{n_i}] \in \mathbb{R}^{m_i \times n_i}$ be a matrix representing the $i$th data source, with $m_i$ dimensions by $n_i$ instances. For simplicity, the subscript $i$ is omitted in the following content when only one data source is involved. The $m_i$-dimensional data space is named the feature space of data $\mathbf{X}_i$ in this paper. The term $\mathbf{x}_i^p$ denotes the $p$th instance of the $i$th data source. Let $K$ denote the total number of data sources.
A manifold $\mathcal{M}$ is a smooth hyper-surface embedded in a higher-dimensional space [56]; e.g., the surface of a sphere is a 2D manifold in a 3D space. The underlying assumption of the manifold technique is that, for a data set $\mathbf{X} \in \mathbb{R}^{m \times n}$ of redundant $m$ dimensions, there exists a low-dimensional intrinsic manifold $\mathcal{M}$ where the data distribute [54,57,63,64]. The goal of a manifold technique is to pursue a representation of the manifold $\mathcal{M}$, realized by a projection $\mathbf{Y} = [\mathbf{y}^1, \ldots, \mathbf{y}^p, \ldots, \mathbf{y}^n] \in \mathbb{R}^{l \times n}$, $l < m$, of the original data. In order to approximate $\mathbf{Y}$, the bridging property is that a data point $\mathbf{y}^p$ on the manifold is locally homeomorphic to its counterpart $\mathbf{x}^p$ in the feature space [56]. This means that a data point has identical local structures in its intrinsic manifold and in its feature space. With this property, a variety of methods [54,55,57,65] extract the local structure of the data [52,66,67,68,69] in its feature space as an estimate of the local structure in its intrinsic manifold, with different locality criteria. All those methods pursue an optimized projection $\mathbf{f}$ which maps the data from the feature space to a representation ($\mathbf{Y} = \mathbf{f}^T\mathbf{X}$) of the intrinsic manifold $\mathcal{M}$. In terms of manifold techniques for data fusion [36,44,48], the aim is to find the projections which map multiple data sources $\{\mathbf{X}_1, \mathbf{X}_2, \ldots, \mathbf{X}_K\}$ into a fused manifold $\tilde{\mathcal{M}}$ where the fused data locate.
The centerpiece of the abovementioned algorithms is the modeling of the manifold. Usually, the intrinsic manifold of the data is modeled by an $n \times n$ symmetric binary matrix $\mathbf{A}$ that describes the connections among the data points: $\mathbf{A}(p,q) = 1$ for a confirmed connection between $\mathbf{x}_i^p$ and $\mathbf{x}_i^q$, and $\mathbf{A}(p,q) = 0$ otherwise. $\mathbf{A}$ can be generalized to an $n \times n$ symmetric weight matrix $\mathbf{W}$. Different from $\mathbf{A}$, $\mathbf{W}(p,q)$ takes a real value in $[0,1]$, which describes the strength of the connection between $\mathbf{x}_i^p$ and $\mathbf{x}_i^q$. Essentially, $\mathbf{A}$ and $\mathbf{W}$ are the adjacency and weight matrices of a graph that captures the topology of the manifold. As introduced in [52], the manifold structure ($\mathbf{A}$ or $\mathbf{W}$) can be defined from different perspectives. In this paper, we categorize these perspectives based on how the labeled data are utilized for modeling the manifold, namely unsupervised learning, supervised learning, and semi-supervised learning (a minimal code sketch of the graph construction follows the list below).
  • The unsupervised learning takes the original geometric assumption that the manifold and the original data space share the same local properties. Besides the geometric measure, model-based similarity measurements can also be used to build up the structure of the manifold. The key point is that the definition of the similarity measurement must be capable of revealing the underlying distribution of the data or the physical information in the data.
  • The supervised learning assumes that a given set of labeled data includes a sufficient number of inter- and intra-class connections among the data points to capture the topology of the manifold well. As a result, the underlying manifold is directly defined by the label information, and thus the quality of the labels has a great impact.
  • The semi-supervised learning pursues a manifold where the data distribution partially correlates to the label information and partially follows the distribution predefined by a similarity measurement. Such a manifold implicitly propagates the label information to the unlabeled data.
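To make the graph construction above concrete, the following minimal sketch (our own, not the authors' released code; NumPy and SciPy assumed) builds the binary kNN adjacency matrix $\mathbf{A}$ and the heat-kernel weight matrix $\mathbf{W}$ for a data matrix in the paper's $m \times n$ convention:

```python
import numpy as np
from scipy.spatial.distance import cdist

def knn_graph(X, k=10, sigma=1.0):
    """Binary kNN adjacency A and heat-kernel weights W for data X of shape
    (m, n): m features by n instances, following the paper's convention."""
    n = X.shape[1]
    D2 = cdist(X.T, X.T, metric="sqeuclidean")   # pairwise squared distances
    idx = np.argsort(D2, axis=1)[:, 1:k + 1]     # k nearest neighbors, self excluded
    A = np.zeros((n, n), dtype=bool)
    A[np.repeat(np.arange(n), k), idx.ravel()] = True
    A |= A.T                                     # symmetrize the connections
    W = np.where(A, np.exp(-D2 / sigma), 0.0)    # weights in [0, 1], 0 if unconnected
    return A, W
```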

2.2. Locality Preservation Projection (LPP)

LPP aims to find a lower dimensional representation Y of the original data X which reflects the intrinsic manifold M . According to the geometric assumption that the intrinsic manifold and the original data share the same local properties, the lower dimensional representation Y achieved by LPP preserves the local structure of the original data X . The locality is defined by either the k-nearest-neighbor or the ϵ -neighborhoods [52] and is mathematically described in a weight matrix W as Equation (1).
$$W(p,q) = \begin{cases} \exp\left(-\frac{\|\mathbf{x}^p - \mathbf{x}^q\|^2}{\sigma}\right) & \text{if } \mathbf{x}^p \text{ and } \mathbf{x}^q \text{ are local neighbors,} \\ 0 & \text{otherwise,} \end{cases} \tag{1}$$

where $\sigma$ is a filtering parameter.
LPP pursues an optimized projection $\mathbf{f}$ which maps the data $\mathbf{X}$ to a lower-dimensional representation $\mathbf{Y} = \mathbf{f}^T\mathbf{X}$. As the local structure of the intrinsic manifold is modeled by Equation (1), minimizing the objective function expressed by Equation (2) encourages the preservation of the derived local structure in the intrinsic manifold:

$$L = \sum_{p,q} \left(y^p - y^q\right)^2 W(p,q) = \sum_{p,q} \left(\mathbf{f}^T\mathbf{x}^p - \mathbf{f}^T\mathbf{x}^q\right)^2 W(p,q). \tag{2}$$
Thus, the optimization is formulated as follows:

$$\min_{\mathbf{f}} \sum_{p,q} \left(\mathbf{f}^T\mathbf{x}^p - \mathbf{f}^T\mathbf{x}^q\right)^2 W(p,q). \tag{3}$$
As proven in [52], the solution that minimizes the objective function $L(\mathbf{f})$ is given by the minimum-eigenvalue solution to the generalized eigenvalue problem expressed in Equation (4):

$$\mathbf{X}\mathbf{L}\mathbf{X}^T\mathbf{f} = \lambda\mathbf{X}\mathbf{D}\mathbf{X}^T\mathbf{f}, \tag{4}$$

where $\mathbf{D}$ is the degree matrix, with $\mathbf{D}(p,p) = \sum_{q=1}^{n} W(p,q)$ and $\mathbf{D}(p,q) = 0$ for $p \neq q$, and $\mathbf{L}$ is the Laplacian matrix, $\mathbf{L} = \mathbf{D} - \mathbf{W}$.
As briefly described above, LPP was originally designed as a dimension reduction algorithm rather than a data fusion algorithm. However, it is essential to include it in the scope of this paper for two reasons. (1) When conducting manifold fusion, dimension reduction is also accomplished as a side effect. Due to the well-known curse of dimensionality [70], classification on a selected subset of dimensions can result in better performance than using the data with all dimensions [71], so LPP can serve as a baseline algorithm to reduce the dimension of the data. (2) LPP is essentially a manifold learning technique, and some data fusion algorithms [48,72] are developed on the idea of data alignment using LPP.
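As a minimal illustration of Equations (1)-(4) (a sketch under our own naming, reusing the knn_graph helper from the sketch in Section 2.1; not the authors' implementation):

```python
import numpy as np
from scipy.linalg import eigh

def lpp(X, k=10, sigma=1.0, l=2):
    """LPP sketch: X is (m, n); returns the projection f (m, l) and Y = f^T X."""
    _, W = knn_graph(X, k=k, sigma=sigma)        # weight matrix of Equation (1)
    D = np.diag(W.sum(axis=1))                   # degree matrix
    L = D - W                                    # graph Laplacian
    # Equation (4): X L X^T f = lambda X D X^T f; the tiny ridge keeps the
    # right-hand side positive definite (our own numerical safeguard)
    lhs = X @ L @ X.T
    rhs = X @ D @ X.T + 1e-8 * np.eye(X.shape[0])
    _, F = eigh(lhs, rhs)                        # eigenvalues in ascending order
    f = F[:, :l]                                 # minimum-eigenvalue solutions
    return f, f.T @ X
```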

2.3. Generalized Graph-Based Fusion (GGF)

GGF was originally proposed to fuse hyperspectral data and LiDAR data for land cover classification [48]. Its fusion strategy comprises a joint LPP dimension reduction and an additional constraint that captures the common local structure existing in all data sources.
Technically, GGF concatenates the $K$ data sources $\mathbf{X}_i = [\mathbf{x}_i^1, \ldots, \mathbf{x}_i^p, \ldots, \mathbf{x}_i^n] \in \mathbb{R}^{m_i \times n}$, $i \in \{1, 2, \ldots, K\}$, into a stack $\tilde{\mathbf{X}} = [\tilde{\mathbf{x}}^1, \ldots, \tilde{\mathbf{x}}^p, \ldots, \tilde{\mathbf{x}}^n] \in \mathbb{R}^{(m_1 + m_2 + \cdots + m_K) \times n}$, which is treated as one data source in its high-dimensional feature space. Therefore, GGF is essentially an LPP carried out on the data stack $\tilde{\mathbf{X}}$, with an additional constraint. The constraint assumes that the connectivity $\tilde{\mathbf{A}}$ of the fused intrinsic manifold $\tilde{\mathcal{M}}$ should be a complete subset of the connectivity matrices of the manifolds $\mathcal{M}_i$ of the individual data sources $\mathbf{X}_i$, $i \in \{1, 2, \ldots, K\}$. This assumption is formulated as Equation (5):
$$\tilde{\mathbf{A}} = \mathbf{A}_1 \odot \mathbf{A}_2 \odot \cdots \odot \mathbf{A}_K, \tag{5}$$

where $\odot$ indicates element-wise multiplication.
The manifold constraint $\tilde{\mathbf{A}}$ is embedded into an $n \times n$ pairwise distance matrix $\tilde{\mathbf{D}}$, with $\tilde{\mathbf{D}}(p,q) = \|\tilde{\mathbf{x}}^p - \tilde{\mathbf{x}}^q\|$, as expressed by Equation (6), where $\neg$ denotes logical negation and $\max(\cdot)$ is the maximum value over all elements of its argument. The distance between any two data points that are not connected according to $\tilde{\mathbf{A}}$ is penalized with the maximum distance value of $\tilde{\mathbf{D}}$. The final distance matrix is denoted $\tilde{\mathbf{D}}_{\mathrm{GGF}}$:
$$\tilde{\mathbf{D}}_{\mathrm{GGF}} = \tilde{\mathbf{D}} + (\neg\tilde{\mathbf{A}}) \cdot \max(\tilde{\mathbf{D}}). \tag{6}$$
The weight matrix $\tilde{\mathbf{W}}$ of the intrinsic manifold then follows:

$$\tilde{W}(p,q) = \begin{cases} e^{-\tilde{D}_{\mathrm{GGF}}(p,q)} & \text{if } \mathbf{x}^p \text{ and } \mathbf{x}^q \text{ are local neighbors,} \\ 0 & \text{otherwise.} \end{cases} \tag{7}$$
After obtaining the weight matrix $\tilde{\mathbf{W}}$, similarly to LPP, the optimized projection $\mathbf{f}$ is given by the minimum-eigenvalue solution to the generalized eigenvalue problem in Equation (8):
$$\tilde{\mathbf{X}}\tilde{\mathbf{L}}\tilde{\mathbf{X}}^T\mathbf{f} = \lambda\tilde{\mathbf{X}}\tilde{\mathbf{D}}\tilde{\mathbf{X}}^T\mathbf{f}, \tag{8}$$

where $\tilde{\mathbf{D}}$ here denotes the degree matrix, with $\tilde{\mathbf{D}}(p,p) = \sum_{q=1}^{n} \tilde{W}(p,q)$ and $\tilde{\mathbf{D}}(p,q) = 0$ for $p \neq q$, and $\tilde{\mathbf{L}}$ is the Laplacian matrix, $\tilde{\mathbf{L}} = \tilde{\mathbf{D}} - \tilde{\mathbf{W}}$.
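The following sketch mirrors the GGF graph construction of Equations (5)-(7); the input format (one $n \times n$ squared-distance matrix per co-registered source) and the helper are our own assumptions:

```python
import numpy as np

def knn_adjacency(D, k):
    """Symmetric kNN adjacency from an n x n distance matrix (self excluded)."""
    n = D.shape[0]
    idx = np.argsort(D, axis=1)[:, 1:k + 1]
    A = np.zeros((n, n), dtype=bool)
    A[np.repeat(np.arange(n), k), idx.ravel()] = True
    return A | A.T

def ggf_weight_matrix(D2_list, k=10):
    """GGF graph sketch: D2_list holds one squared-distance matrix per source,
    all computed on the same n co-registered pixels."""
    # Equation (5): keep only edges present in every source's kNN graph
    A_fused = knn_adjacency(D2_list[0], k)
    for D2 in D2_list[1:]:
        A_fused &= knn_adjacency(D2, k)
    # distance on the stack: ||x~p - x~q||^2 = sum_i ||x_i^p - x_i^q||^2
    D_stack = np.sqrt(sum(D2_list))
    # Equation (6): penalize pairs not connected in the fused graph
    D_ggf = D_stack + (~A_fused) * D_stack.max()
    # Equation (7): exponential weights restricted to local neighbors
    return np.where(knn_adjacency(D_ggf, k), np.exp(-D_ggf), 0.0)
```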

2.4. Manifold Alignment (MA)

Manifold alignment [35,36,44] aims to learn a set of projections $\{\mathbf{f}_1, \ldots, \mathbf{f}_K\}$ that (1) apply to the individual data sources $\mathbf{X}_i$ in order to obtain their individual manifolds $\mathcal{M}_i$, and (2) align those obtained manifolds $\{\mathcal{M}_1, \ldots, \mathcal{M}_K\}$ with each other.
As designed in [36,44], three properties hold in the fused manifold: (a) data of the same class should be located close to each other; (b) data of different classes should be located far from one another; and (c) the intrinsic manifolds of the individual data sources are preserved. These three properties are respectively formulated by the three connection matrices $\tilde{\mathbf{A}}_s$ (9), $\tilde{\mathbf{A}}_d$ (10), and $\tilde{\mathbf{A}}_g$ (11).

$$\tilde{\mathbf{A}}_s = \begin{bmatrix} \mathbf{A}_s^{1,1} & \cdots & \mathbf{A}_s^{1,K} \\ \vdots & \ddots & \vdots \\ \mathbf{A}_s^{K,1} & \cdots & \mathbf{A}_s^{K,K} \end{bmatrix} \tag{9}$$

The connection matrix of similarity (9) is computed from the label information to pursue property (a).

$$\tilde{\mathbf{A}}_d = \begin{bmatrix} \mathbf{A}_d^{1,1} & \cdots & \mathbf{A}_d^{1,K} \\ \vdots & \ddots & \vdots \\ \mathbf{A}_d^{K,1} & \cdots & \mathbf{A}_d^{K,K} \end{bmatrix} \tag{10}$$

The connection matrix of dissimilarity (10) accomplishes property (b); it is likewise computed from the labels.

$$\tilde{\mathbf{A}}_g = \begin{bmatrix} \mathbf{A}_g^{1,1} & \cdots & \mathbf{0} \\ \vdots & \ddots & \vdots \\ \mathbf{0} & \cdots & \mathbf{A}_g^{K,K} \end{bmatrix} \tag{11}$$

The connection matrix (11) describes the manifolds of the individual data sources by using kNN, which aims at property (c). All of the matrices (9)-(11) have the size $(n_1 + n_2 + \cdots + n_K) \times (n_1 + n_2 + \cdots + n_K)$. In each matrix, the superscript $i,j$, e.g., in $\mathbf{A}^{i,j}$, represents the relationship between the $i$th and $j$th data sources.
With connection matrices (9)–(11), three terms are formulated as Equations (12)–(14) to preserve the three properties, respectively.
$$\mathcal{A} = \sum_{i=1}^{K}\sum_{j=1}^{K}\sum_{p=1}^{n_i}\sum_{q=1}^{n_j} \left\|\mathbf{f}_i^T\mathbf{x}_i^p - \mathbf{f}_j^T\mathbf{x}_j^q\right\|^2 \tilde{\mathbf{A}}_s^{i,j}(p,q) \tag{12}$$

Minimizing Equation (12) pulls data of the same class together, which meets property (a).

$$\mathcal{B} = \sum_{i=1}^{K}\sum_{j=1}^{K}\sum_{p=1}^{n_i}\sum_{q=1}^{n_j} \left\|\mathbf{f}_i^T\mathbf{x}_i^p - \mathbf{f}_j^T\mathbf{x}_j^q\right\|^2 \tilde{\mathbf{A}}_d^{i,j}(p,q) \tag{13}$$

Maximizing Equation (13) pushes data of different classes apart, which meets property (b).

$$\mathcal{C} = \sum_{i=1}^{K}\sum_{p=1}^{n_i}\sum_{q=1}^{n_i} \left\|\mathbf{f}_i^T\mathbf{x}_i^p - \mathbf{f}_i^T\mathbf{x}_i^q\right\|^2 \tilde{\mathbf{A}}_g^{i,i}(p,q) \tag{14}$$

Minimizing Equation (14) preserves the geometric structure of the individual data sources, which corresponds to property (c). The terms (12)-(14) jointly construct the objective function (15):

$$L(\mathbf{f}_1, \ldots, \mathbf{f}_K) = \frac{\mathcal{A} + \mathcal{C}}{\mathcal{B}}, \tag{15}$$

and hence an optimization problem (16) can be written as

$$\min_{\mathbf{f}_1, \ldots, \mathbf{f}_K} L(\mathbf{f}_1, \ldots, \mathbf{f}_K). \tag{16}$$
As proven in [35], the solution $\{\mathbf{f}_1, \ldots, \mathbf{f}_K\}$ that minimizes the cost function $L(\mathbf{f}_1, \ldots, \mathbf{f}_K)$ is given by the eigenvectors corresponding to the smallest non-zero eigenvalues of the generalized eigenvalue decomposition (17):

$$\tilde{\mathbf{X}}(\mu\tilde{\mathbf{L}}_g + \tilde{\mathbf{L}}_s)\tilde{\mathbf{X}}^T\mathbf{f} = \lambda\tilde{\mathbf{X}}\tilde{\mathbf{L}}_d\tilde{\mathbf{X}}^T\mathbf{f}, \tag{17}$$

where
  • $\tilde{\mathbf{X}} = \begin{bmatrix} \mathbf{X}_1 & \cdots & \mathbf{0} \\ \vdots & \ddots & \vdots \\ \mathbf{0} & \cdots & \mathbf{X}_K \end{bmatrix}$ is the joint block-diagonal data matrix,
  • $\tilde{\mathbf{L}}_{\{s,d,g\}} = \tilde{\mathbf{D}}_{\{s,d,g\}} - \tilde{\mathbf{A}}_{\{s,d,g\}}$,
  • $\tilde{\mathbf{D}}_{\{s,d,g\}}(p,q) = \begin{cases} \sum_{q'=1}^{n_1+\cdots+n_K} \tilde{\mathbf{A}}_{\{s,d,g\}}(p,q') & p = q, \\ 0 & p \neq q. \end{cases}$
The matrices $\tilde{\mathbf{D}}$ and $\tilde{\mathbf{L}}$ with subscripts $s$, $d$, and $g$ are the degree matrices and the Laplacian matrices, respectively.
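A compact sketch of the MA solution of Equation (17) could look as follows, assuming the joint connection matrices have already been assembled from the labels and the per-source kNN graphs; the small ridge on the right-hand side is our own numerical safeguard, not part of the original formulation:

```python
import numpy as np
from scipy.linalg import block_diag, eigh

def manifold_alignment(X_list, A_s, A_d, A_g, mu=1.0, l=10):
    """MA sketch solving Equation (17). X_list holds the per-source (m_i, n_i)
    matrices; A_s, A_d, A_g are the joint (sum n_i) square connection matrices."""
    def laplacian(A):
        return np.diag(A.sum(axis=1)) - A
    L_s, L_d, L_g = laplacian(A_s), laplacian(A_d), laplacian(A_g)
    X_blk = block_diag(*X_list)                  # joint block-diagonal data matrix
    lhs = X_blk @ (mu * L_g + L_s) @ X_blk.T
    rhs = X_blk @ L_d @ X_blk.T + 1e-8 * np.eye(X_blk.shape[0])  # ridge: ours
    _, F = eigh(lhs, rhs)                        # generalized eigenproblem
    F = F[:, :l]                                 # smallest-eigenvalue solutions
    # split the joint projection back into the per-source projections f_i
    splits = np.cumsum([X.shape[0] for X in X_list])[:-1]
    f_list = np.split(F, splits, axis=0)
    Y_list = [f.T @ X for f, X in zip(f_list, X_list)]
    return Y_list, f_list
```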

2.5. MAPPER-Induced Manifold Alignment (MIMA)

MIMA is designed to fuse optical and PolSAR data for the purpose of LULC classification [53]. It follows the framework of MA [36,44], yet introduces a novel constraint term which originates from the recent field of topological data analysis (TDA). TDA has emerged as a new mathematical sub-field of big data analysis that aims to derive relevant information from the topological properties of data [73,74,75,76,77]. One TDA tool, named MAPPER [78], has proven capable of revealing unknown insights in medical studies by interpreting the topological structures of data sets [79,80,81,82]. Briefly, MAPPER requires as input a filter function which projects the data into a parameter space. The original data are sorted into overlapping bins guided by the projected parameter. MAPPER then clusters the data points in each of the data bins separately. Afterwards, MAPPER models a graph in which a node represents a cluster and an edge links two clusters that share common data points. Finally, a simplified graph is built up to represent the shape of the data. Such a graph is an approximation of the Reeb graph [83].
Technically, MIMA pursues the solution $\{\mathbf{f}_1, \ldots, \mathbf{f}_K\}$ by solving the same generalized eigenvalue decomposition as in Equation (17), except that the connection matrix of geometry $\tilde{\mathbf{A}}_g$ (Equation (11)) is replaced by the MAPPER-derived connection matrix $\tilde{\mathbf{A}}_{MIMA}$, where $\mathbf{A}_{MIMA}^{i,i}(p,q) = 1$ if $\mathbf{x}_i^p$ and $\mathbf{x}_i^q$ belong to the same cluster or to two separate but linked clusters, and $\mathbf{A}_{MIMA}^{i,i}(p,q) = 0$ elsewhere. Compared to $\tilde{\mathbf{A}}_g$, $\tilde{\mathbf{A}}_{MIMA}$ introduces some unique properties, listed as follows (a code sketch of the MAPPER step follows the list):
  • Field knowledge. Expert knowledge is introduced via the selection of the filter function, which defines a perspective from which to view the data while deriving the structure.
  • A regional-to-global structure. Clustering in each data bin provides a regional structure, and the design of overlapping bins combines the regional structures into a global one. This makes the derived structure more robust to outliers than one derived by kNN.
  • A data-driven regional structure. Spectral clustering is applied in this step, which is capable of detecting the number of clusters via the concept of the eigen-gap [84]. This constrains the derived structure to the data distribution.
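For illustration, a bare-bones version of the MAPPER step could look as follows; note that the paper clusters each bin with spectral clustering and eigen-gap model selection, whereas this sketch substitutes DBSCAN for brevity (an assumption, not the authors' exact choice):

```python
import numpy as np
from sklearn.cluster import DBSCAN

def mapper_connection(X, filt, b=10, overlap=0.25, eps=0.5):
    """MAPPER-style connection matrix: X is (n, m) instances by features,
    filt is one filter value per instance."""
    n = X.shape[0]
    lo, hi = filt.min(), filt.max()
    width = (hi - lo) / b
    clusters = []                                # each graph node is an index array
    for i in range(b):                           # overlapping intervals
        start = lo + (i - overlap) * width
        end = lo + (i + 1 + overlap) * width
        members = np.where((filt >= start) & (filt <= end))[0]
        if members.size == 0:
            continue
        labels = DBSCAN(eps=eps).fit_predict(X[members])
        for c in set(labels) - {-1}:             # drop DBSCAN noise points
            clusters.append(members[labels == c])
    A = np.zeros((n, n), dtype=int)
    for c in clusters:                           # same-cluster connections
        A[np.ix_(c, c)] = 1
    for ci in clusters:                          # linked clusters share data points
        for cj in clusters:
            if ci is not cj and np.intersect1d(ci, cj).size > 0:
                A[np.ix_(ci, cj)] = 1
    np.fill_diagonal(A, 0)                       # drop self-connections
    return A
```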

2.6. Data Description

Two sets of real data were used to investigate manifold learning techniques for the fusion of hyperspectral and PolSAR data. The two data sets cover the cities of Berlin, Germany, and Augsburg, Germany.

2.6.1. The Berlin Data Set

In the Berlin data set, the hyperspectral image is a synthetic spaceborne EnMAP scene synthesized from airborne HyMap data. It has a size of 817 by 220 pixels, a 30-m ground sampling distance (GSD), and 244 spectral bands ranging from 400 nm to 2500 nm [85]. The dual-channel PolSAR data are a VH-VV polarized Sentinel-1 scene acquired in interferometric wide swath mode. The Sentinel-1 SLC data were preprocessed using the ESA SNAP toolbox and filtered by a non-local mean filter [86]. The PolSAR data have a GSD of 13 m and a size of 1723 by 476 pixels. The ground truth is a land use land cover map derived from OpenStreetMap (OSM) data [87]. The ground truth labels are spatially separated into a training data set and a testing data set, shown in Figure 2. The details of the training and testing data sets are summarized in Table 2.

2.6.2. The Augsburg Data Set

Similar to the Berlin data set, the hyperspectral image in the Augsburg data set is synthetic spaceborne imagery simulated from airborne HySpex data. It has a GSD of 30 m, a size of 332 by 485 pixels, and 180 bands ranging from 400 nm to 2500 nm. As in the Berlin data set, the PolSAR data are a VH-VV polarized Sentinel-1 image, with a GSD of 10 m and a size of 997 by 1456 pixels. The training data and the testing data, which are spatially separated, are shown in Figure 3. The details of the training and testing data sets are summarized in Table 3.

2.7. Experiment Setting

We start with a reasonable feature selection and extraction strategy on the original data, since it is well known that feature selection and extraction improve the classification performance of remote sensing data. Spectral-spatial feature extraction was employed for the hyperspectral images because of its excellent performance on classification tasks [88,89,90,91]. Specifically, the first four and six principal components (PCs), which capture 99% of the variance of the data, were extracted from the hyperspectral images of Berlin and Augsburg, respectively. Morphological profiles with radii of one, two, and three were employed to extract the spatial information of each PC. Thus, in total, 28 features and 42 features were extracted from the hyperspectral images of Berlin and Augsburg, respectively. For the feature extraction of the Sentinel-1 dual-Pol data, four polarimetric features were extracted: the intensity of the VH channel, the intensity of the VV channel, the coherence of VV and VH, and the intensity ratio of VV and VH. Since the morphological profile has been proven to improve the classification of PolSAR data [92,93], it was also used to extract spatial information from the four polarimetric features, with radii equal to one, two, and three. In addition, local statistics, including the mean and standard deviation, were extracted using a sliding window of 11 by 11 pixels on those four polarimetric features. In total, 36 features were extracted from the dual-Pol SAR data for each of the Berlin and Augsburg data sets. A sketch of the morphological profile computation is given below.
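As a sketch of the per-band feature extraction described above (our own code, using scikit-image; the opening/closing composition of the profile is an assumption), each input band yields 1 + 2 x 3 = 7 features, consistent with the counts of 28 (4 PCs) and 42 (6 PCs):

```python
import numpy as np
from skimage.morphology import disk, opening, closing

def morphological_profile(band, radii=(1, 2, 3)):
    """Stack a band with its morphological openings and closings:
    1 + 2 * len(radii) features per input band."""
    feats = [band]
    for r in radii:
        se = disk(r)                 # disk-shaped structuring element of radius r
        feats.append(opening(band, se))
        feats.append(closing(band, se))
    return np.stack(feats, axis=-1)  # (rows, cols, 7) for the default radii
```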
To carry out a comprehensive comparison of the fusion algorithms, in total 16 algorithms were implemented. Listed in Table 4, they are (1) PolSAR data only (POL), (2) hyperspectral image only (HSI), (3) feature stacking of hyperspectral and PolSAR data (HSI+POL), (4) data alignment using the original locality preserving projections (LPP) [52], (5) supervised version of LPP (LPP_SU), (6) semi-supervised version of LPP (LPP_SE), (7) the generalized graph-based fusion (GGF) [48], (8) supervised version of GGF (GGF_SU), (9) semi-supervised version of GGF (GGF_SE), (10) manifold alignment (MA) [36,44], (11) unsupervised version of MA (MA_UN), (12) supervised version of MA (MA_SU), (13) MAPPER-Induced manifold alignment with first two principal components as filter functions (MIMA) [53], (14) unsupervised MIMA (MIMA_UN), (15) MIMA with local density as filter function (MIMA-D), and (16) unsupervised MIMA with local density as filter function (MIMA-D_UN).
These manifold algorithms listed in Table 4 are categorized into the two approaches (data alignment or manifold alignment) mentioned in Section 1.2. LPP and GGF belong to the category of data alignment, which concatenates the data as a stack and applies manifold learning on the stacked data. MA and MIMA belong to the category of manifold alignment, which independently projects the K data sources to a latent space where the data are aligned.
The hyperparameters of each algorithm were tuned via a grid search, so that each algorithm reaches its best performance. The number of neighbors $k$ was set in a range of 10 to 120 with an interval of 10. The number of dimensions $d_n$ was set in a range of 5 to 50 with an interval of 5. The topology weighting parameter $\mu$ was set in a range of 0.5 to 3 with an interval of 0.5. The number of bins $b$ was set in a range of 5 to 55 with an interval of 5. A sketch of such a grid is given below.
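For concreteness, the grid above can be enumerated as follows (a sketch; the dictionary keys are our own names for the paper's hyperparameters):

```python
import itertools

# hyperparameter grids matching the paper's setting
grid = {
    "k": list(range(10, 121, 10)),         # number of neighbors: 10 ... 120
    "d_n": list(range(5, 51, 5)),          # target dimension: 5 ... 50
    "mu": [0.5 * i for i in range(1, 7)],  # topology weight: 0.5 ... 3.0
    "b": list(range(5, 56, 5)),            # number of MAPPER bins: 5 ... 55
}

# exhaustive grid: one settings dict per fusion/classification trial
trials = [dict(zip(grid, values)) for values in itertools.product(*grid.values())]
print(len(trials))  # 12 * 10 * 6 * 11 = 7920 candidate settings
```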
After the data were fused, five different shallow classifiers were applied to the fused data set in the classification step: one nearest neighbor (1NN) [58], linear SVM (LSVM) [59,60], Gaussian kernel SVM (KSVM) [59,60], random forest (RF) [61], and canonical correlation forest (CCF) [62]. The parameter tuning of the LSVM was done via a heuristic procedure [60]. LIBSVM [94] was employed for the implementation of the KSVM. The number of trees was set to 40 for both RF and CCF.

3. Experiment Results

The discussion of the experiment results mainly focuses on the following three aspects:
  • Manifold learning strategy. The experiment results support the discussion of the impact caused by the different learning strategies: unsupervised, supervised, and semi-supervised learning.
  • Data fusion approach. The results support the discussion of the two fusion approaches, data alignment-based and manifold alignment-based, for the fusion of hyperspectral images and PolSAR data.
  • Performance on classification. The experiment results reveal how manifold techniques perform on fusing hyperspectral images and PolSAR data, and how differently these manifold techniques perform.
The classification results are quantitatively evaluated by the class-specific accuracy, the average accuracy, the overall accuracy, and the kappa coefficient. The class-specific accuracy gives the percentage of correct predictions for a specific class. The average accuracy is the mean of the class-specific accuracies. The overall accuracy indicates the percentage of correct predictions overall. The kappa coefficient also evaluates the overall correctness, yet it is more robust than the overall accuracy [95]. A small sketch of these metrics is given below.
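These four metrics can be computed from a confusion matrix as in the following sketch (our own helper; rows index the reference classes, columns the predictions):

```python
import numpy as np

def accuracy_metrics(conf):
    """Overall accuracy, average accuracy, class-specific accuracies, and the
    kappa coefficient from a confusion matrix."""
    conf = np.asarray(conf, dtype=float)
    n = conf.sum()
    class_acc = np.diag(conf) / conf.sum(axis=1)             # per-class accuracy
    oa = np.trace(conf) / n                                  # overall accuracy
    aa = class_acc.mean()                                    # average accuracy
    pe = (conf.sum(axis=0) * conf.sum(axis=1)).sum() / n**2  # chance agreement
    kappa = (oa - pe) / (1 - pe)
    return oa, aa, class_acc, kappa
```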

3.1. Experiment on the Berlin Data Set

As shown in Figure 4 and Table 5, for the data alignment-based fusion algorithms (LPP and GGF), the unsupervised versions outperform the supervised and the semi-supervised versions. However, for the manifold alignment-based fusion algorithms (MA, MIMA, and MIMA-D), the semi-supervised versions perform best compared to the supervised and the unsupervised ones. Surprisingly, in both types of fusion algorithms, the fully supervised strategy performs the worst.
Taking the result of the simple concatenation (HSI+POL in Table 5) as a reference, the data alignment-based fusion algorithms (LPP and GGF) only marginally improve the classification accuracy; sometimes the performance even drops below the reference accuracy. On the contrary, the manifold alignment-based fusion algorithms (MA, MIMA, and MIMA-D) show a more consistent improvement of the classification accuracy, by ca. 3%. In fact, MIMA and MIMA-D show a considerable improvement compared to LPP, GGF, and MA, especially when RF or CCF is employed as the classifier. This can be seen in Figure 4. Among all the algorithms, MIMA and MIMA-D have the best overall performance. As shown in Table 5, their best performance reaches over 0.66, 65%, and 79% for the kappa coefficient, the average accuracy, and the overall accuracy, respectively. For a visual comparison, Figure 5 plots the ground truth and the classification maps predicted by the 16 algorithms with CCF.

3.2. Experiment on Augsburg Data Set

The findings on the Augsburg data set are consistent with those on the Berlin data set. For the data alignment-based fusion algorithms, the unsupervised learning strategy works best among the three learning strategies. For the manifold alignment-based fusion algorithms, the semi-supervised learning strategy performs best. Compared to the result of the simple concatenation (HSI+POL), the data alignment-based fusion (LPP and GGF) shows barely any improvement, while the manifold alignment fusion achieves a 2% improvement over LPP and GGF. These findings can be seen in Figure 6 and Table 6. Among all the algorithms, combining MIMA or MIMA-D with RF or CCF provides the best classification performance; their kappa coefficient, average accuracy, and overall accuracy reach 0.56, 62.5%, and 62.5%, respectively. A visual comparison of the results is shown in Figure 7. Similar to Figure 5, the classification maps were predicted by CCF.

4. Discussion

4.1. The Setting of the Training and Testing Samples

As shown in Figure 2 and Figure 3, the training and testing samples are spatially separated, as is standard machine learning practice. However, the distributions of the training samples of the Berlin and the Augsburg data sets are slightly different. For each class of the Berlin data set, the training data are scattered block-wise over the whole area. For the Augsburg data set, the training data cover only the western part of the area; there is no sample from the eastern half of the site, where the testing data are distributed. Both scenarios are common in remote sensing applications, but the latter is naturally more challenging. This is why the overall accuracy on the Augsburg data set fluctuates around 56%, while it is around 76% for the Berlin data set.

4.2. The Data Alignment Fusion

An unsupervised data alignment-based fusion, in this article, pursues an intrinsic manifold of a concatenation of the hyperspectral and PolSAR data. Intuitively, making use of additional label information in the manifold learning (semi-supervision) should improve the classification accuracy. However, we observed the exact opposite in our experiments. We believe this is due to the misalignment of image pixels of the optical and SAR images caused by their distinct imaging geometries. This pixel misalignment makes learning a joint manifold more difficult, and adding one more manifold defined by the misaligned labels only leads to destructive effects. Therefore, the data alignment-based fusion algorithms are not competent for fusing hyperspectral and PolSAR data at resolutions similar to our data sets. This finding should also generalize to high-resolution optical and SAR data, although we have not conducted any experiments on such data.

4.3. The Manifold Alignment Fusion

Different from the data alignment-based fusion, the semi-supervised manifold alignment-based fusion outperforms the unsupervised manifold alignment fusion. This fusion concept is able to exploit the label information while pursuing the intrinsic manifold. The reason is that it models the manifold of each data source independently, which suits the fact that hyperspectral and PolSAR data are severely dissimilar in geometry and content. The label information is merged into the two manifolds in such a way that the two manifolds are separately linked to the labels and are then aligned to each other by the labels. In this manner, the advantage of the labeled data shows in the classification results. Compared to the data alignment-based fusion, the manifold alignment-based fusion introduces considerable improvements in the classification accuracy, which shows its competence for the fusion of hyperspectral and PolSAR data.

4.4. The Filter Function of MIMA

As introduced in Section 2.5, the filter function of MIMA introduces expert knowledge while deriving the manifold structure of the data. MIMA and MIMA-D in this paper employed PCA and a density estimate as the filter function, respectively. The principal components are frequently used in classification and have proven to be effective [82,96]. The density function is an important property for classification and clustering tasks [97,98]. However, from the experiments in this paper, it is inconclusive which choice is more suitable to serve as the filter function.

5. Conclusions and Outlook

This paper compares 16 variants of four state-of-the-art multi-sensory data fusion algorithms based on manifold learning. The comparison was done via a rigorous evaluation of the performance of the 16 algorithms on land use land cover classification on two sets of spaceborne hyperspectral images and PolSAR data. To carry out an objective comparison, the hyperparameters of the 16 algorithms were optimized via a grid search, and five different shallow classifiers were applied to the data sets fused by the 16 algorithms. We avoided any deep network classifiers, because the goal of this article is solely to evaluate the performance of multi-sensory data fusion algorithms. The experiments lead to the following conclusions: (1) data alignment-based (data concatenation) manifold techniques are less competent for the fusion of hyperspectral images and PolSAR data, or for optical and SAR image fusion in general, because a concatenation of two data sets with distinct imaging geometries causes difficulties, or even destructive effects, when optimizing the target manifold; on the contrary, manifold alignment-based techniques are more competent for the task of optical and SAR image fusion, because the manifolds of the two data sets are separately modeled and then aligned; (2) among the manifold alignment-based techniques, semi-supervised methods are able to effectively make use of both the structure of the data and the existing label information; and (3) the MIMA algorithm, cooperating with the CCF classifier, provides the best classification accuracy among all the algorithms.
Based on our current research, our future research directions can include:
  • In the current algorithms, the learned manifold is specific to the given input data sets. We would like to study the generalization of such a manifold to other data sets from the same sensors. Eventually, we aim at big data processing where one common manifold can be applied to all data sets of the same type.
  • Graph CNNs have been an emerging field in deep learning. It is also of great interest to combine them with the traditional manifold learning techniques described in this article.
  • Because of the limited data availability of spaceborne hyperspectral and PolSAR data, they have not been extensively applied to real-world problems. We would like to address more real-world applications, especially those for social good, using these two types of data, for example, contributing to the monitoring of the United Nations' Sustainable Development Goals.

Author Contributions

Conceptualization, J.H., D.H., and X.X.Z.; methodology, J.H., and D.H.; software, J.H.; validation, Y.W.; formal analysis, J.H.; investigation, J.H.; resources, J.H.; data curation, J.H.; writing—original draft preparation, J.H.; writing—review and editing, D.H. and Y.W.; visualization, J.H.; supervision, X.X.Z.; project administration, J.H.; funding acquisition, X.X.Z.

Funding

This research was funded by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program with the grant number ERC-2016-StG-714087 (Acronym: So2Sat, project website: www.so2sat.eu), and by the Helmholtz Association under the framework of the Young Investigators Group Signal Processing in Earth Observation (SiPEO) with the grant number VH-NG-1018 (project website: www.sipeo.bgu.tum.de).

Acknowledgments

The authors would like to thank Claas Grohnfeldt for providing the Augsburg data, Wenzhi Liao for releasing the GGF code, and Devis Tuia for releasing the MA code.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Pseudo-Code of LPP

Algorithm 1: LPP($\mathbf{X}$, $k$, $\sigma$)
Input:
$\mathbf{X}$: the data source $\mathbf{X} = [\mathbf{x}^1, \ldots, \mathbf{x}^p, \ldots, \mathbf{x}^n] \in \mathbb{R}^{m \times n}$ with $n$ instances and $m$ dimensions
$k$: the number of local neighbors
$\sigma$: the filtering parameter
Output:
$\mathbf{Y}$: the representation of data $\mathbf{X}$ on the intrinsic manifold $\mathcal{M}$
$\mathbf{f}$: the projection that maps data $\mathbf{X}$ to $\mathbf{Y}$
1. construct the $n \times n$ weight matrix $\mathbf{W}$ with Equation (1)
2. construct the degree matrix $\mathbf{D}$
3. construct the Laplacian matrix $\mathbf{L} = \mathbf{D} - \mathbf{W}$
4. solve the generalized eigenvalue decomposition $\mathbf{X}\mathbf{L}\mathbf{X}^T\mathbf{f} = \lambda\mathbf{X}\mathbf{D}\mathbf{X}^T\mathbf{f}$
5. construct $\mathbf{Y} = \mathbf{f}^T\mathbf{X}$
6. return $\mathbf{Y}$ and $\mathbf{f}$

Appendix B. Pseudo-Code of GGF

Algorithm 2: GGF($\mathbf{X}_1$, $\mathbf{X}_2$, $k$, $\sigma$)
Input:
$\mathbf{X}_1$: the data source $\mathbf{X}_1 \in \mathbb{R}^{m_1 \times n}$ with $n$ instances and $m_1$ dimensions
$\mathbf{X}_2$: the data source $\mathbf{X}_2 \in \mathbb{R}^{m_2 \times n}$ with $n$ instances and $m_2$ dimensions
$k$: the number of local neighbors
$\sigma$: the filtering parameter
Output:
$\tilde{\mathbf{Y}}$: the fused data
$\mathbf{f}$: the projection that maps data $\tilde{\mathbf{X}}$ to $\tilde{\mathbf{Y}}$
1. stack the data sources along the feature dimension: $\tilde{\mathbf{X}} = \begin{bmatrix}\mathbf{X}_1 \\ \mathbf{X}_2\end{bmatrix} = [\tilde{\mathbf{x}}^1, \ldots, \tilde{\mathbf{x}}^p, \ldots, \tilde{\mathbf{x}}^n] \in \mathbb{R}^{(m_1+m_2) \times n}$
2. construct binary matrices $\mathbf{A}_i$ ($i \in \{1,2\}$) to model the manifolds of $\mathbf{X}_i$:
$\mathbf{A}_i(p,q) = \begin{cases} 1 & \mathbf{x}_i^p \text{ is one of the } k \text{ nearest neighbors of } \mathbf{x}_i^q, \\ 0 & \text{otherwise} \end{cases}$
3. construct the fused binary matrix $\tilde{\mathbf{A}}(p,q) = \mathbf{A}_1(p,q) \cdot \mathbf{A}_2(p,q)$
4. calculate the $n \times n$ pairwise distance matrix $\tilde{\mathbf{D}}$
5. construct the GGF pairwise distance matrix $\tilde{\mathbf{D}}_{\mathrm{GGF}}$ as in Equation (6)
6. calculate the $n \times n$ weight matrix $\tilde{\mathbf{W}}$ as in Equation (7)
7. calculate the degree matrix $\tilde{\mathbf{D}}$
8. calculate the Laplacian matrix $\tilde{\mathbf{L}} = \tilde{\mathbf{D}} - \tilde{\mathbf{W}}$
9. solve the generalized eigenvalue decomposition $\tilde{\mathbf{X}}\tilde{\mathbf{L}}\tilde{\mathbf{X}}^T\mathbf{f} = \lambda\tilde{\mathbf{X}}\tilde{\mathbf{D}}\tilde{\mathbf{X}}^T\mathbf{f}$
10. calculate $\tilde{\mathbf{Y}} = \mathbf{f}^T\tilde{\mathbf{X}}$
11. return $\tilde{\mathbf{Y}}$ and $\mathbf{f}$

Appendix C. Pseudo-Code of MA

Algorithm 3: MA($\mathbf{X}_1$, $\mathbf{X}_2$, $\mathbf{E}_1$, $\mathbf{E}_2$, $k$)
Input:
$\mathbf{X}_1$: the data source $\mathbf{X}_1 \in \mathbb{R}^{m_1 \times n_1}$ with $n_1$ instances and $m_1$ dimensions
$\mathbf{X}_2$: the data source $\mathbf{X}_2 \in \mathbb{R}^{m_2 \times n_2}$ with $n_2$ instances and $m_2$ dimensions
$\mathbf{E}_1$: $\mathbf{E}_1 \in \mathbb{R}^{1 \times n_1^*}$ with $n_1^* < n_1$, labels for the first $n_1^*$ instances of $\mathbf{X}_1$
$\mathbf{E}_2$: $\mathbf{E}_2 \in \mathbb{R}^{1 \times n_2^*}$ with $n_2^* < n_2$, labels for the first $n_2^*$ instances of $\mathbf{X}_2$
$k$: the number of local neighbors
Output:
$\tilde{\mathbf{Y}}_1$: the projected data of $\mathbf{X}_1$
$\tilde{\mathbf{Y}}_2$: the projected data of $\mathbf{X}_2$
$\mathbf{f}_1$: the projection that maps data $\mathbf{X}_1$ to $\tilde{\mathbf{Y}}_1$
$\mathbf{f}_2$: the projection that maps data $\mathbf{X}_2$ to $\tilde{\mathbf{Y}}_2$
1. construct the $(n_1+n_2) \times (n_1+n_2)$ binary matrices $\tilde{\mathbf{A}}_s$ (Equation (9)) and $\tilde{\mathbf{A}}_d$ (Equation (10)) using $\mathbf{E}_1$ and $\mathbf{E}_2$
2. construct the $(n_1+n_2) \times (n_1+n_2)$ binary matrix $\tilde{\mathbf{A}}_g$ (Equation (11)) using k-nearest-neighbor with the given $k$
3. construct the degree matrices $\tilde{\mathbf{D}}_s$, $\tilde{\mathbf{D}}_d$, and $\tilde{\mathbf{D}}_g$ from $\tilde{\mathbf{A}}_s$, $\tilde{\mathbf{A}}_d$, and $\tilde{\mathbf{A}}_g$, respectively
4. construct the Laplacian matrices $\tilde{\mathbf{L}}_s$, $\tilde{\mathbf{L}}_d$, and $\tilde{\mathbf{L}}_g$ as instructed in Equation (17)
5. organize the data matrix $\tilde{\mathbf{X}}$ as instructed in Equation (17)
6. solve the generalized eigenvalue decomposition $\tilde{\mathbf{X}}(\mu\tilde{\mathbf{L}}_g + \tilde{\mathbf{L}}_s)\tilde{\mathbf{X}}^T\mathbf{f} = \lambda\tilde{\mathbf{X}}\tilde{\mathbf{L}}_d\tilde{\mathbf{X}}^T\mathbf{f}$ so that $\mathbf{f}_1$ and $\mathbf{f}_2$ are obtained, with $\mathbf{f} = \begin{bmatrix}\mathbf{f}_1 \\ \mathbf{f}_2\end{bmatrix}$
7. calculate $\tilde{\mathbf{Y}}_1 = \mathbf{f}_1^T\mathbf{X}_1$ and $\tilde{\mathbf{Y}}_2 = \mathbf{f}_2^T\mathbf{X}_2$
8. return $\tilde{\mathbf{Y}}_1$, $\tilde{\mathbf{Y}}_2$, $\mathbf{f}_1$, $\mathbf{f}_2$

Appendix D. Pseudo-Code of MIMA

Algorithm 4: MIMA-MAPPER($\mathbf{X}$, $b$, $c$, $F$)
Input:
$\mathbf{X}$: the data source $\mathbf{X} \in \mathbb{R}^{m \times n}$ with $n$ instances and $m$ dimensions
$b$: the number of data bins
$c$: the overlapping rate
$F$: the filter function
Output:
$\mathbf{A}_{MIMA}$: the connection matrix
1. calculate the parameter space $\mathbf{X}_F$
2. divide $\mathbf{X}_F$ into $b$ intervals with $c\%$ overlap between adjacent intervals
3. divide the data $\mathbf{X}$ into $b$ bins corresponding to the intervals obtained in step 2
4. for each data bin:
5.   spectral clustering
6. end for
7. construct the topological matrix $\mathbf{A}_{MIMA}(p,q) = \begin{cases} 1 & \text{if } p \text{ and } q \text{ are in the same cluster,} \\ 1 & \text{if } p \text{ and } q \text{ are in linked clusters,} \\ 0 & \text{otherwise} \end{cases}$
8. return $\mathbf{A}_{MIMA}$
Algorithm 5: MIMA($\mathbf{X}_1$, $\mathbf{X}_2$, $\mathbf{E}_1$, $\mathbf{E}_2$, $k$)
Input:
$\mathbf{X}_1$: the data source $\mathbf{X}_1 \in \mathbb{R}^{m_1 \times n_1}$ with $n_1$ instances and $m_1$ dimensions
$\mathbf{X}_2$: the data source $\mathbf{X}_2 \in \mathbb{R}^{m_2 \times n_2}$ with $n_2$ instances and $m_2$ dimensions
$\mathbf{E}_1$: $\mathbf{E}_1 \in \mathbb{R}^{1 \times n_1^*}$ with $n_1^* < n_1$, labels for the first $n_1^*$ instances of $\mathbf{X}_1$
$\mathbf{E}_2$: $\mathbf{E}_2 \in \mathbb{R}^{1 \times n_2^*}$ with $n_2^* < n_2$, labels for the first $n_2^*$ instances of $\mathbf{X}_2$
$k$: the number of local neighbors
Output:
$\tilde{\mathbf{Y}}_1$: the projected data of $\mathbf{X}_1$
$\tilde{\mathbf{Y}}_2$: the projected data of $\mathbf{X}_2$
$\mathbf{f}_1$: the projection that maps data $\mathbf{X}_1$ to $\tilde{\mathbf{Y}}_1$
$\mathbf{f}_2$: the projection that maps data $\mathbf{X}_2$ to $\tilde{\mathbf{Y}}_2$
1. construct the $(n_1+n_2) \times (n_1+n_2)$ binary matrices $\tilde{\mathbf{A}}_s$ (Equation (9)) and $\tilde{\mathbf{A}}_d$ (Equation (10)) using $\mathbf{E}_1$ and $\mathbf{E}_2$
2. for $i = 1:2$
3.   $\mathbf{A}_{MIMA}^{i,i}$ = MIMA-MAPPER($\mathbf{X}_i$, $b$, $c$, $F$)
4. end for
5. construct the block-diagonal matrix $\tilde{\mathbf{A}}_{MIMA} = \begin{bmatrix} \mathbf{A}_{MIMA}^{1,1} & \cdots & \mathbf{0} \\ \vdots & \ddots & \vdots \\ \mathbf{0} & \cdots & \mathbf{A}_{MIMA}^{K,K} \end{bmatrix}$
6. construct the degree matrices $\tilde{\mathbf{D}}_s$, $\tilde{\mathbf{D}}_d$, and $\tilde{\mathbf{D}}_{MIMA}$ from $\tilde{\mathbf{A}}_s$, $\tilde{\mathbf{A}}_d$, and $\tilde{\mathbf{A}}_{MIMA}$, respectively
7. construct the Laplacian matrices $\tilde{\mathbf{L}}_s$, $\tilde{\mathbf{L}}_d$, and $\tilde{\mathbf{L}}_{MIMA}$ as instructed in Equation (17)
8. organize the data matrix $\tilde{\mathbf{X}}$ as instructed in Equation (17)
9. solve the generalized eigenvalue decomposition $\tilde{\mathbf{X}}(\mu\tilde{\mathbf{L}}_{MIMA} + \tilde{\mathbf{L}}_s)\tilde{\mathbf{X}}^T\mathbf{f} = \lambda\tilde{\mathbf{X}}\tilde{\mathbf{L}}_d\tilde{\mathbf{X}}^T\mathbf{f}$ (Equation (17) with the geometry term replaced by $\tilde{\mathbf{L}}_{MIMA}$, as described in Section 2.5) so that $\mathbf{f}_1$ and $\mathbf{f}_2$ are obtained, with $\mathbf{f} = \begin{bmatrix}\mathbf{f}_1 \\ \mathbf{f}_2\end{bmatrix}$
10. calculate $\tilde{\mathbf{Y}}_1 = \mathbf{f}_1^T\mathbf{X}_1$ and $\tilde{\mathbf{Y}}_2 = \mathbf{f}_2^T\mathbf{X}_2$
11. return $\tilde{\mathbf{Y}}_1$, $\tilde{\mathbf{Y}}_2$, $\mathbf{f}_1$, $\mathbf{f}_2$

References

  1. Zhang, J. Multi-source remote sensing data fusion: status and trends. Int. J. Image Data Fusion 2010, 1, 5–24. [Google Scholar] [CrossRef] [Green Version]
  2. Dalla Mura, M.; Prasad, S.; Pacifici, F.; Gamba, P.; Chanussot, J.; Benediktsson, J.A. Challenges and opportunities of multimodality and data fusion in remote sensing. Proc. IEEE 2015, 103, 1585–1601. [Google Scholar] [CrossRef]
  3. Yokoya, N.; Grohnfeldt, C.; Chanussot, J. Hyperspectral and multispectral data fusion: A comparative review of the recent literature. IEEE Geosci. Remote Sens. Mag. 2017, 5, 29–56. [Google Scholar] [CrossRef]
  4. Hong, D.; Yokoya, N.; Chanussot, J.; Zhu, X.X. CoSpace: Common Subspace Learning from Hyperspectral- Multispectral Correspondences. arXiv, 2018; arXiv:1812.11501. [Google Scholar] [CrossRef]
  5. Dalponte, M.; Bruzzone, L.; Gianelle, D. Fusion of hyperspectral and LIDAR remote sensing data for classification of complex forest areas. IEEE Trans. Geosci. Remote Sens. 2008, 46, 1416–1427. [Google Scholar] [CrossRef]
  6. Swatantran, A.; Dubayah, R.; Roberts, D.; Hofton, M.; Blair, J.B. Mapping biomass and stress in the Sierra Nevada using lidar and hyperspectral data fusion. Remote Sens. Environ. 2011, 115, 2917–2930. [Google Scholar] [CrossRef] [Green Version]
  7. Khodadadzadeh, M.; Li, J.; Prasad, S.; Plaza, A. Fusion of hyperspectral and LiDAR remote sensing data using multiple feature learning. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2015, 8, 2971–2983. [Google Scholar] [CrossRef]
  8. Merkle, N.; Auer, S.; Müller, R.; Reinartz, P. Exploring the Potential of Conditional Adversarial Networks for Optical and SAR Image Matching. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 1811–1820. [Google Scholar] [CrossRef]
  9. Joshi, N.; Baumann, M.; Ehammer, A.; Fensholt, R.; Grogan, K.; Hostert, P.; Jepsen, M.R.; Kuemmerle, T.; Meyfroidt, P.; Mitchard, E.T.; et al. A review of the application of optical and radar remote sensing data fusion to land use mapping and monitoring. Remote Sens. 2016, 8, 70. [Google Scholar] [CrossRef]
  10. Wang, Y.; Zhu, X.X.; Zeisl, B.; Pollefeys, M. Fusing Meter-Resolution 4-D InSAR Point Clouds and Optical Images for Semantic Urban Infrastructure Monitoring. IEEE Trans. Geosci. Remote Sens. 2017, 55, 14–26. [Google Scholar] [CrossRef] [Green Version]
  11. Schmitt, M.; Hughes, L.H.; Zhu, X.X. THE SEN1-2 Dataset for DEEP LEARNING IN SAR-OPTICAL DATA FUSION. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2018, IV-1, 141–146. [Google Scholar] [CrossRef]
  12. Hong, D.; Yokoya, N.; Ge, N.; Chanussot, J.; Zhu, X.X. Learnable manifold alignment (LeMA): A semi-supervised cross-modality learning framework for land cover and land use classification. ISPRS J. Photogramm. Remote Sens. 2019, 147, 193–205. [Google Scholar] [CrossRef]
  13. Chang, Y.L.; Han, C.C.; Ren, H.; Chen, C.T.; Chen, K.S.; Fan, K.C. Data fusion of hyperspectral and SAR images. Opt. Eng. 2004, 43, 1787–1798. [Google Scholar]
  14. Koch, B. Status and future of laser scanning, synthetic aperture radar and hyperspectral remote sensing data for forest biomass assessment. ISPRS J. Photogramm. Remote Sens. 2010, 65, 581–590. [Google Scholar] [CrossRef]
  15. Torres, R.; Snoeij, P.; Geudtner, D.; Bibby, D.; Davidson, M.; Attema, E.; Potin, P.; Rommen, B.; Floury, N.; Brown, M.; et al. GMES Sentinel-1 mission. Remote Sens. Environ. 2012, 120, 9–24. [Google Scholar] [CrossRef]
  16. Drusch, M.; Del Bello, U.; Carlier, S.; Colin, O.; Fernandez, V.; Gascon, F.; Hoersch, B.; Isola, C.; Laberinti, P.; Martimort, P.; et al. Sentinel-2: ESA’s optical high-resolution mission for GMES operational services. Remote Sens. Environ. 2012, 120, 25–36. [Google Scholar] [CrossRef]
  17. Stuffler, T.; Kaufmann, C.; Hofer, S.; Förster, K.; Schreier, G.; Mueller, A.; Eckardt, A.; Bach, H.; Penne, B.; Benz, U.; et al. The EnMAP hyperspectral imager—An advanced optical payload for future applications in Earth observation programmes. Acta Astronaut. 2007, 61, 115–120. [Google Scholar] [CrossRef]
  18. Wu, X.; Hong, D.; Ghamisi, P.; Li, W.; Tao, R. MsRi-CCF: Multi-scale and rotation-insensitive convolutional channel features for geospatial object detection. Remote Sens. 2018, 10, 1990. [Google Scholar] [CrossRef]
  19. Wu, X.; Hong, D.; Tian, J.; Chanussot, J.; Li, W.; Tao, R. ORSIm Detector: A Novel Object Detection Framework in Optical Remote Sensing Imagery Using Spatial-Frequency Channel Features. arXiv, 2019; arXiv:1901.07925. [Google Scholar] [CrossRef]
  20. Hong, D.; Zhu, X.X. SULoRA: Subspace unmixing with low-rank attribute embedding for hyperspectral data analysis. IEEE J. Sel. Top. Signal Process. 2018, 12, 1351–1363. [Google Scholar] [CrossRef]
  21. Drumetz, L.; Veganzones, M.A.; Henrot, S.; Phlypo, R.; Chanussot, J.; Jutten, C. Blind hyperspectral unmixing using an extended linear mixing model to address spectral variability. IEEE Trans. Image Process. 2016, 25, 3890–3905. [Google Scholar] [CrossRef] [PubMed]
  22. Hong, D.; Yokoya, N.; Chanussot, J.; Zhu, X.X. Learning a low-coherence dictionary to address spectral variability for hyperspectral unmixing. In Proceedings of the 2017 IEEE International Conference on Image Processing (ICIP), Beijing, China, 17–20 September 2017; pp. 235–239. [Google Scholar]
  23. Ceamanos, X.; Waske, B.; Benediktsson, J.A.; Chanussot, J.; Fauvel, M.; Sveinsson, J.R. A classifier ensemble based on fusion of support vector machines for classifying hyperspectral data. Int. J. Image Data Fusion 2010, 1, 293–307. [Google Scholar] [CrossRef] [Green Version]
  24. Hong, D.; Yokoya, N.; Chanussot, J.; Zhu, X.X. An augmented linear mixing model to address spectral variability for hyperspectral unmixing. IEEE Trans. Image Process. 2019, 28, 1923–1938. [Google Scholar] [CrossRef] [PubMed]
  25. Lee, J.S.; Pottier, E. Polarimetric Radar Imaging: From Basics to Applications; CRC Press: Boca Raton, FL, USA, 2009. [Google Scholar]
  26. Cloude, S.R.; Pottier, E. An entropy based classification scheme for land applications of polarimetric SAR. IEEE Trans. Geosci. Remote Sens. 1997, 35, 68–78. [Google Scholar] [CrossRef]
  27. Moreira, A.; Prats-Iraola, P.; Younis, M.; Krieger, G.; Hajnsek, I.; Papathanassiou, K.P. A tutorial on synthetic aperture radar. IEEE Geosci. Remote Sens. Mag. 2013, 1, 6–43. [Google Scholar] [CrossRef] [Green Version]
  28. Schmitt, A.; Wendleder, A.; Hinz, S. The Kennaugh element framework for multi-scale, multi-polarized, multi-temporal and multi-frequency SAR image preparation. ISPRS J. Photogramm. Remote Sens. 2015, 102, 122–139.
  29. Hu, J.; Ghamisi, P.; Zhu, X. Feature Extraction and Selection of Sentinel-1 Dual-Pol Data for Global-Scale Local Climate Zone Classification. ISPRS Int. J. Geo-Inf. 2018, 7, 379.
  30. Jouan, A.; Allard, Y. Land use mapping with evidential fusion of features extracted from polarimetric synthetic aperture radar and hyperspectral imagery. Inf. Fusion 2004, 5, 251–267.
  31. Li, T.; Zhang, J.; Zhao, H.; Shi, C. Classification-oriented hyperspectral and PolSAR images synergic processing. In Proceedings of the 2013 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Melbourne, Australia, 21–26 July 2013; pp. 1035–1038.
  32. Dabbiru, L.; Samiappan, S.; Nobrega, R.A.A.; Aanstoos, J.V.; Younan, N.H.; Moorhead, R.J. Fusion of synthetic aperture radar and hyperspectral imagery to detect impacts of oil spill in Gulf of Mexico. In Proceedings of the 2015 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Milan, Italy, 26–31 July 2015.
  33. Hu, J.; Ghamisi, P.; Schmitt, A.; Zhu, X. Object Based Fusion of Polarimetric SAR and Hyperspectral Imaging for Land Use Classification. In Proceedings of the 2016 8th Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (WHISPERS), Los Angeles, CA, USA, 21–24 August 2016.
  34. Hu, J.; Mou, L.; Schmitt, A.; Zhu, X.X. FusioNet: A two-stream convolutional neural network for urban scene classification using PolSAR and hyperspectral data. In Proceedings of the 2017 Joint Urban Remote Sensing Event (JURSE), Dubai, UAE, 6–8 March 2017; pp. 1–4.
  35. Wang, C.; Mahadevan, S. A General Framework for Manifold Alignment. In Proceedings of the AAAI Fall Symposium: Manifold Learning and Its Applications, Washington, DC, USA, 14–18 July 2009; pp. 53–58.
  36. Tuia, D.; Volpi, M.; Trolliet, M.; Camps-Valls, G. Semisupervised manifold alignment of multimodal remote sensing images. IEEE Trans. Geosci. Remote Sens. 2014, 52, 7708–7720.
  37. Ghamisi, P.; Benediktsson, J.A.; Phinn, S. Land-cover classification using both hyperspectral and LiDAR data. Int. J. Image Data Fusion 2015, 6, 189–215.
  38. Xia, J.; Yokoya, N.; Iwasaki, A. Hyperspectral image classification with canonical correlation forests. IEEE Trans. Geosci. Remote Sens. 2017, 55, 421–431.
  39. Yokoya, N.; Ghamisi, P.; Xia, J.; Sukhanov, S.; Heremans, R.; Tankoyeu, I.; Bechtel, B.; Le Saux, B.; Moser, G.; Tuia, D. Open data for global multimodal land use classification: Outcome of the 2017 IEEE GRSS Data Fusion Contest. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 1363–1377.
  40. Makantasis, K.; Doulamis, A.; Doulamis, N.; Nikitakis, A.; Voulodimos, A. Tensor-Based Nonlinear Classifier for High-Order Data Analysis. In Proceedings of the 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Calgary, AB, Canada, 15–20 April 2018; Volume 2, pp. 2221–2225.
  41. Makantasis, K.; Doulamis, A.D.; Doulamis, N.D.; Nikitakis, A. Tensor-based classification models for hyperspectral data analysis. IEEE Trans. Geosci. Remote Sens. 2018, 56, 6884–6898.
  42. Yokoya, N.; Chanussot, J.; Iwasaki, A. Nonlinear unmixing of hyperspectral data using semi-nonnegative matrix factorization. IEEE Trans. Geosci. Remote Sens. 2014, 52, 1430–1437.
  43. Wang, C.; Mahadevan, S. Manifold Alignment without Correspondence. In Proceedings of the Twenty-First International Joint Conference on Artificial Intelligence (IJCAI), Pasadena, CA, USA, 11–17 July 2009; Volume 2, p. 3.
  44. Wang, C.; Mahadevan, S. Heterogeneous domain adaptation using manifold alignment. IJCAI Proc. Int. Jt. Conf. Artif. Intell. 2011, 22, 1541.
  45. Wang, C.; Mahadevan, S. Manifold Alignment Preserving Global Geometry. In Proceedings of the Twenty-Third International Joint Conference on Artificial Intelligence (IJCAI), Beijing, China, 3–9 August 2013; pp. 1743–1749.
  46. Tuia, D.; Camps-Valls, G. Kernel manifold alignment for domain adaptation. PLoS ONE 2016, 11, e0148655.
  47. Tuia, D.; Munoz-Mari, J.; Gómez-Chova, L.; Malo, J. Graph matching for adaptation in remote sensing. IEEE Trans. Geosci. Remote Sens. 2013, 51, 329–341.
  48. Liao, W.; Pižurica, A.; Bellens, R.; Gautama, S.; Philips, W. Generalized graph-based fusion of hyperspectral and LiDAR data using morphological features. IEEE Geosci. Remote Sens. Lett. 2015, 12, 552–556.
  49. Volpi, M.; Camps-Valls, G.; Tuia, D. Spectral alignment of multi-temporal cross-sensor images with automated kernel canonical correlation analysis. ISPRS J. Photogramm. Remote Sens. 2015, 107, 50–63.
  50. Liao, D.; Qian, Y.; Zhou, J.; Tang, Y.Y. A manifold alignment approach for hyperspectral image visualization with natural color. IEEE Trans. Geosci. Remote Sens. 2016, 54, 3151–3162.
  51. Hong, D.; Yokoya, N.; Zhu, X.X. Learning a Robust Local Manifold Representation for Hyperspectral Dimensionality Reduction. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2017, 10, 2960–2975.
  52. He, X.; Niyogi, P. Locality preserving projections. In Advances in Neural Information Processing Systems; MIT Press: Cambridge, MA, USA, 2004; pp. 153–160.
  53. Hu, J.; Hong, D.; Zhu, X.X. MIMA: MAPPER-Induced Manifold Alignment for Semi-Supervised Fusion of Optical Image and Polarimetric SAR Data. IEEE Trans. Geosci. Remote Sens. 2019, under review.
  54. Roweis, S.T.; Saul, L.K. Nonlinear dimensionality reduction by locally linear embedding. Science 2000, 290, 2323–2326.
  55. Tenenbaum, J.B.; De Silva, V.; Langford, J.C. A global geometric framework for nonlinear dimensionality reduction. Science 2000, 290, 2319–2323.
  56. Hatcher, A. Algebraic Topology; Tsinghua University Press: Beijing, China, 2005.
  57. Lin, T.; Zha, H. Riemannian manifold learning. IEEE Trans. Pattern Anal. Mach. Intell. 2008, 30, 796–809.
  58. Friedman, J.H.; Bentley, J.L.; Finkel, R.A. An algorithm for finding best matches in logarithmic expected time. ACM Trans. Math. Softw. 1977, 3, 209–226.
  59. Cristianini, N.; Shawe-Taylor, J. An Introduction to Support Vector Machines and Other Kernel-Based Learning Methods; Cambridge University Press: Cambridge, UK, 2000.
  60. Schölkopf, B.; Smola, A.J.; Bach, F. Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond; MIT Press: Cambridge, MA, USA, 2002.
  61. Breiman, L. Random forests. Mach. Learn. 2001, 45, 5–32.
  62. Rainforth, T.; Wood, F. Canonical correlation forests. arXiv 2015, arXiv:1507.05444.
  63. Hong, D.; Yokoya, N.; Zhu, X.X. The K-LLE algorithm for nonlinear dimensionality reduction of large-scale hyperspectral data. In Proceedings of the 2016 8th Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (WHISPERS), Los Angeles, CA, USA, 21–24 August 2016; pp. 1–5.
  64. Hong, D.; Yokoya, N.; Zhu, X.X. Local manifold learning with robust neighbors selection for hyperspectral dimensionality reduction. In Proceedings of the 2016 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Beijing, China, 10–15 July 2016; pp. 40–43.
  65. Belkin, M.; Niyogi, P. Laplacian eigenmaps and spectral techniques for embedding and clustering. In Advances in Neural Information Processing Systems; MIT Press: Cambridge, MA, USA, 2002; pp. 585–591.
  66. Hong, D.; Pan, Z.; Wu, X. Improved differential box counting with multi-scale and multi-direction: A new palmprint recognition method. Optik 2014, 125, 4154–4160.
  67. He, X.; Cai, D.; Yan, S.; Zhang, H.J. Neighborhood preserving embedding. In Proceedings of the Tenth IEEE International Conference on Computer Vision (ICCV'05), Beijing, China, 17–21 October 2005; Volume 2, pp. 1208–1213.
  68. Hong, D.; Liu, W.; Su, J.; Pan, Z.; Wang, G. A novel hierarchical approach for multispectral palmprint recognition. Neurocomputing 2015, 151, 511–521.
  69. Hong, D.; Liu, W.; Wu, X.; Pan, Z.; Su, J. Robust palmprint recognition based on the fast variation Vese–Osher model. Neurocomputing 2016, 174, 999–1012.
  70. Donoho, D.L. High-dimensional data analysis: The curses and blessings of dimensionality. AMS Math Chall. Lect. 2000, 1, 32.
  71. Farrell, M.D.; Mersereau, R.M. On the impact of PCA dimension reduction for hyperspectral detection of difficult targets. IEEE Geosci. Remote Sens. Lett. 2005, 2, 192–195.
  72. Debes, C.; Merentitis, A.; Heremans, R.; Hahn, J.; Frangiadakis, N.; van Kasteren, T.; Liao, W.; Bellens, R.; Pižurica, A.; Gautama, S.; et al. Hyperspectral and LiDAR data fusion: Outcome of the 2013 GRSS data fusion contest. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 2405–2418.
  73. Chintakunta, H.; Robinson, M.; Krim, H. Introduction to the special session on Topological Data Analysis, ICASSP 2016. In Proceedings of the 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Shanghai, China, 20–25 March 2016; pp. 6410–6414.
  74. Chazal, F.; Michel, B. An introduction to Topological Data Analysis: Fundamental and practical aspects for data scientists. arXiv 2017, arXiv:1710.04019.
  75. Zomorodian, A.; Carlsson, G. Computing persistent homology. Discret. Comput. Geom. 2005, 33, 249–274.
  76. Edelsbrunner, H.; Letscher, D.; Zomorodian, A. Topological persistence and simplification. In Proceedings of the 41st Annual Symposium on Foundations of Computer Science, Redondo Beach, CA, USA, 12–14 November 2000; pp. 454–463.
  77. Edelsbrunner, H. A Short Course in Computational Geometry and Topology; Springer: Cham, Switzerland, 2014.
  78. Singh, G.; Mémoli, F.; Carlsson, G.E. Topological methods for the analysis of high dimensional data sets and 3D object recognition. In Proceedings of the Eurographics Symposium on Point-Based Graphics (SPBG), San Diego, CA, USA, 25 May 2007; pp. 91–100.
  79. Nicolau, M.; Levine, A.J.; Carlsson, G. Topology based data analysis identifies a subgroup of breast cancers with a unique mutational profile and excellent survival. Proc. Natl. Acad. Sci. USA 2011, 108, 7265–7270.
  80. Nielson, J.L.; Paquette, J.; Liu, A.W.; Guandique, C.F.; Tovar, C.A.; Inoue, T.; Irvine, K.A.; Gensel, J.C.; Kloke, J.; Petrossian, T.C.; et al. Topological data analysis for discovery in preclinical spinal cord injury and traumatic brain injury. Nat. Commun. 2015, 6, 8581.
  81. Li, L.; Cheng, W.Y.; Glicksberg, B.S.; Gottesman, O.; Tamler, R.; Chen, R.; Bottinger, E.P.; Dudley, J.T. Identification of type 2 diabetes subgroups through topological analysis of patient similarity. Sci. Transl. Med. 2015, 7, 311ra174.
  82. Lum, P.Y.; Singh, G.; Lehman, A.; Ishkanov, T.; Vejdemo-Johansson, M.; Alagappan, M.; Carlsson, J.; Carlsson, G. Extracting insights from the shape of complex data using topology. Sci. Rep. 2013, 3, 1236.
  83. Carriere, M.; Michel, B.; Oudot, S. Statistical analysis and parameter selection for Mapper. J. Mach. Learn. Res. 2018, 19, 478–516.
  84. Ng, A.Y.; Jordan, M.I.; Weiss, Y. On spectral clustering: Analysis and an algorithm. In Advances in Neural Information Processing Systems; MIT Press: Cambridge, MA, USA, 2002; pp. 849–856.
  85. Okujeni, A.; van der Linden, S.; Hostert, P. Berlin-Urban-Gradient Dataset 2009—An EnMAP Preparatory Flight Campaign (Datasets); GFZ Data Services: Potsdam, Germany, 2016.
  86. Hu, J.; Guo, R.; Zhu, X.; Baier, G.; Wang, Y. Non-local means filter for polarimetric SAR speckle reduction—Experiments using TerraSAR-X data. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2015, 2, 71.
  87. Haklay, M.; Weber, P. OpenStreetMap: User-generated street maps. IEEE Pervasive Comput. 2008, 7, 12–18.
  88. Benediktsson, J.A.; Palmason, J.A.; Sveinsson, J.R. Classification of hyperspectral data from urban areas based on extended morphological profiles. IEEE Trans. Geosci. Remote Sens. 2005, 43, 480–491.
  89. Ghamisi, P.; Benediktsson, J.A.; Ulfarsson, M.O. Spectral–spatial classification of hyperspectral images based on hidden Markov random fields. IEEE Trans. Geosci. Remote Sens. 2014, 52, 2565–2574.
  90. Liao, W.; Chanussot, J.; Dalla Mura, M.; Huang, X.; Bellens, R.; Gautama, S.; Philips, W. Taking Optimal Advantage of Fine Spatial Resolution: Promoting partial image reconstruction for the morphological analysis of very-high-resolution images. IEEE Geosci. Remote Sens. Mag. 2017, 5, 8–28.
  91. Rasti, B.; Ghamisi, P.; Gloaguen, R. Hyperspectral and LiDAR fusion using extinction profiles and total variation component analysis. IEEE Trans. Geosci. Remote Sens. 2017, 55, 3997–4007.
  92. Zhu, Z.; Woodcock, C.E.; Rogan, J.; Kellndorfer, J. Assessment of spectral, polarimetric, temporal, and spatial dimensions for urban and peri-urban land cover classification using Landsat and SAR data. Remote Sens. Environ. 2012, 117, 72–82.
  93. Wurm, M.; Taubenböck, H.; Weigand, M.; Schmitt, A. Slum mapping in polarimetric SAR data using spatial features. Remote Sens. Environ. 2017, 194, 190–204.
  94. Chang, C.C.; Lin, C.J. LIBSVM: A library for support vector machines. ACM Trans. Intell. Syst. Technol. 2011, 2, 27.
  95. Banerjee, M.; Capozzoli, M.; McSweeney, L.; Sinha, D. Beyond kappa: A review of interrater agreement measures. Can. J. Stat. 1999, 27, 3–23.
  96. Hong, D.; Yokoya, N.; Xu, J.; Zhu, X. Joint and progressive learning from high-dimensional data for multi-label classification. In Proceedings of the European Conference on Computer Vision (ECCV); Springer: Cham, Switzerland, 2018; pp. 478–493.
  97. Rodriguez, A.; Laio, A. Clustering by fast search and find of density peaks. Science 2014, 344, 1492–1496.
  98. Chazal, F.; Guibas, L.J.; Oudot, S.Y.; Skraba, P. Persistence-based clustering in Riemannian manifolds. J. ACM 2013, 60, 41.
Figure 1. Frameworks of manifold learning fusion techniques. Left: data alignment fusion; right: manifold alignment fusion. The blue arrow indicates the fusion step, the yellow arrow indicates where the manifold modeling takes place, and the black arrow indicates the feature extraction. X_i: the ith data source; A or W: the mathematical modeling of a manifold; Y: the fused feature; f: the learned projection; the tilde (~) denotes a fusion of a certain form.
Figure 2. The Berlin data set. From left to right: RGB components of the simulated EnMAP data; the Sentinel-1 dual-Pol data; the training data; the testing data.
Figure 3. The Augsburg data set. From left to right, top to bottom: RGB components of the hyperspectral image; the Sentinel-1 dual-Pol data; the training data; the testing data.
Figure 4. Comparison of the classification accuracies of the different classifiers applied to the Berlin data set. Each chart corresponds to one classifier. The bottom-right chart summarizes the overall accuracies obtained by applying the five classifiers to the fused data of each of the 16 selected algorithms. The y-axis reports the overall accuracy in percentage (%). 'SU', 'SE', and 'UN' stand for supervised, semi-supervised, and unsupervised, respectively.
Figure 5. Visualization of the classification maps and the ground truth for the Berlin data set. The 16 classification maps are obtained by applying CCF to the fused data of the 16 algorithms. The maps achieved by the manifold alignment fusion methods are more accurate than those achieved by the data alignment fusion methods.
Figure 6. Comparison of the classification accuracies of the different classifiers applied to the Augsburg data set. Each chart corresponds to one classifier. The bottom-right chart summarizes the overall accuracies obtained by applying the five classifiers to the fused data of each of the 16 algorithms. The y-axis reports the overall accuracy in percentage (%). 'SU', 'SE', and 'UN' stand for supervised, semi-supervised, and unsupervised, respectively.
Figure 7. Visualization of the classification maps and the ground truth for the Augsburg data set. The 16 classification maps are obtained by applying CCF to the fused data of the 16 algorithms. The maps achieved by the manifold alignment fusion methods, especially the MIMA-based ones, are more accurate than those achieved by the data alignment fusion methods.
Table 1. The notations used in this article.

Notation | Explanation
X_i | the ith data source
M_i | the manifold of X_i
x_i^p | the pth instance of X_i
x_i^q | the qth instance of X_i
m_i | the number of dimensions of X_i
n_i | the number of instances of X_i
E_i | the labeled subset of X_i
n_i^* | the number of instances of E_i, n_i^* < n_i
Y_i | the data representation of M_i
y_i^p | the pth instance of Y_i
y_i^q | the qth instance of Y_i
l_i | the number of dimensions of Y_i
f | the learned projection, Y = f^T X
A | the binary (connection) matrix that models a manifold
W | the weight matrix that models a manifold
σ | the filtering parameter of the weight matrix
D | the degree matrix of a graph
D | the pairwise distance matrix
L | the Laplacian matrix of a graph
L | the loss function
λ | the eigenvalue of the generalized eigenvalue decomposition
d_n | the dimension of the underlying manifold
k | the number of local neighbors
K | the total number of data sources
μ | the weighting of the topology structure in MA
F | the filter function in MAPPER
b | the number of bins in MAPPER
c | the overlap rate in MAPPER
~ | a fusion of a certain form
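To make the notation above concrete, the following minimal sketch (ours, not the implementation used in the experiments) builds a heat-kernel weight matrix W over a k-nearest-neighbor graph, forms the degree matrix D and graph Laplacian L = D − W, and solves the generalized eigenvalue problem X L X^T f = λ X D X^T f that underlies LPP-style projections [52]. The function name and the small ridge term are our own assumptions, the latter added only for numerical stability.

import numpy as np
from scipy.spatial.distance import cdist
from scipy.linalg import eigh

def lpp_projection(X, k=10, d_n=20, sigma=1.0):
    """X: (m, n) data matrix with n instances; returns the (m, d_n) projection f."""
    n = X.shape[1]
    dist = cdist(X.T, X.T)                             # pairwise distance matrix D
    neighbors = np.argsort(dist, axis=1)[:, 1:k + 1]   # k nearest neighbors, skipping self
    W = np.zeros((n, n))                               # weight matrix W modeling the manifold
    for p in range(n):
        W[p, neighbors[p]] = np.exp(-dist[p, neighbors[p]] ** 2 / (2 * sigma ** 2))
    W = np.maximum(W, W.T)                             # symmetrize the k-NN graph
    D = np.diag(W.sum(axis=1))                         # degree matrix D
    L = D - W                                          # graph Laplacian L
    # generalized eigenvalue decomposition X L X^T f = lambda X D X^T f;
    # the ridge term is an assumed regularizer for numerical stability
    A = X @ L @ X.T
    B = X @ D @ X.T + 1e-9 * np.eye(X.shape[0])
    eigvals, eigvecs = eigh(A, B)                      # ascending eigenvalues
    f = eigvecs[:, :d_n]                               # d_n smallest eigenvectors
    return f                                           # embed with Y = f.T @ X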
Table 2. Summary of the training data and the testing data for the scene of the city of Berlin.

Class | # of Training Samples | # of Testing Samples
Forest | 298 | 52,455
Residential area | 756 | 262,903
Industrial area | 296 | 17,462
Low plants | 344 | 56,683
Soil | 428 | 14,505
Allotment | 281 | 11,322
Commercial area | 560 | 20,909
Water | 153 | 5539
Table 3. Summary of the training data and the testing data for the scene of the city of Augsburg.

Class | # of Training Samples | # of Testing Samples
Forest | 200 | 4100
Residential area | 200 | 4100
Industrial area | 200 | 4100
Low plants | 200 | 4100
Soil | – | –
Allotment | 200 | 4100
Commercial area | 200 | 4100
Water | 200 | 4100
Table 4. Technical summary of the selected algorithms. 'SU', 'UN', and 'SE' represent the supervised, unsupervised, and semi-supervised learning strategy, respectively. W and A represent the weight matrix and the connection matrix, respectively. The hyperparameter set {k, d_n, μ, b} denotes the number of neighbors, the number of dimensions, the topology weighting parameter, and the number of bins, respectively.

# | Algorithm | Data | Learning Strategy | Fusion Concept | Manifold | Hyperparameters
1 | POL | POL | – | – | – | –
2 | HSI | HSI | – | – | – | –
3 | HSI+POL | HSI+POL | – | Concatenation | – | –
4 | LPP | HSI+POL | UN | data alignment | W | {k, d_n}
5 | LPP_SU | HSI+POL | SU | data alignment | W | {k, d_n}
6 | LPP_SE | HSI+POL | SE | data alignment | W | {k, d_n}
7 | GGF | HSI+POL | UN | data alignment | W | {k, d_n}
8 | GGF_SU | HSI+POL | SU | data alignment | W | {k, d_n}
9 | GGF_SE | HSI+POL | SE | data alignment | W | {k, d_n}
10 | MA | HSI+POL | SE | manifold alignment | A | {μ, k, d_n}
11 | MA_UN | HSI+POL | UN | constrained dimension reduction | A | {k, d_n}
12 | MA_SU | HSI+POL | SU | manifold alignment | A | {d_n}
13 | MIMA | HSI+POL | SE | manifold alignment | A | {μ, b, d_n}
14 | MIMA_UN | HSI+POL | UN | constrained dimension reduction | A | {b, d_n}
15 | MIMA-D | HSI+POL | SE | manifold alignment | A | {μ, b, d_n}
16 | MIMA-D_UN | HSI+POL | UN | constrained dimension reduction | A | {b, d_n}
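The two fusion concepts in Table 4 can be contrasted schematically in code. The sketch below is illustrative only and is not the exact formulation of any compared algorithm: the function names are hypothetical, and the manifold-alignment variant solves a simplified, unnormalized joint eigenproblem, whereas the actual methods solve constrained generalized eigenproblems. It shows the essential difference: data alignment fuses in the data domain (concatenation, then one embedding), while manifold alignment couples one graph Laplacian per modality with a correspondence matrix C, weighted by μ.

import numpy as np
from scipy.linalg import block_diag, eigh

def data_alignment_fusion(X_hsi, X_pol, embed):
    """Fuse in the data domain: concatenate features, then learn one embedding."""
    X = np.vstack([X_hsi, X_pol])          # (m_1 + m_2, n) stacked data
    return embed(X)                        # e.g., the lpp_projection sketch above

def manifold_alignment_fusion(X_hsi, X_pol, L1, L2, C, mu=1.0, d_n=20):
    """Fuse in the manifold domain: one graph per modality plus a coupling term.

    L1, L2: per-modality graph Laplacians; C: (n_1, n_2) correspondence
    weights (e.g., derived from shared labels); mu: the topology weighting.
    """
    # Laplacian of the bipartite correspondence graph between the modalities
    L_cross = np.block([[np.diag(C.sum(axis=1)), -C],
                        [-C.T, np.diag(C.sum(axis=0))]])
    L_joint = mu * block_diag(L1, L2) + L_cross
    Z = block_diag(X_hsi, X_pol)                # block-diagonal data matrix
    eigvals, eigvecs = eigh(Z @ L_joint @ Z.T)  # simplified, unnormalized eigenproblem
    f = eigvecs[:, :d_n]                        # joint projection for both modalities
    return f.T @ Z                              # aligned d_n-dimensional features Y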
Table 5. Quantitative performance comparison on the Berlin data, in terms of class-specific accuracy, kappa coefficient, average accuracy (AA), overall accuracy (OA), and mean overall accuracy (mean OA). The mean OA is calculated from the OAs achieved by the five classifiers. The listed results are obtained after hyperparameter tuning; the hyperparameter names of each algorithm are given next to its name, and the tuned values are listed per classifier. Kappa coefficients, average accuracies, and overall accuracies larger than 0.66, 65%, and 79%, respectively, are marked in bold, as are the three highest mean overall accuracies.

Algorithm | Parameter | Classifier | Forest | Residential Area | Industrial Area | Low Plants | Soil | Allotment | Commercial Area | Water | Kappa | AA | OA | Mean OA
POL | – | 1NN | 40.64 | 57.67 | 25.14 | 32.94 | 56.88 | 32.19 | 30.37 | 33.85 | 0.2927 | 38.71 | 48.92 | 56.76
 | – | LSVM | 33.02 | 77.92 | 13.85 | 36.46 | 72.6 | 40.64 | 32.23 | 37.68 | 0.4012 | 43.05 | 60.94
 | – | KSVM | 34.36 | 69.94 | 20.38 | 30.61 | 68.27 | 38.62 | 32.79 | 42.82 | 0.3566 | 42.23 | 55.76
 | – | RF | 35.61 | 72.3 | 25.63 | 28.66 | 66.38 | 43.9 | 37.87 | 45.39 | 0.3789 | 44.47 | 57.61
 | – | CCF | 37.96 | 76.87 | 24.87 | 30.69 | 64.72 | 38.82 | 36.88 | 41.34 | 0.4035 | 44.02 | 60.56
HSI | – | 1NN | 68.78 | 63.87 | 30.01 | 57.58 | 90.73 | 55.76 | 32.86 | 73.89 | 0.4599 | 59.18 | 61.64 | 70.14
 | – | LSVM | 69.2 | 82.5 | 18.55 | 65.7 | 79.06 | 53.59 | 44.77 | 72.81 | 0.585 | 60.77 | 73.48
 | – | KSVM | 72.58 | 78.68 | 35.43 | 63.74 | 74.18 | 56.87 | 31.58 | 74.29 | 0.5625 | 60.92 | 71.34
 | – | RF | 66.65 | 79.64 | 30.25 | 57.44 | 75.33 | 47.77 | 35.17 | 78.1 | 0.5437 | 58.79 | 70.21
 | – | CCF | 71 | 81.86 | 31.54 | 68.95 | 81.36 | 53.47 | 38.35 | 74.81 | 0.597 | 62.67 | 74.03
HSI+POL | – | 1NN | 64.83 | 69.7 | 32.89 | 65.27 | 83.81 | 54.77 | 34.59 | 63.51 | 0.4975 | 58.67 | 65.44 | 73.73
 | – | LSVM | 66.57 | 86.24 | 30.48 | 75.3 | 79.61 | 53.52 | 40.12 | 76.11 | 0.6329 | 63.49 | 76.93
 | – | KSVM | 67.27 | 80.93 | 41.78 | 64.02 | 72.37 | 57.58 | 33 | 74.6 | 0.5764 | 61.44 | 72.36
 | – | RF | 63.46 | 84.99 | 37.79 | 74.38 | 82.72 | 56.26 | 40.61 | 82.09 | 0.6266 | 65.29 | 76.26
 | – | CCF | 71.51 | 86.27 | 34.05 | 72.03 | 83.24 | 56.3 | 44.33 | 77.7 | 0.6445 | 65.68 | 77.67
LPP {k, d_n} | {60, 15} | 1NN | 69.53 | 69.07 | 34.56 | 66.09 | 80.27 | 57.51 | 32.18 | 64.56 | 0.5009 | 59.22 | 65.65 | 74.18
 | {20, 30} | LSVM | 70.1 | 87.05 | 32.52 | 70.97 | 79.26 | 58.88 | 36.48 | 72.61 | 0.6354 | 63.48 | 77.27
 | {30, 25} | KSVM | 71.19 | 85.77 | 41.43 | 70.95 | 82.36 | 53.97 | 30.77 | 72.68 | 0.6297 | 63.64 | 76.69
 | {10, 20} | RF | 56.2 | 85.87 | 28.9 | 69.28 | 76 | 49.9 | 38.64 | 67.07 | 0.5874 | 58.98 | 74.25
 | {10, 15} | CCF | 68.41 | 86.68 | 34.35 | 71.96 | 80.07 | 54.07 | 37.54 | 75.93 | 0.6325 | 63.63 | 77.04
LPP_SU {d_n} | {10} | 1NN | 63.86 | 67.04 | 34.79 | 71.42 | 79.06 | 54.39 | 28.17 | 72.32 | 0.4817 | 58.88 | 64.25 | 71.26
 | {30} | LSVM | 64.41 | 81.51 | 34.12 | 70.1 | 81.56 | 56.74 | 29.1 | 71.38 | 0.578 | 61.11 | 72.9
 | {50} | KSVM | 67.06 | 81.6 | 43.96 | 72.17 | 82.34 | 57.81 | 25.04 | 69.69 | 0.5908 | 62.46 | 73.77
 | {25} | RF | 64.71 | 80.89 | 30.98 | 65.55 | 72.26 | 55.27 | 32.9 | 69.36 | 0.5596 | 58.99 | 71.67
 | {25} | CCF | 64.25 | 81.99 | 33.72 | 74.47 | 75.59 | 55.89 | 33.77 | 69.76 | 0.5883 | 61.18 | 73.7
LPP_SE {k, d_n} | {80, 10} | 1NN | 68.22 | 72.17 | 38.92 | 73.21 | 73.43 | 58.09 | 30.65 | 74.02 | 0.5327 | 61.09 | 68.26 | 73.52
 | {120, 40} | LSVM | 64.68 | 85.37 | 38.15 | 74.36 | 79.63 | 59.18 | 29.75 | 77.41 | 0.6194 | 63.57 | 76.04
 | {120, 40} | KSVM | 69.02 | 81.93 | 41.67 | 70.74 | 77 | 59.76 | 30.77 | 76.17 | 0.6001 | 63.38 | 74.15
 | {120, 30} | RF | 66.96 | 83.15 | 29.66 | 72.12 | 66.45 | 56.39 | 34.17 | 74 | 0.5919 | 60.36 | 74.03
 | {120, 25} | CCF | 64.86 | 85.09 | 34.63 | 71.85 | 66.83 | 56.05 | 34.33 | 75.05 | 0.6044 | 61.09 | 75.12
GGF {k, d_n} | {20, 30} | 1NN | 69.28 | 71.37 | 36.65 | 66.54 | 83.51 | 56.94 | 31.34 | 63.82 | 0.5186 | 59.93 | 67.17 | 75.31
 | {90, 30} | LSVM | 68.11 | 88.76 | 34.14 | 76.11 | 79.29 | 54.93 | 36.54 | 75.14 | 0.655 | 64.13 | 78.7
 | {20, 30} | KSVM | 72.18 | 84.64 | 37.08 | 70.29 | 81.88 | 57.25 | 34.49 | 74.44 | 0.6254 | 64.03 | 76.15
 | {10, 20} | RF | 68.97 | 86.55 | 29.13 | 70.39 | 81.23 | 49.45 | 41.85 | 62.88 | 0.6242 | 61.31 | 76.58
 | {10, 25} | CCF | 70.53 | 87.51 | 31.29 | 76.34 | 70.86 | 51.95 | 42.06 | 67.95 | 0.6448 | 62.31 | 77.98
GGF_SU {d_n} | {10} | 1NN | 65.57 | 69.99 | 37.73 | 68.89 | 80.13 | 51.96 | 28.71 | 76.62 | 0.5013 | 59.95 | 66.05 | 71.59
 | {50} | LSVM | 63.6 | 82.87 | 36.49 | 69.8 | 82.34 | 56.58 | 29.62 | 76.22 | 0.5906 | 62.19 | 73.77
 | {50} | KSVM | 69.99 | 80.63 | 46.43 | 60.43 | 77.21 | 53.92 | 25.15 | 78.77 | 0.5695 | 61.57 | 71.98
 | {50} | RF | 62.01 | 81.42 | 32.09 | 67.3 | 74.08 | 53.3 | 38.83 | 65.17 | 0.5678 | 59.28 | 72.17
 | {40} | CCF | 65.54 | 83.4 | 31.38 | 70.58 | 72.26 | 51.15 | 37.02 | 68.24 | 0.5906 | 59.95 | 74
GGF_SE {k, d_n} | {10, 15} | 1NN | 66.96 | 70.63 | 36.07 | 69.65 | 80.62 | 55.65 | 29.49 | 76.35 | 0.5119 | 60.68 | 66.77 | 72.40
 | {120, 45} | LSVM | 63.06 | 83.52 | 37.69 | 73.01 | 81.94 | 55.48 | 29.11 | 79.87 | 0.6007 | 62.96 | 74.54
 | {40, 40} | KSVM | 70.19 | 82.26 | 41.52 | 67.92 | 80.35 | 54.38 | 31.22 | 82.51 | 0.5988 | 63.79 | 74.19
 | {20, 40} | RF | 65.27 | 80.56 | 34.49 | 67.01 | 75.85 | 54.57 | 38.72 | 66.98 | 0.5716 | 60.43 | 72.21
 | {70, 30} | CCF | 60.15 | 83.94 | 35.07 | 74.18 | 74.3 | 51.22 | 35.51 | 68.64 | 0.5942 | 60.37 | 74.29
MA {μ, k, d_n} | {2, 90, 10} | 1NN | 69.83 | 73.8 | 38 | 75.68 | 69.64 | 60.09 | 29.41 | 72.27 | 0.5474 | 61.09 | 69.54 | 76.40
 | {2.5, 20, 25} | LSVM | 65.49 | 86.97 | 37.63 | 79.08 | 80.06 | 55.63 | 34.46 | 73.37 | 0.6445 | 64.09 | 77.77
 | {2.5, 90, 35} | KSVM | 69.38 | 85.81 | 37.49 | 78.3 | 80.54 | 55.42 | 33.29 | 73.21 | 0.6405 | 64.18 | 77.39
 | {2, 10, 50} | RF | 64.5 | 90.08 | 30.25 | 77.68 | 65.58 | 49.41 | 36.85 | 67.95 | 0.644 | 60.29 | 78.45
 | {2, 10, 20} | CCF | 66.66 | 89.12 | 33.05 | 79.51 | 68.95 | 54.91 | 39.47 | 71.01 | 0.6557 | 62.84 | 78.89
MA_UN {k, d_n} | {120, 15} | 1NN | 68.46 | 69.61 | 32.87 | 72.87 | 78.51 | 54.88 | 34.76 | 67.95 | 0.5159 | 59.99 | 66.68 | 75.13
 | {90, 30} | LSVM | 66.86 | 87.58 | 35.97 | 77.55 | 78.59 | 55.44 | 36.3 | 76.15 | 0.649 | 64.3 | 78.1
 | {40, 50} | KSVM | 70.55 | 85.61 | 36.23 | 74.18 | 79.83 | 57.57 | 35.55 | 73.14 | 0.6346 | 64.08 | 76.97
 | {100, 30} | RF | 58.91 | 87.37 | 26.35 | 69.77 | 80.7 | 53.14 | 41.94 | 60.71 | 0.6079 | 59.86 | 75.74
 | {30, 30} | CCF | 67 | 88.05 | 33.05 | 74.11 | 81.85 | 55 | 41.91 | 70.52 | 0.6467 | 63.94 | 78.14
MA_SU {d_n} | {5} | 1NN | 69.88 | 71.34 | 34.87 | 68.69 | 71.01 | 57.88 | 32.38 | 73.52 | 0.5199 | 59.94 | 67.21 | 75
 | {50} | LSVM | 67.56 | 86.73 | 38.76 | 79.67 | 77.21 | 56.87 | 32.27 | 75.45 | 0.6457 | 64.31 | 77.85
 | {50} | KSVM | 71.6 | 83.96 | 35.72 | 75.92 | 61.57 | 59.59 | 37.1 | 72.65 | 0.6204 | 62.26 | 75.84
 | {50} | RF | 60.53 | 87.82 | 33.22 | 77.13 | 70.16 | 52.42 | 38.82 | 63.66 | 0.6242 | 60.47 | 76.94
 | {50} | CCF | 64.09 | 88.37 | 30.57 | 76.73 | 62.56 | 51.86 | 36.99 | 59.9 | 0.6257 | 58.89 | 77.14
MIMA {μ, b, d_n} | {1, 15, 5} | 1NN | 69.91 | 70.2 | 33.39 | 69.63 | 61.94 | 53.49 | 35.07 | 68.62 | 0.5055 | 57.78 | 66.26 | 76.22
 | {1, 15, 15} | LSVM | 67.76 | 84.97 | 36.22 | 78.36 | 79.08 | 57.74 | 38 | 70.25 | 0.6328 | 64.05 | 76.85
 | {1, 15, 15} | KSVM | 71.06 | 84.24 | 41.01 | 76.11 | 69.87 | 55.82 | 32.97 | 68.97 | 0.6233 | 62.51 | 76.11
 | {1.5, 25, 40} | RF | 65.1 | 90.31 | 32.54 | 80 | 82.77 | 50.79 | 35.08 | 71.01 | 0.6642 | 63.45 | 79.6
 | {2, 25, 20} | CCF | 70.86 | 88.06 | 36.54 | 80.42 | 76.88 | 57.21 | 39.61 | 73.21 | 0.667 | 65.35 | 79.36
MIMA_UN {b, d_n} | {10, 20} | 1NN | 72.57 | 68.39 | 35.96 | 70.18 | 79.27 | 62.58 | 30.73 | 67.41 | 0.513 | 60.89 | 66.25 | 75.85
 | {10, 35} | LSVM | 68.21 | 88.59 | 36.62 | 74.6 | 80.79 | 55.87 | 29.86 | 76.08 | 0.6495 | 63.83 | 78.29
 | {10, 35} | KSVM | 71.78 | 87.1 | 36.85 | 73.13 | 82.31 | 58.05 | 31.79 | 73.14 | 0.6449 | 64.27 | 77.81
 | {55, 30} | RF | 67.92 | 88.44 | 27.36 | 77.22 | 81.32 | 50.9 | 35 | 61.04 | 0.6417 | 61.15 | 78.08
 | {30, 20} | CCF | 71.06 | 88.19 | 29.72 | 77.55 | 79.81 | 55.71 | 39.99 | 69.67 | 0.658 | 63.96 | 78.86
MIMA-D {μ, b, d_n} | {1.5, 30, 15} | 1NN | 71.31 | 72.3 | 35.31 | 74.51 | 76.66 | 57.37 | 33.48 | 71.84 | 0.5423 | 61.6 | 68.92 | 76.75
 | {1.5, 45, 20} | LSVM | 67.59 | 86.85 | 36.8 | 81.07 | 78.3 | 56.4 | 38.97 | 75.88 | 0.6549 | 65.23 | 78.38
 | {2.5, 55, 30} | KSVM | 70.01 | 85.33 | 36.79 | 78.84 | 78.52 | 56.83 | 36.44 | 76.08 | 0.6425 | 64.86 | 77.37
 | {1, 30, 30} | RF | 67.02 | 89.85 | 33.09 | 80.46 | 83.21 | 50.61 | 37.95 | 74.27 | 0.6698 | 64.56 | 79.81
 | {1, 45, 30} | CCF | 68.91 | 89.18 | 34.79 | 78.63 | 75.48 | 51.74 | 39.85 | 69.45 | 0.6628 | 63.5 | 79.28
MIMA-D_UN {b, d_n} | {55, 15} | 1NN | 72.57 | 68.39 | 35.96 | 70.18 | 79.27 | 62.58 | 30.73 | 67.41 | 0.513 | 60.89 | 66.25 | 75.52
 | {55, 25} | LSVM | 68.21 | 88.59 | 36.62 | 74.6 | 80.79 | 55.87 | 29.86 | 76.08 | 0.6495 | 63.83 | 78.29
 | {40, 20} | KSVM | 71.78 | 87.1 | 36.85 | 73.13 | 82.31 | 58.05 | 31.79 | 73.14 | 0.6449 | 64.27 | 77.81
 | {45, 30} | RF | 67.92 | 88.44 | 27.36 | 77.22 | 81.32 | 50.9 | 35 | 61.04 | 0.6417 | 61.15 | 78.08
 | {45, 25} | CCF | 71.06 | 88.19 | 29.72 | 77.55 | 79.81 | 55.71 | 39.99 | 69.67 | 0.658 | 63.96 | 78.86
Table 6. Quantitative performance comparison on the Augsburg data, in terms of class-specific accuracy, kappa coefficient, average accuracy (AA), overall accuracy (OA), and mean overall accuracy (mean OA). The mean OA is calculated from the OAs achieved by the five classifiers. The listed results are obtained after hyperparameter tuning; the hyperparameter names of each algorithm are given next to its name, and the tuned values are listed per classifier. Kappa coefficients, average accuracies, and overall accuracies larger than 0.56, 62.5%, and 62.5%, respectively, are marked in bold, as are the three highest mean overall accuracies.

Algorithm | Parameter | Classifier | Forest | Residential Area | Industrial Area | Low Plants | Allotment | Commercial Area | Water | Kappa | AA | OA | Mean OA
POL | – | 1NN | 64 | 35.88 | 38.8 | 55.02 | 22.54 | 38.9 | 18.66 | 0.2897 | 39.11 | 39.11 | 48.21
 | – | LSVM | 86.93 | 46.44 | 39.15 | 73.17 | 25.37 | 44.29 | 21.8 | 0.3952 | 48.16 | 48.16
 | – | KSVM | 86.51 | 64.49 | 31.41 | 81.98 | 22.39 | 41.98 | 19.12 | 0.4131 | 49.7 | 49.7
 | – | RF | 81.88 | 63.44 | 47.76 | 88.46 | 28.88 | 38.71 | 14.63 | 0.4396 | 51.97 | 51.97
 | – | CCF | 82.29 | 61.85 | 47.8 | 88.37 | 30.34 | 38.07 | 16.1 | 0.4414 | 52.12 | 52.12
HSI | – | 1NN | 27.9 | 52.49 | 61.1 | 78.2 | 60.66 | 24.9 | 55.24 | 0.4341 | 51.5 | 51.5 | 51.33
 | – | LSVM | 25.44 | 50.22 | 75.93 | 67.46 | 38.32 | 15.15 | 57.93 | 0.3841 | 47.21 | 47.21
 | – | KSVM | 31.2 | 65.2 | 70.71 | 86.37 | 55.98 | 20.8 | 54.63 | 0.4748 | 54.98 | 54.98
 | – | RF | 25.59 | 58.29 | 70.29 | 84.34 | 40.41 | 15.98 | 52.98 | 0.4131 | 49.7 | 49.7
 | – | CCF | 27.29 | 64.56 | 75.71 | 84.68 | 48.29 | 16.54 | 55.66 | 0.4546 | 53.25 | 53.25
HSI+POL | – | 1NN | 34.76 | 58.17 | 55.93 | 84.56 | 57.73 | 34.9 | 54.88 | 0.4682 | 54.42 | 54.42 | 56.71
 | – | LSVM | 31 | 65.95 | 73.29 | 83.85 | 36.9 | 25.07 | 42.85 | 0.4315 | 51.28 | 51.28
 | – | KSVM | 40.59 | 67.83 | 67.07 | 92.59 | 45.24 | 27.1 | 55.78 | 0.4937 | 56.6 | 56.6
 | – | RF | 61.27 | 73.88 | 70.1 | 94.98 | 47.51 | 25.63 | 59.17 | 0.5542 | 61.79 | 61.79
 | – | CCF | 46.07 | 75.63 | 78.05 | 95.51 | 58.07 | 18.49 | 44.22 | 0.5267 | 59.44 | 59.44
LPP {k, d_n} | {10, 40} | 1NN | 44.9 | 60.61 | 53.29 | 86.56 | 61.37 | 34.76 | 56.32 | 0.4963 | 56.83 | 56.83 | 57.42
 | {20, 20} | LSVM | 28.17 | 64.93 | 76.63 | 81.54 | 38.27 | 17.88 | 53.93 | 0.4356 | 51.62 | 51.62
 | {40, 50} | KSVM | 40.98 | 67.98 | 73.49 | 92.32 | 45.49 | 22.68 | 53.66 | 0.4943 | 56.66 | 56.66
 | {10, 30} | RF | 73.66 | 66.15 | 65.8 | 89.54 | 51.24 | 25.78 | 55.17 | 0.5456 | 61.05 | 61.05
 | {10, 35} | CCF | 59.63 | 70.71 | 72.8 | 92.2 | 51.9 | 22.78 | 56.51 | 0.5442 | 60.93 | 60.93
LPP_SU {d_n} | {5} | 1NN | 31.93 | 55.83 | 56.95 | 78.51 | 49.07 | 33.98 | 42.76 | 0.415 | 49.86 | 49.86 | 52.97
 | {10} | LSVM | 40.85 | 63.1 | 63.29 | 87.46 | 49.17 | 32.61 | 36.05 | 0.4542 | 53.22 | 53.22
 | {40} | KSVM | 54.24 | 63.93 | 66.32 | 87.2 | 45.05 | 28.49 | 29.41 | 0.4577 | 53.52 | 53.52
 | {35} | RF | 44.46 | 60.93 | 62.78 | 90.07 | 44.15 | 30.95 | 41.88 | 0.4587 | 53.6 | 53.6
 | {35} | CCF | 52.07 | 62.15 | 64.66 | 90.17 | 44.24 | 28.88 | 40.51 | 0.4711 | 54.67 | 54.67
LPP_SE {k, d_n} | {20, 45} | 1NN | 49.76 | 59.15 | 53 | 85.05 | 60.98 | 40.05 | 55.15 | 0.5052 | 57.59 | 57.59 | 56.06
 | {10, 10} | LSVM | 43.49 | 65.51 | 77.22 | 85.07 | 40.76 | 20.8 | 41.05 | 0.4565 | 53.41 | 53.41
 | {120, 35} | KSVM | 37.66 | 71.27 | 75.22 | 93.22 | 48.44 | 20.54 | 45.49 | 0.4864 | 55.98 | 55.98
 | {30, 15} | RF | 27.17 | 63.22 | 72.2 | 91.78 | 54.46 | 26.54 | 55.66 | 0.485 | 55.86 | 55.86
 | {80, 40} | CCF | 47.2 | 66.46 | 73.22 | 90.93 | 56.07 | 23.27 | 45.17 | 0.5039 | 57.47 | 57.47
GGF {k, d_n} | {20, 50} | 1NN | 41.37 | 57.22 | 49.68 | 82.63 | 61.61 | 38.2 | 56.32 | 0.4784 | 55.29 | 55.29 | 55.81
 | {30, 15} | LSVM | 29.17 | 63.76 | 74.83 | 82.12 | 36.54 | 19.71 | 56.71 | 0.438 | 51.83 | 51.83
 | {20, 15} | KSVM | 34.51 | 69.22 | 73.71 | 92.34 | 45.32 | 23.9 | 59.61 | 0.4977 | 56.94 | 56.94
 | {40, 45} | RF | 60.22 | 65.61 | 61.29 | 89.73 | 46.46 | 31.78 | 56.56 | 0.5194 | 58.81 | 58.81
 | {40, 35} | CCF | 47.9 | 70.9 | 72.22 | 92.44 | 43.34 | 23.05 | 55 | 0.5081 | 57.84 | 57.84
GGF_SU {d_n} | {5} | 1NN | 31.93 | 55.83 | 56.95 | 78.51 | 49.07 | 33.98 | 42.76 | 0.415 | 49.86 | 49.86 | 53.36
 | {10} | LSVM | 40.85 | 63.15 | 63.29 | 87.46 | 49.2 | 32.61 | 36.05 | 0.4543 | 53.23 | 53.23
 | {35} | KSVM | 51.17 | 64.07 | 65.46 | 86.78 | 44.37 | 30.9 | 31.49 | 0.4571 | 53.46 | 53.46
 | {45} | RF | 44.93 | 61.05 | 60.15 | 89.93 | 42.88 | 32.68 | 45.02 | 0.4611 | 53.8 | 53.8
 | {45} | CCF | 51.2 | 62.61 | 66.93 | 90.24 | 46.78 | 28.76 | 48.59 | 0.4918 | 56.44 | 56.44
GGF_SE {k, d_n} | {120, 10} | 1NN | 44.8 | 58.17 | 63.32 | 84.54 | 56.05 | 32.95 | 46.83 | 0.4778 | 55.24 | 55.24 | 56.19
 | {20, 30} | LSVM | 53.02 | 66.54 | 66.95 | 84.61 | 47.27 | 29.41 | 31.88 | 0.4661 | 54.24 | 54.24
 | {10, 50} | KSVM | 67.54 | 68.24 | 66.8 | 87.12 | 41.32 | 23.98 | 24.41 | 0.4657 | 54.2 | 54.2
 | {90, 15} | RF | 42.88 | 64.9 | 68.07 | 92.56 | 56.68 | 26.63 | 56.54 | 0.5138 | 58.32 | 58.32
 | {120, 40} | CCF | 47 | 65.83 | 67.88 | 92.51 | 57.29 | 27.07 | 55.15 | 0.5212 | 58.96 | 58.96
MA {μ, k, d_n} | {2, 70, 35} | 1NN | 30.88 | 58.68 | 61.39 | 82.05 | 77.27 | 27.78 | 54.02 | 0.4868 | 56.01 | 56.01 | 57.52
 | {2.5, 60, 35} | LSVM | 26.22 | 66.63 | 78.2 | 72.44 | 42.9 | 16.1 | 55.27 | 0.4296 | 51.11 | 51.11
 | {2, 70, 25} | KSVM | 31.44 | 69.54 | 78.8 | 93 | 59.05 | 17.73 | 53.76 | 0.5055 | 57.62 | 57.62
 | {1, 110, 45} | RF | 75.34 | 72.15 | 64.66 | 91.61 | 48.88 | 30.12 | 43.24 | 0.5433 | 60.86 | 60.86
 | {1, 110, 45} | CCF | 65.85 | 73.24 | 72.61 | 93.61 | 55 | 23.8 | 50.05 | 0.557 | 62.02 | 62.02
MA_UN {k, d_n} | {100, 25} | 1NN | 31.61 | 56.85 | 57.29 | 80.71 | 73.98 | 26.61 | 54.83 | 0.4698 | 54.55 | 54.55 | 56.54
 | {100, 30} | LSVM | 26.51 | 67.12 | 76.8 | 73.78 | 41.07 | 15.71 | 55.83 | 0.428 | 50.98 | 50.98
 | {100, 20} | KSVM | 32.56 | 68.2 | 74 | 89.29 | 58.63 | 18.88 | 55.32 | 0.4948 | 56.7 | 56.7
 | {20, 25} | RF | 75.15 | 67.93 | 63.17 | 87.93 | 44.29 | 31 | 50.83 | 0.5338 | 60.04 | 60.04
 | {20, 40} | CCF | 75.27 | 69.07 | 60.95 | 89.83 | 50.07 | 32.41 | 45.56 | 0.5386 | 60.45 | 60.45
MA_SU {d_n} | {50} | 1NN | 26.71 | 52.78 | 61.15 | 80.22 | 69.93 | 26.07 | 54.46 | 0.4522 | 53.05 | 53.05 | 54.53
 | {50} | LSVM | 25.2 | 57.2 | 77.56 | 70.29 | 36.85 | 16.68 | 53.76 | 0.3959 | 48.22 | 48.22
 | {50} | KSVM | 28.68 | 60.68 | 74.83 | 87.9 | 56.2 | 17.46 | 50.39 | 0.4602 | 53.74 | 53.74
 | {50} | RF | 49.76 | 67.1 | 67.12 | 91.9 | 47.27 | 28.85 | 54.32 | 0.5105 | 58.05 | 58.05
 | {45} | CCF | 64.07 | 69.12 | 66.78 | 92.41 | 52.39 | 27.63 | 44.63 | 0.5284 | 59.58 | 59.58
MIMA {μ, b, d_n} | {2.5, 35, 35} | 1NN | 27.68 | 57.07 | 62.56 | 81.39 | 72.17 | 26.46 | 55.51 | 0.4714 | 54.69 | 54.69 | 58.01
 | {3, 25, 5} | LSVM | 23.61 | 71.93 | 78.63 | 79.98 | 44.29 | 13.76 | 54.51 | 0.4445 | 52.39 | 52.39
 | {1.5, 35, 15} | KSVM | 34.15 | 68.12 | 72.9 | 92.27 | 53.51 | 22.07 | 59.34 | 0.5039 | 57.48 | 57.48
 | {0.5, 40, 35} | RF | 66.22 | 76.88 | 65.51 | 92.8 | 47.78 | 26.27 | 59.02 | 0.5575 | 62.07 | 62.07
 | {0.5, 55, 40} | CCF | 76.78 | 77.49 | 65.12 | 92.73 | 50 | 28.78 | 53.15 | 0.5734 | 63.44 | 63.44
MIMA_UN {b, d_n} | {5, 35} | 1NN | 34.34 | 55.24 | 54.85 | 80.76 | 71.41 | 28.44 | 53.39 | 0.4641 | 54.06 | 54.06 | 56.56
 | {10, 50} | LSVM | 28.39 | 67.78 | 76.05 | 74.17 | 40.73 | 19.22 | 53.02 | 0.4323 | 51.34 | 51.34
 | {5, 30} | KSVM | 31.9 | 68.12 | 74.78 | 90.78 | 59.44 | 20.1 | 57.9 | 0.505 | 57.57 | 57.57
 | {20, 30} | RF | 58.95 | 66.54 | 71.76 | 89.68 | 51.68 | 25.24 | 42.68 | 0.5109 | 58.08 | 58.08
 | {15, 45} | CCF | 83.71 | 67.61 | 68.07 | 89.73 | 57.98 | 27.68 | 37.37 | 0.5536 | 61.74 | 61.74
MIMA-D {μ, b, d_n} | {3, 50, 35} | 1NN | 28.76 | 57.63 | 62.68 | 80.22 | 74.83 | 24.98 | 55.49 | 0.4743 | 54.94 | 54.94 | 56.5
 | {3, 40, 15} | LSVM | 25.27 | 67.44 | 78.46 | 73.29 | 39.85 | 15.41 | 52.17 | 0.4198 | 50.27 | 50.27
 | {3, 40, 15} | KSVM | 33.12 | 68.95 | 70.41 | 92.8 | 55.8 | 20.05 | 60.59 | 0.5029 | 57.39 | 57.39
 | {3, 35, 30} | RF | 52.54 | 72.27 | 73.56 | 92.24 | 49.83 | 24.29 | 47.71 | 0.5207 | 58.92 | 58.92
 | {2.5, 30, 40} | CCF | 55.51 | 73.66 | 72.98 | 92.85 | 53.61 | 24.05 | 54.22 | 0.5448 | 60.98 | 60.98
MIMA-D_UN {b, d_n} | {20, 30} | 1NN | 34.93 | 56.02 | 56.54 | 80.46 | 75.29 | 26.39 | 54.41 | 0.4734 | 54.86 | 54.86 | 60.29
 | {55, 5} | LSVM | 87.22 | 55.24 | 48 | 57.41 | 36.2 | 41.95 | 54.41 | 0.4674 | 54.35 | 54.35
 | {15, 30} | KSVM | 35.27 | 67.93 | 77.54 | 91.83 | 66.56 | 17.37 | 54.73 | 0.5187 | 58.75 | 58.75
 | {20, 50} | RF | 82.95 | 65.17 | 58.12 | 88.41 | 54.32 | 34.05 | 56.63 | 0.5661 | 62.81 | 62.81
 | {20, 30} | CCF | 78.54 | 72.29 | 65.85 | 92.63 | 49.88 | 26.68 | 53.56 | 0.5657 | 62.78 | 62.78
