Dimensionality Reduction of SPD Data Based on Riemannian Manifold Tangent Spaces and Isometry
Abstract
1. Introduction
- The SPD manifold is not a vector space and does not intrinsically carry the notions of metric and inner product, so traditional algorithms designed for Euclidean spaces are not feasible here. Applying Euclidean principles to analyze SPD manifolds, for example by directly using the Frobenius inner product to measure the similarity or dissimilarity between two SPD matrices, is not only unreasonable but also inadequate.
- If the original dimensionality of the SPD data is relatively high, the data inevitably contain a great deal of redundant information, which not only slows computation but may also degrade discrimination. In addition, high-dimensional spaces suffer from the well-known curse of dimensionality: the geometric volume of the space grows exponentially with the dimensionality, making the available data extremely sparse. Such sparsity is a problem for any method that requires statistical significance. Moreover, distance metrics carry little meaning in high-dimensional feature spaces; since most classifiers rely on the Euclidean or Mahalanobis distance, classification performance decreases as the dimensionality increases.
- (1) Using Riemannian manifold geometry, high-dimensional SPD data are mapped to the tangent space at the identity matrix, which makes linear operations possible while preserving the form and attributes of the original SPD data to the greatest extent.
- (2) The mapped data are embedded into a simple low-dimensional tangent space through a bilinear transformation, effectively avoiding problems caused by the curse of dimensionality, such as over-fitting and high computational cost.
- (3) The bilinear transformation is determined by the proposed isometric criterion, which maintains the distance relationships before and after DR. This process is unsupervised; no manual labeling is required, which greatly saves labor and time.
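The log map of contribution (1) can be illustrated with a short numerical sketch. The paper's Equation (7) is not reproduced in this excerpt, so the helper names `spd_log` and `spd_exp` below are our own; the computation shown (matrix logarithm and exponential via eigendecomposition) is the standard one for SPD matrices:

```python
import numpy as np

def spd_log(S):
    # Matrix logarithm of an SPD matrix via eigendecomposition:
    # S = U diag(w) U^T  =>  log(S) = U diag(log w) U^T.
    w, U = np.linalg.eigh(S)
    return (U * np.log(w)) @ U.T

def spd_exp(V):
    # Matrix exponential of a symmetric matrix (inverse of spd_log).
    w, U = np.linalg.eigh(V)
    return (U * np.exp(w)) @ U.T

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))
S = A @ A.T + 6 * np.eye(6)      # a random SPD matrix

V = spd_log(S)                   # image in the tangent space at identity

# The tangent image is symmetric: the tangent space of the SPD manifold
# at identity is a space of symmetric matrices, where linear operations
# (addition, scaling) are valid.
assert np.allclose(V, V.T)

# The exp map inverts the log map, so no information is lost.
assert np.allclose(spd_exp(V), S)
```

The round-trip check at the end is the sense in which the mapping "preserves the data form and attributes of the original SPD data".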
2. Notations and Preliminaries
2.1. Notations
2.2. Preliminaries
3. Related Works
4. The Proposed Method
4.1. The Isometric Transformation between SPD Manifolds and Their Tangent Spaces
4.2. The Bilinear Transformation between Tangent Spaces
4.3. Dimensionality Reduction of SPD Data Based on Tangent Spaces of SPD Manifolds and Isometry
4.3.1. Isometry Modeling
4.3.2. Objective Function
4.4. Solution to RMTSISOM-SPDDR
Algorithm 1. Procedures of RMTSISOM-SPDDR.
Input: N training samples, the target dimension d, and the maximum number of iterations maxIter.
Output: the bilinear transformation matrix.
1: Map the SPD data into the corresponding tangent space at identity using Equation (7);
2: Construct the Riemannian metric matrix using Equation (16);
3: Construct the inner product matrix using Equation (20);
4: Initialize the bilinear transformation matrix as the first d columns of the identity matrix;
5: for t = 1 : maxIter
Calculate the update matrix using Equation (32);
Perform eigen-decomposition on it to obtain the bilinear transformation matrix;
end for
6: Return the bilinear transformation matrix.
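Since Equations (16), (20) and (32) are not reproduced in this excerpt, the following is only a structural sketch of Algorithm 1's loop, not the paper's actual construction: `update_matrix` is a hypothetical stand-in for the matrix of Equation (32) (here, a simple second moment of the tangent data), while the initialization and the eigen-decomposition update follow steps 4 and 5 as stated:

```python
import numpy as np

def spd_log(S):
    # Eigendecomposition-based matrix logarithm, valid for SPD S.
    w, U = np.linalg.eigh(S)
    return (U * np.log(w)) @ U.T

def update_matrix(tangent_data, W):
    # HYPOTHETICAL stand-in for Equation (32): a symmetric PSD matrix
    # built from the tangent data and the current transformation W.
    return sum(V @ W @ W.T @ V for V in tangent_data)

rng = np.random.default_rng(0)
n, d, N, max_iter = 6, 2, 10, 5

# Step 1: map N random SPD samples into the tangent space at identity.
samples = []
for _ in range(N):
    A = rng.standard_normal((n, n))
    samples.append(spd_log(A @ A.T + n * np.eye(n)))

# Step 4: initialize W as the first d columns of the identity matrix.
W = np.eye(n)[:, :d]

# Step 5: alternate between building the update matrix and taking its
# top-d eigenvectors as the new bilinear transformation.
for _ in range(max_iter):
    M = update_matrix(samples, W)
    w, U = np.linalg.eigh(M)
    W = U[:, ::-1][:, :d]   # eigenvectors of the d largest eigenvalues

assert W.shape == (n, d)
```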
4.5. Complexity Analysis
5. Comparison with Other State-of-the-Art Algorithms
5.1. SSPDDR
5.2. PDDRSPD
5.3. LG-SPDDR
- The given SPD data are first mapped from the original SPD manifold to the corresponding tangent space;
- The mapped data are then reduced in dimensionality by the bilinear transformation, which carries them into another tangent space;
- Finally, through the exp transformation, the reduced data are mapped back onto a new low-dimensional SPD manifold.
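The three steps above can be sketched end to end. The matrix `W` here is a random orthonormal matrix standing in for the learned bilinear transformation (the paper determines it from the isometric criterion), and the helper names are our own:

```python
import numpy as np

def spd_log(S):
    # Log map at identity: SPD matrix -> symmetric tangent matrix.
    w, U = np.linalg.eigh(S)
    return (U * np.log(w)) @ U.T

def sym_exp(V):
    # Exp map at identity: symmetric tangent matrix -> SPD matrix.
    w, U = np.linalg.eigh(V)
    return (U * np.exp(w)) @ U.T

rng = np.random.default_rng(1)
n, d = 8, 3
A = rng.standard_normal((n, n))
S = A @ A.T + n * np.eye(n)       # a high-dimensional SPD datum

# Stand-in for the learned bilinear transformation (n x d, orthonormal).
W = np.linalg.qr(rng.standard_normal((n, d)))[0]

V = spd_log(S)          # step 1: log map into the tangent space
V_low = W.T @ V @ W     # step 2: bilinear transformation between tangent spaces
S_low = sym_exp(V_low)  # step 3: exp map back onto a low-dimensional SPD manifold

# The output is a d x d SPD matrix, as the pipeline claims.
assert S_low.shape == (d, d)
assert np.all(np.linalg.eigvalsh(S_low) > 0)
```

Note that `W.T @ V @ W` is symmetric whenever `V` is, so the exp map in step 3 is guaranteed to land on an SPD matrix.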
5.4. PSSSD
5.5. DARG-Graph
6. Experiments
6.1. Datasets Description and Experiment Settings
6.2. The Experiment on the Motion Dataset
6.2.1. The Description of Motion
6.2.2. The Experimental Results on Motion
6.3. The Experiment on the Traffic Dataset
6.3.1. The Description of Traffic
6.3.2. The Experimental Results on Traffic
6.4. The Experiment on the ICT-TV Database
6.4.1. The Description of ICT-TV
6.4.2. The Experimental Results on ICT-TV
6.5. The Experiment on the ETH-80 Dataset
6.5.1. The Description of ETH-80
6.5.2. The Experimental Results on ETH-80
7. Discussions and Conclusions
- (1) As a non-Euclidean data form, SPD data have found wide application in machine learning. Compared with vector representations, SPD data can extract higher-order statistical information more effectively. To avoid the high computational complexity and sparse distribution of high-dimensional SPD manifolds, we want to reduce the dimensionality of SPD data while retaining useful and representative information. However, the nonlinear manifold structure makes learning tasks that require linear operations difficult.
- (2) SPD data equipped with a Riemannian metric constitute a Riemannian manifold. For Riemannian manifolds, it is known that all tangent spaces are complete finite-dimensional Hilbert spaces, namely, Euclidean spaces. Hence, this paper transfers learning tasks from the original SPD manifolds to their tangent spaces.
- (3) The tangent space of an SPD manifold is a space of symmetric matrices. From the perspective of data form and attributes, it can be proved to be the minimal linear extension of the original manifold. Inspired by this, we map SPD data into the tangent space at identity by the isometric transformation (log).
- (4) A framework is proposed for SPD data dimensionality reduction (DR). RMTSISOM-SPDDR realizes DR through a bilinear transformation between the tangent spaces at the identity matrices. These tangent spaces are Hilbert spaces, so the bilinear transformation can be determined according to a specific criterion and then taken as the DR transformation for the original SPD manifold.
- (5) This paper specifies the bilinear transformation by a global isometric criterion: the geodesic distance on the original high-dimensional SPD manifold should equal the corresponding Euclidean distance in the tangent space of the low-dimensional SPD manifold. This preserves the distance relationships between data points well.
- (6) Compared with existing state-of-the-art algorithms such as PDDRSPD [34], PSSSD [36], DARG-Graph [37], SSPDDR [32] and LG-SPDDR [38], the proposed RMTSISOM-SPDDR differs in three respects. First, most of these algorithms are based on a bilinear transformation between two manifolds, while RMTSISOM-SPDDR is based on a bilinear transformation between the tangent spaces of the two manifolds; the tangent spaces are finite-dimensional Hilbert spaces, i.e., Euclidean spaces, which can support more complex DR models. Second, these algorithms rely on local analysis, while RMTSISOM-SPDDR is based on global isometry, providing a different perspective. Finally, these algorithms are supervised, while RMTSISOM-SPDDR is unsupervised.
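The global isometric criterion can be made concrete under the Log-Euclidean metric of Arsigny et al. [19], where the geodesic distance between two SPD matrices is the Frobenius distance between their log images, so the log map is a global isometry onto the tangent space at identity. The sketch below (helper names are ours) contrasts this with the naive Frobenius distance criticized in the introduction:

```python
import numpy as np

def spd_log(S):
    # Matrix logarithm of an SPD matrix via eigendecomposition.
    w, U = np.linalg.eigh(S)
    return (U * np.log(w)) @ U.T

def rand_spd(n, rng):
    A = rng.standard_normal((n, n))
    return A @ A.T + n * np.eye(n)

rng = np.random.default_rng(2)
S1, S2 = rand_spd(5, rng), rand_spd(5, rng)

# Log-Euclidean geodesic distance on the SPD manifold: the Frobenius
# distance between the tangent images at identity.
d_geodesic = np.linalg.norm(spd_log(S1) - spd_log(S2), ord="fro")

# The naive Frobenius distance between the raw matrices generally
# disagrees with the geodesic distance, illustrating why Euclidean
# tools should not be applied to SPD data directly.
d_naive = np.linalg.norm(S1 - S2, ord="fro")

assert d_geodesic > 0
assert not np.isclose(d_geodesic, d_naive)
```

Preserving `d_geodesic` through the dimensionality reduction is exactly what the isometric criterion of point (5) asks for.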
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
Appendix A. Differential Manifolds and Tangent Spaces
- Linear: v(af + bg) = a v(f) + b v(g) for all functions f, g in the germ and all scalars a, b;
- Leibniz Law: v(fg) = f(p) v(g) + g(p) v(f).
References
- Zhang, J.; Yu, J.; Tao, D. Local Deep-Feature Alignment for Unsupervised Dimension Reduction. IEEE Trans. Image Process. 2018, 27, 2420–2432. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Tang, J.; Shao, L.; Li, X.; Lu, K. A Local Structural Descriptor for Image Matching via Normalized Graph Laplacian Embedding. IEEE Trans. Cybern. 2015, 46, 410–420. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Yan, S.; Wang, H.; Fu, Y.; Yan, J.; Tang, X.; Huang, T.S. Synchronized Submanifold Embedding for Person-Independent Pose Estimation and Beyond. IEEE Trans. Image Process. 2008, 18, 202–210. [Google Scholar] [CrossRef] [Green Version]
- Li, X.; Ng, M.K.; Cong, G.; Ye, Y.; Wu, Q. MR-NTD: Manifold Regularization Nonnegative Tucker Decomposition for Tensor Data Dimension Reduction and Representation. IEEE Trans. Neural Netw. Learn. Syst. 2017, 28, 1787–1800. [Google Scholar] [CrossRef] [PubMed]
- Chen, Y.-L.; Hsu, C.-T.; Liao, H.-Y.M. Simultaneous Tensor Decomposition and Completion Using Factor Priors. IEEE Trans. Pattern Anal. Mach. Intell. 2014, 36, 577–591. [Google Scholar] [CrossRef] [PubMed]
- Du, B.; Zhang, M.; Zhang, L.; Hu, R.; Tao, D. PLTD: Patch-Based Low-Rank Tensor Decomposition for Hyperspectral Images. IEEE Trans. Multimed. 2017, 19, 67–79. [Google Scholar] [CrossRef]
- Krizhevsky, A.; Sutskever, I.; Hinton, G.E. Imagenet Classification with Deep Convolutional Neural Networks. In Proceedings of the Advances in Neural Information Processing Systems, Lake Tahoe, NV, USA, 3–6 December 2012; pp. 1097–1105. [Google Scholar]
- LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef]
- Huang, Z.; Gool, L.V. A riemannian network for SPD matrix learning. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, San Francisco, CA, USA, 4–9 February 2017; pp. 2036–2042. [Google Scholar]
- Huang, Z.; Wang, R.; Shan, S.; Li, X.; Chen, X. Log-euclidean metric learning on symmetric positive definite manifold with application to image set classification. In Proceedings of the 32nd International Conference on International Conference on Machine Learning, Lille, France, 6–11 July 2015; Volume 37, pp. 720–729. [Google Scholar]
- Hussein, M.E.; Torki, M.; Gowayyed, M.A.; El-Saban, M. Human action recognition using a temporal hierarchy of covariance descriptors on 3D joint locations. In Proceedings of the Twenty-Third International Joint Conference on Artificial Intelligence, Beijing, China, 3–9 August 2013; pp. 2466–2472. [Google Scholar]
- Faraki, M.; Harandi, M.; Porikli, F. Image set classification by symmetric positive semi-definite matrices. In Proceedings of the 2016 IEEE Winter Conference on Applications of Computer Vision (WACV), Lake Placid, NY, USA, 7–10 March 2016; Institute of Electrical and Electronics Engineers (IEEE): Piscataway, NJ, USA, 2016; pp. 1–8. [Google Scholar]
- Chen, K.-X.; Wu, X.-J. Component SPD matrices: A low-dimensional discriminative data descriptor for image set classification. Comput. Vis. Media 2018, 4, 245–252. [Google Scholar] [CrossRef] [Green Version]
- Tuzel, O.; Porikli, F.; Meer, P. Region Covariance: A Fast Descriptor for Detection and Classification. In Proceedings of the European Conference on Computer Vision (ECCV), Graz, Austria, 7–13 May 2006; Springer: Berlin/Heidelberg, Germany, 2006; pp. 589–600. [Google Scholar]
- Tuzel, O.; Porikli, F.; Meer, P. Pedestrian Detection via Classification on Riemannian Manifolds. IEEE Trans. Pattern Anal. Mach. Intell. 2008, 30, 1713–1727. [Google Scholar] [CrossRef]
- Harandi, M.; Sanderson, C.; Hartley, R.; Lovell, B.C. Sparse Coding and Dictionary Learning for Symmetric Positive Definite Matrices: A Kernel Approach. In Proceedings of the European Conference on Computer Vision (ECCV), Florence, Italy, 7–13 October 2012; Springer: Berlin/Heidelberg, Germany, 2012; pp. 216–229. [Google Scholar]
- Jayasumana, S.; Hartley, R.; Salzmann, M.; Li, H.; Harandi, M. Kernel Methods on the Riemannian Manifold of Symmetric Positive Definite Matrices. In Proceedings of the 2013 IEEE Conference on Computer Vision and Pattern Recognition, Portland, OR, USA, 23–28 June 2013; pp. 73–80. [Google Scholar]
- Harandi, M.; Salzmann, M.; Hartley, R. From Manifold to Manifold: Geometry-Aware Dimensionality Reduction for SPD Matrices. In Proceedings of the European Conference on Computer Vision (ECCV), Zurich, Switzerland, 6–12 September 2014; Springer: Berlin/Heidelberg, Germany, 2014; pp. 17–32. [Google Scholar]
- Arsigny, V.; Fillard, P.; Pennec, X.; Ayache, N. Geometric Means in a Novel Vector Space Structure on Symmetric Positive-Definite Matrices. SIAM J. Matrix Anal. Appl. 2007, 29, 328–347. [Google Scholar] [CrossRef] [Green Version]
- Ilea, I.; Bombrun, L.B.; Said, S.; Berthoumieu, Y. Covariance Matrices Encoding Based on the Log-Euclidean and Affine In-variant Riemannian Metrics. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Salt Lake City, UT, USA, 18–22 June 2018; pp. 506–509. [Google Scholar]
- Li, P.; Wang, Q.; Zuo, W.; Zhang, L. Log-Euclidean Kernels for Sparse Representation and Dictionary Learning. In Proceedings of the 2013 IEEE International Conference on Computer Vision; Institute of Electrical and Electronics Engineers (IEEE): Piscataway, NJ, USA, 2013; pp. 1601–1608. [Google Scholar]
- Yamin, A.; Dayan, M.; Squarcina, L.; Brambilla, P.; Murino, V.; Diwadkar, V.; Sona, D. Comparison Of Brain Connectomes Using Geodesic Distance On Manifold: A Twins Study. In Proceedings of the 2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019); Institute of Electrical and Electronics Engineers (IEEE): Piscataway, NJ, USA, 2019; pp. 1797–1800. [Google Scholar]
- Zhang, J.; Wang, L.; Zhou, L.; Li, W. Learning Discriminative Stein Kernel for SPD Matrices and Its Applications. IEEE Trans. Neural Netw. Learn. Syst. 2016, 27, 1020–1033. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Wang, R.; Wu, X.-J.; Chen, K.-X.; Kittler, J. Multiple Manifolds Metric Learning with Application to Image Set Classification. In Proceedings of the 2018 24th International Conference on Pattern Recognition (ICPR); Institute of Electrical and Electronics Engineers (IEEE): Piscataway, NJ, USA, 2018; pp. 627–632. [Google Scholar]
- Huang, Z.; Wang, R.; Shan, S.; Van Gool, L.; Chen, X. Cross Euclidean-to-Riemannian Metric Learning with Application to Face Recognition from Video. IEEE Trans. Pattern Anal. Mach. Intell. 2018, 40, 2827–2840. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Vemulapalli, R.; Pillai, J.K.; Chellappa, R. Kernel Learning for Extrinsic Classification of Manifold Features. In Proceedings of the 2013 IEEE Conference on Computer Vision and Pattern Recognition; IEEE: Piscataway, NJ, USA, 2013; pp. 1782–1789. [Google Scholar]
- Sanin, A.; Sanderson, C.; Harandi, M.T.; Lovell, B.C. Spatio-temporal covariance descriptors for action and gesture recogni-tion. In Proceedings of the 2013 IEEE Workshop on Applications of Computer Vision (WACV), Washington, DC, USA, 15–17 January 2013; pp. 103–110. [Google Scholar]
- Vemulapalli, R.; Jacobs, D. Riemannian Metric Learning for Symmetric Positive Definite Matrices. arXiv 2015, arXiv:1501.02393. [Google Scholar]
- Zhou, L.; Wang, L.; Zhang, J.; Shi, Y.; Gao, Y. Revisiting Metric Learning for SPD Matrix Based Visual Representation. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR); Institute of Electrical and Electronics Engineers (IEEE): Piscataway, NJ, USA, 2017; pp. 7111–7119. [Google Scholar]
- Dong, Z.; Jia, S.; Zhang, C.; Pei, M.; Wu, Y. Deep manifold learning of symmetric positive definite matrices with application to face recognition. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, San Francisco, CA, USA, 4–9 February 2017; pp. 4009–4015. [Google Scholar]
- Wang, H.; Banerjee, A.; Boley, D. Common component analysis for multiple covariance matrices. In Proceedings of the 17th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining—KDD ’11, San Diego, CA, USA, 21–24 August 2011; pp. 956–964. [Google Scholar]
- Harandi, M.; Salzmann, M.; Hartley, R. Dimensionality Reduction on SPD Manifolds: The Emergence of Geometry-Aware Methods. IEEE Trans. Pattern Anal. Mach. Intell. 2018, 40, 48–62. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Weinberger, K.Q.; Saul, L.K. Unsupervised Learning of Image Manifolds by Semidefinite Programming. Int. J. Comput. Vis. 2006, 70, 77–90. [Google Scholar] [CrossRef] [Green Version]
- Ren, J.; Wu, X.-J. Probability Distribution-Based Dimensionality Reduction on Riemannian Manifold of SPD Matrices. IEEE Access 2020, 8, 153881–153890. [Google Scholar] [CrossRef]
- Grey, D.R. Review of Multivariate Analysis, by K.V. Mardia, J.T. Kent and J.M. Bibby (Academic Press, 1979). Math. Gaz. 1981, 65, 75–76. [Google Scholar] [CrossRef]
- Gao, Z.; Wu, Y.; Harandi, M.; Jia, Y. A Robust Distance Measure for Similarity-Based Classification on the SPD Manifold. IEEE Trans. Neural Netw. Learn. Syst. 2019, 31, 3230–3244. [Google Scholar] [CrossRef]
- Wang, W.; Wang, R.; Huang, Z.; Shan, S.; Chen, X. Discriminant Analysis on Riemannian Manifold of Gaussian Distributions for Face Recognition with Image Sets. IEEE Trans. Image Process. 2017, 27, 1. [Google Scholar] [CrossRef]
- Xu, C.; Lu, C.; Gao, J.; Zheng, W.; Wang, T.; Yan, S. Discriminative Analysis for Symmetric Positive Definite Matrices on Lie Groups. IEEE Trans. Circuits Syst. Video Technol. 2015, 25, 1576–1585. [Google Scholar] [CrossRef]
- Müller, M.; Röder, T.; Clausen, M.; Eberhardt, B.; Krüger, B.; Weber, A. Documentation Mocap Database HDM05. 2007. Available online: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.71.7245 (accessed on 23 August 2021).
- Chan, A.B.; Vasconcelos, N. Probabilistic Kernels for the Classification of Auto-Regressive Visual Processes. In Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05); Institute of Electrical and Electronics Engineers (IEEE): Piscataway, NJ, USA, 2005; Volume 841, pp. 846–851. [Google Scholar]
- Harandi, M.; Salzmann, M.; Baktashmotlagh, M. Beyond Gauss: Image-Set Matching on the Riemannian Manifold of PDFs. In Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV); Institute of Electrical and Electronics Engineers (IEEE): Piscataway, NJ, USA, 2015; pp. 4112–4120. [Google Scholar]
- Li, Y.; Wang, R.; Shan, S.; Chen, X. Hierarchical hybrid statistic based video binary code and its application to face retrieval in TV-series. In Proceedings of the 2015 11th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition (FG), Ljubljana, Slovenia, 4–8 May 2015; Volume 1, pp. 1–8. [Google Scholar]
- Leibe, B.; Schiele, B. Analyzing appearance and contour based methods for object categorization. In Proceedings of the 2003 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Madison, Wisconsin, 18–20 June 2003; pp. 409–415. [Google Scholar]
- Munkres, J.R.; Auslander, L.; MacKenzie, R.E. Introduction to Differentiable Manifolds. Am. Math. Mon. 1964, 71, 1059. [Google Scholar] [CrossRef] [Green Version]
Notations | Descriptions |
---|---|
The tangent space of the differential manifold at | |
The differentiable function germ of the differential manifold at | |
The set of all SPD matrices | |
The set of all symmetric matrices | |
A capital letter denotes an SPD matrix | |
A Greek capital letter denotes a tangent vector of an SPD manifold |
Methods | Parameter Settings |
---|---|
SSPDDR | None |
PDDRSPD | , , |
DARG-Graph | |
LG-SPDDR | , |
PSSSD | , |
Share and Cite
Gao, W.; Ma, Z.; Gan, W.; Liu, S. Dimensionality Reduction of SPD Data Based on Riemannian Manifold Tangent Spaces and Isometry. Entropy 2021, 23, 1117. https://doi.org/10.3390/e23091117