TREPH: A Plug-In Topological Layer for Graph Neural Networks
Abstract
1. Introduction
- A plug-in topological layer named TREPH is proposed for GNNs. It utilizes EPH to extract topological features effectively and can be conveniently inserted into any GNN architecture;
- TREPH is proved to be differentiable and strictly more expressive than PH-based representations, which in turn are strictly stronger than message-passing GNNs in expressive power;
- By making use of the uniformity of EPH, a novel aggregation mechanism is designed that empowers graph nodes with the ability to perform shape detection;
- Experiments on benchmark datasets for graph classification demonstrate the competitiveness of TREPH, which achieves state-of-the-art performance.
2. Related Work
2.1. Graph Neural Networks
2.2. Topological Data Analysis
3. Methodology
3.1. Preliminaries
3.1.1. Homology
3.1.2. Persistent Homology
- For each pair $(v, e)$ such that $v$ (resp. $e$) is added at time $a$ (resp. $b$), we place a point on the plane with coordinates $(a, b)$;
- For each unpaired vertex $v$ added at time $a$, we place a point $(a, +\infty)$ (a code sketch of this construction is given below).
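In practice, the 0-dimensional pairs of a graph filtration can be computed by a single union-find pass implementing the elder rule. The following is a minimal sketch under the lower-star convention (an edge enters at the later of its endpoints); it is an illustration, not the implementation used in the paper:

```python
import math

def sublevel_pd0(vertices, edges, f):
    """0-dimensional persistence pairs of the sublevel filtration of f on a
    graph. vertices: iterable of hashable ids; edges: list of (u, v) pairs;
    f: dict mapping each vertex to its filtration value. Returns (birth,
    death) points; unpaired (essential) components get death = +inf."""
    vertices = list(vertices)
    parent = {v: v for v in vertices}
    birth = {v: f[v] for v in vertices}  # birth time of each component

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v

    diagram = []
    # Lower-star convention: an edge enters at the later of its endpoints.
    for u, v in sorted(edges, key=lambda e: max(f[e[0]], f[e[1]])):
        t = max(f[u], f[v])
        ru, rv = find(u), find(v)
        if ru == rv:
            continue  # the edge closes a loop: a 1-dimensional event
        if birth[ru] > birth[rv]:
            ru, rv = rv, ru  # elder rule: the younger component dies
        diagram.append((birth[rv], t))
        parent[rv] = ru
    for v in vertices:
        if find(v) == v:  # a surviving component: essential class
            diagram.append((birth[v], math.inf))
    return diagram

# Example: path a-b-c with f(a)=0, f(b)=3, f(c)=1 and edges (a,b), (b,c):
# sublevel_pd0("abc", [("a","b"), ("b","c")], {"a": 0, "b": 3, "c": 1})
# -> [(3, 3), (1, 3), (0, inf)]
```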
3.1.3. Extended Persistent Homology
- A pair $(v, e)$ is associated with a component (0-dimensional). The points $(f(v), f(e))$ form the diagram $\mathrm{Ord}_0(f)$;
- A pair $(v, w)$ is of dimension 0. Geometrically, this means that $v$ and $w$ are where $f$ attains its minimum and maximum in a component of $G$. The points $(f(v), f(w))$ form the diagram $\mathrm{Ext}_0(f)$;
- A pair $(e, e')$ is of dimension 1. In this case, there is a loop whose maximum and minimum are attained at $e$ and $e'$, respectively. The points $(f(e), f(e'))$ form the diagram $\mathrm{Ext}_1(f)$;
- A pair $(e, e')$ is of dimension 1. It is not easy to illustrate the geometric meaning of such a pair directly. Fortunately, by the symmetry of extended persistence it can be interpreted as a 0-dimensional feature of $-f$ that persists from $-f(e)$ to $-f(e')$ (see [19], p. 162). The points $(f(e), f(e'))$ form the diagram $\mathrm{Rel}_1(f)$; a sketch of computing all four diagrams follows this list.
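These four diagrams can be obtained with off-the-shelf TDA software. Below is a sketch using the GUDHI library, assuming its `SimplexTree.extend_filtration()` and `SimplexTree.extended_persistence()` methods (available in recent GUDHI releases), which split the result into Ordinary, Relative, Extended+ and Extended− subdiagrams:

```python
import gudhi

def eph_diagrams(vertices, edges, f):
    """Build the graph as a simplicial complex with the lower-star
    filtration of f and return its extended persistence, split into
    (Ordinary, Relative, Extended+, Extended-) subdiagrams. Each entry
    is a (dimension, (birth, death)) tuple; see the GUDHI documentation
    for the value convention used in the output."""
    st = gudhi.SimplexTree()
    for v in vertices:
        st.insert([v], filtration=f[v])
    for u, v in edges:
        st.insert([u, v], filtration=max(f[u], f[v]))  # edge enters last
    st.extend_filtration()  # encode the ascending/descending sweep
    ordinary, relative, ext_plus, ext_minus = st.extended_persistence()
    return ordinary, relative, ext_plus, ext_minus
```

For a graph, the dimension-0 entries of the Ordinary and Extended+ parts give $\mathrm{Ord}_0$ and $\mathrm{Ext}_0$, while the dimension-1 entries of the Relative and Extended− parts give $\mathrm{Rel}_1$ and $\mathrm{Ext}_1$.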
3.2. TREPH
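Per the notation table, TREPH is composed of a Filtration module (Fil) that produces filter functions from node features, the EPH computation, a Vectorization module (Vec) that embeds PD points, and an Aggregation module (Agg) that routes the embedded points back to the nodes. The following minimal PyTorch sketch shows how such a pipeline could be wired; it is an illustration under our own assumptions (the layer shapes, module choices, residual output, and the `compute_eph` callback are not taken from the paper):

```python
import torch.nn as nn

class TREPHSketch(nn.Module):
    """Minimal sketch of a Fil -> EPH -> Vec -> Agg pipeline."""
    def __init__(self, in_dim, num_filters, vec_dim):
        super().__init__()
        self.fil = nn.Linear(in_dim, num_filters)  # Filtration module
        self.vec = nn.Sequential(                  # Vectorization module
            nn.Linear(2, vec_dim), nn.ReLU(), nn.Linear(vec_dim, vec_dim))
        self.proj = nn.Linear(vec_dim, in_dim)     # map back to node width
        self.vec_dim = vec_dim

    def forward(self, x, edge_index, compute_eph):
        # x: (N, in_dim) node features; edge_index: (2, |E|).
        filters = self.fil(x)                      # (N, K) filter values
        agg = x.new_zeros(x.size(0), self.vec_dim)
        for k in range(filters.size(1)):
            # compute_eph is assumed to return the PD points (P, 2) of
            # filter k plus, for each point, the node it is attributed to.
            points, owners = compute_eph(filters[:, k], edge_index)
            agg.index_add_(0, owners, self.vec(points))  # Aggregation
        return x + self.proj(agg)                  # residual plug-in output
```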
Algorithm 1: Aggregation.
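Algorithm 1 aggregates the vectorized PD points back to graph nodes. As a concrete illustration of this scatter-style idea only (this is not the paper's pseudocode; the owner-index convention, i.e., which nodes a point is attributed to, is our assumption, e.g., the vertices realizing its birth and death values), a hypothetical helper might look as follows:

```python
import torch

def aggregate_points(point_vecs, owner_index, point_index, num_nodes):
    """point_vecs: (P, d) vectorized PD points. owner_index[i] is a node
    id and point_index[i] the PD point attributed to it, so one point may
    contribute to several nodes. Mean-aggregates point vectors per node."""
    d = point_vecs.size(1)
    out = torch.zeros(num_nodes, d)
    cnt = torch.zeros(num_nodes, 1)
    out.index_add_(0, owner_index, point_vecs[point_index])
    cnt.index_add_(0, owner_index, torch.ones(owner_index.size(0), 1))
    return out / cnt.clamp(min=1.0)
```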
3.3. Differentiability and Expressive Power
4. Experiments
4.1. Datasets
4.2. Structure-Based Experiments
4.2.1. TREPH in GNNs
4.2.2. Study of TREPH Positions
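The configuration names can be read as follows: "3-GCN-1-TREPH" denotes a four-block network in which one of the four message-passing blocks is replaced by the topological layer, and the position index (1-based) selects the slot it occupies. A hypothetical sketch, using PyTorch Geometric's `GCNConv` for the message-passing blocks (our choice for illustration; `TREPHStub` merely stands in for the actual layer):

```python
import torch.nn as nn
from torch_geometric.nn import GCNConv  # assumption: PyTorch Geometric

class TREPHStub(nn.Module):
    """Placeholder for the topological layer (identity here)."""
    def forward(self, x, edge_index):
        return x

class ThreeGCNOneTREPH(nn.Module):
    """A four-block "3-GCN-1-TREPH" network; `pos` (1-based) selects
    which block the topological layer replaces."""
    def __init__(self, dim, pos=2):
        super().__init__()
        blocks = [GCNConv(dim, dim) for _ in range(4)]
        blocks[pos - 1] = TREPHStub()
        self.blocks = nn.ModuleList(blocks)

    def forward(self, x, edge_index):
        for block in self.blocks:
            x = block(x, edge_index)
        return x
```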
4.3. Comparison with State-of-the-Art Methods
4.4. Ablation Study
- Each point $(a, b)$ of $\mathrm{Ext}_0$ is broken into a point $(a, +\infty)$ for the 0-dimensional PD of $f$ and a point $(-b, +\infty)$ for the 0-dimensional PD of $-f$;
- Each point $(b, a)$ of $\mathrm{Ext}_1$ is broken into a point $(b, +\infty)$ for the 1-dimensional PD of $f$ and a point $(-a, +\infty)$ for the 1-dimensional PD of $-f$; a small helper expressing this reduction is sketched below.
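The reduction can be expressed in a few lines. The helper below is a sketch under stated coordinate conventions, which are our assumptions: an $\mathrm{Ext}_0$ point is taken as $(a, b)$ with $a$ the minimum and $b$ the maximum of $f$ on a component, and an $\mathrm{Ext}_1$ point as $(b, a)$ with $b$ the maximum and $a$ the minimum of $f$ on a loop:

```python
import math

def break_extended_points(ext0, ext1):
    """Break extended-persistence points into the essential (infinite)
    points of the ordinary PDs of f and -f, as in the ablation above."""
    pd0_f   = [(a, math.inf) for (a, b) in ext0]   # 0-dim PD of f
    pd0_neg = [(-b, math.inf) for (a, b) in ext0]  # 0-dim PD of -f
    pd1_f   = [(b, math.inf) for (b, a) in ext1]   # 1-dim PD of f
    pd1_neg = [(-a, math.inf) for (b, a) in ext1]  # 1-dim PD of -f
    return pd0_f, pd0_neg, pd1_f, pd1_neg
```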
4.5. Analysis of Hyperparameters
4.6. Implementation Details
4.7. Complexity Analysis
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Data Availability Statement
Conflicts of Interest
Abbreviations
Abbreviation | Meaning |
---|---|
GNN | Graph Neural Network |
TDA | Topological Data Analysis |
PH | Persistent Homology |
EPH | Extended Persistent Homology |
PD | Persistence Diagram |
TREPH | Topological Representation with Extended Persistent Homology |
References
1. Duvenaud, D.; Maclaurin, D.; Aguilera-Iparraguirre, J.; Gómez-Bombarelli, R.; Hirzel, T.; Aspuru-Guzik, A.; Adams, R.P. Convolutional Networks on Graphs for Learning Molecular Fingerprints. Adv. Neural Inf. Process. Syst. 2015, 28, 2224–2232.
2. Gilmer, J.; Schoenholz, S.S.; Riley, P.F.; Vinyals, O.; Dahl, G.E. Neural message passing for quantum chemistry. In Proceedings of the 34th International Conference on Machine Learning, Sydney, NSW, Australia, 6–11 August 2017; Volume 70, pp. 1263–1272.
3. Cranmer, M.D.; Xu, R.; Battaglia, P.; Ho, S. Learning Symbolic Physics with Graph Networks. arXiv 2019, arXiv:1909.05862.
4. Sanchez-Gonzalez, A.; Godwin, J.; Pfaff, T.; Ying, R.; Leskovec, J.; Battaglia, P. Learning to Simulate Complex Physics with Graph Networks. In Proceedings of the 37th International Conference on Machine Learning, Virtual, 13–18 July 2020; Volume 119, pp. 8459–8468.
5. Schlichtkrull, M.; Kipf, T.N.; Bloem, P.; van den Berg, R.; Titov, I.; Welling, M. Modeling Relational Data with Graph Convolutional Networks. In European Semantic Web Conference; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2018; Volume 10843, pp. 593–607.
6. Chami, I.; Wolf, A.; Juan, D.C.; Sala, F.; Ravi, S.; Ré, C. Low-Dimensional Hyperbolic Knowledge Graph Embeddings. arXiv 2020, arXiv:2005.00545.
7. Monti, F.; Bronstein, M.; Bresson, X. Geometric Matrix Completion with Recurrent Multi-Graph Neural Networks. Adv. Neural Inf. Process. Syst. 2017, 30, 3697–3707.
8. Ying, R.; He, R.; Chen, K.; Eksombatchai, P.; Hamilton, W.L.; Leskovec, J. Graph Convolutional Neural Networks for Web-Scale Recommender Systems. In Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, London, UK, 19–23 August 2018; pp. 974–983.
9. Hamilton, W.L.; Zhang, J.; Danescu-Niculescu-Mizil, C.; Jurafsky, D.; Leskovec, J. Loyalty in Online Communities. In Proceedings of the Eleventh International AAAI Conference on Web and Social Media, Montreal, QC, Canada, 15–18 May 2017; pp. 540–543.
10. Kipf, T.N.; Welling, M. Semi-supervised classification with graph convolutional networks. In Proceedings of the International Conference on Learning Representations, Toulon, France, 24–26 April 2017.
11. Wu, Z.; Pan, S.; Chen, F.; Long, G.; Zhang, C.; Yu, P.S. A comprehensive survey on graph neural networks. IEEE Trans. Neural Netw. Learn. Syst. 2021, 32, 4–24.
12. Defferrard, M.; Bresson, X.; Vandergheynst, P. Convolutional neural networks on graphs with fast localized spectral filtering. Adv. Neural Inf. Process. Syst. 2016, 29, 3837–3845.
13. Atwood, J.; Towsley, D. Diffusion-convolutional neural networks. Adv. Neural Inf. Process. Syst. 2016, 29, 1993–2001.
14. Hamilton, W.; Ying, Z.; Leskovec, J. Inductive representation learning on large graphs. Adv. Neural Inf. Process. Syst. 2017, 30, 1024–1034.
15. Monti, F.; Boscaini, D.; Masci, J.; Rodola, E.; Svoboda, J.; Bronstein, M.M. Geometric deep learning on graphs and manifolds using mixture model CNNs. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 5115–5124.
16. Veličković, P.; Cucurull, G.; Casanova, A.; Romero, A.; Liò, P.; Bengio, Y. Graph attention networks. In Proceedings of the International Conference on Learning Representations, Vancouver, BC, Canada, 30 April–3 May 2018.
17. Bouritsas, G.; Frasca, F.; Zafeiriou, S.P.; Bronstein, M. Improving graph neural network expressivity via subgraph isomorphism counting. IEEE Trans. Pattern Anal. Mach. Intell. 2023, 45, 657–668.
18. Xu, K.; Hu, W.; Leskovec, J.; Jegelka, S. How powerful are graph neural networks? In Proceedings of the International Conference on Learning Representations, New Orleans, LA, USA, 6–9 May 2019.
19. Edelsbrunner, H.; Harer, J. Computational Topology: An Introduction; American Mathematical Society: Providence, RI, USA, 2010.
20. Dey, T.K.; Wang, Y. Computational Topology for Data Analysis; Cambridge University Press: Cambridge, UK, 2022.
21. Yu, D.; Zhou, X.; Pan, Y.; Niu, Z.; Yuan, X.; Sun, H. University Academic Performance Development Prediction Based on TDA. Entropy 2023, 25, 24.
22. Emrani, S.; Gentimis, T.; Krim, H. Persistent Homology of Delay Embeddings and its Application to Wheeze Detection. IEEE Signal Process. Lett. 2014, 21, 459–463.
23. Perea, J.A.; Harer, J. Sliding windows and persistence: An application of topological methods to signal analysis. Found. Comput. Math. 2015, 15, 799–838.
24. Erden, F.; Cetin, A.E. Period Estimation of an Almost Periodic Signal Using Persistent Homology With Application to Respiratory Rate Measurement. IEEE Signal Process. Lett. 2017, 24, 958–962.
25. Qaiser, T.; Tsang, Y.W.; Taniyama, D.; Sakamoto, N.; Nakane, K.; Epstein, D.; Rajpoot, N. Fast and accurate tumor segmentation of histology images using persistent homology and deep convolutional features. Med. Image Anal. 2019, 55, 1–14.
26. Rieck, B.; Yates, T.; Bock, C.; Borgwardt, K.; Wolf, G.; Turk-Browne, N.; Krishnaswamy, S. Uncovering the Topology of Time-Varying fMRI Data using Cubical Persistence. Adv. Neural Inf. Process. Syst. 2020, 33, 6900–6912.
27. Mezher, R.; Arayro, J.; Hascoet, N.; Chinesta, F. Study of Concentrated Short Fiber Suspensions in Flows, Using Topological Data Analysis. Entropy 2021, 23, 1229.
28. Yao, Y.; Sun, J.; Huang, X.; Bowman, G.R.; Singh, G.; Lesnick, M.; Guibas, L.J.; Pande, V.S.; Carlsson, G. Topological methods for exploring low-density states in biomolecular folding pathways. J. Chem. Phys. 2009, 130, 144115.
29. Wang, B.; Wei, G.W. Object-oriented persistent homology. J. Comput. Phys. 2016, 305, 276–299.
30. Lee, Y.; Barthel, S.D.; Dlotko, P.; Moosavi, S.M.; Hess, K.; Smit, B. Quantifying similarity of pore-geometry in nanoporous materials. Nat. Commun. 2017, 8, 15396.
31. Smith, A.D.; Dłotko, P.; Zavala, V.M. Topological data analysis: Concepts, computation, and applications in chemical engineering. Comput. Chem. Eng. 2021, 146, 107202.
32. Kovacev-Nikolic, V.; Bubenik, P.; Nikolić, D.; Heo, G. Using persistent homology and dynamical distances to analyze protein binding. Stat. Appl. Genet. Mol. Biol. 2016, 15, 19–38.
33. Nakamura, T.; Hiraoka, Y.; Hirata, A.; Escolar, E.G.; Nishiura, Y. Persistent homology and many-body atomic structure for medium-range order in the glass. Nanotechnology 2015, 26, 304001.
34. Buchet, M.; Hiraoka, Y.; Obayashi, I. Persistent homology and materials informatics. In Nanoinformatics; Springer: Singapore, 2018; pp. 75–95.
35. Yen, P.T.W.; Xia, K.; Cheong, S.A. Understanding Changes in the Topology and Geometry of Financial Market Correlations during a Market Crash. Entropy 2021, 23, 1211.
36. Pun, C.S.; Xia, K.; Lee, S.X. Persistent-homology-based machine learning and its applications—A survey. arXiv 2018, arXiv:1811.00252.
37. Hensel, F.; Moor, M.; Rieck, B. A survey of topological machine learning methods. Front. Artif. Intell. 2021, 4, 681108.
38. Zhao, Q.; Ye, Z.; Chen, C.; Wang, Y. Persistence enhanced graph neural network. In Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, Online, 26–28 August 2020; Volume 108, pp. 2896–2906.
39. Chen, Y.; Coskunuzer, B.; Gel, Y.R. Topological relational learning on graphs. Adv. Neural Inf. Process. Syst. 2021, 34, 27029–27042.
40. Hofer, C.D.; Kwitt, R.; Niethammer, M. Learning representations of persistence barcodes. J. Mach. Learn. Res. 2019, 20, 1–45.
41. Carriere, M.; Chazal, F.; Ike, Y.; Lacombe, T.; Royer, M.; Umeda, Y. PersLay: A neural network layer for persistence diagrams and new graph topological signatures. In Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, Online, 26–28 August 2020; Volume 108, pp. 2786–2796.
42. Hofer, C.; Graf, F.; Rieck, B.; Niethammer, M.; Kwitt, R. Graph filtration learning. In Proceedings of the 37th International Conference on Machine Learning, Virtual, 13–18 July 2020; Volume 119, pp. 4314–4323.
43. Horn, M.; De Brouwer, E.; Moor, M.; Moreau, Y.; Rieck, B.; Borgwardt, K. Topological graph neural networks. In Proceedings of the International Conference on Learning Representations, Virtual, 25–29 April 2022.
44. Cohen-Steiner, D.; Edelsbrunner, H.; Harer, J. Extending Persistence Using Poincaré and Lefschetz Duality. Found. Comput. Math. 2009, 9, 79–103.
45. Yan, Z.; Ma, T.; Gao, L.; Tang, Z.; Chen, C. Link prediction with persistent homology: An interactive view. In Proceedings of the 38th International Conference on Machine Learning, Virtual, 18–24 July 2021; Volume 139, pp. 11659–11669.
46. Zhao, Q.; Wang, Y. Learning metrics for persistence-based summaries and applications for graph classification. In Proceedings of the Advances in Neural Information Processing Systems, Vancouver, BC, Canada, 8–14 December 2019; Volume 32, pp. 9855–9866.
47. Royer, M.; Chazal, F.; Levrard, C.; Umeda, Y.; Ike, Y. ATOL: Measure Vectorization for Automatic Topologically-Oriented Learning. In Proceedings of the 24th International Conference on Artificial Intelligence and Statistics, San Diego, CA, USA, 13–15 April 2021; Volume 130, pp. 1000–1008.
48. Gori, M.; Monfardini, G.; Scarselli, F. A new model for learning in graph domains. In Proceedings of the 2005 IEEE International Joint Conference on Neural Networks, Montreal, QC, Canada, 31 July–4 August 2005; Volume 2, pp. 729–734.
49. Scarselli, F.; Gori, M.; Tsoi, A.C.; Hagenbuchner, M.; Monfardini, G. The graph neural network model. IEEE Trans. Neural Netw. 2009, 20, 61–80.
50. Gallicchio, C.; Micheli, A. Graph echo state networks. In Proceedings of the 2010 International Joint Conference on Neural Networks, Barcelona, Spain, 18–23 July 2010; pp. 1–8.
51. Weisfeiler, B.; Leman, A.A. The reduction of a graph to canonical form and the algebra which appears therein. NTI Ser. 1968, 2, 12–16.
52. Maron, H.; Ben-Hamu, H.; Serviansky, H.; Lipman, Y. Provably powerful graph networks. Adv. Neural Inf. Process. Syst. 2019, 32, 2153–2164.
53. Chen, Z.; Villar, S.; Chen, L.; Bruna, J. On the equivalence between graph isomorphism testing and function approximation with GNNs. Adv. Neural Inf. Process. Syst. 2019, 32, 15868–15876.
54. Chazal, F.; Michel, B. An Introduction to Topological Data Analysis: Fundamental and Practical Aspects for Data Scientists. Front. Artif. Intell. 2021, 4, 667963.
55. Cang, Z.; Mu, L.; Wu, K.; Opron, K.; Xia, K.; Wei, G.W. A topological approach for protein classification. Comput. Math. Biophys. 2015, 3, 140–162.
56. Adcock, A.; Carlsson, E.; Carlsson, G. The ring of algebraic functions on persistence bar codes. Homol. Homotopy Appl. 2016, 18, 381–402.
57. Atienza, N.; Gonzalez-Diaz, R.; Rucco, M. Persistent entropy for separating topological features from noise in Vietoris-Rips complexes. J. Intell. Inf. Syst. 2019, 52, 637–655.
58. Rieck, B.; Sadlo, F.; Leitte, H. Topological Machine Learning with Persistence Indicator Functions. In Topological Methods in Data Analysis and Visualization V; Mathematics and Visualization; Springer: Cham, Switzerland, 2020; pp. 87–101.
59. Bubenik, P. Statistical topological data analysis using persistence landscapes. J. Mach. Learn. Res. 2015, 16, 77–102.
60. Adams, H.; Emerson, T.; Kirby, M.; Neville, R.; Peterson, C.; Shipman, P.; Chepushtanova, S.; Hanson, E.; Motta, F.; Ziegelmeier, L. Persistence images: A stable vector representation of persistent homology. J. Mach. Learn. Res. 2017, 18, 1–35.
61. Mileyko, Y.; Mukherjee, S.; Harer, J. Probability measures on the space of persistence diagrams. Inverse Probl. 2011, 27, 124007.
62. Reininghaus, J.; Huber, S.; Bauer, U.; Kwitt, R. A Stable Multi-Scale Kernel for Topological Machine Learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 4741–4748.
63. Kusano, G.; Hiraoka, Y.; Fukumizu, K. Persistence weighted Gaussian kernel for topological data analysis. In Proceedings of the 33rd International Conference on Machine Learning, New York, NY, USA, 19–24 June 2016; Volume 48, pp. 2004–2013.
64. Chazal, F.; Fasy, B.; Lecci, F.; Michel, B.; Rinaldo, A.; Wasserman, L. Robust topological inference: Distance to a measure and kernel distance. J. Mach. Learn. Res. 2017, 18, 5845–5884.
65. Le, T.; Yamada, M. Persistence Fisher Kernel: A Riemannian Manifold Kernel for Persistence Diagrams. Adv. Neural Inf. Process. Syst. 2018, 31, 10028–10039.
66. Tran, Q.H.; Vo, V.T.; Hasegawa, Y. Scale-variant topological information for characterizing the structure of complex networks. Phys. Rev. E 2019, 100, 032308.
67. Chen, Y.; Garcia, E.K.; Gupta, M.R.; Rahimi, A.; Cazzanti, L. Similarity-based classification: Concepts and algorithms. J. Mach. Learn. Res. 2009, 10, 747–776.
68. Hofer, C.; Kwitt, R.; Niethammer, M.; Uhl, A. Deep learning with topological signatures. Adv. Neural Inf. Process. Syst. 2017, 30, 1634–1644.
69. Hatcher, A. Algebraic Topology; Cambridge University Press: Cambridge, UK, 2002.
70. Ioffe, S.; Szegedy, C. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proceedings of the 32nd International Conference on Machine Learning, Lille, France, 6–11 July 2015; Volume 37, pp. 448–456.
71. Morris, C.; Kriege, N.M.; Bause, F.; Kersting, K.; Mutzel, P.; Neumann, M. TUDataset: A collection of benchmark datasets for learning with graphs. arXiv 2020, arXiv:2007.08663.
72. Borgwardt, K.; Ghisu, E.; Llinares-López, F.; O’Bray, L.; Rieck, B. Graph Kernels: State-of-the-Art and Future Challenges; Now Foundations and Trends: Boston, MA, USA, 2020; Volume 13, pp. 531–712.
73. Niepert, M.; Ahmed, M.; Kutzkov, K. Learning convolutional neural networks for graphs. In Proceedings of the 33rd International Conference on Machine Learning, New York, NY, USA, 19–24 June 2016; Volume 48, pp. 2014–2023.
74. Zhang, M.; Cui, Z.; Neumann, M.; Chen, Y. An End-to-End Deep Learning Architecture for Graph Classification. In Proceedings of the AAAI Conference on Artificial Intelligence, New Orleans, LA, USA, 2–7 February 2018; Volume 32, pp. 4438–4445.
75. Chen, D.; O’Bray, L.; Borgwardt, K. Structure-Aware Transformer for Graph Representation Learning. In Proceedings of the 39th International Conference on Machine Learning, Baltimore, MD, USA, 17–23 July 2022; Volume 162, pp. 3469–3489.
76. Paszke, A.; Gross, S.; Massa, F.; Lerer, A.; Bradbury, J.; Chanan, G.; Killeen, T.; Lin, Z.; Gimelshein, N.; Antiga, L.; et al. PyTorch: An imperative style, high-performance deep learning library. Adv. Neural Inf. Process. Syst. 2019, 32, 8024–8035.
77. Kingma, D.P.; Ba, J. Adam: A method for stochastic optimization. In Proceedings of the International Conference on Learning Representations, San Diego, CA, USA, 7–9 May 2015.
78. Yan, Z.; Ma, T.; Gao, L.; Tang, Z.; Wang, Y.; Chen, C. Neural approximation of graph topological features. In Proceedings of the Thirty-Sixth Conference on Neural Information Processing Systems, New Orleans, LA, USA, 28 November–8 December 2022.
Symbol | Meaning |
---|---|
$G$ | An undirected graph. |
$V$ | The vertex set of $G$. |
$E$ | The edge set of $G$. |
$N$ | The cardinality of $V$. |
$X$ | The input node features of $G$. |
$\mathbb{Z}$ | The ring of integers. |
Ker | The kernel of a homomorphism. |
Im | The image of a homomorphism. |
$f$ | A function $f\colon V \to \mathbb{R}$. |
$G_a$ | The sublevel subgraph of $G$ at $a$. |
$\mathbb{R}^{\mathrm{op}}$ | The reversed real line. |
$\bar{a}$ | An element $a \in \mathbb{R}$ regarded as an element of $\mathbb{R}^{\mathrm{op}}$. |
 | Copies of $\mathbb{R}$. |
 | Elements of the copies of $\mathbb{R}$. |
$G^{\bar{a}}$ | The superlevel subgraph of $G$ at $\bar{a}$. |
$\mathrm{Ord}_0$, $\mathrm{Ext}_0$, $\mathrm{Ext}_1$, $\mathrm{Rel}_1$ | The four Persistence Diagrams for EPH. |
Fil | The Filtration module. |
Vec | The Vectorization module. |
Agg | The Aggregation module. |
EPH | The process of computing EPH for all the filter functions. |
 | The number of filter functions. |
 | The dimension of vectorized representations of PD points. |
Datasets | REDDIT-B | IMDB-B | IMDB-M | COX2 | DHFR | NCI1 |
---|---|---|---|---|---|---|
Graphs | 2000 | 1000 | 1500 | 467 | 756 | 4110 |
Classes | 2 | 2 | 3 | 2 | 2 | 2 |
Avg. #Nodes | 429.63 | 19.77 | 13.00 | 41.22 | 42.43 | 29.87 |
Avg. #Edges | 497.75 | 96.53 | 65.94 | 43.45 | 44.54 | 32.30 |
Method | REDDIT-B | IMDB-B | IMDB-M | COX2 | DHFR | NCI1 |
---|---|---|---|---|---|---|
4-GCN | | | | | | |
3-GCN-1-TREPH | | | | | | |
4-GAT | | | | | | |
3-GAT-1-TREPH | | | | | | |
4-GIN | | | | | | |
3-GIN-1-TREPH | | | | | | |
Method | REDDIT-B | IMDB-B | IMDB-M | COX2 | DHFR | NCI1 |
---|---|---|---|---|---|---|
4-GCN | 19,298 | 19,298 | 19,331 | 19,298 | 19,298 | 19,298 |
3-GCN-1-TREPH | 102,891 | 102,891 | 102,924 | 102,891 | 102,891 | 102,891 |
4-GAT | 19,810 | 19,810 | 19,843 | 19,810 | 19,810 | 19,810 |
3-GAT-1-TREPH | 103,275 | 103,275 | 103,308 | 103,275 | 103,275 | 103,275 |
4-GIN | 36,454 | 36,454 | 36,487 | 36,454 | 36,454 | 36,454 |
3-GIN-1-TREPH | 115,758 | 115,758 | 115,791 | 115,758 | 115,758 | 115,758 |
Method | Pos | REDDIT-B | IMDB-B | IMDB-M | COX2 | DHFR | NCI1 |
---|---|---|---|---|---|---|---|
3-GCN-1-TREPH | 1 | 94.0 | 77.0 | 49.3 | 78.7 | 80.0 | 76.6 |
 | 2 | 94.5 | 79.0 | 48.0 | 83.0 | 82.7 | 78.4 |
 | 3 | 94.5 | 77.0 | 49.3 | 78.7 | 82.7 | 77.6 |
 | 4 | 96.0 | 78.0 | 49.3 | 63.8 | 86.7 | 78.4 |
3-GAT-1-TREPH | 1 | 91.5 | 76.0 | 46.0 | 78.7 | 81.3 | 76.2 |
 | 2 | 51.0 | 57.0 | 37.3 | 78.7 | 61.3 | 60.6 |
 | 3 | 50.0 | 50.0 | 33.3 | 78.7 | 38.7 | 50.1 |
 | 4 | 50.0 | 50.0 | 33.3 | 78.7 | 61.3 | 49.9 |
3-GIN-1-TREPH | 1 | 90.0 | 77.0 | 47.3 | 80.9 | 81.3 | 78.8 |
 | 2 | 94.5 | 79.0 | 50.0 | 89.4 | 88.0 | 79.8 |
 | 3 | 91.0 | 78.0 | 48.7 | 83.0 | 88.0 | 79.1 |
 | 4 | 94.0 | 78.0 | 50.0 | 85.1 | 77.3 | 80.8 |