Hierarchical Graph Neural Network: A Lightweight Image Matching Model with Enhanced Message Passing of Local and Global Information in Hierarchical Graph Neural Networks
Abstract
1. Introduction
Contribution
- Propose a hybrid clustering method, termed SC+PCA, which integrates the objective function of Principal Component Analysis (PCA) with that of the spectral clustering model to preserve both local and global information in graphs;
- Create a hierarchical GNN (h-GNN) by successively stacking graphs of progressively compressed size, produced by SC+PCA, at hierarchical levels, thereby increasing the network’s depth;
- Propose message-passing mechanisms that propagate local and global information among nodes and across graphs in four ways: first, by maintaining the standard message propagation of local information among neighboring nodes within a graph; second, by propagating messages as global information from the nodes of a graph to the nodes of the graph at the next level of the hierarchy in the bottom-up direction; third, by propagating messages as global information from the nodes of a graph to the nodes of the graph at the next level in the top-down direction; fourth, by propagating messages as global information from the nodes of the global graph at the highest level of the hierarchy directly to the nodes of the base graph at the lowest level.
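As a sketch of the SC+PCA idea in the first contribution, the following combines a spectral-clustering term (graph Laplacian, capturing local structure) with a PCA term (feature covariance, capturing global structure) in a single eigen-problem and then clusters the resulting embedding. The blending weight `alpha`, the farthest-point seeding, and the function name `sc_pca_cluster` are illustrative assumptions, not the paper’s implementation.

```python
import numpy as np

def sc_pca_cluster(X, A, n_clusters, alpha=0.5, iters=50):
    """Illustrative SC+PCA-style clustering: blend a spectral-clustering
    objective (graph Laplacian, local structure) with a PCA objective
    (feature covariance, global structure), then run k-means."""
    n = A.shape[0]
    # Symmetrically normalized Laplacian L = I - D^{-1/2} A D^{-1/2}
    d = A.sum(axis=1)
    d_is = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    L = np.eye(n) - d_is[:, None] * A * d_is[None, :]
    # Sample-side covariance of centered features (PCA term)
    Xc = X - X.mean(axis=0)
    C = (Xc @ Xc.T) / X.shape[1]
    # Minimize tr(H^T (L - alpha*C) H): small graph cut, large feature variance
    _, vecs = np.linalg.eigh(L - alpha * C)   # eigenvalues in ascending order
    H = vecs[:, :n_clusters]                  # combined spectral/PCA embedding
    # Deterministic farthest-point seeding, then plain k-means on the embedding
    centers = [H[0]]
    for _ in range(1, n_clusters):
        d2 = np.min([((H - c) ** 2).sum(-1) for c in centers], axis=0)
        centers.append(H[int(np.argmax(d2))])
    centers = np.array(centers)
    for _ in range(iters):
        labels = np.argmin(((H[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        for k in range(n_clusters):
            if np.any(labels == k):
                centers[k] = H[labels == k].mean(axis=0)
    return labels
```

Minimizing tr(H^T (L − αC) H) favors embeddings that both cut few graph edges (spectral term) and retain high feature variance (PCA term); clustering that embedding yields the node groupings that would form the mega-nodes of the next hierarchical level.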
2. Proposed Hierarchical Graph Neural Network (h-GNN)
2.1. Overview of the Proposed Approach
2.2. Technical Details of the Proposed Approach
2.2.1. Feature Detection and Description Step
2.2.2. Feature Matching Step
GNN Module
- Spectral Clustering with Principal Component Analysis (SC+PCA)
- Initiating Node Representations with Attention Mechanisms for each Graph of a Hierarchical Level
Message Passing in h-GNN
1. Within-level Message Propagation: This type adopts the standard message-passing mechanism of SuperGlue [2]. To further enrich the initial node representations, which are initiated with attention mechanisms, they are iteratively refined by aggregating information received from neighboring nodes within the same graph and updating each node accordingly; communication among the nodes of a graph at a particular hierarchical level thus updates that graph’s initial node representations. Following the general definition of the standard message-passing mechanism given by [32], the $k$-th iteration of within-level message passing improves the representation $h_v^{(k)}$ of node $v$ beyond its initial value as
$$h_v^{(k)} = U\Big(h_v^{(k-1)},\ \mathrm{AGG}_{u \in \mathcal{N}(v)}\, M\big(h_v^{(k-1)}, h_u^{(k-1)}, e_{uv}\big)\Big),$$
where $\mathcal{N}(v)$ is the neighborhood of $v$, $M$ and $U$ are the message and update functions, $\mathrm{AGG}$ is a permutation-invariant aggregator, and $e_{uv}$ is the feature of the edge between $u$ and $v$.
2. Bottom-up Message Propagation: After obtaining the node representations within all graphs, we perform the bottom-up type of message passing, in which directed messages propagate from each node in a graph to its corresponding mega-node in the mega-graph one level higher in the hierarchy. This enriches node information by aggregating it from lower levels to higher levels; the mega-graph’s node representations are updated through bottom-up message propagation, still following the general definition of GNN message passing in [32].
3. Type-1 Top-down Level Message Propagation: This is the direct reverse of bottom-up message passing: to update the node representations of a graph, messages are directed from each mega-node in the mega-graph to its corresponding nodes in the graph one level lower. This type of message passing updates the lower graph’s node representations by aggregating discriminative features of nodes from higher levels to lower levels in the hierarchy.
4. Type-2 Top-down Level Message Propagation: In this type, we update the node representations of the original nodes of the base graph with the node representation of the global node of the global graph. This updates node information by aggregating it directly from the highest level of the hierarchy to the lowest level.
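The four propagation types above can be sketched in plain NumPy, treating a graph’s node representations as a matrix `H`, its adjacency as `A`, and a soft cluster-assignment matrix `S` that maps each node to its mega-node. The mixing weight `beta`, the row normalization, the `tanh` nonlinearity, and all function names are illustrative assumptions rather than the paper’s exact update rules.

```python
import numpy as np

def row_norm(M):
    """Row-normalize so each receiver averages over its senders."""
    s = M.sum(axis=1, keepdims=True)
    return M / np.maximum(s, 1e-12)

def within_level(H, A, W):
    """Type 1: standard message passing among neighbors within one graph."""
    return np.tanh(row_norm(A) @ H @ W)

def bottom_up(H_low, S):
    """Type 2: pool each cluster's nodes into its mega-node one level up.
    S has shape (n_low, n_high); each mega-node averages its members."""
    return row_norm(S.T) @ H_low

def top_down_type1(H_low, H_high, S, beta=0.5):
    """Type 3: broadcast mega-node features back to their member nodes."""
    return (1.0 - beta) * H_low + beta * (S @ H_high)

def top_down_type2(H_base, h_global, beta=0.5):
    """Type 4: inject the single global-graph node into every base-graph node."""
    return (1.0 - beta) * H_base + beta * h_global[None, :]
```

In a full pass, within-level updates run at every level, bottom-up pooling builds each next level, Type-1 returns refined mega-node features to their members, and Type-2 finally broadcasts the global node to the base graph.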
2.2.3. Matching Module
2.2.4. Loss Functions
3. Experiments and Discussions
3.1. Platform
3.2. Implementation Setup
3.3. Evaluations
3.3.1. Homography Estimation
3.3.2. Tests on RANSAC Variants for Outdoor and Indoor Camera Pose Estimation
3.3.3. Qualitative Analysis of the Open-Source Dataset
3.3.4. Computation Time and Memory Complexities
3.4. Ablation Study
3.4.1. h-GNN’s Performance in SfM Pipeline
3.4.2. Effects of SC+PCA Clustering Technique on h-GNN
3.4.3. Effects of Message-Passing (MP) Mechanism on h-GNN
3.5. Discussions
4. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
1. DeTone, D.; Malisiewicz, T.; Rabinovich, A. SuperPoint: Self-Supervised Interest Point Detection and Description. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, Salt Lake City, UT, USA, 18–23 June 2018; pp. 224–236.
2. Sarlin, P.-E.; DeTone, D.; Malisiewicz, T.; Rabinovich, A. SuperGlue: Learning Feature Matching with Graph Neural Networks. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 4938–4947.
3. Lindenberger, P.; Sarlin, P.-E.; Pollefeys, M. LightGlue: Local Feature Matching at Light Speed. In Proceedings of the 2023 IEEE/CVF International Conference on Computer Vision, Paris, France, 1–6 October 2023; pp. 17627–17638.
4. Zaman, A.; Yangyu, F.; Irfan, M.; Ayub, M.S.; Guoyun, L.; Shiya, L. LifelongGlue: Keypoint matching for 3D reconstruction with continual neural networks. Expert Syst. Appl. 2022, 195, 116613.
5. Bellavia, F. Image Matching by Bare Homography. IEEE Trans. Image Process. 2024, 33, 696–708.
6. Xu, M.; Wang, Y.; Xu, B.; Zhang, J.; Ren, J.; Huang, Z.; Poslad, S.; Xu, P. A critical analysis of image-based camera pose estimation techniques. Neurocomputing 2024, 570, 127125.
7. Cao, M.; Jia, W.; Lv, Z.; Zheng, L.; Liu, X. SuperPixel-Based Feature Tracking for Structure from Motion. Appl. Sci. 2019, 9, 2961.
8. Liu, Y.; Huang, K.; Li, J.; Li, X.; Zeng, Z.; Chang, L.; Zhou, J. AdaSG: A Lightweight Feature Point Matching Method Using Adaptive Descriptor with GNN for VSLAM. Sensors 2022, 22, 5992.
9. Salimpour, S.; Queralta, J.P.; Westerlund, T. Self-calibrating anomaly and change detection for autonomous inspection robots. In Proceedings of the 6th IEEE International Conference on Robotic Computing, Naples, Italy, 5–7 December 2022; pp. 207–214.
10. Le, V.P.; De Tran, C. Key-point matching with post-filter using SIFT and BRIEF in logo spotting. In Proceedings of the 2015 IEEE International Conference on Computing & Communication Technologies, Research, Innovation, and Vision for the Future, Can Tho, Vietnam, 25–28 January 2015; pp. 89–93.
11. Xu, Y.; Li, Y.J.; Weng, X.; Kitani, K. Wide-baseline multi-camera calibration using person re-identification. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 13134–13143.
12. Zhou, Y.; Guo, Y.; Lin, K.P.; Yang, F.; Li, L. USuperGlue: An unsupervised UAV image matching network based on local self-attention. Soft Comput. 2023, 1–21.
13. Ma, J.; Jiang, X.; Fan, A.; Jiang, J.; Yan, J. Image matching from handcrafted to deep features: A survey. Int. J. Comput. Vis. 2021, 129, 23–79.
14. Khemani, B.; Patil, S.; Kotecha, K.; Tanwar, S. A review of graph neural networks: Concepts, architectures, techniques, challenges, datasets, applications, and future directions. J. Big Data 2024, 11, 18.
15. Xu, J.; Chen, J.; You, S.; Xiao, Z.; Yang, Y.; Lu, J. Robustness of deep learning models on graphs: A survey. AI Open 2021, 2, 69–78.
16. Wu, Z.; Pan, S.; Chen, F.; Long, G.; Zhang, C.; Philip, S.Y. A comprehensive survey on graph neural networks. IEEE Trans. Neural Netw. Learn. Syst. 2020, 32, 4–24.
17. Gilmer, J.; Schoenholz, S.S.; Riley, P.F.; Vinyals, O.; Dahl, G.E. Message Passing Neural Networks. In Machine Learning Meets Quantum Physics; Schütt, K., Chmiela, S., von Lilienfeld, O., Tkatchenko, A., Tsuda, K., Müller, K.R., Eds.; Lecture Notes in Physics; Springer: Berlin/Heidelberg, Germany, 2020; Volume 968, pp. 199–214.
18. Ahmed, A.; Shervashidze, N.; Narayanamurthy, S.M.; Josifovski, V.; Smola, A.J. Distributed large-scale natural graph factorization. In Proceedings of the 22nd International World Wide Web Conference, Rio de Janeiro, Brazil, 13–17 May 2013; pp. 37–48.
19. Alon, U.; Yahav, E. On the bottleneck of graph neural networks and its practical implications. In Proceedings of the 9th International Conference on Learning Representations, Vienna, Austria, 3–7 May 2021; pp. 1–16. Available online: https://openreview.net/pdf?id=i80OPhOCVH2 (accessed on 10 April 2024).
20. Zhong, Z.; Li, C.T.; Pang, J. Hierarchical message-passing graph neural networks. Data Min. Knowl. Discov. 2023, 37, 381–408.
21. Oono, K.; Suzuki, T. Graph neural networks exponentially lose expressive power for node classification. In Proceedings of the 8th International Conference on Learning Representations, Addis Ababa, Ethiopia, 26–30 April 2020; pp. 1–37. Available online: https://openreview.net/forum?id=S1ldO2EFPr (accessed on 10 April 2024).
22. Itoh, T.D.; Kubo, T.; Ikeda, K. Multi-level attention pooling for graph neural networks: Unifying graph representations with multiple localities. Neural Netw. 2022, 145, 356–373.
23. Xu, K.; Hu, W.; Leskovec, J.; Jegelka, S. How powerful are graph neural networks? In Proceedings of the 7th International Conference on Learning Representations, New Orleans, LA, USA, 6–9 May 2019; pp. 1–17. Available online: https://openreview.net/pdf?id=ryGs6iA5Km (accessed on 10 April 2024).
24. Xu, K.; Li, C.; Tian, Y.; Sonobe, T.; Kawarabayashi, K.I.; Jegelka, S. Representation learning on graphs with jumping knowledge networks. In Proceedings of the 35th International Conference on Machine Learning, Stockholm, Sweden, 10–15 June 2018; pp. 5453–5462. Available online: https://proceedings.mlr.press/v80/xu18c.html (accessed on 10 April 2024).
25. Li, Q.; Han, Z.; Wu, X. Deeper insights into graph convolutional networks for semi-supervised learning. In Proceedings of the 2018 AAAI Conference on Artificial Intelligence, New Orleans, LA, USA, 2–7 February 2018; Volume 32, pp. 3538–3545.
26. Zhou, S.; Liu, X.; Zhu, C.; Liu, Q.; Yin, J. Spectral clustering-based local and global structure preservation for feature selection. In Proceedings of the 2014 International Joint Conference on Neural Networks, Beijing, China, 6–11 July 2014; pp. 550–557.
27. Wang, F.; Zhang, C.; Li, T. Clustering with local and global regularization. IEEE Trans. Knowl. Data Eng. 2009, 21, 1665–1678.
28. Schaeffer, S.E. Graph clustering. Comput. Sci. Rev. 2007, 1, 27–64.
29. Shun, J.; Blelloch, G.E. Ligra: A lightweight graph processing framework for shared memory. In Proceedings of the 18th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming, Shenzhen, China, 23–27 February 2013; pp. 135–146.
30. Li, G.; Rao, W.; Jin, Z. Efficient compression on real world directed graphs. In Proceedings of the 1st International Joint Conference on Web and Big Data (APWeb-WAIM), Beijing, China, 7–9 July 2017; pp. 116–131.
31. Ma, E.J. Computational Representations of Message Passing. Essays on Data Science, 2021. Available online: https://ericmjl.github.io/essays-on-data-science/machine-learning/message-passing (accessed on 2 January 2024).
32. Fan, X.; Gong, M.; Wu, Y.; Qin, A.K.; Xie, Y. Propagation enhanced neural message passing for graph representation learning. IEEE Trans. Knowl. Data Eng. 2021, 35, 1952–1964.
33. Tu, W.; Guan, R.; Zhou, S.; Ma, C.; Peng, X.; Cai, Z.; Liu, Z.; Cheng, J.; Liu, X. Attribute-Missing Graph Clustering Network. In Proceedings of the AAAI Conference on Artificial Intelligence, Vancouver, BC, Canada, 20–27 February 2024; Volume 38, pp. 15392–15401.
34. Revaud, J.; De Souza, C.; Humenberger, M.; Weinzaepfel, P. R2D2: Repeatable and reliable detector and descriptor. arXiv 2019, arXiv:1906.06195.
35. Balntas, V.; Lenc, K.; Vedaldi, A.; Mikolajczyk, K. HPatches: A Benchmark and Evaluation of Handcrafted and Learned Local Descriptors. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 3852–3861.
36. Radenović, F.; Iscen, A.; Tolias, G.; Avrithis, Y.; Chum, O. Revisiting Oxford and Paris: Large-scale image retrieval benchmarking. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 5706–5715.
37. Lowe, D.G. Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 2004, 60, 91–110.
38. Yi, K.M.; Trulls, E.; Ono, Y.; Lepetit, V.; Salzmann, M.; Fua, P. Learning to find good correspondences. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 2666–2674.
39. Zhang, J.; Sun, D.; Luo, Z.; Yao, A.; Zhou, L.; Shen, T.; Chen, Y.; Liao, H.; Quan, L. Learning two-view correspondences and geometry using order-aware network. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 5845–5854.
40. Chen, H.; Luo, Z.; Zhang, J.; Zhou, L.; Bai, X.; Hu, Z.; Tai, C.-L.; Quan, L. Learning to Match Features with Seeded Graph Matching Network. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 11–17 October 2021; pp. 6301–6310.
41. Li, Z.; Snavely, N. MegaDepth: Learning single-view depth prediction from internet photos. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 2041–2050.
42. Sun, J.; Shen, Z.; Wang, Y.; Bao, H.; Zhou, X. LoFTR: Detector-Free Local Feature Matching with Transformers. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 8922–8931.
43. Xiao, J.; Owens, A.; Torralba, A. SUN3D: A Database of Big Spaces Reconstructed Using SfM and Object Labels. In Proceedings of the IEEE International Conference on Computer Vision, Sydney, Australia, 1–8 December 2013; pp. 1625–1632.
44. Wang, W.; Sun, Y.; Liu, Z.; Qin, Z.; Wang, C.; Qin, J. Image matching via the local neighborhood for low inlier ratio. J. Electron. Imaging 2022, 31, 023039.
45. Jiang, X.; Wang, Y.; Fan, A.; Ma, J. Learning for mismatch removal via graph attention networks. ISPRS J. Photogramm. Remote Sens. 2022, 190, 181–195.
46. Truong, Q.; Chin, P. Weisfeiler and Lehman Go Paths: Learning Topological Features via Path Complexes. In Proceedings of the AAAI Conference on Artificial Intelligence, Vancouver, BC, Canada, 20–27 February 2024; Volume 38, pp. 15382–15391.
Each cell lists the HPatches result followed by the Oxford and Paris result; triples are P/R/A values or AUC at the 3/5/10 px thresholds.

| Matcher | P/R/A (HPatches) (Oxford and Paris) | AUC-RANSAC @ 3/5/10 px (HPatches) (Oxford and Paris) | AUC-Weighted DLT @ 3/5/10 px (HPatches) (Oxford and Paris) |
|---|---|---|---|
| **SuperPoint** | | | |
| NN w/mutual checks | (31.7/37.4/41.1) (33.4/40.1/46.8) | (32.1/39.5/42.6) (33.4/43.3/48.6) | (0.0/1.7/1.8) (0.1/1.3/1.6) |
| NN w/PointCN | (52.1/55.3/57.8) (59.9/57.7/61.9) | (33.1/60.1/73.1) (33.8/73.8/78.4) | (15.8/35.3/44.6) (17.2/38.1/47.6) |
| NN w/OA-Net | (57.9/60.6/63.2) (67.8/71.4/61.5) | (34.2/62.9/74.8) (33.9/73.1/78.5) | (19.8/1.3/49.6) (22.5/1.3/53.6) |
| SGMNet | (63.0/64.6/67.9) (72.5/73.3/63.2) | (36.2/63.2/76.8) (35.5/73.1/79.9) | (32.3/58.1/69.8) (33.2/71.2/74.3) |
| SuperGlue | (66.2/74.4/70.3) (71.7/75.1/74.1) | (37.3/63.5/79.4) (36.5/73.2/80.7) | (34.5/60.7/74.6) (34.1/71.6/74.4) |
| LightGlue | (67.3/83.0/83.7) (73.7/77.2/73.7) | (38.1/69.8/80.1) (36.8/74.1/81.3) | (38.3/76.7/79.1) (36.7/72.5/74.5) |
| LifelongGlue | (81.2/85.3/85.8) (88.6/83.3/87.4) | (40.5/69.4/80.2) (38.9/76.6/82.7) | (38.8/78.2/80.1) (36.4/72.8/75.8) |
| **Our Feature Detector and Descriptor** | | | |
| h-GNN | (85.4/87.3/86.2) (90.4/83.1/88.0) | (39.9/79.4/82.7) (42.9/80.8/85.2) | (39.7/72.7/83.1) (38.5/74.2/78.4) |
| **Dense** | | | |
| LoFTR | (94.9/92.2/92.7) (89.8/91.1/90.8) | (41.9/78.5/81.3) (45.4/78.1/84.4) | (39.1/72.2/85.3) (37.9/73.6/77.8) |
| Baseline | RANSAC Variant | Outdoor (MegaDepth) | Indoor (SUN3D) |
|---|---|---|---|
| LFGC-Net | RANSAC | 35.86 | 17.78 |
| | DegenSAC | 33.43 | 19.81 |
| | MAGSAC | 38.48 | 18.75 |
| | PROSAC | 32.86 | 16.86 |
| | GC-RANSAC | 33.12 | 15.29 |
| | LO-RANSAC | 33.78 | 16.03 |
| Sparse GANet | RANSAC | 46.36 | 21.5 |
| | DegenSAC | 44.24 | 23.78 |
| | MAGSAC | 48.44 | 22.35 |
| | PROSAC | 42.05 | 19.36 |
| | GC-RANSAC | 42.19 | 17.86 |
| | LO-RANSAC | 44.64 | 19.55 |
| NN w/OA-Net | RANSAC | 35.85 | 18.67 |
| | DegenSAC | 34.68 | 19.32 |
| | MAGSAC | 37.22 | 18.98 |
| | PROSAC | 34.25 | 17.17 |
| | GC-RANSAC | 34.38 | 15.24 |
| | LO-RANSAC | 34.42 | 15.02 |
| SGMNet | RANSAC | 43.46 | 17.21 |
| | DegenSAC | 41.38 | 20.62 |
| | MAGSAC | 44.36 | 18.18 |
| | PROSAC | 37.24 | 15.93 |
| | GC-RANSAC | 36.58 | 14.56 |
| | LO-RANSAC | 38.74 | 17.14 |
| SuperGlue | RANSAC | 46.02 | 21.93 |
| | DegenSAC | 44.8 | 22.23 |
| | MAGSAC | 47.8 | 22.57 |
| | PROSAC | 42.5 | 20.88 |
| | GC-RANSAC | 41.2 | 19.31 |
| | LO-RANSAC | 45.6 | 20.91 |
| LightGlue | RANSAC | 44.44 | 21.43 |
| | DegenSAC | 43.12 | 22.98 |
| | MAGSAC | 46.67 | 23.02 |
| | PROSAC | 40.42 | 18.73 |
| | GC-RANSAC | 41.95 | 17.81 |
| | LO-RANSAC | 42.02 | 19.94 |
| LifelongGlue | RANSAC | 43.61 | 21.14 |
| | DegenSAC | 42.46 | 23.66 |
| | MAGSAC | 44.86 | 21.91 |
| | PROSAC | 37.76 | 18.62 |
| | GC-RANSAC | 39.13 | 17.91 |
| | LO-RANSAC | 42.43 | 19.69 |
| h-GNN | RANSAC | 46.94 | 21.46 |
| | DegenSAC | 47.35 | 22.79 |
| | MAGSAC | 49.38 | 21.74 |
| | PROSAC | 42.96 | 19.93 |
| | GC-RANSAC | 42.31 | 21.19 |
| | LO-RANSAC | 45.55 | 18.87 |
Semper Statue Dresden from Strach’s Dataset:

| Matcher | Keypoints | Matches |
|---|---|---|
| LoFTR | 1760 | 1624 |
| h-GNN | 1721 | 1589 |
| SparseGANet | 927 | 896 |
| SuperPoint+SuperGlue | 732 | 477 |
| SuperPoint+LightGlue | 707 | 470 |
| SuperPoint+LifelongGlue | 738 | 458 |
| LFGC-Net | 631 | 389 |
| NN w/PointCN | 622 | 355 |
| NN w/OA-Net | 583 | 336 |
| NN w/mutual checks | 574 | 333 |
| Method | Pose Estimation | | |
|---|---|---|---|
| **Fixed Clusters** | | | |
| h-GNN 8-L @2k | 25.45 | 36.63 | 54.73 |
| h-GNN 16-L @2k | 33.61 | 49.24 | 66.12 |
| h-GNN 32-L @2k | 38.82 | 56.81 | 73.68 |
| Varied Cluster @1k | 46.94 | 61.73 | 77.28 |
| Method | Pose Estimation | | |
|---|---|---|---|
| h-GNN w/only Within-level MP | 42 | 56.93 | 71.87 |
| h-GNN w/only bottom-up MP | 42.65 | 57.35 | 73.03 |
| h-GNN w/only Type-1 MP | 42.63 | 57.36 | 73.03 |
| h-GNN w/only Type-2 MP | 43.33 | 58.11 | 74.55 |
| Full h-GNN configuration | 46.94 | 61.73 | 77.28 |
Opanin Gyamfi, E.; Qin, Z.; Mantebea Danso, J.; Adu-Gyamfi, D. Hierarchical Graph Neural Network: A Lightweight Image Matching Model with Enhanced Message Passing of Local and Global Information in Hierarchical Graph Neural Networks. Information 2024, 15, 602. https://doi.org/10.3390/info15100602