Mechanism and Modeling of Graph Convolutional Networks

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Networks".

Deadline for manuscript submissions: 15 April 2025 | Viewed by 4335

Special Issue Editors

School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 610039, China
Interests: machine learning; medical image analysis; graph learning; artificial neural networks; graph-related neural networks
Special Issues, Collections and Topics in MDPI journals
School of Mathematical and Computational Sciences, Massey University, Auckland 1142, New Zealand
Interests: clustering analysis; spectral learning; graph machine learning
CBICA, University of Pennsylvania, Philadelphia, PA 19104, USA
Interests: medical image registration; medical image segmentation; machine learning; deep learning

Special Issue Information

Dear Colleagues,

Graph convolutional networks (GCNs) have developed rapidly, leading to the creation of diverse models in fields such as biomedicine, genetic analysis, and pattern recognition. GCNs are a type of deep learning model that operates on graph-structured data: they capture the local structure of the data and identify patterns and regularities through tasks including node classification, graph classification, and link prediction. Moreover, GCNs can not only learn node representations that capture the topology among the data, but these representations can also serve as features for downstream tasks such as classification and clustering. However, GCNs still face several open issues. First, it is not convenient to make predictions on unseen data, since the designed graph only encodes correlations among the training data. Second, GCNs consume considerable storage to hold the graph structure, so the size of the graph must be taken into account. Third, for both homogeneous and heterogeneous graphs, the different kinds of data must be considered for the specific tasks at hand. To address these issues and the existing research challenges, this Special Issue aims to encourage scholars to design novel works based on GCNs and to explore the mechanism and modeling of GCN frameworks. Moreover, high-quality submissions involving theoretical analysis and the interpretability of GCNs are welcome.
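For readers new to the topic, the layer-wise propagation rule shared by most GCN variants can be sketched in a few lines. The following NumPy example is a generic illustration with a toy graph and random weights (all names and shapes are illustrative assumptions), not the method of any particular submission:

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN propagation step: H' = ReLU(D^-1/2 (A + I) D^-1/2 H W)."""
    A_hat = A + np.eye(A.shape[0])           # add self-loops
    d_inv_sqrt = np.diag(A_hat.sum(axis=1) ** -0.5)
    A_norm = d_inv_sqrt @ A_hat @ d_inv_sqrt  # symmetric normalization
    return np.maximum(A_norm @ H @ W, 0)      # ReLU activation

# toy 3-node path graph: edges 0-1 and 1-2
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
H = np.eye(3)                                 # one-hot node features
W = np.random.default_rng(0).normal(size=(3, 2))
print(gcn_layer(A, H, W).shape)               # (3, 2): one 2-d embedding per node
```

Each layer mixes every node's features with those of its neighbors, which is how a stack of such layers captures the local graph structure described above.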

Below is an incomplete list of potential topics to be covered in the Special Issue:

  • Theory construction and analysis of GCNs;
  • Kernel-based, metric-based, and causal inference-based learning for GCNs;
  • Explainable representation learning for GCNs;
  • Supervised, semi-supervised, unsupervised, transfer, and reinforcement-based learning for GCNs;
  • Missing information imputation of GCN models;
  • Safety and reliability of GCNs with representation learning;
  • Sub-graph learning for GCNs;
  • Federated learning in GCN models;
  • Homogeneous graphs and heterogeneous graphs for GCNs.

Dr. Rongyao Hu
Dr. Tong Liu
Dr. Jiong Wu
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • theory construction and analysis of GCNs
  • kernel-based, metric-based, and causal inference-based learning for GCNs
  • explainable representation learning for GCNs
  • supervised, semi-supervised, unsupervised, transfer, and reinforcement-based learning for GCNs
  • missing information imputation of GCN models
  • safety and reliability of GCNs with representation learning
  • sub-graph learning for GCNs
  • federated learning in GCN models
  • homogeneous graphs and heterogeneous graphs for GCNs

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (2 papers)


Research

20 pages, 7637 KiB  
Article
TEM Strata Inversion Imaging with IP Effect Based on Enhanced GCN by Extracting Long-Dependency Features
by Ruiheng Li, Yi Di, Hao Tian and Lu Gan
Electronics 2023, 12(19), 4138; https://doi.org/10.3390/electronics12194138 - 4 Oct 2023
Viewed by 1154
Abstract
Utilizing neural network models to invert time-domain electromagnetic signals enables the rapid acquisition of electrical structures, a non-intrusive method widely applied in geological and environmental surveys. However, traditional multi-layer perceptron (MLP) feature extraction is limited, struggling with cases involving complex electrical media with induced polarization effects, which restricts the inversion model's predictive capacity. A graph-topology-based neural network model for strata electrical structure imaging with long-dependency feature extraction was proposed. We employ graph convolutional networks (GCNs) to capture non-Euclidean features, such as resistivity-thickness coupling, and Long Short-Term Memory (LSTM) networks to capture long-dependency features; the LSTM compensates for the GCN's constraints in capturing distant node relationships. In case studies with 5-strata and 9-strata resistivity models containing induced polarization effects, the proposed model, which combines time-domain features with graph-topology-based electrical structure extraction, significantly outperforms traditional MLP networks: the mean absolute error of the inversion misfit is reduced from 10–20% to around 2–3%.
(This article belongs to the Special Issue Mechanism and Modeling of Graph Convolutional Networks)
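The abstract above describes pairing per-time-step GCN feature extraction with an LSTM that carries long-range dependencies across the signal. The general pattern can be sketched as follows; this is a minimal NumPy illustration under assumed toy shapes (chain graph of strata, random weights, mean pooling), not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(42)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gcn_layer(A, H, W):
    # symmetrically normalized propagation with self-loops
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = np.diag(A_hat.sum(axis=1) ** -0.5)
    return np.tanh(d_inv_sqrt @ A_hat @ d_inv_sqrt @ H @ W)

def lstm_step(x, h, c, Wx, Wh, b):
    # gates packed as [input, forget, output, candidate]
    i, f, o, g = np.split(x @ Wx + h @ Wh + b, 4)
    c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
    h = sigmoid(o) * np.tanh(c)
    return h, c

# toy setting: 5 "strata" nodes in a chain graph, 8 features per time step
n, feat, hid = 5, 8, 16
A = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
X = rng.normal(size=(10, n, feat))        # 10 time steps of node features
Wg = rng.normal(size=(feat, hid)) * 0.1   # GCN weights
Wx = rng.normal(size=(hid, 4 * hid)) * 0.1
Wh = rng.normal(size=(hid, 4 * hid)) * 0.1
b = np.zeros(4 * hid)

h = c = np.zeros(hid)
for X_t in X:                             # GCN per step, LSTM across steps
    graph_emb = gcn_layer(A, X_t, Wg).mean(axis=0)  # pool node embeddings
    h, c = lstm_step(graph_emb, h, c, Wx, Wh, b)
print(h.shape)                            # (16,): final sequence representation
```

The GCN mixes information between adjacent strata at each time step, while the recurrent state threads those per-step embeddings together, which is the division of labor the abstract attributes to the two components.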

13 pages, 774 KiB  
Article
A Next POI Recommendation Based on Graph Convolutional Network by Adaptive Time Patterns
by Jiang Wu, Shaojie Jiang and Lei Shi
Electronics 2023, 12(5), 1241; https://doi.org/10.3390/electronics12051241 - 4 Mar 2023
Cited by 3 | Viewed by 2148
Abstract
Users’ activities in location-based social networks (LBSNs) can be naturally transformed into graph-structured data, and more advanced graph representation learning techniques can be adopted for analyzing user preferences, which benefits a variety of real-world applications. This paper focuses on the next point-of-interest (POI) recommendation task in LBSNs. We argue that existing graph-based POI recommendation methods only consider user preferences from several individual contextual factors, ignoring the influence of interactions between different contextual information. This practice leads to the suboptimal learning of user preferences. To address this problem, we propose a novel method called hierarchical attention-based graph convolutional network (HAGCN) for next POI recommendation, which leverages graph convolutional networks to extract representations of POIs from predefined graphs via different time patterns and develops a hierarchical attention mechanism to adaptively learn user preferences from the interactions between different contextual data. Moreover, HAGCN uses dynamic preference estimation to precisely learn user preferences. We conduct extensive experiments on real-world datasets to evaluate the performance of HAGCN against representative baseline models in the field of next POI recommendation. The experimental results demonstrate the superiority of our proposed method on the next POI recommendation task.
(This article belongs to the Special Issue Mechanism and Modeling of Graph Convolutional Networks)
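The abstract above combines per-time-pattern GCN embeddings through an attention mechanism. The core idea of attention-weighted fusion of several graph views can be sketched as follows; this is a generic NumPy illustration (random toy graphs, mean pooling, a single learned query vector, all assumed for the sketch), not the published HAGCN architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def gcn_layer(A, H, W):
    # symmetrically normalized propagation with self-loops
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = np.diag(A_hat.sum(axis=1) ** -0.5)
    return np.tanh(d_inv_sqrt @ A_hat @ d_inv_sqrt @ H @ W)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# toy setting: 6 POIs, 3 time-pattern graphs (e.g., hourly/daily/weekly)
n_poi, feat, hid = 6, 4, 8
graphs = [rng.integers(0, 2, size=(n_poi, n_poi)) for _ in range(3)]
graphs = [np.triu(g, 1) + np.triu(g, 1).T for g in graphs]  # symmetric, no self-loops
H = rng.normal(size=(n_poi, feat))      # shared POI features
W = rng.normal(size=(feat, hid)) * 0.1  # shared GCN weights
q = rng.normal(size=hid)                # attention query vector

# one GCN embedding per time-pattern graph
views = np.stack([gcn_layer(A, H, W) for A in graphs])  # (3, n_poi, hid)
scores = softmax(views.mean(axis=1) @ q)                # one weight per view
fused = np.einsum('v,vnh->nh', scores, views)           # attention-weighted fusion
print(fused.shape)                                      # (6, 8): fused POI embeddings
```

The softmax scores let the model adaptively emphasize whichever time pattern is most informative, which is the role the abstract assigns to the attention mechanism.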
