Article

Integration of Semantic and Topological Structural Similarity Comparison for Entity Alignment without Pre-Training

Yao Liu and Ye Liu
1 School of Computer Science and Technology, Beihang University, Beijing 100191, China
2 School of Instrumentation and Optoelectronic Engineering, Beihang University, Beijing 100191, China
* Author to whom correspondence should be addressed.
Electronics 2024, 13(11), 2036; https://doi.org/10.3390/electronics13112036
Submission received: 1 May 2024 / Revised: 17 May 2024 / Accepted: 21 May 2024 / Published: 23 May 2024
(This article belongs to the Special Issue Knowledge Information Extraction Research)

Abstract

Entity alignment (EA) is a critical task in integrating diverse knowledge graph (KG) data and plays a central role in data-driven AI applications. Traditional EA approaches rely on entity embeddings, but their effectiveness is limited by scarce KG input data and representation learning techniques. Large language models have shown promise, but face challenges such as high hardware requirements, large model sizes and computational inefficiency, which limit their applicability. To overcome these limitations, we propose an entity-alignment model that compares the similarity between entities by capturing both semantic and topological information to enable the alignment of entities with high similarity. First, we analyze descriptive information to quantify semantic similarity, including individual features such as types and attributes. Then, for topological analysis, we introduce four conditions based on graph connectivity and structural patterns to determine subgraph similarity within three hops of the entity’s neighborhood, thereby improving accuracy. Finally, we integrate semantic and topological similarity using a weighted approach that considers dataset features. Our model requires no pre-training and is designed to be compact and generalizable to different datasets. Experimental results on four standard EA datasets validate the effectiveness of our proposed model.

1. Introduction

The entity-alignment (EA) [1,2] task involves matching and aligning entities from various data sources [3] or knowledge graphs (KGs) [4,5,6]. Its objective is to establish associations between entities that share the same semantic meaning. This enables accurate correspondence when performing operations such as data integration [7], querying [8] and analysis [9] across different data sources. EA plays a crucial role in natural language processing, information extraction [10], intelligent question-answering systems [11] and social network analysis [12].
In the field of EA, knowledge-representation methods are primarily employed to obtain the vector information of entities for alignment purposes. Existing EA methods can be broadly categorized into three groups: translation-based methods, graph neural network (GNN)-based methods and other approaches. Translation-based methods, such as MTransE [13] and BootEA [14], use the TransE [15] framework to transform individual elements in the KG into semantically rich vector representations. This facilitates tasks such as semantic relevance reasoning, entity relationship reasoning and knowledge graph applications. GNN-based methods [16], represented by models such as GCN-Align [17], RDGCN [18] and Dual-AMN [19], generate entity embeddings by aggregating neighborhood information using graph neural networks. These models effectively capture structural information and learn entity embeddings. Other GNN-based methods, such as TEA-GNN [20], TREA [21] and STEA [22], further improve performance by incorporating temporal features. Other approaches, including Fualign [23], Simple-HHEA [24] and BERT-INT [25], address the challenge of heterogeneity in KGs by exploiting side information. However, these three types of EA methods have significant limitations. They rely heavily on the performance of the Knowledge Representation Learning (KRL) model and the quality of the datasets used. Additionally, when different data sources have distinct mapping spaces within the KRL model, performing similarity comparisons becomes challenging.
Recently, large language models (LLMs) [26] such as XLNet [27], GPT [28], Transformer-XL [29] and RoBERTa [30] have demonstrated exceptional performance in various natural language processing tasks. These transformer-based models take advantage of rich contextual information from large corpora and have proven highly effective in improving entity-related tasks in KGs. As a result, EA models based on LLMs have emerged, such as LLMEA [7], ChatEA [31] and BERT-INT [25], which effectively leverage the semantic and ancillary information provided by LLMs to perform EA tasks. However, it is important to recognize that while LLMs trained on large-scale textual data capture rich linguistic patterns and statistical information, they may have limitations when dealing with ambiguity and polysemy. In such cases, LLMs tend to favor the most frequent or common meanings, potentially overlooking other possible interpretations in the given context. This limitation [32] becomes particularly significant in entity-alignment tasks, where a precise understanding of contextual information is crucial for accurately defining an entity. Alignment accuracy depends directly on this understanding, and therefore the limitations of LLMs can affect overall alignment performance.
In this paper, we present our entity-alignment model, which integrates semantic and structural information without the need for pre-training. The descriptive information of an entity summarizes its key facts, e.g., for Beijing: “Beijing, the capital and largest city of China, with geographic coordinates (39°54′ N, 116°23′ E). Beijing holds significant political, cultural, and educational importance as the center of China…”. Utilizing this descriptive information, we compare entities for semantic similarity. However, entities in the graph not only have semantic attributes but also exhibit structural characteristics, including the attributes and features of neighboring nodes. To capture these characteristics, we propose a structure-based similarity-comparison module. This module takes into account the features of neighboring nodes and the attributes of edges, employing conditional comparisons to determine whether entities in different graphs share similar structural features. Finally, the contributions of the two modules are effectively fused by a weighted-sum approach that takes into consideration the characteristics of the dataset. We evaluate the effectiveness of our approach on several EA datasets, including the traditional DBP15K(EN-FR) and DBP-WIKI datasets, as well as the more challenging and practical ICEWS-WIKI and ICEWS-YAGO datasets. These datasets exhibit a high degree of KG heterogeneity and complexity in capturing correlations between KGs. Through extensive experiments, we demonstrate that our method performs competitively with or outperforms existing state-of-the-art EA models. Our contributions can be summarized as follows:
  • We propose a novel EA model that employs both semantic and structural similarity comparisons. The entities are enhanced through the integration of semantic and structural information, thereby achieving highly accurate alignment. Furthermore, weighting factors are introduced to effectively balance the contributions of the two models, ensuring optimal alignment across different dataset features.
  • We conduct EA task-based experiments on four datasets, and the results of these experiments demonstrate the effectiveness of our model.
The remainder of the paper is structured as follows: Section 2 provides a comprehensive explanation of our model. Section 3 presents the experimental setup. Section 4 shows the results and thoroughly analyzes these results. Section 5 concludes the paper.

2. Methods

This section describes the details of our approach. The overall framework is shown in Figure 1. Our model consists of three primary modules: semantic-based similarity comparison, topology-based similarity comparison and fusion of semantic and structural similarity comparison.

2.1. Preliminaries

  • Knowledge graph (KG). We define a KG as $G = \{E, R\}$, where $E$ represents the set of entities and $R$ represents the set of relations. In a KG, a fact or edge is represented as a triple $(h, r, t)$, where $h \in E$ denotes the head entity, $t \in E$ denotes the tail entity and $r \in R$ denotes the relation between them. The embedding vectors for $h$, $r$ and $t$ are denoted by $\mathbf{h}$, $\mathbf{r}$ and $\mathbf{t}$, respectively, using bold characters.
  • Entity alignment (EA). EA is a crucial task in KG research. Given two KGs, $G_1 = (E_1, R_1)$ and $G_2 = (E_2, R_2)$, the goal is to determine the identical entity set $S = \{(e_i, e_j) \mid e_i \in E_1, e_j \in E_2\}$. In this set, each pair $(e_i, e_j)$ represents the same real-world entity but exists in different KGs. A minimal code sketch of this notation follows the list.
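To make the notation concrete, the following minimal Python sketch represents a KG as entity and relation sets plus (h, r, t) triples; the class and method names are illustrative, not taken from our implementation.

```python
# Minimal KG container matching the preliminaries: G = {E, R} with facts (h, r, t).
from dataclasses import dataclass, field

@dataclass
class KG:
    entities: set = field(default_factory=set)   # E
    relations: set = field(default_factory=set)  # R
    triples: set = field(default_factory=set)    # facts (h, r, t)

    def add_fact(self, h: str, r: str, t: str) -> None:
        self.entities.update((h, t))
        self.relations.add(r)
        self.triples.add((h, r, t))

g1 = KG()
g1.add_fact("Beijing", "belongs_to", "China")
# An EA result is a set S of pairs (e_i, e_j) with e_i in G1's E and e_j in G2's E.
```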

2.2. Semantic-Based Similarity Comparison

To semantically measure whether two entities are similar, we quantify the descriptive information of the entities using BERT and then compare the quantified representations. Specifically, consider entities $e_i \in G_i$ and $e_j \in G_j$, with descriptive information labeled $Des_{e_i}$ and $Des_{e_j}$, respectively. BERT employs the Transformer architecture, which is capable of processing longer text sequences and enables a better understanding of contextual information.
The vectors $\mathbf{Des}_{e_i}$ and $\mathbf{Des}_{e_j}$ are obtained after quantization by BERT. To test the similarity between the two vectors, we employ cosine similarity, which offers several advantages: simplicity, efficiency and independence of length and dimension in measuring vector similarity. These advantages align well with the experimental needs of our study. The cosine-based vector similarity is calculated as:

$$\mathrm{Semantic}(e_i, e_j) = \frac{\mathbf{Des}_{e_i} \cdot \mathbf{Des}_{e_j}}{\|\mathbf{Des}_{e_i}\| \, \|\mathbf{Des}_{e_j}\|}$$

where $\mathrm{Semantic}(e_i, e_j)$ represents the similarity between $e_i$ and $e_j$, $\cdot$ denotes the dot product of vectors, and $\|\mathbf{Des}_{e_i}\|$ and $\|\mathbf{Des}_{e_j}\|$ denote the norms of the vectors $\mathbf{Des}_{e_i}$ and $\mathbf{Des}_{e_j}$, respectively.
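For illustration, here is a hedged Python sketch of this step: embed two entity descriptions with BERT and score them with cosine similarity. The paper does not specify the BERT checkpoint or pooling strategy, so bert-base-uncased with attention-masked mean pooling is an assumption of this sketch.

```python
# Sketch of Section 2.2: BERT-quantized descriptions + cosine similarity.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")  # assumed checkpoint
encoder = AutoModel.from_pretrained("bert-base-uncased")

def embed(description: str) -> torch.Tensor:
    """Return a mean-pooled description vector Des_e."""
    inputs = tokenizer(description, truncation=True, max_length=512,
                       return_tensors="pt")
    with torch.no_grad():
        hidden = encoder(**inputs).last_hidden_state      # (1, seq_len, 768)
    mask = inputs["attention_mask"].unsqueeze(-1)         # ignore padding tokens
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1)

def semantic(des_i: str, des_j: str) -> float:
    """Semantic(e_i, e_j): cosine similarity of the two description vectors."""
    return torch.nn.functional.cosine_similarity(embed(des_i), embed(des_j)).item()
```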

2.3. Topology-Based Similarity Comparison

An entity in a KG contains not only its semantic information, but also additional information such as topology. Entities in different KGs often share similarities in their links and relations. For example, an entity classified as person may have relations such as co-worker, colleague and more. These relations also connect to entity types that share similarities. Consequently, even when dealing with different KGs, the relations and associated entity types tend to have similarities. Leveraging this insight allows us to achieve entity alignment across different KGs.
We employ K-means to group entities based on their quantized descriptive information, allowing us to label entities in the same cluster as belonging to the same type. For example, consider the triple (Beijing, belongs_to, China), where both Beijing and China belong to the geography category. After clustering, these entities are assigned to cluster $CE_0$. Similarly, the relationship ‘belongs_to’ is assigned to cluster $CR_1$. Therefore, the type of the triple can be represented as $(CE_0, CR_1, CE_0)$.
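As a sketch of this typing step, the snippet below clusters the BERT description vectors with K-means and uses the cluster id as the type label; the number of clusters and the choice to cluster relations the same way are assumptions, since the paper only states that K-means is applied to the quantized descriptions.

```python
# Assign a cluster-id "type" to each entity (or relation) from its description vector.
import numpy as np
from sklearn.cluster import KMeans

def assign_types(names, vectors, k=10, seed=0):
    """Map each name to a K-means cluster id, e.g. {"Beijing": 0, "China": 0}."""
    labels = KMeans(n_clusters=k, random_state=seed, n_init=10).fit_predict(
        np.asarray(vectors))
    return dict(zip(names, labels))
```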
We consider the entities and relations within three hops of the target entity to form a subgraph. The similarity between these subgraphs is then determined by evaluating their isomorphism.
Given two entities $e_i \in G_i$ and $e_j \in G_j$ with corresponding subgraphs $Sub_i = (V_i, R_i)$ and $Sub_j = (V_j, R_j)$, respectively, where $V$ is the set of entities and $R$ is the set of relations, the isomorphism between these two subgraphs can be determined based on the following conditions.
Condition 1:
$$e_i \in Sub_i,\ e_j \in Sub_j : (e_i, e_j) \in CE_k$$
where $CE_k$ is the $k$th entity cluster. It is required that both $e_i$ and $e_j$ are of the same type.
Condition 2:
$$(e_i, e_i') \in E_i \Rightarrow (e_j, e_j') \in E_j$$
where $e'$ denotes a neighbor of node $e$ and $(e, e')$ denotes the edge connecting $e$ and $e'$. The formula indicates that the edges present in the subgraph $Sub_i$ are also present in $Sub_j$.
Condition 3:
$$\forall e_i \in V_i\ \exists e_j \in V_j : \lambda_{V_i}(e_i) = \lambda_{V_j}(e_j)$$
where $V_i$ denotes the set of nodes in $Sub_i$ and $V_j$ denotes the set of nodes in $Sub_j$. $\lambda_{V_i}(e_i)$ denotes the type of the neighboring nodes of $e_i$, where the nodes all belong to $V_i$. This condition indicates that the types of all nodes connected to $e_i \in Sub_i$ and the types of the nodes connected to $e_j \in Sub_j$ should be similar.
Condition 4:
$$\forall (e_i, e_i') \in E_i\ \exists (e_j, e_j') \in E_j : \lambda_{E_i}(e_i, e_i') = \lambda_{E_j}(e_j, e_j')$$
where $E_i$ denotes the set of edges in $Sub_i$, while $E_j$ denotes the set of edges in $Sub_j$. $\lambda_{E_i}(e_i, e_i')$ denotes the type of the neighboring edges of $e_i$, where the edges all belong to $E_i$. This condition indicates that the types of all edges connected to $e_i \in Sub_i$ and the types of all edges connected to $e_j \in Sub_j$ should be similar.
The two graphs can be considered isomorphic if and only if all of the above conditions are satisfied simultaneously.
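To make the topological comparison concrete, the sketch below extracts a 3-hop subgraph and scores Conditions 2–4 as coverage fractions, anticipating how they are aggregated in Section 2.4. Matching edges by their (type, type, type) signature across KGs is our assumption, since literal edges cannot coincide between different graphs.

```python
# Sketch of Section 2.3: 3-hop subgraphs and coverage of Conditions 2-4.
def khop_edges(triples, root, k=3):
    """Collect (h, r, t) facts within k hops of root, walking edges both ways."""
    frontier, seen, edges = {root}, {root}, set()
    for _ in range(k):
        nxt = set()
        for h, r, t in triples:
            if h in frontier or t in frontier:
                edges.add((h, r, t))
                nxt.update((h, t))
        frontier = nxt - seen
        seen |= nxt
    return edges

def coverage(sub_i, sub_j, node_type, rel_type):
    """Fractions of Sub_i's typed edges, node types and edge types found in Sub_j."""
    typed_edges = lambda sub: [(node_type[h], rel_type[r], node_type[t])
                               for h, r, t in sub]
    node_types = lambda sub: [node_type[n] for h, _, t in sub for n in (h, t)]
    rel_types = lambda sub: [rel_type[r] for _, r, _ in sub]

    def frac(items, pool):
        items, pool = list(items), set(pool)
        return sum(x in pool for x in items) / len(items) if items else 0.0

    return (frac(typed_edges(sub_i), typed_edges(sub_j)),  # Condition 2: #Edge
            frac(node_types(sub_i), node_types(sub_j)),    # Condition 3: #Type_node
            frac(rel_types(sub_i), rel_types(sub_j)))      # Condition 4: #Type_edge
```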

2.4. Fusion of Semantic and Structural Similarity Comparison

In real-world scenarios, certain datasets are incomplete, resulting in a sparse structure. Therefore, we introduce a parameter $\alpha \in [0, 1]$ to balance the semantic and structural information. The formula is defined as follows:
$$S(e_i, e_j) = \alpha \cdot \mathrm{Semantic}(e_i, e_j) + (1 - \alpha) \cdot \mathrm{Structure}(e_i, e_j), \quad \alpha \in [0, 1]$$
where $\mathrm{Structure}(e_i, e_j)$ denotes the structural similarity, which is calculated on the basis of the degree to which the four conditions are covered. For example, under the third condition, $e_i \in Sub_i$ may have approximately 80% of its neighboring nodes of the same type as those of $e_j \in Sub_j$. The structural similarity can therefore be defined by the following formula:
$$\mathrm{Structure}(e_i, e_j) = \alpha_1 \cdot \#Edge + \alpha_2 \cdot \#Type_{node} + \alpha_3 \cdot \#Type_{edge}, \quad \alpha_1 + \alpha_2 + \alpha_3 = 1$$
where $\#Edge$ denotes the number of identical edges (i.e., Condition 2), $\#Type_{node}$ denotes the number of neighbors of the same type (i.e., Condition 3) and $\#Type_{edge}$ denotes the number of neighboring edges of the same type (i.e., Condition 4). The symbols $\alpha_1$, $\alpha_2$ and $\alpha_3$ denote the weights of the three conditions. These parameters are defined in a predetermined manner, and when the similarity between entities $e_i$ and $e_j$, denoted as $S(e_i, e_j)$, exceeds a certain threshold (e.g., $S(e_i, e_j) > 0.9$), it indicates a high degree of similarity. Based on this similarity measure, entities $e_i$ and $e_j$ can be determined to be the same entity.
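The fusion itself then reduces to a few lines. The sketch below transcribes Equations (2) and (3), reading the three condition counts as the coverage fractions computed above (as the 80% example suggests); the default weights follow the DBP15K(EN-FR) setting of Section 3.3, and the 0.9 threshold is the example given in the text.

```python
# Sketch of Section 2.4: weighted fusion of semantic and structural similarity.
def structure(sub_i, sub_j, node_type, rel_type, a1=0.4, a2=0.3, a3=0.3):
    """Structure(e_i, e_j) = a1*#Edge + a2*#Type_node + a3*#Type_edge, a1+a2+a3 = 1."""
    edge_cov, node_cov, etype_cov = coverage(sub_i, sub_j, node_type, rel_type)
    return a1 * edge_cov + a2 * node_cov + a3 * etype_cov

def score(des_i, des_j, sub_i, sub_j, node_type, rel_type, alpha=0.6):
    """S(e_i, e_j) = alpha*Semantic + (1 - alpha)*Structure; align if, e.g., S > 0.9."""
    return (alpha * semantic(des_i, des_j)
            + (1 - alpha) * structure(sub_i, sub_j, node_type, rel_type))
```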

3. Experiment Setting

We conduct comparison experiments on the EA task using four standard datasets. Furthermore, we perform ablation studies to evaluate the specific contributions of the two modules in our model, and we evaluate the effect of hyper-parameters on model performance. The experiments are conducted in an Ubuntu 18.04.4 environment with 62 GB of RAM and a GPU with 60 GB of memory.

3.1. Datasets

We conduct experiments on four entity-alignment datasets. The statistics of these selected datasets are summarized in Table 1.
DBP15K(EN-FR) and DBP-WIKI [33] are two simple EA datasets whose KG pairs share a similar structure and an equivalent number of entities. Furthermore, the structural features of these two datasets, such as the number of facts and density, closely align. ICEWS-WIKI and ICEWS-YAGO [24] are two complex EA datasets. Here, the KG pairs exhibit significant heterogeneity, differing not only in the number of entities but also in structural features. Notably, the number of anchors does not equal the number of entities. Consequently, aligning these complex datasets poses greater challenges.
To ensure consistency, we standardize the average length of descriptive information for entities across all datasets to 800 words. In cases where entities have insufficient descriptive information, we utilize ChatGPT [28] to supplement and enhance the available information.
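A hypothetical sketch of this normalization step is shown below: long descriptions are truncated toward the 800-word target and short ones are extended by a language model. The llm_complete callable and the prompt wording are placeholders of ours, not the paper's actual ChatGPT usage.

```python
# Normalize entity descriptions to ~800 words, padding short ones with an LLM.
TARGET_WORDS = 800

def normalize_description(entity: str, description: str, llm_complete) -> str:
    words = description.split()
    if len(words) >= TARGET_WORDS:
        return " ".join(words[:TARGET_WORDS])          # truncate long descriptions
    prompt = (f"Extend the following description of '{entity}' to about "
              f"{TARGET_WORDS} words, adding only factual detail:\n{description}")
    return llm_complete(prompt)                        # placeholder LLM call
```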

3.2. Baselines

We select a set of state-of-the-art EA methods. These methods encompass a wide range of input features. Among the selected methods, we included translation-based approaches such as MTransE [13] and BootEA [14]. Additionally, we incorporated GNN-based techniques such as GCN-Align [17], RDGCN [18] and Dual-AMN [19]. Each baseline model has unique characteristics and high performance that make them suitable for evaluating the effectiveness of our proposed method.

3.3. Hyper-Parameters

To determine the optimal hyperparameters for each dataset, we perform a grid search. The optimal experimental setup includes the following hyperparameters:
For DBP15K(EN-FR): $\alpha = 0.6$, $\alpha_1 = 0.4$, $\alpha_2 = 0.3$, $\alpha_3 = 0.3$. For DBP-WIKI: $\alpha = 0.6$, $\alpha_1 = 0.3$, $\alpha_2 = 0.4$, $\alpha_3 = 0.3$. For ICEWS-WIKI: $\alpha = 0.4$, $\alpha_1 = 0.2$, $\alpha_2 = 0.4$, $\alpha_3 = 0.4$. For ICEWS-YAGO: $\alpha = 0.4$, $\alpha_1 = 0.2$, $\alpha_2 = 0.4$, $\alpha_3 = 0.4$.
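A minimal version of this grid search might look as follows; the candidate grid and the validation routine are assumptions, with the constraint $\alpha_1 + \alpha_2 + \alpha_3 = 1$ from Equation (3) enforced explicitly.

```python
# Grid search over alpha and the condition weights, keeping a1 + a2 + a3 = 1.
from itertools import product

def grid_search(evaluate_mrr, grid=(0.2, 0.3, 0.4, 0.6)):
    best_mrr, best_params = -1.0, None
    for alpha, a1, a2 in product(grid, repeat=3):
        a3 = round(1.0 - a1 - a2, 2)           # remaining weight for Condition 4
        if not 0.0 <= a3 <= 1.0:
            continue
        mrr = evaluate_mrr(alpha, a1, a2, a3)  # assumed: MRR on held-out anchors
        if mrr > best_mrr:
            best_mrr, best_params = mrr, (alpha, a1, a2, a3)
    return best_params, best_mrr
```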

3.4. Evaluation Metrics

In line with widely adopted evaluation methods in EA research, we use two metrics for evaluation:
(1) Hits@k, measuring the percentage of correct alignments within the top $k$ ($k = 1, 10$) matches. It is computed as $\frac{1}{|S|} \sum_{i=1}^{|S|} \mathbb{I}(\mathrm{rank}_i \le k)$, where $\mathbb{I}(\cdot)$ is the indicator function.
(2) Mean Reciprocal Rank (MRR), reflecting the average inverse rank of correct results. It is computed as the average of the reciprocal ranks, $\frac{1}{|S|} \sum_{i=1}^{|S|} \frac{1}{\mathrm{rank}_i}$, where $\mathrm{rank}_i$, $i \in \{1, \ldots, |S|\}$, is the rank of the $i$th alignment result.
Higher values in Hits@k and MRR indicate superior performance in EA.
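Both metrics are direct to compute from the rank of each gold counterpart, as in this short transcription:

```python
# Hits@k and MRR over the ranks of the true aligned entities.
def hits_at_k(ranks, k):
    return sum(r <= k for r in ranks) / len(ranks)

def mrr(ranks):
    return sum(1.0 / r for r in ranks) / len(ranks)

ranks = [1, 3, 2, 11]        # toy example: rank of each gold entity
print(hits_at_k(ranks, 1))   # 0.25
print(hits_at_k(ranks, 10))  # 0.75
print(mrr(ranks))            # ~0.481
```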

4. Results and Discussion

4.1. Performance Comparison

Table 2 and Table 3 show our results on the four datasets. It can be concluded from the results:
  • Table 2 shows the test results of our model on the two non-temporal datasets. The results show that our model performs consistently at a high level on most of the evaluation metrics, closely following the Dual-AMN model.
  • Table 3 presents the evaluation results of our method on the temporal datasets. The results show that our method outperforms the baseline models on most metrics. Compared to the Dual-AMN model, our method achieves small but consistent average improvements of about 0.003 in Hits@10 and MRR, which highlights the effectiveness of our model.
  • The results show that our method has an advantage on the temporal datasets. This advantage stems from the use of temporal features, which act as identifiers during entity alignment and enable us to accurately determine the same entities. Analyzing the parameters $\alpha_1$, $\alpha_2$, $\alpha_3$ and $\alpha$, we observe that structure-based information is more useful in datasets with higher graph density (e.g., ICEWS-WIKI, ICEWS-YAGO). On the other hand, the descriptive information of entities tends to be more advantageous where the graph structure is sparser (e.g., DBP15K(EN-FR), DBP-WIKI), in which case alignment using semantic information proves more effective.

4.2. Parameter Sensitivity

We perform parameter sensitivity tests on the model, specifically analyzing the effect of the α parameter on the performance of the model. The performance of the model is evaluated using MRR. The results are shown in Figure 2. We can conclude that:
  • As shown in the figure, the analysis reveals a clear trend. It is evident that each dataset has an optimal α value, beyond which the performance of the model either converges or even tends to decrease. This finding suggests that there is a critical threshold for the α parameter, beyond which further tuning may not significantly improve model performance.
  • When evaluating the DBP15K(EN-FR) and DBP-WIKI datasets, we observed that the model’s MRR initially increases with increasing α values, but eventually starts to decrease. Here, α represents the weight of semantic information, indicating that increased emphasis on semantic information leads to improved performance in the early stages. However, in the later stages, the pronounced decrease in performance suggests that overemphasis on semantic information may lead to diminishing returns.
  • When evaluating the ICEWS-WIKI and ICEWS-YAGO datasets, we observe a consistent trend in the performance of the model. The MRR peaks at an α value of 0.4 for these datasets, while the best overall performance for the DBP15K(EN-FR) and DBP-WIKI datasets is achieved at an α value of 0.6. This observation suggests that the ICEWS-WIKI and ICEWS-YAGO datasets depend more heavily on the structural information of the model, which can be attributed to the higher density and number of triples in these datasets. Consequently, comparing structures in these datasets provides more valuable information.
  • In summary, structural information is advantageous in densely structured datasets because it provides richer information about neighboring nodes and links. By contrast, in sparsely structured datasets, semantic information is particularly helpful, compensating for the lack of structure by characterizing entities through their descriptions.

4.3. Ablation Studies

We perform an ablation study using four datasets to evaluate the contribution of the two modules in the model. The symbol (+SSC) indicates the Semantic-based Similarity Comparison module, and the symbol (+TSC) indicates the Topology-based Similarity Comparison module. From the results, we can conclude that:
  • Table 4 presents the performance of both the DBP15K(EN-FR) and DBP-WIKI datasets, demonstrating that the SSC module contributes significantly more than the TSC module to these two datasets. The SSC module exhibited a greater impact on the datasets, with an average increase of 0.318 in Hits@1, 0.244 in Hits@10 and 0.258 in MRR compared to the TSC module. This suggests that the semantic similarity module is a more effective approach for these two datasets.
  • Table 5 presents the performance of the ICEWS-WIKI and ICEWS-YAGO datasets, indicating that the TSC module exhibits a significant contribution to both datasets compared to the SSC module. On average, Hits@1 improves by 0.0235, Hits@10 improves by 0.0865 and MRR improves by 0.0455. These results suggest that employing the structural similarity module is more effective for these two datasets.
  • The results presented above show that both modules contribute to the overall performance of our model. However, it is important to note that each module contributes differently depending on the KG characteristics.

5. Conclusions

This paper presents a novel entity-alignment model that combines semantic and structural comparisons. The descriptive information of an entity provides various features, semantics and characteristics, which we quantify to measure semantic similarity. Additionally, for graphs with higher structural density, we introduce a structure-based similarity-comparison module. This module assesses similarity by comparing the types of neighboring nodes and the link attributes of entities across different graphs. Finally, we fuse the comparison results from both modules. Experimental results on four datasets demonstrate the effectiveness of our method. In future work, we aim to capture more distinguishable features for entity alignment.

Author Contributions

Conceptualization, Y.L. (Yao Liu) and Y.L. (Ye Liu); Methodology, Y.L. (Yao Liu) and Y.L. (Ye Liu); Validation, Y.L. (Yao Liu) and Y.L. (Ye Liu); Formal analysis, Y.L. (Yao Liu) and Y.L. (Ye Liu); Investigation, Y.L. (Yao Liu) and Y.L. (Ye Liu); Resources, Y.L. (Yao Liu) and Y.L. (Ye Liu); Data curation, Y.L. (Yao Liu) and Y.L. (Ye Liu); Writing – original draft, Y.L. (Yao Liu) and Y.L. (Ye Liu); Writing – review & editing, Y.L. (Yao Liu) and Y.L. (Ye Liu); Visualization, Y.L. (Yao Liu) and Y.L. (Ye Liu); Supervision, Y.L. (Yao Liu) and Y.L. (Ye Liu); Project administration, Yao Liu. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Yan, Z.; Peng, R.; Wu, H. Similarity propagation based semi-supervised entity alignment. Eng. Appl. Artif. Intell. 2024, 130, 107787. [Google Scholar] [CrossRef]
  2. Yang, L.; Chen, J.; Wang, Z.; Shang, F. Relation mapping based on higher-order graph convolutional network for entity alignment. Eng. Appl. Artif. Intell. 2024, 133, 108009. [Google Scholar] [CrossRef]
  3. Ranaldi, L.; Pucci, G. Knowing knowledge: Epistemological study of knowledge in transformers. Appl. Sci. 2023, 13, 677. [Google Scholar] [CrossRef]
  4. Cao, J.; Fang, J.; Meng, Z.; Liang, S. Knowledge graph embedding: A survey from the perspective of representation spaces. ACM Comput. Surv. 2024, 56, 1–42. [Google Scholar] [CrossRef]
  5. Ge, X.; Wang, Y.C.; Wang, B.; Kuo, C.C.J. Knowledge Graph Embedding: An Overview. Apsipa Trans. Signal Inf. Process. 2024, 13, 1–51. [Google Scholar] [CrossRef]
  6. Wang, Y.; Lipka, N.; Rossi, R.A.; Siu, A.; Zhang, R.; Derr, T. Knowledge graph prompting for multi-document question answering. AAAI Conf. Artif. Intell. 2024, 38, 19206–19214. [Google Scholar] [CrossRef]
  7. Yang, L.; Chen, H.; Wang, X.; Yang, J.; Wang, F.Y.; Liu, H. Two Heads Are Better Than One: Integrating Knowledge from Knowledge Graphs and Large Language Models for Entity Alignment. arXiv 2024, arXiv:2401.16960. [Google Scholar]
  8. Chen, C.; Zheng, F.; Cui, J.; Cao, Y.; Liu, G.; Wu, J.; Zhou, J. Survey and open problems in privacy-preserving knowledge graph: Merging, query, representation, completion, and applications. Int. J. Mach. Learn. Cybern. 2024, 1–20. [Google Scholar] [CrossRef]
  9. Chen, J.; Yang, L.; Wang, Z.; Gong, M. Higher-order GNN with Local Inflation for entity alignment. Knowl.-Based Syst. 2024, 293, 111634. [Google Scholar] [CrossRef]
  10. Luo, S.; Yu, J. ESGNet: A multimodal network model incorporating entity semantic graphs for information extraction from Chinese resumes. Inf. Process. Manag. 2024, 61, 103524. [Google Scholar] [CrossRef]
  11. Chen, Z.; Zhang, Y.; Fang, Y.; Geng, Y.; Guo, L.; Chen, X.; Li, Q.; Zhang, W.; Chen, J.; Zhu, Y.; et al. Knowledge Graphs Meet Multi-Modal Learning: A Comprehensive Survey. arXiv 2024, arXiv:2402.05391. [Google Scholar]
  12. Huber, M.N.; Angst, M.; Fischer, M. The Link Between Social-Ecological Network Fit and Outcomes: A Rare Empirical Assessment of a Prominent Hypothesis. Soc. Nat. Resour. 2024, 1–18. [Google Scholar] [CrossRef]
  13. Chen, M.; Tian, Y.; Yang, M.; Zaniolo, C. Multilingual knowledge graph embeddings for cross-lingual knowledge alignment. arXiv 2016, arXiv:1611.03954. [Google Scholar]
  14. Sun, Z.; Hu, W.; Zhang, Q.; Qu, Y. Bootstrapping entity alignment with knowledge graph embedding. In Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence IJCAI-18, Stockholm, Sweden, 9–19 July 2018; Volume 18. [Google Scholar]
  15. Bordes, A.; Usunier, N.; Garcia-Duran, A.; Weston, J.; Yakhnenko, O. Translating Embeddings for Modeling Multi-relational Data. In Advances in Neural Information Processing Systems 26; Curran Associates Inc.: Glasgow, UK, 2013. [Google Scholar]
  16. Kipf, T.N.; Welling, M. Semi-supervised classification with graph convolutional networks. arXiv 2016, arXiv:1609.02907. [Google Scholar]
  17. Wang, Z.; Lv, Q.; Lan, X.; Zhang, Y. Cross-lingual Knowledge Graph Alignment via Graph Convolutional Networks. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, 31 October–4 November 2018; pp. 349–357. [Google Scholar]
  18. Chen, Z.; Wu, Y.; Feng, Y.; Zhao, D. Integrating manifold knowledge for global entity linking with heterogeneous graphs. Data Intell. 2022, 4, 20–40. [Google Scholar] [CrossRef]
  19. Mao, X.; Wang, W.; Wu, Y.; Lan, M. Boosting the speed of entity alignment 10×: Dual attention matching network with normalized hard sample mining. In WWW ’21: Proceedings of the Web Conference 2021; Association for Computing Machinery: New York, NY, USA, 2021. [Google Scholar]
  20. Xu, C.; Su, F.; Lehmann, J. Time-aware graph neural networks for entity alignment between temporal knowledge graphs. arXiv 2022, arXiv:2203.02150. [Google Scholar]
  21. Xu, C.; Su, F.; Xiong, V.; Lehmann, J. Time-aware Entity Alignment using Temporal Relational Attention. In Proceedings of the ACM Web Conference 2022 (WWW ’22), Barcelona, Spain, Online, 26–29 June 2022; Association for Computing Machinery: New York, NY, USA, 2022; pp. 788–797. [Google Scholar]
  22. Cai, L.; Mao, X.; Ma, M.; Yuan, H.; Zhu, J.; Lan, M. A simple temporal information matching mechanism for entity alignment between temporal knowledge graphs. arXiv 2022, arXiv:2209.09677. [Google Scholar]
  23. Wang, C.; Huang, Z.; Wan, Y.; Wei, J.; Zhao, J.; Wang, P. FuAlign: Cross-lingual entity alignment via multi-view representation learning of fused knowledge graphs. Inform. Fusion 2023, 89, 41–52. [Google Scholar] [CrossRef]
  24. Jiang, X.; Xu, C.; Shen, Y.; Su, F.; Wang, Y.; Sun, F.; Li, Z.; Shen, H. Rethinking gnn-based entity alignment on heterogeneous knowledge graphs: New datasets and a new method. arXiv 2023, arXiv:2304.03468. [Google Scholar]
  25. Tang, X.; Zhang, J.; Chen, B.; Yang, Y.; Chen, H.; Li, C. BERT-INT: A BERT-based Interaction Model For Knowledge Graph Alignment. Interactions 2020, 100, e1. [Google Scholar]
  26. Chang, Y.; Wang, X.; Wang, J.; Wu, Y.; Yang, L.; Zhu, K.; Chen, H.; Yi, X.; Wang, C.; Wang, Y.; et al. A survey on evaluation of large language models. ACM Trans. Intell. Syst. Technol. 2024, 15, 1–45. [Google Scholar] [CrossRef]
  27. Yang, Z.; Dai, Z.; Yang, Y.; Carbonell, J.; Salakhutdinov, R.; Le, Q.V. Xlnet: Generalized autoregressive pretraining for language understanding. In Advances in Neural Information Processing Systems 32; Curran Associates Inc.: Glasgow, UK, 2019. [Google Scholar]
  28. Radford, A.; Narasimhan, K.; Salimans, T.; Sutskever, I. Improving Language Understanding by Generative Pre-Training. Preprint. 2018. Available online: https://paperswithcode.com/paper/improving-language-understanding-by (accessed on 20 May 2024).
  29. Dai, Z.; Yang, Z.; Yang, Y.; Carbonell, J.; Le, Q.V.; Salakhutdinov, R. Transformer-xl: Attentive language models beyond a fixed-length context. arXiv 2019, arXiv:1901.02860. [Google Scholar]
  30. Liu, Y.; Ott, M.; Goyal, N.; Du, J.; Joshi, M.; Chen, D.; Levy, O.; Lewis, M.; Zettlemoyer, L.; Stoyanov, V. Roberta: A robustly optimized bert pretraining approach. arXiv 2019, arXiv:1907.11692. [Google Scholar]
  31. Jiang, X.; Shen, Y.; Shi, Z.; Xu, C.; Li, W.; Li, Z.; Guo, J.; Shen, H.; Wang, Y. Unlocking the Power of Large Language Models for Entity Alignment. arXiv 2024, arXiv:2402.15048. [Google Scholar]
  32. Lynch, C.J.; Jensen, E.; Munro, M.H.; Zamponi, V.; Martinez, J.; O’Brien, K.; Feldhaus, B.; Smith, K.; Reinhold, A.M.; Gore, R. GPT-4 Generated Narratives of Life Events using a Structured Narrative Prompt: A Validation Study. arXiv 2024, arXiv:2402.05435. [Google Scholar]
  33. Sun, Z.; Zhang, Q.; Hu, W.; Wang, C.; Chen, M.; Akrami, F.; Li, C. A benchmarking study of embedding-based entity alignment for knowledge graphs. arXiv 2020, arXiv:2003.07743. [Google Scholar] [CrossRef]
Figure 1. An overview of our model. The framework consists of three components: Semantic-based Similarity Comparison, Topology-based Similarity Comparison and Fusion of Semantic and Structural Similarity Comparison. The Semantic-based Similarity Comparison module employs BERT to quantify the descriptive information of entities from different KGs and calculates their similarity using the cosine function. The Topology-based Similarity Comparison module determines the similarity of two subgraphs based on four specific conditions. Finally, the results of the two comparisons are integrated.
Figure 2. Results of the effect of α on model performance on the four datasets.
Table 1. The detailed statistics of the datasets. Temporal denotes whether the dataset contains temporal information.

Dataset | KG | #Entities | #Relations | #Facts | Density | #Anchors | Temporal
DBP15K(EN-FR) | EN | 15,000 | 193 | 96,318 | 6.421 | 15,000 | No
DBP15K(EN-FR) | FR | 15,000 | 166 | 80,112 | 5.341 | | No
DBP-WIKI | DBP | 100,000 | 413 | 293,990 | 2.940 | 10,000 | No
DBP-WIKI | WIKI | 100,000 | 261 | 251,708 | 2.517 | | No
ICEWS-WIKI | ICEWS | 11,047 | 272 | 3,527,881 | 319.352 | 5058 | Yes
ICEWS-WIKI | WIKI | 15,896 | 226 | 198,257 | 12.472 | | Yes
ICEWS-YAGO | ICEWS | 26,863 | 272 | 4,192,555 | 156.072 | 18,824 | Yes
ICEWS-YAGO | YAGO | 22,734 | 41 | 107,118 | 4.712 | | Yes
Table 2. Main experiment results on DBP15K(EN-FR) and DBP-WIKI.

Models | DBP15K(EN-FR): Hits@1 / Hits@10 / MRR | DBP-WIKI: Hits@1 / Hits@10 / MRR
MTransE | 0.247 / 0.577 / 0.360 | 0.281 / 0.520 / 0.363
BootEA | 0.653 / 0.874 / 0.731 | 0.748 / 0.898 / 0.801
GCN-Align | 0.411 / 0.772 / 0.530 | 0.494 / 0.756 / 0.590
RDGCN | 0.873 / 0.950 / 0.901 | 0.974 / 0.994 / 0.980
Dual-AMN | 0.954 / 0.994 / 0.970 | 0.983 / 0.996 / 0.991
Ours | 0.901 / 0.980 / 0.962 | 0.980 / 0.993 / 0.981
Table 3. Main experiment results on ICEWS-WIKI and ICEWS-YAGO.

Models | ICEWS-WIKI: Hits@1 / Hits@10 / MRR | ICEWS-YAGO: Hits@1 / Hits@10 / MRR
MTransE | 0.021 / 0.158 / 0.068 | 0.012 / 0.084 / 0.040
BootEA | 0.072 / 0.275 / 0.139 | 0.020 / 0.120 / 0.056
GCN-Align | 0.046 / 0.184 / 0.093 | 0.017 / 0.085 / 0.038
RDGCN | 0.064 / 0.202 / 0.096 | 0.029 / 0.097 / 0.042
Dual-AMN | 0.083 / 0.281 / 0.145 | 0.031 / 0.144 / 0.068
Ours | 0.081 / 0.285 / 0.148 | 0.033 / 0.146 / 0.071
Table 4. Results of ablation studies on DBP15K(EN-FR) and DBP-WIKI.

Models | DBP15K(EN-FR): Hits@1 / Hits@10 / MRR | DBP-WIKI: Hits@1 / Hits@10 / MRR
SSC | 0.793 / 0.810 / 0.892 | 0.849 / 0.930 / 0.890
TSC | 0.560 / 0.678 / 0.692 | 0.445 / 0.574 / 0.573
SSC+TSC | 0.901 / 0.980 / 0.962 | 0.980 / 0.993 / 0.981
Table 5. Results of ablation studies on ICEWS-WIKI and ICEWS-YAGO.

Models | ICEWS-WIKI: Hits@1 / Hits@10 / MRR | ICEWS-YAGO: Hits@1 / Hits@10 / MRR
SSC | 0.034 / 0.121 / 0.086 | 0.022 / 0.091 / 0.032
TSC | 0.073 / 0.245 / 0.141 | 0.030 / 0.140 / 0.068
SSC+TSC | 0.081 / 0.285 / 0.148 | 0.033 / 0.146 / 0.071
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
