Multi-Level Attention with 2D Table-Filling for Joint Entity-Relation Extraction
Abstract
1. Introduction
- We incorporate type markers alongside the text tokens in a single encoder, preserving their task relevance rather than treating them as isolated components. Building on a novel word-pair tagging scheme, we condense the tagging table into two dimensions.
- We propose a multi-level attention mechanism that models interactions around unit features, capturing dependencies between table structure-aware and sequence-aware features. This mechanism integrates the inherent relationships among feature sequences relevant to entities and relations while preserving the model's efficiency advantage (a rough sketch of this fusion follows this list).
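As a rough illustration of the second contribution, the sketch below builds a 2D word-pair table from sequence-aware token features and lets each table row attend back over the sequence. All module names, dimensions, and layer choices are our own assumptions for exposition, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class TableSequenceFusion(nn.Module):
    """Minimal sketch: fuse sequence-aware features with a 2D word-pair
    table via attention. Dimensions and layers are illustrative only."""

    def __init__(self, hidden: int = 256, heads: int = 8):
        super().__init__()
        self.pair_proj = nn.Linear(2 * hidden, hidden)  # table cell builder
        self.row_attn = nn.MultiheadAttention(hidden, heads, batch_first=True)

    def forward(self, seq: torch.Tensor) -> torch.Tensor:
        b, n, h = seq.shape  # (batch, tokens, hidden)
        # Cell (i, j) concatenates the features of tokens i and j.
        rows = seq.unsqueeze(2).expand(b, n, n, h)
        cols = seq.unsqueeze(1).expand(b, n, n, h)
        table = self.pair_proj(torch.cat([rows, cols], dim=-1))
        # Each table row attends over the token sequence, injecting
        # sequence-level context into the structure-aware cell features.
        table = table.reshape(b * n, n, h)
        ctx = seq.repeat_interleave(n, dim=0)
        fused, _ = self.row_attn(table, ctx, ctx)
        return fused.reshape(b, n, n, h)  # table- and sequence-aware cells
```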
2. Related Work
- Overlapping: Based on the different overlapping patterns of triples [21], sentences can be divided into three categories, as suggested by [22]: Normal, Entity Pair Overlap (EPO), and Single Entity Overlap (SEO). A sentence is classified as Normal if none of its triples share entities. It is categorized as EPO if some of its triples share an entity pair. A sentence falls into the SEO class if some of its triples share an entity but do not share an entire entity pair. Note that a sentence can belong to both the EPO and SEO classes (a small classifier illustrating these categories follows this list).
- Interaction: Since the two tasks are closely interconnected, joint models that extract entities and their relations within a single framework can leverage inter-task correlations and dependencies, potentially improving performance on both tasks. Several recent efforts exploit such correlations by jointly modeling NER and RE.
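To make the overlap categories concrete, here is a minimal Python sketch that labels a sentence from its gold triples; the function name and the (subject, relation, object) triple format are our own illustrative choices, not part of the original paper.

```python
def classify_overlap(triples):
    """Label a sentence Normal, EPO, and/or SEO from its gold triples.
    Triples are (subject, relation, object); entity spans must be hashable."""
    pairs = [(s, o) for s, _, o in triples]
    # EPO: at least two triples share the same (subject, object) pair.
    epo = len(pairs) != len(set(pairs))
    # SEO: two triples share an entity without sharing the whole pair.
    seo = any(
        ({s1, o1} & {s2, o2}) and (s1, o1) != (s2, o2)
        for k, (s1, o1) in enumerate(pairs)
        for (s2, o2) in pairs[k + 1:]
    )
    return [name for flag, name in ((epo, "EPO"), (seo, "SEO")) if flag] or ["Normal"]

# "U.S." appears in two triples with different entity pairs -> SEO.
print(classify_overlap([("Reagan", "Live_In", "U.S."),
                        ("State Department", "OrgBased_In", "U.S.")]))
```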
3. Methods
3.1. Task Description
3.2. Word-Pair Tagging
- Diagonal markers in the purple part indicate entity-heads and entity-tails. The orange part on the right of the table represents the connection between an entity-head and an entity type; similarly, the orange part below the table represents the connection between an entity-tail and an entity type. When an entity-head and an entity-tail share the same entity type, they form an entity. The table thus expresses exactly how to detect correct span boundaries, as shown in Figure 2, where (“Reagan”, Peop), (“State Department”, Org), and (“U.S.”, Loc) are extracted.
- Out-of-diagonal markers in the purple part indicate subjects and objects. The green part on the right represents the connection between a subject and a relation type, while the green part below the table represents the connection between an object and a relation type. If a subject and an object share the same relation type, they form a relational triple. The table can therefore express overlapping relations exactly; e.g., the location entity “U.S.” participates in two relations, (“Reagan”, “U.S.”, Live_In) and (“State Department”, “U.S.”, OrgBased_In). A hedged decoding sketch for this scheme follows this list.
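The following sketch shows how entities and triples might be read off such a table; the tag containers and names are simplified assumptions based on the description above, not the paper's exact tag set.

```python
def decode_table(n, head_type, tail_type, subj_rel, obj_rel, so_link):
    """Decode entities and relational triples from word-pair tags.

    head_type[i] / tail_type[j]: entity type tagged for token i as an
    entity-head / token j as an entity-tail (None if untagged).
    subj_rel[i] / obj_rel[i]: relation types tagged for a subject / object
    starting at token i. so_link[(i, j)]: True if an off-diagonal cell
    marks token i as subject and token j as object of some relation.
    All names here are illustrative assumptions.
    """
    # Entities: a head and a tail sharing one entity type form a span.
    entities = {(i, j): head_type[i]
                for i in range(n) for j in range(i, n)
                if head_type[i] is not None and head_type[i] == tail_type[j]}
    # Triples: a subject span and an object span are joined when the grid
    # links them and both carry the same relation type; this is what lets
    # one entity (e.g., "U.S.") participate in several triples.
    triples = [((si, sj), r, (oi, oj))
               for (si, sj) in entities for (oi, oj) in entities
               if (si, sj) != (oi, oj) and so_link.get((si, oi), False)
               for r in subj_rel.get(si, set()) & obj_rel.get(oi, set())]
    return entities, triples
```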
3.3. Text Representation
3.4. Multi-Level Attention Encoder
3.5. Training
4. Results
4.1. Datasets
4.2. Evaluation Metrics
4.3. Experiment Settings
4.4. Results
| Method | Metric | NER | RE | RE+ |
|---|---|---|---|---|
| Luan et al. (2018) [29] | Prec. | 67.2 | 47.6 | - |
|  | Rec. | 61.5 | 33.5 | - |
|  | F1 | 64.2 | 39.3 | - |
| Wadden et al. (2019) [36] | Prec. | - | - | - |
|  | Rec. | - | - | - |
|  | F1 | 67.5 | 48.4 | - |
| Eberts and Ulges (2020) [13] | Prec. | 70.9 | 53.4 | 40.5 |
|  | Rec. | 69.8 | 48.5 | 36.8 |
|  | F1 | 70.3 | 50.8 | 38.6 |
| Shen et al. (2021) [37] | Prec. | 70.2 | 52.6 | - |
|  | Rec. | 70.2 | 52.3 | - |
|  | F1 | 70.2 | 52.4 | - |
| Zhong and Chen (2021) [3] | Prec. | - | - | - |
|  | Rec. | - | - | - |
|  | F1 | 68.9 | 50.1 | 36.8 |
| Wang et al. (2021) [14] | Prec. | 65.8 | - | 37.3 |
|  | Rec. | 71.1 | - | 36.6 |
|  | F1 | 68.4 | - | 36.9 |
| Yan et al. (2021) [38] | Prec. | - | - | - |
|  | Rec. | - | - | - |
|  | F1 | 66.8 | - | 38.4 |
| Santosh et al. (2021) [39] | Prec. | 69.8 | 51.9 | 39.9 |
|  | Rec. | 71.3 | 50.6 | 39.0 |
|  | F1 | 70.5 | 51.3 | 39.4 |
| Ye et al. (2022) [4] | Prec. | - | - | - |
|  | Rec. | - | - | - |
|  | F1 | 69.9 | 53.2 | 41.6 |
| Jeong et al. (2022) [40] | Prec. | - | - | - |
|  | Rec. | - | - | - |
|  | F1 | 70.8 | 45.5 | - |
| Ours | Prec. | 71.4 | 58.3 | 47.1 |
|  | Rec. | 69.2 | 50.1 | 39.2 |
|  | F1 | 70.9 | 53.9 | 42.8 |
| Method | Model | Metric | NER | RE+ |
|---|---|---|---|---|
| Bekoulis et al. (2018) [41] | - | Prec. | 84.72 | 72.10 |
|  |  | Rec. | 88.16 | 77.24 |
|  |  | F1 | 86.40 | 74.58 |
| Eberts and Ulges (2020) [13] | BERT | Prec. | 89.26 | 78.09 |
|  |  | Rec. | 89.26 | 80.43 |
|  |  | F1 | 89.25 | 79.24 |
| Wang and Lu (2020) [42] | ALBERT | Prec. | - | - |
|  |  | Rec. | - | - |
|  |  | F1 | 89.70 | 80.10 |
| Cabot and Navigli (2021) [43] | BART | Prec. | - | 81.50 |
|  |  | Rec. | - | 83.10 |
|  |  | F1 | - | 82.20 |
| Yan et al. (2021) [38] | ALBERT | Prec. | - | - |
|  |  | Rec. | - | - |
|  |  | F1 | 91.30 | 83.20 |
| Crone (2020) [35] | BERT | Prec. | 89.06 | 80.51 |
|  |  | Rec. | 89.63 | 86.81 |
|  |  | F1 | 89.48 | 83.74 |
| Zhao et al. (2021) [44] | BERT | Prec. | - | - |
|  |  | Rec. | - | - |
|  |  | F1 | 89.40 | 81.14 |
| Wan et al. (2023) [20] | BERT | Prec. | - | - |
|  |  | Rec. | - | - |
|  |  | F1 | 91.30 | 83.07 |
| Wang et al. (2022) [34] | GLM | Prec. | - | - |
|  |  | Rec. | - | - |
|  |  | F1 | 91.10 | 83.80 |
| Ours | BERT | Prec. | 88.39 | 82.09 |
|  |  | Rec. | 93.31 | 87.82 |
|  |  | F1 | 90.78 | 84.86 |
|  | ALBERT | Prec. | 90.88 | 83.90 |
|  |  | Rec. | 92.78 | 87.10 |
|  |  | F1 | 91.82 | 85.47 |
| Method | Model | Avg. | Metric | NER | RE+ |
|---|---|---|---|---|---|
| Li et al. (2019) [45] | - | Micro | Prec. | 89.00 | 69.20 |
|  |  |  | Rec. | 86.60 | 68.20 |
|  |  |  | F1 | 87.80 | 68.90 |
| Eberts and Ulges (2020) [13] | BERT | Macro | Prec. | 85.78 | 74.75 |
|  |  |  | Rec. | 86.84 | 71.52 |
|  |  |  | F1 | 86.25 | 72.87 |
| Crone (2020) [35] | BERT | Macro | Prec. | 87.92 | 77.73 |
|  |  |  | Rec. | 86.42 | 68.38 |
|  |  |  | F1 | 87.00 | 72.63 |
| Wang and Lu (2020) [42] | ALBERT | Macro | Prec. | - | - |
|  |  |  | Rec. | - | - |
|  |  |  | F1 | 86.90 | 75.40 |
| Cabot and Navigli (2021) [43] | BERT | Macro | Prec. | - | - |
|  |  |  | Rec. | - | - |
|  |  |  | F1 | - | 76.65 |
|  |  | Micro | Prec. | - | - |
|  |  |  | Rec. | - | - |
|  |  |  | F1 | - | 75.40 |
| Shen et al. (2021) [37] | - | Micro | Prec. | 90.30 | 73.00 |
|  |  |  | Rec. | 90.30 | 71.60 |
|  |  |  | F1 | 90.30 | 73.60 |
| Zhao et al. (2021) [44] | - | Micro | Prec. | - | - |
|  |  |  | Rec. | - | - |
|  |  |  | F1 | 90.62 | 72.97 |
| Wan et al. (2023) [20] | BERT | Micro | Prec. | - | - |
|  |  |  | Rec. | - | - |
|  |  |  | F1 | 91.43 | 74.39 |
| Wang et al. (2022) [34] | GLM | Macro | Prec. | - | - |
|  |  |  | Rec. | - | - |
|  |  |  | F1 | 90.70 | 78.30 |
| Ours | BERT | Macro | Prec. | 90.72 | 77.56 |
|  |  |  | Rec. | 84.84 | 75.03 |
|  |  |  | F1 | 87.68 | 76.27 |
|  |  | Micro | Prec. | 91.01 | 75.13 |
|  |  |  | Rec. | 89.58 | 74.46 |
|  |  |  | F1 | 90.28 | 74.79 |
5. Ablation Study
5.1. Effect of Encoder Layers
5.2. Effect of Table Encoding
- Concat: the Concat method represents each word-pair by concatenating the features of the two corresponding tokens. While this collects information at the token level, it overlooks the connections between tokens, yielding coarse-grained pair features. Consequently, the Concat model drops the NER and RE+ F1-scores by 0.69% and 1.40%, respectively.
- Multi-head CNN: convolution is a natural way to merge unit features, aggregating local evidence while predicting scores on a global scale. Fused features composed of correlations between unit features help the model capture local sentence features and learn connections between them, and thus the semantic structure of the sentence. We use a two-layer CNN with a kernel size of 3 and set its output dimension to match the number of decoder heads. CNN-based models capture local features of adjacent cells effectively but struggle with long-distance dependencies. As shown in Table 5, the multi-head CNN has a small negative impact, with NER and RE performance declining by 0.24% and 0.46%.
- CLN: we use the Conditional Layer Normalization (CLN) proposed in [46], which generates a high-quality representation of the word-pair grid; the normalization is performed over the feature dimension. As Table 5 shows, this variant decreases NER by 1.83% and RE by 0.94%. Minimal sketches of all three variants follow this list.
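For concreteness, here are minimal PyTorch sketches of the three table-encoding variants compared above; hidden sizes, head counts, and module layouts are illustrative assumptions rather than the exact configurations used in the experiments.

```python
import torch
import torch.nn as nn

H = 256  # illustrative hidden size

def concat_table(seq):
    """Concat variant: cell (i, j) is simply [token_i ; token_j]."""
    b, n, h = seq.shape                          # seq: (b, n, H)
    rows = seq.unsqueeze(2).expand(b, n, n, h)
    cols = seq.unsqueeze(1).expand(b, n, n, h)
    return torch.cat([rows, cols], dim=-1)       # (b, n, n, 2H)

# Multi-head CNN variant: two conv layers with kernel size 3 over the
# table; output channels equal the assumed number of decoder heads (8).
# Expects channels-first input, i.e. concat_table(seq).permute(0, 3, 1, 2).
cnn = nn.Sequential(
    nn.Conv2d(2 * H, H, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(H, 8, kernel_size=3, padding=1),
)

class CLN(nn.Module):
    """CLN variant: layer-normalize the column feature with a gain and
    bias generated from the row feature, in the spirit of [46]."""

    def __init__(self, h=H):
        super().__init__()
        self.norm = nn.LayerNorm(h, elementwise_affine=False)
        self.gain = nn.Linear(h, h)
        self.bias = nn.Linear(h, h)

    def forward(self, col, row):                 # both (..., H)
        return self.gain(row) * self.norm(col) + self.bias(row)
```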
5.3. Effect of Type Information
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Abbreviations
NER | Named Entity Recognition |
RE | Relation Extraction |
EPO | Entity Pair Overlap |
SEO | Single Entity Overlap |
CNN | Convolutional Neural Network |
CLN | Conditional Layer Normalization |
References
- Chan, Y.S.; Roth, D. Exploiting syntactico-semantic structures for relation extraction. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies (HLT '11), Portland, OR, USA, 19–24 June 2011; pp. 551–560. [Google Scholar]
- Gormley, M.R.; Yu, M.; Dredze, M. Improved relation extraction with feature-rich compositional embedding models. arXiv 2015, arXiv:1505.02419. [Google Scholar] [CrossRef]
- Zhong, Z.; Chen, D. A Frustratingly Easy Approach for Entity and Relation Extraction. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Online, 6–11 June 2021; pp. 50–61. [Google Scholar] [CrossRef]
- Ye, D.; Lin, Y.; Li, P.; Sun, M. Packed Levitated Marker for Entity and Relation Extraction. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Dublin, Ireland, 22–27 May 2022; pp. 4904–4917. [Google Scholar] [CrossRef]
- Li, Q.; Ji, H. Incremental joint extraction of entity mentions and relations. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Baltimore, MD, USA, 22–27 June 2014; pp. 402–412. [Google Scholar] [CrossRef]
- Wang, S.; Zhang, Y.; Che, W.; Liu, T. Joint extraction of entities and relations based on a novel graph scheme. In Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, Stockholm, Sweden, 13–19 July 2018; pp. 1227–1236. [Google Scholar]
- Verga, P.; Strubell, E.; McCallum, A. Simultaneously self-attending to all mentions for full-abstract biological relation extraction. arXiv 2018, arXiv:1802.10569. [Google Scholar] [CrossRef]
- Wang, Y.; Yu, B.; Zhang, Y.; Liu, T.; Sun, L. TPLinker: Single-stage Joint Extraction of Entities and Relations Through Token Pair Linking. arXiv 2020, arXiv:2010.13415. [Google Scholar]
- Zheng, H.; Wen, R.; Chen, X.; Yang, Y.; Zhang, Y.; Zhang, Z.; Zhang, N.; Qin, B.; Xu, M.; Zheng, Y. PRGC: Potential Relation and Global Correspondence Based Joint Relational Triple Extraction. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), Virtual, 1–6 August 2021. [Google Scholar] [CrossRef]
- Sui, D.; Zeng, X.; Chen, Y.; Liu, K.; Zhao, J. Joint entity and relation extraction with set prediction networks. IEEE Trans. Neural Netw. Learn. Syst. 2023, 1–12. [Google Scholar] [CrossRef]
- Devlin, J.; Chang, M.W.; Lee, K.; Toutanova, K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv 2018, arXiv:1810.04805. [Google Scholar]
- Wu, S.; He, Y. Enriching pre-trained language model with entity information for relation classification. In Proceedings of the 28th ACM International Conference on Information and Knowledge Management, Beijing, China, 3–7 November 2019; pp. 2361–2364. [Google Scholar] [CrossRef]
- Eberts, M.; Ulges, A. Span-Based Joint Entity and Relation Extraction with Transformer Pre-Training. arXiv 2020, arXiv:1909.07755. [Google Scholar]
- Wang, Y.; Sun, C.; Wu, Y.; Zhou, H.; Yan, J. UniRE: A Unified Label Space for Entity Relation Extraction. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), Virtual, 1–6 August 2021; pp. 220–231. [Google Scholar] [CrossRef]
- Shang, Y.M.; Huang, H.; Mao, X.L. OneRel: Joint Entity and Relation Extraction with One Module in One Step. Proc. AAAI Conf. Artif. Intell. 2022, 36, 11285–11293. [Google Scholar] [CrossRef]
- Tang, W.; Xu, B.; Zhao, Y.; Mao, Z.; Liu, Y.; Liao, Y.; Xie, H. UniRel: Unified Representation and Interaction for Joint Relational Triple Extraction. arXiv 2022, arXiv:2211.09039. [Google Scholar]
- Liang, S.; Wei, W.; Mao, X.L.; Fu, Y.; Fang, R.; Chen, D. STAGE: Span Tagging and Greedy Inference Scheme for Aspect Sentiment Triplet Extraction. Proc. AAAI Conf. Artif. Intell. 2023, 37, 13174–13182. [Google Scholar] [CrossRef]
- Ren, F.; Zhang, L.; Yin, S.; Zhao, X.; Liu, S.; Li, B.; Liu, Y. A novel global feature-oriented relational triple extraction model based on table filling. arXiv 2021, arXiv:2109.06705. [Google Scholar]
- Ma, Y.; Hiraoka, T.; Okazaki, N. Named entity recognition and relation extraction using enhanced table filling by contextualized representations. J. Nat. Lang. Process. 2022, 29, 187–223. [Google Scholar] [CrossRef]
- Wan, Q.; Wei, L.; Zhao, S.; Liu, J. A Span-based Multi-Modal Attention Network for joint entity-relation extraction. Knowl.-Based Syst. 2023, 262, 110228. [Google Scholar] [CrossRef]
- Hoffmann, R.; Zhang, C.; Ling, X.; Zettlemoyer, L.; Weld, D.S. Knowledge-based weak supervision for information extraction of overlapping relations. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, Portland, OR, USA, 19–24 June 2011; pp. 541–550. [Google Scholar]
- Zeng, X.; Zeng, D.; He, S.; Liu, K.; Zhao, J. Extracting relational facts by an end-to-end neural model with copy mechanism. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Melbourne, Australia, 15–20 July 2018; pp. 506–514. [Google Scholar] [CrossRef]
- Zheng, S.; Wang, F.; Bao, H.; Hao, Y.; Zhou, P.; Xu, B. Joint extraction of entities and relations based on a novel tagging scheme. arXiv 2017, arXiv:1706.05075. [Google Scholar]
- Yu, B.; Zhang, Z.; Shu, X.; Wang, Y.; Liu, T.; Wang, B.; Li, S. Joint extraction of entities and relations based on a novel decomposition strategy. arXiv 2019, arXiv:1909.04273. [Google Scholar]
- Dixit, K.; Al-Onaizan, Y. Span-level model for relation extraction. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, Florence, Italy, 28 July–2 August 2019; pp. 5308–5314. [Google Scholar] [CrossRef]
- Miwa, M.; Sasaki, Y. Modeling joint entity and relation extraction with table representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), Doha, Qatar, 25–29 October 2014; pp. 1858–1869. [Google Scholar] [CrossRef]
- Tran, T.; Kavuluru, R. Neural metric learning for fast end-to-end relation extraction. arXiv 2019, arXiv:1905.07458. [Google Scholar]
- Ren, F.; Zhang, L.; Yin, S.; Zhao, X.; Liu, S.; Li, B.; Liu, Y. A Novel Global Feature-Oriented Relational Triple Extraction Model based on Table Filling. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, Virtual, 7–11 November 2021; pp. 2646–2656. [Google Scholar] [CrossRef]
- Luan, Y.; He, L.; Ostendorf, M.; Hajishirzi, H. Multi-Task Identification of Entities, Relations, and Coreference for Scientific Knowledge Graph Construction. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, 31 October–4 November 2018. [Google Scholar] [CrossRef]
- Gurulingappa, H.; Rajput, A.M.; Roberts, A.; Fluck, J.; Hofmann-Apitius, M.; Toldo, L. Development of a benchmark corpus to support the automatic extraction of drug-related adverse effects from medical case reports. J. Biomed. Inform. 2012, 45, 885–892. [Google Scholar] [CrossRef] [PubMed]
- Roth, D.; Yih, W.t. A linear programming formulation for global inference in natural language tasks. In Proceedings of the Eighth Conference on Computational Natural Language Learning (CoNLL-2004) at HLT-NAACL 2004, Boston, MA, USA, 6–7 May 2004; pp. 1–8. [Google Scholar]
- Gupta, P.; Schütze, H.; Andrassy, B. Table filling multi-task recurrent neural network for joint entity and relation extraction. In Proceedings of the COLING 2016, 26th International Conference on Computational Linguistics: Technical Papers, Osaka, Japan, 11–16 December 2016; pp. 2537–2547. [Google Scholar]
- Kingma, D.P.; Ba, J. Adam: A method for stochastic optimization. arXiv 2014, arXiv:1412.6980. [Google Scholar]
- Wang, C.; Liu, X.; Chen, Z.; Hong, H.; Tang, J.; Song, D. DeepStruct: Pretraining of language models for structure prediction. arXiv 2022, arXiv:2205.10475. [Google Scholar]
- Crone, P. Deeper task-specificity improves joint entity and relation extraction. arXiv 2020, arXiv:2002.06424. [Google Scholar]
- Wadden, D.; Wennberg, U.; Luan, Y.; Hajishirzi, H. Entity, Relation, and Event Extraction with Contextualized Span Representations. arXiv 2019, arXiv:1909.03546. [Google Scholar]
- Shen, Y.; Ma, X.; Tang, Y.; Lu, W. A Trigger-Sense Memory Flow Framework for Joint Entity and Relation Extraction. In Proceedings of the Web Conference 2021, Ljubljana, Slovenia, 19–23 April 2021; pp. 1704–1715. [Google Scholar] [CrossRef]
- Yan, Z.; Zhang, C.; Fu, J.; Zhang, Q.; Wei, Z. A partition filter network for joint entity and relation extraction. arXiv 2021, arXiv:2108.12202. [Google Scholar]
- Santosh, T.; Chakraborty, P.; Dutta, S.; Sanyal, D.K.; Das, P.P. Joint entity and relation extraction from scientific documents: Role of linguistic information and entity types. In Proceedings of the 2nd Workshop on Extraction and Evaluation of Knowledge Entities from Scientific Documents (EEKE 2021), Online, 27–30 September 2021. [Google Scholar]
- Jeong, Y.; Kim, E. SciDeBERTa: Learning DeBERTa for science technology documents and fine-tuning information extraction tasks. IEEE Access 2022, 10, 60805–60813. [Google Scholar] [CrossRef]
- Bekoulis, G.; Deleu, J.; Demeester, T.; Develder, C. Joint entity recognition and relation extraction as a multi-head selection problem. Expert Syst. Appl. 2018, 114, 34–45. [Google Scholar] [CrossRef]
- Wang, J.; Lu, W. Two are Better than One: Joint Entity and Relation Extraction with Table-Sequence Encoders. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), Online, 16–20 November 2020; pp. 1706–1721. [Google Scholar] [CrossRef]
- Cabot, P.L.H.; Navigli, R. REBEL: Relation extraction by end-to-end language generation. In Proceedings of the Findings of the Association for Computational Linguistics: EMNLP 2021, Virtual, 16–20 November 2021; pp. 2370–2381. [Google Scholar] [CrossRef]
- Zhao, S.; Hu, M.; Cai, Z.; Liu, F. Modeling dense cross-modal interactions for joint entity-relation extraction. In Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, Online, 7–15 January 2021; pp. 4032–4038. [Google Scholar]
- Li, X.; Yin, F.; Sun, Z.; Li, X.; Yuan, A.; Chai, D.; Zhou, M.; Li, J. Entity-relation extraction as multi-turn question answering. arXiv 2019, arXiv:1905.05529. [Google Scholar]
- Li, J.; Fei, H.; Liu, J.; Wu, S.; Zhang, M.; Teng, C.; Ji, D.; Li, F. Unified Named Entity Recognition as Word-Word Relation Classification. Proc. AAAI Conf. Artif. Intell. 2022, 36, 10965–10973. [Google Scholar] [CrossRef]
| Method | Metric | ADE | SciERC |
|---|---|---|---|
| Our model | Prec. | 82.14 | 47.88 |
|  | Rec. | 85.88 | 37.51 |
|  | F1 | 83.97 | 42.07 |
| A | Prec. | 81.98 | 43.62 |
|  | Rec. | 85.30 | 35.75 |
|  | F1 | 83.61 | 39.29 |
| B&C | Prec. | 82.17 | 47.78 |
|  | Rec. | 83.77 | 37.87 |
|  | F1 | 82.96 | 40.07 |
| D | Prec. | 80.78 | 45.34 |
|  | Rec. | 83.92 | 37.51 |
|  | F1 | 82.32 | 41.26 |
| Method | NER F1 | RE F1 |
|---|---|---|
| Our model | 90.78 | 84.86 |
| Concat | 90.09 (−0.69) | 83.46 (−1.40) |
| Multi-head CNN | 90.54 (−0.24) | 84.40 (−0.46) |
| CLN | 88.95 (−1.83) | 83.92 (−0.94) |
| Separate-type | 87.91 (−2.87) | 83.25 (−1.61) |