Span-Based Fine-Grained Entity-Relation Extraction via Sub-Prompts Combination
Abstract
1. Introduction
- We propose a new extraction paradigm that solves the coarse-grained problem of the existing paradigm. A method guided by this paradigm can fully integrate entity and relation information during extraction, addressing the problem of insufficient information fusion.
- We adopt the new extraction paradigm and propose SSPC, a model for Span-based Fine-Grained Entity-Relation Extraction via Sub-Prompts Combination. Moreover, the model can easily be adapted to the relation extraction task. SSPC first performs subject extraction, then object extraction, and finally combines the intermediate results through a logic rule to determine whether they are triples that are indeed contained in the sentence.
- We test our model on ADE, TACRED, and TACREV, and the results show that SSPC can significantly and consistently outperform existing state-of-the-art baselines.
2. Related Work
2.1. Joint Entity and Relation Extraction
2.2. Prompt Tuning
3. Methodology
- If a span is of type “Effect” and the sentence indicates that it was caused by a span of type “Drug”, then it can be the subject of the relation;
- If a span is of type “Drug” and the sentence indicates that it resulted in an “Effect”, then it can be the object of the relation;
- If the sentence expresses that the two spans have a “Cause” relation, they can form a triple.
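The combination of these conditions can be sketched as a simple conjunction rule. The function and candidate data below are illustrative placeholders of ours, not the paper's implementation:

```python
def combine_sub_prompts(is_subject, is_object, has_relation):
    # Conjunction rule: every sub-prompt condition must hold
    # for a candidate pair to form a triple.
    return is_subject and is_object and has_relation

# Toy sub-prompt predictions for two candidate span pairs:
# (subject span, object span, subject?, object?, relation?)
candidates = [
    ("cardiomyopathy", "Adriamycin", True, True, True),
    ("dialysis", "Adriamycin", False, True, False),
]
triples = [(s, "Adverse-Effect", o)
           for s, o, subj_ok, obj_ok, rel_ok in candidates
           if combine_sub_prompts(subj_ok, obj_ok, rel_ok)]
```

Only the first pair survives the rule, since the second fails both the subject condition and the relation condition.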
3.1. Sub-Prompts Design
3.2. Subject Extraction
3.3. Object Extraction
3.4. Classification
- The embedding of the subject span. All of the span’s token embeddings are combined using a fusion function f. For f, we choose max-pooling, obtaining the subject span’s representation.
- The size embedding of the subject span. Given the span size l, we look up a size embedding from a dedicated embedding matrix, obtaining the subject span’s size representation. These size embeddings are learned by backpropagation.
- The embedding of the context. Obviously, words from the context are essential indicators of the expressed relation. We use a localized context drawn from the direct surroundings of the spans: given the span ranging from the end of the first entity span to the beginning of the second, we combine its embeddings by max-pooling, obtaining a context representation. If the range is empty (e.g., in the case of overlapping entities), we set the context representation to zero.
- The embedding of the object span. Similar to the subject span’s embedding, we obtain the object span’s representation.
- The size embedding of the object span. Similar to the subject span’s size embedding, we obtain the object span’s size representation.
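The pooling and look-up steps above can be sketched as follows. This is a simplified illustration using plain Python lists; the function and variable names are ours, not the paper's:

```python
def max_pool(vectors):
    """Element-wise max over a list of equal-length vectors (the fusion f)."""
    return [max(dims) for dims in zip(*vectors)]

def span_representation(token_embs, size_embs):
    """Max-pool a span's token embeddings and append the size embedding
    looked up by span length (a sketch of the span representation)."""
    return max_pool(token_embs) + size_embs[len(token_embs)]

def context_representation(embs, left_end, right_start, dim):
    """Max-pool the tokens strictly between the two spans; return an
    all-zero vector if the range is empty (e.g., overlapping entities)."""
    ctx = embs[left_end:right_start]
    return max_pool(ctx) if ctx else [0.0] * dim

# Toy sentence embeddings: 3 tokens, dimension 2.
embs = [[1.0, 0.0], [0.0, 2.0], [3.0, 1.0]]
size_embs = {2: [0.5]}                       # size-embedding "matrix"
rep = span_representation(embs[0:2], size_embs)
empty_ctx = context_representation(embs, 2, 2, dim=2)
```

In a real model the embeddings come from the pre-trained encoder and the size-embedding matrix is a trainable parameter, but the pooling logic is the same.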
3.5. Joint Training
- For the Subject Extraction and Object Extraction, we construct n within-sentence negative examples (“no_relation”) by
  - replacing the subject and object of the relation (the subject acts as the object, and the object acts as the subject),
  - blurring the boundary of the relation’s subject or object (expanding or narrowing the starting and ending positions of a span), and
  - generating another unrelated span as the subject or the object of the relation.
  These negative examples are combined with the existing positive examples to form a training set.
- For the Classification, we construct n within-sentence negative examples (“no_relation”) by
  - replacing the subject and object of a single triple,
  - cross-replacing the subjects and objects of multiple triples,
  - blurring the boundary of a single triple’s subject or object, and
  - generating other unrelated spans as subjects and objects to form triples.
  These negative examples are combined with the existing positive examples to form a training set.
- For example, given the sentence “Adriamycin—induced cardiomyopathy aggravated by cis—platinum nephrotoxicity requiring dialysis.”, we construct negative samples following the strategies above.
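The three span-level strategies can be sketched as below, representing each span as a (start, end) token-index pair. This is a hypothetical illustration; the function names and the exact blurring/sampling policy are our assumptions, not the paper's:

```python
import random

def make_negatives(subj, obj, sent_len, rng):
    """Construct within-sentence negatives from one positive (subj, obj) pair:
    1) swap the roles, 2) blur a boundary, 3) use a random unrelated span."""
    negatives = [(obj, subj)]            # 1) subject and object swapped
    s, e = subj
    blurred = (max(0, s - 1), e)         # 2) expand the start boundary by one
    if blurred != subj:
        negatives.append((blurred, obj))
    start = rng.randrange(sent_len)      # 3) random unrelated span as subject
    end = min(sent_len, start + rng.randrange(1, 4))
    if (start, end) not in (subj, obj):
        negatives.append(((start, end), obj))
    return negatives

rng = random.Random(0)
negs = make_negatives(subj=(1, 2), obj=(5, 6), sent_len=10, rng=rng)
```

The swap and blur negatives are deterministic; the third may be skipped when the random span coincides with an existing entity.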
3.6. Adapting to the Relation Extraction Task
4. Experiments
4.1. Datasets
- ADE: The ADE dataset [36] is about drugs and their adverse effects. A sentence contains entities of the drug type and the adverse-effect type and expresses the corresponding relation between them. It consists of 4272 sentences, of which 1695 contain overlapping phenomena. As shown in Table 1, the triples in the ADE dataset can be complex even though only one relation type is involved: a sentence may contain multiple independent triples (1), one subject paired with many objects (2), many subjects paired with one object (3), or crossing subjects and objects (4). As in previous work, we conduct 10-fold cross-validation. The core pre-trained language model is “microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext”; the maximum length of a sentence is set to 128; the maximum length of a span is set to 5; the number of within-sentence negative samples is set to 70. The experiments are conducted on an NVIDIA GeForce RTX 3090 24 GB GPU. Other hyperparameters are shown in Table 2.
- TACRED and TACREV: The TACRED dataset [37] is a sizeable relation extraction dataset with 106,264 examples (68,124 for training, 22,631 for validation, and 15,509 for testing). These examples were created by combining available human annotations from the TAC KBP challenges with crowdsourcing. It contains 42 relation types, such as “per:parents”, “org:website”, and “no_relation”. The TACREV dataset [38] is built on the original TACRED dataset: it finds and corrects errors in the original development and test sets of TACRED, while the training set is left intact. For our experiments, we set entity types according to the relation types, including “person”, “organization”, “religion”, “country”, “state”, “city”, “title”, “number”, “URL”, “event”, and “date”. For few-shot learning, consistent with previous work, we sample K training and K validation instances per class from the original training and development sets and evaluate models on the original test set, with K set to 8, 16, and 32. The core pre-trained language model is “roberta-large”; the maximum length of a sentence is set to 256; the maximum length of a span is set to 5. The experiments are conducted on an NVIDIA GeForce RTX 3090 24 GB GPU. Other hyperparameters are shown in Table 2.
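The K-shot sampling described above can be sketched as follows. The field names (`"label"`) and seeding policy are our assumptions for illustration:

```python
import random
from collections import defaultdict

def sample_k_shot(examples, k, seed=0):
    """Sample K training instances per relation class, as in the
    few-shot setup: group by label, then draw K from each group."""
    rng = random.Random(seed)
    by_label = defaultdict(list)
    for ex in examples:
        by_label[ex["label"]].append(ex)
    subset = []
    for label in sorted(by_label):           # deterministic class order
        items = by_label[label]
        subset.extend(rng.sample(items, min(k, len(items))))
    return subset

# Toy pool: 5 examples of class "a", 3 of class "b".
pool = [{"label": lab, "id": i}
        for i, lab in enumerate(["a"] * 5 + ["b"] * 3)]
subset = sample_k_shot(pool, k=2)
```

The same procedure is applied independently to the training and development sets, and evaluation always uses the full original test set.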
4.2. Comparison with the State-of-the-Art Model of the ADE Dataset
- Natural Language Understanding (NLU) models, including “CNN + Global features” proposed by [16], “BiLSTM + SDP” proposed by [17], “Multi-head” proposed by [18], “Multi-head + AT” proposed by [19], “Relation-Metric” proposed by [20], “SpERT” proposed by [10], and “SPAN” proposed by [9]. “SpERT” and “SPAN” are both span-based joint entity and relation extraction models: “SpERT” performs span-based joint entity and relation extraction with transformer pre-training, while “SPAN” uses attention-based span-specific and contextual semantic representations and is the current span-based state-of-the-art model on the ADE dataset.
- Natural Language Generation (NLG) models, including “TANL” proposed by [21] and “REBEL” proposed by [22]. Recently, seq2seq models have performed well not only in language generation but also in NLU tasks. “TANL” is generative: it translates from an input to an output in augmented natural language. “REBEL” is a seq2seq model based on BART that performs end-to-end relation extraction and is the current state-of-the-art model on the ADE dataset.
4.3. Comparison with the State-of-the-Art Model of Relation Extraction
- ENT MARKER and TYP MARKER: [39] uses prompt tuning for relation extraction. ENT MARKER injects special symbols to mark the positions of entities; it resembles prompting in that it introduces extra sequential information to indicate the positions of special tokens, i.e., named entities. TYP MARKER additionally introduces the type information of entities; it can be regarded as a type of prompt template but requires additional annotation of type information.
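The marker-injection idea can be illustrated as below. The `[E1]`/`[E2]` symbols are placeholders of ours; actual implementations choose their own special tokens:

```python
def ent_marker(tokens, subj, obj):
    """Inject special symbols around the subject and object spans,
    given as (start, end) token-index pairs (end exclusive)."""
    out = []
    for i, tok in enumerate(tokens):
        if i == subj[0]:
            out.append("[E1]")          # open subject marker
        if i == obj[0]:
            out.append("[E2]")          # open object marker
        out.append(tok)
        if i == subj[1] - 1:
            out.append("[/E1]")         # close subject marker
        if i == obj[1] - 1:
            out.append("[/E2]")         # close object marker
    return " ".join(out)

marked = ent_marker(["Bill", "founded", "Microsoft"], (0, 1), (2, 3))
```

TYP MARKER would additionally encode the entity types in the marker symbols (e.g., a person-typed opening marker), which is what requires the extra type annotation.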
- PTR: [3] proposes prompt tuning with rules (PTR) for many-class text classification and applies logic rules to construct prompts with several sub-prompts.
- KnowPrompt: [35] incorporates knowledge among relation labels into prompt tuning for relation extraction and proposes a knowledge-aware prompt-tuning approach with synergistic optimization. It is the current state-of-the-art model on TACRED and TACREV in few-shot learning scenarios.
4.4. Ablation Study
4.5. Subject and Object Extraction Inspection
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
1. Zeng, D.; Liu, K.; Lai, S.; Zhou, G.; Zhao, J. Relation classification via convolutional deep neural network. In Proceedings of the COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers, Dublin, Ireland, 23–29 August 2014; pp. 2335–2344.
2. Zhou, P.; Shi, W.; Tian, J.; Qi, Z.; Li, B.; Hao, H.; Xu, B. Attention-based bidirectional long short-term memory networks for relation classification. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, Berlin, Germany, 7–12 August 2016; pp. 207–212.
3. Han, X.; Zhao, W.; Ding, N.; Liu, Z.; Sun, M. PTR: Prompt Tuning with Rules for Text Classification. AI Open 2022, 3, 182–192.
4. Miwa, M.; Bansal, M. End-to-End Relation Extraction using LSTMs on Sequences and Tree Structures. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, Berlin, Germany, 7–12 August 2016; pp. 1105–1116.
5. Adel, H.; Schütze, H. Global Normalization of Convolutional Neural Networks for Joint Entity and Relation Classification. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, Copenhagen, Denmark, 7–11 September 2017; pp. 1723–1729.
6. Katiyar, A.; Cardie, C. Going out on a limb: Joint extraction of entity mentions and relations without dependency trees. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Vancouver, BC, Canada, 30 July–4 August 2017; pp. 917–928.
7. Yamada, I.; Asai, A.; Shindo, H.; Takeda, H.; Matsumoto, Y. LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), Online, 16–20 November 2020; pp. 6442–6454.
8. Zeng, X.; Zeng, D.; He, S.; Liu, K.; Zhao, J. Extracting relational facts by an end-to-end neural model with copy mechanism. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, Melbourne, Australia, 15–20 July 2018; pp. 506–514.
9. Ji, B.; Yu, J.; Li, S.; Ma, J.; Wu, Q.; Tan, Y.; Liu, H. Span-based joint entity and relation extraction with attention-based span-specific and contextual semantic representations. In Proceedings of the 28th International Conference on Computational Linguistics, Barcelona, Spain, 8–13 December 2020; pp. 88–99.
10. Eberts, M.; Ulges, A. Span-Based Joint Entity and Relation Extraction with Transformer Pre-Training. In Proceedings of the ECAI 2020, Santiago de Compostela, Spain, 29 August–8 September 2020; pp. 2006–2013.
11. Li, X.; Yin, F.; Sun, Z.; Li, X.; Yuan, A.; Chai, D.; Zhou, M.; Li, J. Entity-Relation Extraction as Multi-Turn Question Answering. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, Florence, Italy, 28 July–2 August 2019; pp. 1340–1350.
12. Zhao, T.; Yan, Z.; Cao, Y.; Li, Z. Asking effective and diverse questions: A machine reading comprehension based framework for joint entity-relation extraction. In Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, Yokohama, Japan, 7–15 January 2021; pp. 3948–3954.
13. Chen, Z.; Guo, C. A pattern-first pipeline approach for entity and relation extraction. Neurocomputing 2022, 494, 182–191.
14. Takanobu, R.; Zhang, T.; Liu, J.; Huang, M. A hierarchical framework for relation extraction with reinforcement learning. In Proceedings of the AAAI Conference on Artificial Intelligence, Honolulu, HI, USA, 27 January–1 February 2019; Volume 33, pp. 7072–7079.
15. Xie, C.; Liang, J.; Liu, J.; Huang, C.; Huang, W.; Xiao, Y. Revisiting the Negative Data of Distantly Supervised Relation Extraction. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, Online, 5–6 August 2021; pp. 3572–3581.
16. Li, F.; Zhang, Y.; Zhang, M.; Ji, D. Joint Models for Extracting Adverse Drug Events from Biomedical Text. In Proceedings of the IJCAI, New York, NY, USA, 9–15 July 2016; Volume 2016, pp. 2838–2844.
17. Li, F.; Zhang, M.; Fu, G.; Ji, D. A neural joint model for entity and relation extraction from biomedical text. BMC Bioinform. 2017, 18, 1–11.
18. Bekoulis, G.; Deleu, J.; Demeester, T.; Develder, C. Joint entity recognition and relation extraction as a multi-head selection problem. Expert Syst. Appl. 2018, 114, 34–45.
19. Bekoulis, G.; Deleu, J.; Demeester, T.; Develder, C. Adversarial training for multi-context joint entity and relation extraction. In Proceedings of the EMNLP 2018, the Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, 31 October–4 November 2018; pp. 1–7.
20. Tran, T.; Kavuluru, R. Neural metric learning for fast end-to-end relation extraction. arXiv 2019, arXiv:1905.07458.
21. Paolini, G.; Athiwaratkun, B.; Krone, J.; Ma, J.; Achille, A.; Anubhai, R.; dos Santos, C.N.; Xiang, B.; Soatto, S. Structured Prediction as Translation between Augmented Natural Languages. In Proceedings of the International Conference on Learning Representations, Addis Ababa, Ethiopia, 26–30 April 2020.
22. Cabot, P.L.H.; Navigli, R. REBEL: Relation extraction by end-to-end language generation. In Proceedings of the Findings of the Association for Computational Linguistics: EMNLP 2021, Punta Cana, Dominican Republic, 16–20 November 2021; pp. 2370–2381.
23. Radford, A.; Narasimhan, K.; Salimans, T.; Sutskever, I. Improving language understanding by generative pre-training. OpenAI 2018. Available online: https://www.cs.ubc.ca/~amuham01/LING530/papers/radford2018improving.pdf (accessed on 1 December 2022).
24. Devlin, J.; Chang, M.-W.; Lee, K.; Toutanova, K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of the NAACL-HLT, Minneapolis, MN, USA, 2–7 June 2019; pp. 4171–4186.
25. Liu, X.; Zheng, Y.; Du, Z.; Ding, M.; Qian, Y.; Yang, Z.; Tang, J. GPT understands, too. arXiv 2021, arXiv:2103.10385.
26. Raffel, C.; Shazeer, N.; Roberts, A.; Lee, K.; Narang, S.; Matena, M.; Zhou, Y.; Li, W.; Liu, P.J. Exploring the limits of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res. 2020, 21, 1–67.
27. Brown, T.; Mann, B.; Ryder, N.; Subbiah, M.; Kaplan, J.D.; Dhariwal, P.; Neelakantan, A.; Shyam, P.; Sastry, G.; Askell, A.; et al. Language models are few-shot learners. Adv. Neural Inf. Process. Syst. 2020, 33, 1877–1901.
28. Schick, T.; Schütze, H. Exploiting cloze questions for few shot text classification and natural language inference. arXiv 2020, arXiv:2001.07676.
29. Rubin, O.; Herzig, J.; Berant, J. Learning to retrieve prompts for in-context learning. arXiv 2021, arXiv:2112.08633.
30. Li, J.; Tang, T.; Nie, J.Y.; Wen, J.R.; Zhao, W.X. Learning to Transfer Prompts for Text Generation. arXiv 2022, arXiv:2205.01543.
31. Kasahara, T.; Kawahara, D.; Tung, N.; Li, S.; Shinzato, K.; Sato, T. Building a Personalized Dialogue System with Prompt-Tuning. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Student Research Workshop, Online, 10–15 July 2022; pp. 96–105.
32. Cui, L.; Wu, Y.; Liu, J.; Yang, S.; Zhang, Y. Template-based named entity recognition using BART. arXiv 2021, arXiv:2106.01760.
33. Lee, D.H.; Agarwal, M.; Kadakia, A.; Pujara, J.; Ren, X. Good examples make A faster learner: Simple demonstration-based learning for low-resource NER. arXiv 2021, arXiv:2110.08454.
34. Ma, R.; Zhou, X.; Gui, T.; Tan, Y.; Zhang, Q.; Huang, X. Template-free prompt tuning for few-shot NER. arXiv 2021, arXiv:2109.13532.
35. Chen, X.; Zhang, N.; Xie, X.; Deng, S.; Yao, Y.; Tan, C.; Huang, F.; Si, L.; Chen, H. KnowPrompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction. In Proceedings of the ACM Web Conference 2022, Lyon, France, 25–29 April 2022; pp. 2778–2788.
36. Gurulingappa, H.; Rajput, A.M.; Roberts, A.; Fluck, J.; Hofmann-Apitius, M.; Toldo, L. Development of a benchmark corpus to support the automatic extraction of drug-related adverse effects from medical case reports. J. Biomed. Inform. 2012, 45, 885–892.
37. Zhang, Y.; Zhong, V.; Chen, D.; Angeli, G.; Manning, C.D. Position-aware attention and supervised data improve slot filling. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, Copenhagen, Denmark, 7–11 September 2017.
38. Alt, C.; Gabryszak, A.; Hennig, L. TACRED Revisited: A Thorough Evaluation of the TACRED Relation Extraction Task. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, Online, 5–10 July 2020; pp. 1558–1569.
39. Zhou, W.; Chen, M. An Improved Baseline for Sentence-level Relation Extraction. In Proceedings of the 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing, Minneapolis, MN, USA, 20–21 November 2022; pp. 161–168.
| | Sentence | Triples |
|---|---|---|
| 1 | Adriamycin—induced cardiomyopathy aggravated by cis—platinum nephrotoxicity requiring dialysis. | {<cardiomyopathy, Adverse-Effect, Adriamycin>, <nephrotoxicity, Adverse-Effect, cis—platinum>} |
| 2 | Possible interaction between lopinavir/ritonavir and valproic Acid exacerbates bipolar disorder. | {<bipolar disorder, Adverse-Effect, lopinavir>, <bipolar disorder, Adverse-Effect, ritonavir>, <bipolar disorder, Adverse-Effect, valproic Acid>} |
| 3 | Listeria brain abscess, Pneumocystis pneumonia and Kaposi’s sarcoma after temozolomide. | {<Listeria brain abscess, Adverse-Effect, temozolomide>, <Pneumocystis pneumonia, Adverse-Effect, temozolomide>, <Kaposi’s sarcoma, Adverse-Effect, temozolomide>} |
| 4 | Neutropenia and agranulocytosis are risks known to occur with phenothiazines and clozapine. | {<Neutropenia, Adverse-Effect, phenothiazines>, <Neutropenia, Adverse-Effect, clozapine>, <agranulocytosis, Adverse-Effect, phenothiazines>, <agranulocytosis, Adverse-Effect, clozapine>} |
| Dataset | Max Epochs | Learning Rate | Warm-Up | Weight Decay | Batch Size | Time Per Epoch |
|---|---|---|---|---|---|---|
| ADE | 20 | | 10% | 0.01 | 1 | 46 min × 10 |
| TACRED | 20 | | 10% | 0.01 | 8 | 102 min |
| TACREV | 20 | | 10% | 0.01 | 8 | 102 min |
| TACRED | 50 | | 10% | 0.01 | 8 | 0.5–2 min |
| TACREV | 50 | | 10% | 0.01 | 8 | 0.5–2 min |
| Dataset | Model | Entity Precision | Entity Recall | Entity F1 | Relation Precision | Relation Recall | Relation F1 |
|---|---|---|---|---|---|---|---|
| ADE | CNN + Global features [16] | 79.50 | 79.60 | 79.50 | 64.00 | 62.90 | 63.40 |
| | BiLSTM + SDP [17] | 82.70 | 86.70 | 84.60 | 67.50 | 75.80 | 71.40 |
| | Multi-head [18] | 84.72 | 88.16 | 86.40 | 72.10 | 77.24 | 74.58 |
| | Multi-head + AT [19] | - | - | 86.73 | - | - | 75.52 |
| | Relation-Metric [20] | 86.16 | 88.08 | 87.11 | 77.36 | 77.25 | 77.29 |
| | SpERT [10] | 88.99 | 89.59 | 89.28 | 77.77 | 79.96 | 78.84 |
| | SPAN [9] | 89.88 | 91.32 | 90.59 | 79.56 | 81.93 | 80.73 |
| | TANL [21] | - | - | 90.20 | - | - | 80.60 |
| | REBEL [22] | - | - | - | - | - | 81.70 |
| | SSPC (ours) | 90.66 | 91.32 | 90.99 | 80.79 | 82.86 | 81.79 |
| Model | TACRED All Data | TACRED K=8 | TACRED K=16 | TACRED K=32 | TACRED Mean | TACREV All Data | TACREV K=8 | TACREV K=16 | TACREV K=32 | TACREV Mean |
|---|---|---|---|---|---|---|---|---|---|---|
| ENT MARKER [39] | 69.4 | 27.0 | 31.3 | 31.9 | 30.1 | 79.8 | 27.4 | 31.2 | 32.0 | 30.2 |
| TYP MARKER [39] | 71.0 | 28.9 | 32.0 | 32.4 | 31.1 | 80.8 | 27.6 | 31.2 | 32.0 | 30.3 |
| PTR [3] | 72.4 | 28.1 | 30.7 | 32.1 | 30.3 | 81.4 | 28.7 | 31.4 | 32.4 | 30.8 |
| KnowPrompt [35] | 72.4 | 32.0 | 35.4 | 36.5 | 34.6 | 82.4 | 32.1 | 33.1 | 34.7 | 33.3 |
| SSPC (ours) | 72.7 | 32.7 | 34.9 | 37.0 | 34.9 | 83.1 | 32.7 | 34.2 | 37.7 | 34.9 |
| Method | Precision | Recall | F1 |
|---|---|---|---|
| Full | 80.79 | 82.86 | 81.79 |
| w/o (3-2) | 68.33 | 80.18 | 73.78 |
| w/o (3-1) | 73.84 | 81.25 | 77.37 |
| w/o (3-1) and (3-2) | 60.40 | 87.79 | 71.56 |
| K | Precision | Recall | F1 | Precision | Recall | F1 |
|---|---|---|---|---|---|---|
| 1 | 86.75 | 85.69 | 86.22 | 94.97 | 94.38 | 94.67 |
| 2 | 84.40 | 90.05 | 87.13 | 95.97 | 96.75 | 96.36 |
| 3 | 85.57 | 87.37 | 86.46 | 96.95 | 95.01 | 95.97 |
| 4 | 85.56 | 85.41 | 85.49 | 95.66 | 95.07 | 95.37 |
| 5 | 85.87 | 87.07 | 86.47 | 95.92 | 97.53 | 96.72 |
| 6 | 84.81 | 84.39 | 84.60 | 97.73 | 94.23 | 95.95 |
| 7 | 84.88 | 87.20 | 86.02 | 95.61 | 95.61 | 95.61 |
| 8 | 82.72 | 85.89 | 84.27 | 96.75 | 95.21 | 95.98 |
| 9 | 86.44 | 88.08 | 87.25 | 96.33 | 97.52 | 96.92 |
| 10 | 85.71 | 86.01 | 85.86 | 94.61 | 97.93 | 96.24 |
| Mean | 85.27 | 86.72 | 85.99 | 96.05 | 95.92 | 95.99 |
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Yu, N.; Liu, J.; Shi, Y. Span-Based Fine-Grained Entity-Relation Extraction via Sub-Prompts Combination. Appl. Sci. 2023, 13, 1159. https://doi.org/10.3390/app13021159