NASca and NASes: Two Monolingual Pre-Trained Models for Abstractive Summarization in Catalan and Spanish
Abstract
1. Introduction
- A monolingual abstractive text summarization model, News Abstractive Summarization for Catalan (NASca), is proposed. The model, based on the BART architecture [6], is pre-trained with several self-supervised tasks designed to improve the abstractivity of the generated summaries, and is fine-tuned on DACSA, a corpus of online newspaper articles.
- An evaluation of the summarization performance of the model and of the degree of abstractivity of its generated summaries is presented. Each NAS model is compared with summarization models based on well-known multilingual language models (mBART [9] and mT5 [10]), fine-tuned for the summarization task on the DACSA corpus for the corresponding language.
- A text summarization model with the same pre-training process as NASca, News Abstractive Summarization for Spanish (NASes), is also trained and evaluated for Spanish.
- The content reordering metric is proposed; it quantifies whether the extractive content within an abstractive summary is written in a different order than in the source document.
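As a minimal sketch of the idea behind such a metric (illustrative only: the function names and the greedy fragment alignment below are assumptions, not the paper's exact formulation), one can align the summary to the document as a sequence of shared token fragments and then count how many fragment pairs appear in inverted order:

```python
from typing import List, Tuple

def extractive_fragments(doc: List[str], summ: List[str]) -> List[Tuple[int, int]]:
    """Greedily align the summary to the document as a sequence of longest
    shared token fragments (in the spirit of Grusky et al.'s extractive
    fragments). Returns (doc_start, summ_start) per fragment."""
    frags, i = [], 0
    while i < len(summ):
        best_len, best_j = 0, -1
        for j in range(len(doc)):
            k = 0
            while i + k < len(summ) and j + k < len(doc) and summ[i + k] == doc[j + k]:
                k += 1
            if k > best_len:
                best_len, best_j = k, j
        if best_len == 0:
            i += 1  # token never appears in the document: skip it
        else:
            frags.append((best_j, i))
            i += best_len
    return frags

def content_reordering(doc: List[str], summ: List[str]) -> float:
    """Fraction of fragment pairs whose relative order in the summary is
    inverted with respect to their order in the source document."""
    frags = extractive_fragments(doc, summ)  # already sorted by summary position
    n = len(frags)
    if n < 2:
        return 0.0
    inversions = sum(
        1
        for a in range(n)
        for b in range(a + 1, n)
        if frags[a][0] > frags[b][0]  # document order is inverted
    )
    return inversions / (n * (n - 1) / 2)
```

For example, a summary "d e a b" of the document "a b c d e f" consists of two fragments that appear in swapped order, so the sketch returns 1.0; the in-order summary "a b d e" returns 0.0.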
2. Related Work
3. Newspapers Summarization Corpus
4. Summarization Models
5. Metrics
6. Results
6.1. Summarization Performance of the Models for Catalan
6.2. Abstractivity of the Summaries Generated by the Models for Catalan
6.3. Summarization Performance and Abstractivity of the Summaries Generated by the Models for Spanish
7. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Conflicts of Interest
Abbreviations
| Abbreviation | Meaning |
|---|---|
| DACSA | Dataset for Automatic summarization of Catalan and Spanish newspaper Articles |
| GSG | Gap Sentences Generation |
| MDG | Masked Document Generation |
| NASca | News Abstractive Summarization for Catalan |
| NASes | News Abstractive Summarization for Spanish |
| NSG | Next Segment Generation |
| SR | Sentence Reordering |
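The pre-training tasks abbreviated above (GSG, MDG, NSG, SR) are self-supervised: input/target pairs are derived from raw documents without human annotation. Two of them can be sketched roughly as follows (the function names and sampling choices here are illustrative assumptions, not the paper's implementation):

```python
import random

def sentence_reordering_example(sentences, seed=0):
    """SR: the input is the document with its sentences shuffled; the
    target is the original document, so the model learns to restore
    sentence order."""
    rng = random.Random(seed)
    shuffled = list(sentences)
    rng.shuffle(shuffled)
    return " ".join(shuffled), " ".join(sentences)

def gap_sentences_example(sentences, ratio=0.3, mask="<mask>"):
    """GSG (PEGASUS-style): some sentences are replaced by a mask token
    in the input and become the generation target. PEGASUS selects
    'principal' sentences scored by ROUGE against the rest of the
    document; here the leading sentences are taken purely for
    illustration."""
    n_gap = max(1, round(len(sentences) * ratio))
    inp = [mask if i < n_gap else s for i, s in enumerate(sentences)]
    return " ".join(inp), " ".join(sentences[:n_gap])
```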
Appendix A. Summarization Example
References
1. Rane, N.; Govilkar, S. Recent Trends in Deep Learning Based Abstractive Text Summarization. Int. J. Recent Technol. Eng. 2019, 8, 3108–3115.
2. Jing, H. Using Hidden Markov Modeling to Decompose Human-Written Summaries. Comput. Linguist. 2002, 28, 527–543.
3. Verma, P.; Pal, S.; Om, H. A Comparative Analysis on Hindi and English Extractive Text Summarization. ACM Trans. Asian Low-Resour. Lang. Inf. Process. 2019, 18, 1–39.
4. Widyassari, A.P.; Rustad, S.; Shidik, G.F.; Noersasongko, E.; Syukur, A.; Affandy, A.; Setiadi, D.R.I.M. Review of Automatic Text Summarization Techniques & Methods. J. King Saud Univ. Comput. Inf. Sci. 2020.
5. National Information Standards Organization. Guidelines for Abstracts; American National Standards Institute: Gaithersburg, MD, USA, 1997.
6. Lewis, M.; Liu, Y.; Goyal, N.; Ghazvininejad, M.; Mohamed, A.; Levy, O.; Stoyanov, V.; Zettlemoyer, L. BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, Online, 5–10 July 2020; pp. 7871–7880.
7. Zhang, J.; Zhao, Y.; Saleh, M.; Liu, P. PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization. In Proceedings of the 37th International Conference on Machine Learning, Vienna, Austria, 13–18 July 2020; pp. 11328–11339.
8. Raffel, C.; Shazeer, N.; Roberts, A.; Lee, K.; Narang, S.; Matena, M.; Zhou, Y.; Li, W.; Liu, P.J. Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer. J. Mach. Learn. Res. 2020, 21, 1–67.
9. Liu, Y.; Gu, J.; Goyal, N.; Li, X.; Edunov, S.; Ghazvininejad, M.; Lewis, M.; Zettlemoyer, L. Multilingual Denoising Pre-training for Neural Machine Translation. Trans. Assoc. Comput. Linguist. 2020, 8, 726–742.
10. Xue, L.; Constant, N.; Roberts, A.; Kale, M.; Al-Rfou, R.; Siddhant, A.; Barua, A.; Raffel, C. mT5: A Massively Multilingual Pre-trained Text-to-Text Transformer. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Mexico City, Mexico, 6–11 June 2021; pp. 483–498.
11. Cañete, J.; Chaperon, G.; Fuentes, R.; Ho, J.H.; Kang, H.; Pérez, J. Spanish Pre-Trained BERT Model and Evaluation Data. 2020. Available online: https://users.dcc.uchile.cl/~jperez/papers/pml4dc2020.pdf (accessed on 19 October 2021).
12. Martin, L.; Muller, B.; Ortiz Suárez, P.J.; Dupont, Y.; Romary, L.; de la Clergerie, É.V.; Seddah, D.; Sagot, B. CamemBERT: A Tasty French Language Model. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, Online, 5–10 July 2020.
13. Virtanen, A.; Kanerva, J.; Ilo, R.; Luoma, J.; Luotolahti, J.; Salakoski, T.; Ginter, F.; Pyysalo, S. Multilingual Is Not Enough: BERT for Finnish. arXiv 2019, arXiv:1912.07076.
14. Pires, T.; Schlinger, E.; Garrette, D. How Multilingual is Multilingual BERT? In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, Florence, Italy, 2019; pp. 4996–5001.
15. DACSA: A Dataset for Automatic summarization of Catalan and Spanish newspaper Articles. Unsubmitted.
16. Wolf, T.; Debut, L.; Sanh, V.; Chaumond, J.; Delangue, C.; Moi, A.; Cistac, P.; Rault, T.; Louf, R.; Funtowicz, M.; et al. Transformers: State-of-the-Art Natural Language Processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, Online, 16–20 November 2020; pp. 38–45.
17. Zhong, M.; Liu, P.; Chen, Y.; Wang, D.; Qiu, X.; Huang, X. Extractive Summarization as Text Matching. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, Online, 5–10 July 2020; pp. 6197–6208.
18. Liu, Y.; Lapata, M. Text Summarization with Pretrained Encoders. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Hong Kong, China, 3–7 November 2019; pp. 3730–3740.
19. Nallapati, R.; Zhai, F.; Zhou, B. SummaRuNNer: A Recurrent Neural Network Based Sequence Model for Extractive Summarization of Documents. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, San Francisco, CA, USA, 4–9 February 2017; pp. 3075–3081.
20. Rush, A.M.; Chopra, S.; Weston, J. A Neural Attention Model for Abstractive Sentence Summarization. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, Lisbon, Portugal, 17–21 September 2015; pp. 379–389.
21. Nallapati, R.; Zhou, B.; dos Santos, C.; Gülçehre, Ç.; Xiang, B. Abstractive Text Summarization Using Sequence-to-Sequence RNNs and Beyond. In Proceedings of the 20th SIGNLL Conference on Computational Natural Language Learning, Berlin, Germany, 11–12 August 2016; pp. 280–290.
22. See, A.; Liu, P.J.; Manning, C.D. Get To The Point: Summarization with Pointer-Generator Networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, Vancouver, BC, Canada, 30 July–4 August 2017; pp. 1073–1083.
23. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, L.; Polosukhin, I. Attention is All You Need. In Proceedings of the 31st International Conference on Neural Information Processing Systems, Long Beach, CA, USA, 4–9 December 2017; pp. 6000–6010.
24. Qi, W.; Yan, Y.; Gong, Y.; Liu, D.; Duan, N.; Chen, J.; Zhang, R.; Zhou, M. ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training. In Findings of the Association for Computational Linguistics: EMNLP 2020, Online, 16–20 November 2020; pp. 2401–2410.
25. Magooda, A.; Litman, D.J. Abstractive Summarization for Low Resource Data Using Domain Transfer and Data Synthesis. In Proceedings of the Thirty-Third International Flairs Conference, North Miami Beach, FL, USA, 17–20 May 2020.
26. Kryściński, W.; Paulus, R.; Xiong, C.; Socher, R. Improving Abstraction in Text Summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, 31 October–4 November 2018; pp. 1808–1817.
27. Zou, Y.; Zhang, X.; Lu, W.; Wei, F.; Zhou, M. Pre-training for Abstractive Document Summarization by Reinstating Source Text. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), Online, 16–20 November 2020; pp. 3646–3660.
28. Le, H.; Vial, L.; Frej, J.; Segonne, V.; Coavoux, M.; Lecouteux, B.; Allauzen, A.; Crabbé, B.; Besacier, L.; Schwab, D. FlauBERT: Unsupervised Language Model Pre-training for French. In Proceedings of the 12th Language Resources and Evaluation Conference, Marseille, France, 11–16 May 2020; pp. 2479–2490.
29. de Vries, W.; van Cranenburgh, A.; Bisazza, A.; Caselli, T.; van Noord, G.; Nissim, M. BERTje: A Dutch BERT Model. arXiv 2019, arXiv:1912.09582.
30. González, J.Á.; Hurtado, L.F.; Pla, F. TWilBert: Pre-trained Deep Bidirectional Transformers for Spanish Twitter. Neurocomputing 2021, 426, 58–69.
31. Ortiz Suárez, P.J.; Romary, L.; Sagot, B. A Monolingual Approach to Contextualized Word Embeddings for Mid-Resource Languages. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, Online, 5–10 July 2020; pp. 1703–1714.
32. Lin, C.Y. ROUGE: A Package for Automatic Evaluation of Summaries. In Text Summarization Branches Out; Association for Computational Linguistics: Barcelona, Spain, 2004; pp. 74–81.
33. Zhang, T.; Kishore, V.; Wu, F.; Weinberger, K.Q.; Artzi, Y. BERTScore: Evaluating Text Generation with BERT. In Proceedings of the 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, 26–30 April 2020.
34. Grusky, M.; Naaman, M.; Artzi, Y. Newsroom: A Dataset of 1.3 Million Summaries with Diverse Extractive Strategies. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), New Orleans, LA, USA, 1–6 June 2018; pp. 708–719.
35. Bommasani, R.; Cardie, C. Intrinsic Evaluation of Summarization Datasets. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), Online, 16–20 November 2020; pp. 8075–8096.
36. Barth, W.; Mutzel, P.; Jünger, M. Simple and Efficient Bilayer Cross Counting. J. Graph Algorithms Appl. 2004, 8, 179–194.
| Source | Docs | Article Tokens | Article Vocab. | Article Sents/Doc | Article Words/Sent | Summary Vocab. | Summary Sents/Doc | Summary Words/Sent |
|---|---|---|---|---|---|---|---|---|
| #1 | 238,233 | 114,500,016 | 614,146 | 17.68 | 27.19 | 115,954 | 1.14 | 20.16 |
| #2 | 194,697 | 105,119,526 | 621,612 | 19.99 | 27.01 | 112,904 | 1.28 | 19.14 |
| #3 | 137,447 | 63,683,416 | 485,286 | 14.99 | 30.92 | 91,975 | 1.05 | 22.65 |
| #4 | 56,827 | 24,891,291 | 276,720 | 14.84 | 29.52 | 58,071 | 1.21 | 17.52 |
| #5 | 44,381 | 26,977,332 | 277,225 | 18.04 | 33.69 | 55,216 | 1.15 | 23.86 |
| #6 | 35,763 | 17,181,460 | 202,931 | 11.31 | 42.49 | 42,289 | 1.05 | 22.79 |
| #7 * | 7104 | 3,800,842 | 83,942 | 18.04 | 29.66 | 19,267 | 1.02 | 26.51 |
| #8 * | 5882 | 9,414,192 | 185,977 | 66.04 | 24.24 | 31,006 | 2.54 | 24.84 |
| #9 * | 4850 | 2,667,185 | 102,024 | 23.61 | 23.29 | 19,584 | 1.16 | 28.05 |
| Set | 725,184 | 368,235,260 | 1,326,343 | 17.71 | 28.67 | 223,978 | 1.17 | 20.59 |
| Source | Docs | Article Tokens | Article Vocab. | Article Sents/Doc | Article Words/Sent | Summary Vocab. | Summary Sents/Doc | Summary Words/Sent |
|---|---|---|---|---|---|---|---|---|
| #1 | 550,148 | 420,786,144 | 1,473,628 | 31.36 | 24.39 | 210,079 | 1.40 | 19.02 |
| #2 | 342,045 | 174,411,220 | 907,312 | 16.66 | 30.61 | 148,271 | 1.06 | 22.34 |
| #3 | 196,410 | 93,755,039 | 622,073 | 15.40 | 31.00 | 110,728 | 1.02 | 20.59 |
| #4 | 168,065 | 105,628,806 | 659,054 | 23.35 | 26.92 | 112,908 | 1.09 | 22.30 |
| #5 | 148,053 | 105,453,102 | 626,058 | 28.35 | 25.13 | 109,546 | 1.47 | 20.46 |
| #6 | 116,561 | 93,956,373 | 524,177 | 26.16 | 30.81 | 169,025 | 1.27 | 43.20 |
| #7 | 107,162 | 70,944,634 | 470,244 | 19.90 | 33.26 | 87,901 | 1.29 | 25.27 |
| #8 | 99,098 | 65,352,628 | 495,148 | 25.03 | 26.35 | 81,654 | 1.25 | 18.38 |
| #9 | 81,947 | 42,825,867 | 363,075 | 15.54 | 33.63 | 71,913 | 1.03 | 22.41 |
| #10 | 74,024 | 57,782,514 | 470,826 | 30.28 | 25.78 | 81,793 | 1.31 | 20.23 |
| #11 * | 70,193 | 29,692,261 | 272,248 | 11.06 | 38.26 | 84,898 | 1.22 | 44.48 |
| #12 | 57,235 | 28,198,002 | 294,175 | 16.06 | 30.68 | 58,580 | 1.21 | 19.49 |
| #13 | 35,163 | 20,156,337 | 260,690 | 19.22 | 29.83 | 50,556 | 1.15 | 21.20 |
| #14 | 35,112 | 28,408,974 | 309,194 | 30.48 | 26.55 | 78,751 | 1.18 | 28.35 |
| #15 * | 17,379 | 10,099,958 | 153,598 | 16.82 | 34.54 | 41,512 | 1.85 | 26.89 |
| #16 * | 16,965 | 13,791,564 | 166,446 | 28.26 | 28.77 | 29,955 | 1.07 | 25.18 |
| #17 * | 2450 | 4,545,924 | 135,761 | 74.97 | 24.75 | 23,588 | 3.16 | 26.72 |
| #18 * | 1374 | 641,752 | 39,094 | 17.08 | 27.34 | 12,365 | 1.98 | 29.43 |
| #19 * | 643 | 398,834 | 26,797 | 17.73 | 34.99 | 2495 | 1.04 | 16.02 |
| #20 * | 467 | 233,873 | 22,699 | 18.70 | 26.78 | 3857 | 1.22 | 24.23 |
| #21 * | 155 | 199,140 | 19,750 | 39.06 | 32.89 | 2098 | 1.91 | 21.79 |
| Set | 2,120,649 | 1,367,262,946 | 3,189,783 | 23.44 | 27.50 | 516,307 | 1.24 | 22.95 |
| Partition | Docs | Article Tokens | Article Vocab. | Article Sents/Doc | Article Words/Sent | Summary Vocab. | Summary Sents/Doc | Summary Words/Sent |
|---|---|---|---|---|---|---|---|---|
| Training | 636,596 | 316,817,625 | 1,206,292 | 17.39 | 28.62 | 206,616 | 1.17 | 20.36 |
| Validation | 35,376 | 17,831,029 | 258,999 | 16.17 | 31.17 | 51,940 | 1.15 | 20.93 |
| TESTi | 35,376 | 17,704,387 | 262,148 | 16.13 | 31.03 | 51,958 | 1.15 | 20.89 |
| TESTni | 17,836 | 15,882,219 | 247,154 | 35.38 | 25.17 | 45,997 | 1.56 | 25.93 |
| Partition | Docs | Article Tokens | Article Vocab. | Article Sents/Doc | Article Words/Sent | Summary Vocab. | Summary Sents/Doc | Summary Words/Sent |
|---|---|---|---|---|---|---|---|---|
| Training | 1,802,919 | 1,172,626,265 | 2,920,894 | 23.94 | 27.17 | 454,179 | 1.24 | 21.99 |
| Validation | 104,052 | 67,669,381 | 550,213 | 23.01 | 28.27 | 109,460 | 1.21 | 23.36 |
| TESTi | 104,052 | 67,363,994 | 550,910 | 22.93 | 28.23 | 109,706 | 1.21 | 23.34 |
| TESTni | 109,626 | 59,603,306 | 447,679 | 16.25 | 33.46 | 116,201 | 1.35 | 36.84 |
| Partition | Model | ROUGE-1 | ROUGE-2 | ROUGE-L | ROUGE-Ls | BERTScore |
|---|---|---|---|---|---|---|
| TESTi | NASca | **28.84** (28.68, 29.01) | 11.68 (11.51, 11.85) | 22.78 (22.61, 22.94) | 23.30 (23.13, 23.46) | 71.85 (71.78, 71.92) |
| | mBART | 28.59 (28.42, 28.77) | **11.89** (11.73, 12.06) | **23.00** (22.82, 23.16) | **23.39** (23.22, 23.56) | **72.03** (71.96, 72.10) |
| | mT5 | 27.01 (26.84, 27.18) | 10.70 (10.54, 10.87) | 21.81 (21.65, 21.97) | 22.12 (21.98, 22.29) | 71.55 (71.49, 71.61) |
| TESTni | NASca | **28.19** (27.97, 28.42) | 11.20 (10.99, 11.43) | **21.45** (21.20, 21.65) | **22.44** (22.21, 22.67) | 70.14 (70.05, 70.22) |
| | mBART | 27.46 (27.24, 27.69) | 11.04 (10.81, 11.29) | 21.13 (20.93, 21.37) | 22.01 (21.78, 22.24) | 70.33 (70.25, 70.43) |
| | mT5 | 27.00 (26.77, 27.23) | **11.28** (11.04, 11.52) | 21.27 (21.03, 21.51) | 22.01 (21.78, 22.23) | **70.56** (70.47, 70.65) |
| Partition | Model | Extractive Fragment Coverage | Content Reordering | Abstractivity (p = 2) | Novel 1-Grams | Novel 4-Grams |
|---|---|---|---|---|---|---|
| TESTi | NASca | **96.99** (96.94, 97.04) | 46.17 (45.79, 46.55) | **47.19** (46.90, 47.46) | **3.21** (3.15, 3.26) | **28.65** (28.41, 28.92) |
| | mBART | 97.73 (97.68, 97.77) | **47.85** (47.44, 48.23) | 37.70 (37.42, 37.97) | 2.40 (2.36, 2.45) | 23.80 (23.55, 24.02) |
| | mT5 | 98.59 (98.55, 98.62) | 41.25 (40.84, 41.67) | 38.04 (37.78, 38.28) | 1.51 (1.48, 1.55) | 21.89 (21.71, 22.08) |
| TESTni | NASca | **96.66** (96.55, 96.77) | 42.37 (41.84, 42.88) | **41.89** (41.44, 42.37) | **3.52** (3.40, 3.63) | **26.32** (25.91, 26.68) |
| | mBART | 97.08 (96.99, 97.16) | **42.96** (42.40, 43.56) | 36.98 (36.55, 37.41) | 3.01 (2.92, 3.09) | 24.32 (23.95, 24.70) |
| | mT5 | 98.31 (98.26, 98.36) | 38.82 (38.24, 39.41) | 39.18 (38.83, 39.54) | 1.80 (1.74, 1.85) | 23.20 (22.92, 23.48) |
| Partition | Model | ROUGE-1 | ROUGE-2 | ROUGE-L | ROUGE-Ls | BERTScore |
|---|---|---|---|---|---|---|
| TESTi | NASes | **33.24** (33.12, 33.38) | **15.79** (15.63, 15.93) | **26.76** (26.63, 26.89) | **27.56** (27.43, 27.69) | **73.11** (73.05, 73.16) |
| | mBART | 31.09 (30.98, 31.20) | 13.56 (13.44, 13.68) | 24.67 (24.56, 24.78) | 25.48 (25.37, 25.58) | 72.25 (72.21, 72.30) |
| | mT5 | 31.72 (31.60, 31.85) | 14.54 (14.39, 14.67) | 25.76 (25.63, 25.89) | 26.31 (26.18, 26.44) | 72.86 (72.82, 72.91) |
| TESTni | NASes | 30.60 (30.52, 30.68) | 10.75 (10.66, 10.83) | 22.29 (22.21, 22.37) | 23.06 (22.99, 23.15) | 70.66 (70.62, 70.69) |
| | mBART | **30.66** (30.58, 30.74) | 12.08 (11.98, 12.18) | 23.13 (23.06, 23.22) | 23.89 (23.81, 23.98) | 71.07 (71.04, 71.10) |
| | mT5 | 30.61 (30.51, 30.70) | **12.36** (12.25, 12.47) | **23.53** (23.43, 23.62) | **24.05** (23.95, 24.14) | **71.26** (71.22, 71.30) |
| Partition | Model | Extractive Fragment Coverage | Content Reordering | Abstractivity (p = 2) | Novel 1-Grams | Novel 4-Grams |
|---|---|---|---|---|---|---|
| TESTi | NASes | **97.65** (97.62, 97.68) | **45.27** (45.04, 45.50) | **38.15** (36.97, 38.31) | **2.55** (2.52, 2.58) | **21.17** (21.04, 21.31) |
| | mBART | 98.14 (98.10, 98.18) | 37.70 (37.45, 37.92) | 35.17 (35.00, 35.32) | 1.85 (1.81, 1.89) | 17.58 (17.47, 17.70) |
| | mT5 | 98.74 (98.72, 98.76) | 38.67 (38.42, 38.92) | 32.41 (32.25, 32.58) | 1.36 (1.34, 1.38) | 17.39 (17.29, 17.49) |
| TESTni | NASes | **98.16** (98.13, 98.19) | **46.58** (46.33, 46.82) | 29.76 (29.60, 29.92) | **2.00** (1.97, 2.03) | **15.76** (15.65, 15.88) |
| | mBART | 98.92 (98.90, 98.94) | 39.38 (39.13, 39.61) | 30.48 (30.33, 30.64) | 1.03 (1.01, 1.05) | 14.68 (14.59, 14.78) |
| | mT5 | 99.24 (99.23, 99.26) | 37.17 (36.91, 37.43) | 24.19 (24.06, 24.32) | 0.83 (0.81, 0.84) | 12.08 (12.00, 12.16) |
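The novel n-gram percentages reported in the abstractivity tables can be sketched as a simple set comparison. This is a minimal illustration with assumed whitespace tokenization; the evaluation's actual preprocessing may differ:

```python
def novel_ngrams(document: str, summary: str, n: int) -> float:
    """Percentage of summary n-grams that never occur in the source
    document; higher values indicate a more abstractive summary."""
    def grams(text):
        toks = text.lower().split()
        return [tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)]
    summ_grams = grams(summary)
    if not summ_grams:
        return 0.0  # summary shorter than n tokens
    doc_grams = set(grams(document))
    novel = sum(1 for g in summ_grams if g not in doc_grams)
    return 100.0 * novel / len(summ_grams)
```

For instance, for the document "the cat sat on the mat" and summary "the cat slept", one of the three summary unigrams ("slept") is novel, giving roughly 33.3% novel 1-grams.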
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations. |
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Ahuir, V.; Hurtado, L.-F.; González, J.Á.; Segarra, E. NASca and NASes: Two Monolingual Pre-Trained Models for Abstractive Summarization in Catalan and Spanish. Appl. Sci. 2021, 11, 9872. https://doi.org/10.3390/app11219872