Enhancing Cross-Lingual Sarcasm Detection by a Prompt Learning Framework with Data Augmentation and Contrastive Learning
Abstract
1. Introduction
- This paper formulates a cross-lingual sarcasm detection problem that facilitates research on sarcastic content in low-resource languages.
- A novel cross-lingual sarcasm detection framework based on prompt learning, combining data augmentation and contrastive learning, is proposed to improve cross-lingual transfer. A consistency loss encourages the probability distributions over answers in different languages to be as similar as possible.
- Experiments are conducted in a zero-shot cross-lingual setting, where the model is trained on an English sarcasm dataset and then evaluated on Chinese datasets. The results show that the proposed model achieves strong performance.
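The consistency loss described above can be sketched as a symmetric KL divergence between the answer distributions predicted for a sentence and its counterpart in the other language. This is a minimal illustration, not the authors' exact formulation; the function name and the choice of a symmetric (averaged) KL are assumptions:

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the answer vocabulary.
    z = logits - np.max(logits)
    e = np.exp(z)
    return e / e.sum()

def consistency_loss(logits_src, logits_tgt):
    """Symmetric KL divergence between the answer distributions
    predicted for a source-language sentence and its translation.
    Minimizing it pushes the two distributions to agree."""
    p = softmax(np.asarray(logits_src, dtype=float))
    q = softmax(np.asarray(logits_tgt, dtype=float))
    kl_pq = np.sum(p * np.log(p / q))
    kl_qp = np.sum(q * np.log(q / p))
    return 0.5 * (kl_pq + kl_qp)
```

When the two languages yield identical answer logits, the loss is zero; the further the distributions diverge, the larger the penalty.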
2. Related Works
2.1. Sarcasm Detection
2.2. Cross-Lingual Tasks
3. Proposed Method
3.1. Framework
3.2. Data Augmentation
3.3. Prompt Learning
3.4. Contrastive Learning
3.5. Training Objective
4. Experimental Settings
4.1. Datasets
- News Headlines Dataset [62]: The dataset is collected from two news websites—The Onion and HuffPost—containing about 28 K headlines, of which 13 K are sarcastic. The Onion aims to produce sarcastic news, from which all headlines in the “News Briefing” and “Photo News” categories have been collected; HuffPost publishes real (non-sarcastic) news. Compared to existing Twitter datasets, the news dataset contains high-quality labels with much less noise.
- A Balanced Chinese Internet Sarcasm Corpus [63]: The corpus contains 2 K Chinese annotated texts and balances sarcasm and non-sarcasm and longer and shorter texts, both in a 1:1 ratio. The shorter sarcastic texts mainly originate from the corpus constructed by Tang and Chen [65], while the longer texts are from Onion the Storyteller (a Weibo account). The shorter non-sarcastic texts are collected from the comment section of the official WeChat public account, while the longer texts are mainly information crawled from some official accounts on Weibo and WeChat. The balanced dataset is constructed by randomly selecting a portion of the data from each source.
- A New Benchmark Dataset [64]: The benchmark dataset is a fine-grained dataset that includes five levels of sarcasm: 1 (no sarcasm), 2 (unlikely to be sarcastic), 3 (indeterminate), 4 (mild sarcasm), and 5 (strong sarcasm). The data were randomly collected from Weibo posts, comprising 8.7 K posts, each annotated by five annotators. Ironic data (levels 4 and 5) account for only 11.1% of the labeled data.
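Since the benchmark dataset [64] is five-level while the task is binary, its labels must be collapsed for evaluation. Assuming levels 4 and 5 count as ironic (consistent with the 11.1% figure above), a hypothetical mapping helper could look like this:

```python
def binarize_ciron_label(level: int) -> int:
    """Map a five-level annotation (1 = no sarcasm ... 5 = strong
    sarcasm) to a binary label; levels 4 and 5 count as ironic."""
    if level not in (1, 2, 3, 4, 5):
        raise ValueError(f"unexpected sarcasm level: {level}")
    return 1 if level >= 4 else 0

# Toy illustration of the resulting class imbalance.
levels = [1, 1, 2, 3, 1, 5, 2, 1, 4, 1]
ironic_ratio = sum(binarize_ciron_label(l) for l in levels) / len(levels)
```

Applied to the full corpus, this mapping yields the 968 ironic vs. 7734 non-ironic split reported in the dataset statistics table below.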
4.2. Baseline Models
- LASER+SVM [66]: LASER is an architecture designed for embedding multilingual sentences that uses a single, language-agnostic BiLSTM encoder. The SVM classifier is then trained for sarcasm detection.
- LASER+LR: In this method, LASER is used for text embeddings, followed by training a logistic regression (LR) classifier for sarcasm detection.
- LASER+RNN: As above, LASER is used to extract embedded representations of the text, followed by an RNN model for sarcasm classification.
- mBERT: Multilingual BERT is a variant of BERT designed for multilingual tasks. It has been pre-trained on the Wikipedia corpora and covers 102 languages.
- XLM-R: The XLM model is based on the transformer architecture, similar to BERT, and was pre-trained on Wikipedia in 100 languages. Compared to XLM, XLM-R extends the set of training languages and increases the scale of the training corpus.
- PCT-XLM-R: PCT is a cross-lingual framework that is based on prompt learning with cross-lingual templates. PCT-XLM-R is obtained by applying PCT to XLM-R.
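The three LASER-based baselines share a two-stage pipeline: encode each sentence with the language-agnostic LASER encoder, then fit a lightweight classifier on the frozen embeddings. Below is a minimal numpy sketch of the second stage, with a from-scratch logistic regression standing in for the real classifier and toy vectors standing in for actual LASER embeddings:

```python
import numpy as np

def train_logistic_regression(X, y, lr=0.5, epochs=1000):
    """Minimal logistic-regression trainer via batch gradient descent.
    X: fixed sentence embeddings (n_samples, dim); y: 0/1 labels."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid predictions
        w -= lr * (X.T @ (p - y)) / len(y)      # gradient of the log-loss
        b -= lr * np.mean(p - y)
    return w, b

def predict(w, b, X):
    """Threshold the sigmoid output at 0.5 to get binary labels."""
    return (1.0 / (1.0 + np.exp(-(X @ w + b))) >= 0.5).astype(int)
```

Because the encoder is shared across languages, a classifier trained this way on English embeddings can be applied directly to Chinese embeddings, which is exactly the zero-shot transfer these baselines rely on.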
4.3. Implementation Details
5. Experiment Results
5.1. Main Results
5.2. Ablation Study
5.3. Visualization Analysis
6. Conclusions and Discussion
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Maynard, D.G.; Greenwood, M.A. Who cares about sarcastic tweets? Investigating the impact of sarcasm on sentiment analysis. In Proceedings of the Lrec 2014 Proceedings, ELRA, Reykjavik, Iceland, 26–31 May 2014. [Google Scholar]
- Merriam-Webster, I. The Merriam-Webster Dictionary; Merriam-Webster: Springfield, MA, USA, 1995. [Google Scholar]
- Eke, C.I.; Norman, A.A.; Shuib, L. Context-based feature technique for sarcasm identification in benchmark datasets using deep learning and BERT model. IEEE Access 2021, 9, 48501–48518. [Google Scholar] [CrossRef]
- Majumder, N.; Poria, S.; Peng, H.; Chhaya, N.; Cambria, E.; Gelbukh, A. Sentiment and sarcasm classification with multitask learning. IEEE Intell. Syst. 2019, 34, 38–43. [Google Scholar] [CrossRef]
- Ghorbanali, A.; Sohrabi, M.K.; Yaghmaee, F. Ensemble transfer learning-based multimodal sentiment analysis using weighted convolutional neural networks. Inf. Process. Manag. 2022, 59, 102929. [Google Scholar] [CrossRef]
- Maladry, A.; Lefever, E.; Van Hee, C.; Hoste, V. Irony detection for dutch: A venture into the implicit. In Proceedings of the 12th Workshop on Computational Approaches to Subjectivity, Sentiment & Social Media Analysis, Dublin, Ireland, 26 May 2022; pp. 172–181. [Google Scholar]
- Reyes, A.; Saldívar, R. Linguistic-based Approach for Recognizing Implicit Language in Hate Speech: Exploratory Insights. Comput. Sist. 2022, 26, 101–111. [Google Scholar] [CrossRef]
- Wen, Z.; Gui, L.; Wang, Q.; Guo, M.; Yu, X.; Du, J.; Xu, R. Sememe knowledge and auxiliary information enhanced approach for sarcasm detection. Inf. Process. Manag. 2022, 59, 102883. [Google Scholar] [CrossRef]
- Reyes, A.; Rosso, P.; Veale, T. A multidimensional approach for detecting irony in twitter. Lang. Resour. Eval. 2013, 47, 239–268. [Google Scholar] [CrossRef]
- Joshi, A.; Bhattacharyya, P.; Carman, M.J. Automatic sarcasm detection: A survey. Acm Comput. Surv. 2017, 50, 1–22. [Google Scholar] [CrossRef]
- Zhang, S.; Zhang, X.; Chan, J.; Rosso, P. Irony detection via sentiment-based transfer learning. Inf. Process. Manag. 2019, 56, 1633–1644. [Google Scholar] [CrossRef]
- Ranasinghe, T.; Zampieri, M. Multilingual offensive language identification with cross-lingual embeddings. arXiv 2020, arXiv:2010.05324. [Google Scholar]
- Walker, M.A.; Tree, J.E.F.; Anand, P.; Abbott, R.; King, J. A Corpus for Research on Deliberation and Debate. In Proceedings of the LREC, Istanbul, Turkey, 23–25 May 2012; Volume 12, pp. 812–817. [Google Scholar]
- Joshi, A.; Sharma, V.; Bhattacharyya, P. Harnessing context incongruity for sarcasm detection. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), Beijing, China, 26–31 July 2015; pp. 757–762. [Google Scholar]
- Oraby, S.; Harrison, V.; Reed, L.; Hernandez, E.; Riloff, E.; Walker, M. Creating and characterizing a diverse corpus of sarcasm in dialogue. arXiv 2017, arXiv:1709.05404. [Google Scholar]
- Khodak, M.; Saunshi, N.; Vodrahalli, K. A large self-annotated corpus for sarcasm. arXiv 2017, arXiv:1704.05579. [Google Scholar]
- Schuster, T.; Ram, O.; Barzilay, R.; Globerson, A. Cross-lingual alignment of contextual word embeddings, with applications to zero-shot dependency parsing. arXiv 2019, arXiv:1902.09492. [Google Scholar]
- Pant, K.; Dadu, T. Cross-lingual inductive transfer to detect offensive language. arXiv 2020, arXiv:2007.03771. [Google Scholar]
- Taghizadeh, N.; Faili, H. Cross-lingual transfer learning for relation extraction using universal dependencies. Comput. Speech Lang. 2022, 71, 101265. [Google Scholar] [CrossRef]
- Pires, T.; Schlinger, E.; Garrette, D. How multilingual is multilingual BERT? arXiv 2019, arXiv:1906.01502. [Google Scholar]
- Lample, G.; Conneau, A. Cross-lingual language model pretraining. arXiv 2019, arXiv:1901.07291. [Google Scholar]
- Conneau, A.; Khandelwal, K.; Goyal, N.; Chaudhary, V.; Wenzek, G.; Guzmán, F.; Grave, E.; Ott, M.; Zettlemoyer, L.; Stoyanov, V. Unsupervised cross-lingual representation learning at scale. arXiv 2019, arXiv:1911.02116. [Google Scholar]
- Raja, E.; Soni, B.; Borgohain, S.K. Fake news detection in Dravidian languages using transfer learning with adaptive finetuning. Eng. Appl. Artif. Intell. 2023, 126, 106877. [Google Scholar] [CrossRef]
- Kumar, A.; Albuquerque, V.H.C. Sentiment analysis using XLM-R transformer and zero-shot transfer learning on resource-poor Indian language. Trans. Asian-Low-Resour. Lang. Inf. Process. 2021, 20, 1–13. [Google Scholar] [CrossRef]
- Schick, T.; Schütze, H. Exploiting cloze questions for few shot text classification and natural language inference. arXiv 2020, arXiv:2001.07676. [Google Scholar]
- Shin, T.; Razeghi, Y.; Logan, R.L., IV; Wallace, E.; Singh, S. Autoprompt: Eliciting knowledge from language models with automatically generated prompts. arXiv 2020, arXiv:2010.15980. [Google Scholar]
- Huang, L.; Ma, S.; Zhang, D.; Wei, F.; Wang, H. Zero-shot cross-lingual transfer of prompt-based tuning with a unified multilingual prompt. arXiv 2022, arXiv:2202.11451. [Google Scholar]
- Qi, K.; Wan, H.; Du, J.; Chen, H. Enhancing cross-lingual natural language inference by prompt-learning from cross-lingual templates. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Dublin, Ireland, 22–27 May 2022; pp. 1910–1923. [Google Scholar]
- Li, Y.; Li, Y.; Zhang, S.; Liu, G.; Chen, Y.; Shang, R.; Jiao, L. An attention-based, context-aware multimodal fusion method for sarcasm detection using inter-modality inconsistency. Knowl.-Based Syst. 2024, 287, 111457. [Google Scholar] [CrossRef]
- Liu, H.; Wei, R.; Tu, G.; Lin, J.; Liu, C.; Jiang, D. Sarcasm driven by sentiment: A sentiment-aware hierarchical fusion network for multimodal sarcasm detection. Inf. Fusion 2024, 108, 102353. [Google Scholar] [CrossRef]
- Veale, T.; Hao, Y. Detecting ironic intent in creative comparisons. In ECAI 2010; IOS Press: Amsterdam, The Netherlands, 2010; pp. 765–770. [Google Scholar]
- Wang, J.; Zhang, H.; An, T.; Jin, X.; Wang, C.; Zhao, J.; Wang, Z. Effect of vaccine efficacy on vaccination behavior with adaptive perception. Appl. Math. Comput. 2024, 469, 128543. [Google Scholar] [CrossRef]
- Hernández-Farías, I.; Benedí, J.M.; Rosso, P. Applying basic features from sentiment analysis for automatic irony detection. In Proceedings of the Pattern Recognition and Image Analysis: 7th Iberian Conference, IbPRIA 2015, Santiago de Compostela, Spain, 17–19 June 2015; Proceedings 7. Springer: Berlin/Heidelberg, Germany, 2015; pp. 337–344. [Google Scholar]
- Wang, Y.; Li, Y.; Wang, J.; Lv, H. An optical flow estimation method based on multiscale anisotropic convolution. Appl. Intell. 2024, 54, 398–413. [Google Scholar] [CrossRef]
- Zhang, H.; An, T.; Yan, P.; Hu, K.; An, J.; Shi, L.; Zhao, J.; Wang, J. Exploring cooperative evolution with tunable payoff’s loners using reinforcement learning. Chaos Solitons Fractals 2024, 178, 114358. [Google Scholar] [CrossRef]
- Riloff, E.; Qadir, A.; Surve, P.; De Silva, L.; Gilbert, N.; Huang, R. Sarcasm as contrast between a positive sentiment and negative situation. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, Seattle, WA, USA, 18–21 October 2013; pp. 704–714. [Google Scholar]
- Reyes, A.; Rosso, P.; Buscaldi, D. From humor recognition to irony detection: The figurative language of social media. Data Knowl. Eng. 2012, 74, 1–12. [Google Scholar] [CrossRef]
- Mukherjee, S.; Bala, P.K. Sarcasm detection in microblogs using Naïve Bayes and fuzzy clustering. Technol. Soc. 2017, 48, 19–27. [Google Scholar] [CrossRef]
- Poria, S.; Cambria, E.; Hazarika, D.; Vij, P. A deeper look into sarcastic tweets using deep convolutional neural networks. arXiv 2016, arXiv:1610.08815. [Google Scholar]
- Kumar, A.; Narapareddy, V.T.; Srikanth, V.A.; Malapati, A.; Neti, L.B.M. Sarcasm detection using multi-head attention based bidirectional LSTM. IEEE Access 2020, 8, 6388–6397. [Google Scholar] [CrossRef]
- Jamil, R.; Ashraf, I.; Rustam, F.; Saad, E.; Mehmood, A.; Choi, G.S. Detecting sarcasm in multi-domain datasets using convolutional neural networks and long short term memory network model. PeerJ Comput. Sci. 2021, 7, e645. [Google Scholar] [CrossRef]
- Babanejad, N.; Davoudi, H.; An, A.; Papagelis, M. Affective and contextual embedding for sarcasm detection. In Proceedings of the 28th International Conference on Computational Linguistics, Barcelona, Spain, 8–13 December 2020; pp. 225–243. [Google Scholar]
- Lou, C.; Liang, B.; Gui, L.; He, Y.; Dang, Y.; Xu, R. Affective dependency graph for sarcasm detection. In Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, Virtual, 11–15 July 2021; pp. 1844–1849. [Google Scholar]
- Wang, X.; Dong, Y.; Jin, D.; Li, Y.; Wang, L.; Dang, J. Augmenting affective dependency graph via iterative incongruity graph learning for sarcasm detection. In Proceedings of the AAAI conference on Artificial Intelligence, Vancouver, BC, Canada, 20–27 February 2023; Volume 37, pp. 4702–4710. [Google Scholar]
- Ren, Y.; Wang, Z.; Peng, Q.; Ji, D. A knowledge-augmented neural network model for sarcasm detection. Inf. Process. Manag. 2023, 60, 103521. [Google Scholar] [CrossRef]
- Yu, Z.; Jin, D.; Wang, X.; Li, Y.; Wang, L.; Dang, J. Commonsense knowledge enhanced sentiment dependency graph for sarcasm detection. In Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence, Macao, China, 19–25 August 2023; pp. 2423–2431. [Google Scholar]
- Singh, P.; Lefever, E. Sentiment analysis for hinglish code-mixed tweets by means of cross-lingual word embeddings. In Proceedings of the 4th Workshop on Computational Approaches to Code Switching, Marseille, France, 11–16 May 2020; pp. 45–51. [Google Scholar]
- Mao, Z.; Gupta, P.; Wang, P.; Chu, C.; Jaggi, M.; Kurohashi, S. Lightweight cross-lingual sentence representation learning. arXiv 2021, arXiv:2105.13856. [Google Scholar]
- Li, I.; Sen, P.; Zhu, H.; Li, Y.; Radev, D. Improving cross-lingual text classification with zero-shot instance-weighting. In Proceedings of the 6th Workshop on Representation Learning for NLP (RepL4NLP-2021), Bangkok, Thailand, 6 August 2021; pp. 1–7. [Google Scholar]
- Yang, Z.; Cui, Y.; Chen, Z.; Wang, S. Cross-lingual text classification with multilingual distillation and zero-shot-aware training. arXiv 2022, arXiv:2202.13654. [Google Scholar]
- Bukhari, S.H.H.; Zubair, A.; Arshad, M.U. Humor detection in english-urdu code-mixed language. In Proceedings of the 2023 3rd International Conference on Artificial Intelligence (ICAI), Islamabad, Pakistan, 22–23 February 2023; pp. 26–31. [Google Scholar]
- Ghayoomi, M. Enriching contextualized semantic representation with textual information transmission for COVID-19 fake news detection: A study on English and Persian. Digit. Scholarsh. Humanit. 2023, 38, 99–110. [Google Scholar] [CrossRef]
- Ding, K.; Liu, W.; Fang, Y.; Mao, W.; Zhao, Z.; Zhu, T.; Liu, H.; Tian, R.; Chen, Y. A simple and effective method to improve zero-shot cross-lingual transfer learning. arXiv 2022, arXiv:2210.09934. [Google Scholar]
- Liu, L.; Xu, D.; Zhao, P.; Zeng, D.D.; Hu, P.J.H.; Zhang, Q.; Luo, Y.; Cao, Z. A cross-lingual transfer learning method for online COVID-19-related hate speech detection. Expert Syst. Appl. 2023, 234, 121031. [Google Scholar] [CrossRef]
- Qin, L.; Ni, M.; Zhang, Y.; Che, W. Cosda-ml: Multi-lingual code-switching data augmentation for zero-shot cross-lingual nlp. arXiv 2020, arXiv:2006.06402. [Google Scholar]
- Zhu, Z.; Cheng, X.; Chen, D.; Huang, Z.; Li, H.; Zou, Y. Mix before align: Towards zero-shot cross-lingual sentiment analysis via soft-mix and multi-view learning. In Proceedings of the INTERSPEECH, Dublin, Ireland, 20–24 August 2023. [Google Scholar]
- Lin, H.; Ma, J.; Chen, L.; Yang, Z.; Cheng, M.; Chen, G. Detect rumors in microblog posts for low-resource domains via adversarial contrastive learning. arXiv 2022, arXiv:2204.08143. [Google Scholar]
- Shi, X.; Liu, X.; Xu, C.; Huang, Y.; Chen, F.; Zhu, S. Cross-lingual offensive speech identification with transfer learning for low-resource languages. Comput. Electr. Eng. 2022, 101, 108005. [Google Scholar] [CrossRef]
- Brown, T.; Mann, B.; Ryder, N.; Subbiah, M.; Kaplan, J.D.; Dhariwal, P.; Neelakantan, A.; Shyam, P.; Sastry, G.; Askell, A.; et al. Language models are few-shot learners. Adv. Neural Inf. Process. Syst. 2020, 33, 1877–1901. [Google Scholar]
- Schick, T.; Schütze, H. It’s not just size that matters: Small language models are also few-shot learners. arXiv 2020, arXiv:2009.07118. [Google Scholar]
- Lin, N.; Fu, Y.; Lin, X.; Zhou, D.; Yang, A.; Jiang, S. Cl-xabsa: Contrastive learning for cross-lingual aspect-based sentiment analysis. arXiv 2023, arXiv:2204.00791. [Google Scholar] [CrossRef]
- Misra, R. News headlines dataset for sarcasm detection. arXiv 2022, arXiv:2212.06035. [Google Scholar] [CrossRef]
- Zhu, Y. Open Chinese Internet Sarcasm Corpus Construction: An Approach. Front. Comput. Intell. Syst. 2022, 2, 7–9. [Google Scholar] [CrossRef]
- Xiang, R.; Gao, X.; Long, Y.; Li, A.; Chersoni, E.; Lu, Q.; Huang, C.R. Ciron: A new benchmark dataset for Chinese irony detection. In Proceedings of the Twelfth Language Resources and Evaluation Conference, Marseille, France, 11–16 May 2020; pp. 5714–5720. [Google Scholar]
- Tang, Y.-J.; Chen, H.-H. Chinese irony corpus construction and ironic structure analysis. In Proceedings of the COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers, Dublin, Ireland, 23–29 August 2014; pp. 1269–1278. [Google Scholar]
- Artetxe, M.; Schwenk, H. Massively multilingual sentence embeddings for zero-shot cross-lingual transfer and beyond. Trans. Assoc. Comput. Linguist. 2019, 7, 597–610. [Google Scholar] [CrossRef]
- Dauphin, Y.; De Vries, H.; Bengio, Y. Equilibrated adaptive learning rates for non-convex optimization. arXiv 2015, arXiv:1502.04390. [Google Scholar]
- McInnes, L.; Healy, J.; Melville, J. Umap: Uniform manifold approximation and projection for dimension reduction. arXiv 2018, arXiv:1802.03426. [Google Scholar]
| Language | Dataset | Ironic | Non-Ironic | Total |
|---|---|---|---|---|
| English | News Headlines Dataset [62] | 11,724 | 12,779 | 24,503 |
| Chinese | A Balanced Chinese Internet Sarcasm Corpus [63] | 1000 | 1000 | 2000 |
| Chinese | A New Benchmark Dataset [64] | 968 | 7734 | 8702 |
| Language | Dataset | Example | Label |
|---|---|---|---|
| English | News Headlines Dataset [62] | "bus-stop ad has more legal protections than average citizen" | ironic |
| English | News Headlines Dataset [62] | "psychologists push for smartphone warning labels" | non-ironic |
| Chinese | A Balanced Chinese Internet Sarcasm Corpus [63] | "实用，可以一边走路一边泡脚了" ("Practical: now you can soak your feet while walking") | ironic |
| Chinese | A Balanced Chinese Internet Sarcasm Corpus [63] | "早安！美好的一天！" ("Good morning! What a beautiful day!") | non-ironic |
| Chinese | A New Benchmark Dataset [64] | "晕。小桔你不够专业呀，亏你工龄还那么长了。" ("Wow. Xiaoju, you are not professional enough, given how long you have been on the job.") | ironic |
| Chinese | A New Benchmark Dataset [64] | "再过一会儿，十二点整在新台的第一次播出就将开始啦！" ("In a little while, the first broadcast on the new channel will begin at twelve o'clock sharp!") | non-ironic |
| Model | Balanced Corpus [63]: Precision | Balanced Corpus [63]: Accuracy | Balanced Corpus [63]: F1-Score | Benchmark [64]: Precision | Benchmark [64]: Accuracy | Benchmark [64]: F1-Score |
|---|---|---|---|---|---|---|
| LASER+SVM | 55.25 | 54.80 | 53.82 | 78.85 | 56.34 | 64.44 |
| LASER+LR | 56.34 | 55.85 | 54.99 | 78.61 | 56.26 | 64.37 |
| LASER+RNN | 54.90 | 54.55 | 53.71 | 79.47 | 57.66 | 65.50 |
| mBERT | 60.03 | 56.10 | 52.13 | 80.00 | 72.14 | 75.64 |
| XLM-R | 68.34 | 58.49 | 52.65 | 80.62 | 65.62 | 71.53 |
| PCT-XLM-R | 71.60 | 67.60 | 67.41 | 83.33 | 64.86 | 71.07 |
| Ours | 77.51 | 72.50 | 72.14 | 82.62 | 73.52 | 76.70 |
| Model | Balanced Corpus [63]: Precision | Balanced Corpus [63]: Accuracy | Balanced Corpus [63]: F1-Score | Benchmark [64]: Precision | Benchmark [64]: Accuracy | Benchmark [64]: F1-Score |
|---|---|---|---|---|---|---|
| w/o DA | 68.88 | 64.35 | 61.10 | 82.85 | 69.19 | 73.95 |
| w/o CLC | 74.52 | 69.10 | 68.74 | 83.97 | 64.57 | 70.93 |
| Ours | 77.51 | 72.50 | 72.14 | 82.62 | 73.52 | 76.70 |
Share and Cite
An, T.; Yan, P.; Zuo, J.; Jin, X.; Liu, M.; Wang, J. Enhancing Cross-Lingual Sarcasm Detection by a Prompt Learning Framework with Data Augmentation and Contrastive Learning. Electronics 2024, 13, 2163. https://doi.org/10.3390/electronics13112163