Social Media Topic Classification on Greek Reddit
Abstract
1. Introduction
2. Related Work
2.1. Greek Text Classification Models
2.2. Greek Embeddings
2.3. Greek Text Classification Datasets
2.4. Topic Classification
3. GreekReddit
3.1. Data Collection
3.2. Preprocessing
3.3. Post-Processing
3.4. Analysis
4. A Series of ML and DL Models for Greek Social Media Topic Classification
4.1. Setup
4.2. Validation and Training
4.3. Experiments
5. Discussion
5.1. Interpretation of the Results
5.2. Ethical Implications
5.3. Limitations
5.4. Future Work Directions
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Appendix A
Model Inputs | Model Outputs |
---|---|
Δεν ξέρω αν είμαι ο μόνος αλλά πιστεύω πως όσο είμαστε απασχολημένοι με την όλη κατάσταση της αστυνομίας η κυβέρνηση προσπαθεί να καλύψει αλλά γεγονότα της επικαιρότητας όπως πανδημία και εξωτερική πολιτική. | πολιτική |
I don’t know if I’m the only one but I believe that while we are busy with the whole police situation the government is trying to cover up current events like the pandemic and foreign policy. | politics |
Άλλες οικονομίες, όπως η Κίνα, προσπαθούν να διατηρούν την αξία του νομίσματος τους χαμηλά ώστε να καταστήσουν τις εξαγωγές τους πιο ελκυστικές στο εξωτερικό. Γιατί όμως θεωρούμε πως η πτωτική πορεία της Τουρκικής λίρας είναι η “αχίλλειος πτέρνα” της Τουρκίας; | οικονομία |
Other economies, such as China, try to keep the value of their currency low to make their exports more attractive abroad. But why do we think that the downward course of the Turkish lira is the “Achilles heel” of Turkey? | economy |
Γνωρίζει κανείς γιατί δεν ψηφίζουμε πια για να βγει ποιο τραγούδι θα εκπροσωπήσει την Ελλάδα; Τα τελευταία χρόνια ο κόσμος είναι δυσαρεστημένος με τα τραγούδια που στέλνουν, γιατί συνεχίζεται αυτό; | ψυχαγωγία/κουλτούρα |
Does anyone know why we no longer vote for which song will represent Greece? In recent years people are unhappy with the songs they send, why does this continue? | entertainment/culture |
Model Inputs | Human-Assigned vs. Predicted Labels |
---|---|
Βγήκαν τα αποτελέσματα των πανελληνίων και από ότι φαίνεται περνάω στη σχολή που θέλω—HΜΜΥ. H ερώτηση μου είναι τι είδους υπολογιστή να αγοράσω. Προσωπικά προτιμώ σταθερό αλλά δε ξέρω αν θα χρειαστώ λάπτοπ για εργασίες ή εργαστήρια. Επίσης μπορώ να τα βγάλω πέρα με ένα απλό λάπτοπ ή θα πρέπει να τρέξω πιο <<βαριά>> προγράμματα; Edit: budget 750–800 € Θα περάσω στη πόλη που μένω. | τεχνολογία/επιστήμη (Human assigned) εκπαίδευση (predicted) |
The results of the Panhellenic exams are out and it seems that I am getting into the school I want—ECE. My question is what kind of computer should I buy. Personally, I prefer a desktop one, but I don’t know if I will need a laptop for course or laboratory assignments. Also can I get by with a simple laptop or should I run more <<heavy>> programs? Edit: budget 750–800 € I will study in the city where I live. | technology/science (Human assigned) education (predicted) |
Καλησπέρα και Καλή Χρονιά! Έχω λογαριασμό στην Freedom 24 αλλά δεν θέλω να ασχοληθώ άλλο με αυτή την πλατφόρμα. Την βρήκα πολύ δύσχρηστη και περίεργη. Υπάρχει κάποια άλλη πλατφόρμα που μπορείς να επενδύσεις σε SP500 από Ελλάδα; | οικονομία (Human assigned) εκπαίδευση (predicted) |
Good evening and Happy New Year! I have an account with Freedom 24 but I don’t want to deal with this platform anymore. I found it very unusable and weird. Is there any other platform where you can invest in SP500 from Greece? | economy (Human assigned) education (predicted) |
Παιδιά θέλω προτάσεις για φτηνό φαγητό στην κέντρο Aθήνας, φάση all you can eat μπουφέδες ή να τρως πολύ φαγητό με λίγα χρήματα. Επίσης μοιραστείτε και ενδιαφέροντα εστιατόρια/μπαρ/ταβέρνες που έχουν happy hours. Sharing is caring. | φαγητό (Human assigned) κοινωνία (predicted) |
Guys, I want suggestions for cheap food in the center of Athens, like all you can eat buffets or eating a lot of food for little money. Also share interesting restaurants/bars/taverns that have happy hours. Sharing is caring. | food (Human assigned) society (predicted) |
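The examples above pair raw GreekReddit posts with topic labels, the second table contrasting human-assigned labels with the model's predictions. As a minimal sketch of how such predictions can be obtained from a fine-tuned Greek topic classifier, the snippet below uses the Hugging Face transformers pipeline; the model identifier is a placeholder rather than the authors' released checkpoint, and the labels returned depend entirely on the checkpoint used.

```python
# Minimal inference sketch for a fine-tuned Greek topic classifier.
# NOTE: "my-org/greek-reddit-topic-bert" is a hypothetical identifier,
# not the authors' released model; substitute your own fine-tuned checkpoint.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="my-org/greek-reddit-topic-bert",
)

post = (
    "Άλλες οικονομίες, όπως η Κίνα, προσπαθούν να διατηρούν την αξία "
    "του νομίσματός τους χαμηλά ώστε να καταστήσουν τις εξαγωγές τους "
    "πιο ελκυστικές στο εξωτερικό."
)

# Posts longer than the 512-token limit of BERT-style models are truncated.
print(classifier(post, truncation=True, max_length=512))
# e.g. [{'label': 'οικονομία', 'score': 0.93}] -- depends on the checkpoint
```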
References
- Minaee, S.; Kalchbrenner, N.; Cambria, E.; Nikzad, N.; Chenaghlu, M.; Gao, J. Deep Learning—Based Text Classification: A Comprehensive Review. ACM Comput. Surv. 2021, 54, 1–40. [Google Scholar] [CrossRef]
- Li, Q.; Peng, H.; Li, J.; Xia, C.; Yang, R.; Sun, L.; Yu, P.S.; He, L. A Survey on Text Classification: From Traditional to Deep Learning. ACM Trans. Intell. Syst. Technol. 2022, 13, 1–41. [Google Scholar] [CrossRef]
- Gasparetto, A.; Marcuzzo, M.; Zangari, A.; Albarelli, A. A Survey on Text Classification Algorithms: From Text to Predictions. Information 2022, 13, 83. [Google Scholar] [CrossRef]
- Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention Is All You Need. In Proceedings of the Advances in Neural Information Processing Systems, Long Beach, CA, USA, 4–9 December 2017; Curran Associates, Inc.: Red Hook, NY, USA, 2017; Volume 30. [Google Scholar]
- Koutsikakis, J.; Chalkidis, I.; Malakasiotis, P.; Androutsopoulos, I. GREEK-BERT: The Greeks Visiting Sesame Street. In Proceedings of the 11th Hellenic Conference on Artificial Intelligence, Athens, Greece, 2–4 September 2020; Association for Computing Machinery: New York, NY, USA, 2020; pp. 110–117. [Google Scholar]
- Papaloukas, C.; Chalkidis, I.; Athinaios, K.; Pantazi, D.; Koubarakis, M. Multi-Granular Legal Topic Classification on Greek Legislation. In Proceedings of the Natural Legal Language Processing Workshop 2021, Punta Cana, Dominican Republic, 10 November 2021; Aletras, N., Androutsopoulos, I., Barrett, L., Goanta, C., Preotiuc-Pietro, D., Eds.; Association for Computational Linguistics: Kerrville, TX, USA, 2021; pp. 63–75. [Google Scholar]
- Evdaimon, I.; Abdine, H.; Xypolopoulos, C.; Outsios, S.; Vazirgiannis, M.; Stamou, G. GreekBART: The First Pretrained Greek Sequence-to-Sequence Model. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), Torino, Italy, 20–25 May 2024; Calzolari, N., Kan, M.Y., Hoste, V., Lenci, A., Sakti, S., Xue, N., Eds.; ELRA: Paris, France; ICCL: Prague, Czech Republic, 2024; pp. 7949–7962. [Google Scholar]
- Giarelis, N.; Mastrokostas, C.; Siachos, I.; Karacapilidis, N. A Review of Greek NLP Technologies for Chatbot Development. In Proceedings of the 27th Pan-Hellenic Conference on Progress in Computing and Informatics, Lamia, Greece, 24–26 November 2023; Association for Computing Machinery: New York, NY, USA, 2024; pp. 15–20. [Google Scholar]
- Zhang, T. Solving Large Scale Linear Prediction Problems Using Stochastic Gradient Descent Algorithms. In Proceedings of the Twenty-First International Conference on Machine Learning, Banff, AB, Canada, 4–8 July 2004; Association for Computing Machinery: New York, NY, USA, 2004; p. 116. [Google Scholar]
- Crammer, K.; Dekel, O.; Keshet, J.; Shalev-Shwartz, S.; Singer, Y.; Warmuth, M.K. Online Passive-Aggressive Algorithms. J. Mach. Learn. Res. 2006, 7, 551–585. [Google Scholar]
- Friedman, J.H. Greedy Function Approximation: A Gradient Boosting Machine. Ann. Stat. 2001, 29, 1189–1232. [Google Scholar] [CrossRef]
- Murtagh, F. Multilayer Perceptrons for Classification and Regression. Neurocomputing 1991, 2, 183–197. [Google Scholar] [CrossRef]
- Devlin, J.; Chang, M.W.; Lee, K.; Toutanova, K. BERT: Pre-Training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), Minneapolis, MN, USA, 2–7 June 2019; Burstein, J., Doran, C., Solorio, T., Eds.; Association for Computational Linguistics: Kerrville, TX, USA, 2019; pp. 4171–4186. [Google Scholar]
- Koehn, P. Europarl: A Parallel Corpus for Statistical Machine Translation. In Proceedings of the Machine Translation Summit X: Papers, Phuket, Thailand, 13 September 2005; pp. 79–86. [Google Scholar]
- Athinaios, K.; Chalkidis, I.; Pantazi, D.A.; Papaloukas, C. Named Entity Recognition Using a Novel Linguistic Model for Greek Legal Corpora Based on BERT Model. Bachelor’s Thesis, School of Science, Department of Informatics and Telecommunications, National and Kapodistrian University of Athens, Athens, Greece, 2020. [Google Scholar]
- Grave, E.; Bojanowski, P.; Gupta, P.; Joulin, A.; Mikolov, T. Learning Word Vectors for 157 Languages. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan, 7–12 May 2018; Calzolari, N., Choukri, K., Cieri, C., Declerck, T., Goggi, S., Hasida, K., Isahara, H., Maegaard, B., Mariani, J., Mazo, H., et al., Eds.; European Language Resources Association (ELRA): Paris, France, 2018. [Google Scholar]
- Outsios, S.; Karatsalos, C.; Skianis, K.; Vazirgiannis, M. Evaluation of Greek Word Embeddings. In Proceedings of the Twelfth Language Resources and Evaluation Conference, Marseille, France, 11–16 May 2020; Calzolari, N., Béchet, F., Blache, P., Choukri, K., Cieri, C., Declerck, T., Goggi, S., Isahara, H., Maegaard, B., Mariani, J., et al., Eds.; European Language Resources Association: Paris, France, 2020; pp. 2543–2551. [Google Scholar]
- Mikolov, T.; Chen, K.; Corrado, G.; Dean, J. Efficient Estimation of Word Representations in Vector Space. arXiv 2013, arXiv:1301.3781. [Google Scholar] [CrossRef]
- Bojanowski, P.; Grave, E.; Joulin, A.; Mikolov, T. Enriching Word Vectors with Subword Information. Trans. Assoc. Comput. Linguist. 2017, 5, 135–146. [Google Scholar] [CrossRef]
- Outsios, S.; Skianis, K.; Meladianos, P.; Xypolopoulos, C.; Vazirgiannis, M. Word Embeddings from Large-Scale Greek Web Content. arXiv 2018, arXiv:1810.06694. [Google Scholar] [CrossRef]
- Lioudakis, M.; Outsios, S.; Vazirgiannis, M. An Ensemble Method for Producing Word Representations Focusing on the Greek Language. In Proceedings of the 3rd Workshop on Technologies for MT of Low Resource Languages, Suzhou, China, 4–7 December 2020; Karakanta, A., Ojha, A.K., Liu, C.-H., Abbott, J., Ortega, J., Washington, J., Oco, N., Lakew, S.M., Pirinen, T.A., Malykh, V., et al., Eds.; Association for Computational Linguistics: Kerrville, TX, USA, 2020; pp. 99–107. [Google Scholar]
- Reimers, N.; Gurevych, I. Sentence-BERT: Sentence Embeddings Using Siamese BERT-Networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Hong Kong, China, 3–7 November 2019; Inui, K., Jiang, J., Ng, V., Wan, X., Eds.; Association for Computational Linguistics: Kerrville, TX, USA, 2019; pp. 3982–3992. [Google Scholar]
- Papadopoulos, D.; Metropoulou, K.; Papadakis, N.; Matsatsinis, N. FarFetched: Entity-Centric Reasoning and Claim Validation for the Greek Language Based on Textually Represented Environments. In Proceedings of the 12th Hellenic Conference on Artificial Intelligence, Corfu, Greece, 7–9 September 2022; Association for Computing Machinery: New York, NY, USA, 2022; pp. 1–10. [Google Scholar]
- Pitenis, Z.; Zampieri, M.; Ranasinghe, T. Offensive Language Identification in Greek. In Proceedings of the Twelfth Language Resources and Evaluation Conference, Marseille, France, 11–16 May 2020; Calzolari, N., Béchet, F., Blache, P., Choukri, K., Cieri, C., Declerck, T., Goggi, S., Isahara, H., Maegaard, B., Mariani, J., et al., Eds.; European Language Resources Association: Paris, France, 2020; pp. 5113–5119. [Google Scholar]
- Omar, A.; Mahmoud, T.M.; Abd-El-Hafeez, T.; Mahfouz, A. Multi-Label Arabic Text Classification in Online Social Networks. Inf. Syst. 2021, 100, 101785. [Google Scholar] [CrossRef]
- Maslennikova, A.; Labruna, P.; Cimino, A.; Dell’Orletta, F. Quanti Anni Hai? Age Identification for Italian. In Proceedings of the CLiC-it, Bari, Italy, 13–15 November 2019; Available online: https://ceur-ws.org/Vol-2481/ (accessed on 23 July 2024).
- Papucci, M.; De Nigris, C.; Miaschi, A. Evaluating Text-To-Text Framework for Topic and Style Classification of Italian Texts. In Proceedings of the Sixth Workshop on Natural Language for Artificial Intelligence (NL4AI 2022) Co-Located with 21th International Conference of the Italian Association for Artificial Intelligence (AI*IA 2022), Udine, Italy, 30 November 2022; Available online: https://ceur-ws.org/Vol-3287/ (accessed on 23 July 2024).
- Antypas, D.; Ushio, A.; Camacho-Collados, J.; Silva, V.; Neves, L.; Barbieri, F. Twitter Topic Classification. In Proceedings of the 29th International Conference on Computational Linguistics, Gyeongju, Republic of Korea, 12–17 October 2022; Calzolari, N., Huang, C.R., Kim, H., Pustejovsky, J., Wanner, L., Choi, K.S., Ryu, P.M., Chen, H.H., Donatelli, L., Ji, H., et al., Eds.; International Committee on Computational Linguistics: Prague, Czech Republic, 2022; pp. 3386–3400. [Google Scholar]
- Medvedev, A.N.; Lambiotte, R.; Delvenne, J.-C. The Anatomy of Reddit: An Overview of Academic Research. In Proceedings of the Dynamics On and Of Complex Networks III; Ghanbarnejad, F., Saha Roy, R., Karimi, F., Delvenne, J.-C., Mitra, B., Eds.; Springer International Publishing: Cham, Switzerland, 2019; pp. 183–204. [Google Scholar]
- Proferes, N.; Jones, N.; Gilbert, S.; Fiesler, C.; Zimmer, M. Studying Reddit: A Systematic Overview of Disciplines, Approaches, Methods, and Ethics. Soc. Media Soc. 2021, 7, 205630512110190. [Google Scholar] [CrossRef]
- Agarap, A.F. Deep Learning Using Rectified Linear Units (ReLU). arXiv 2018, arXiv:1803.08375. [Google Scholar] [CrossRef]
- Kingma, D.P.; Ba, J. Adam: A Method for Stochastic Optimization. arXiv 2014, arXiv:1412.6980. [Google Scholar] [CrossRef]
- Liu, D.C.; Nocedal, J. On the Limited Memory BFGS Method for Large Scale Optimization. Math. Program. 1989, 45, 503–528. [Google Scholar] [CrossRef]
- El Anigri, S.; Himmi, M.M.; Mahmoudi, A. How BERT’s Dropout Fine-Tuning Affects Text Classification? In Proceedings of the Business Intelligence, Beni Mellal, Morocco, 27–29 May 2021; Fakir, M., Baslam, M., El Ayachi, R., Eds.; Springer International Publishing: Cham, Switzerland, 2021; pp. 130–139. [Google Scholar]
Work | Metrics | ML Models | Transformer-Based Models | ML Models (+Transformer Embeddings) | Language | Data Source |
---|---|---|---|---|---|---|
Omar et al. [25] | Accuracy, Precision, Recall, F1 | ✓ | - | - | Arabic | Facebook, X (Twitter) |
Papucci et al. [27] | F1 Score | - | ✓ | - | Italian | ForumFree |
TweetTopic [28] | Accuracy, Precision, Recall, F1 Score, Jaccard Index | ✓ | ✓ | - | English | X (Twitter) |
Our approach | Precision, Recall, F1 Score, Hamming Loss | ✓ | ✓ | ✓ | Greek | Reddit |
Dataset | # Total Documents | Dataset Splits # Documents (train/val/test) | Dataset Splits % Documents (train/val/test) |
---|---|---|---|
OGTD | 4779 | 3345/1434/- | ~70%/~30%/- |
GreekReddit | 6534 | 5530/504/500 | ~85%/~7.5%/~7.5% |
Makedonia | 8005 | - | - |
GLC | 47,563 | 28,536/9516/9511 | ~60%/~20%/~20% |
GreekSUM | 170,017 | 150,017/10,000/10,000 | ~89%/~5.5%/~5.5% |
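GreekReddit's roughly 85%/7.5%/7.5% partition shown above can be reproduced on any labelled corpus with a two-step stratified split. The sketch below is illustrative only: it uses a toy DataFrame with `text` and `label` columns as a stand-in for the actual GreekReddit data, and it is not the authors' splitting script.

```python
# Sketch of a stratified ~85/7.5/7.5 train/validation/test split (illustrative,
# not the authors' script; the DataFrame below is a toy stand-in for GreekReddit).
import pandas as pd
from sklearn.model_selection import train_test_split

df = pd.DataFrame({
    "text": [f"post {i}" for i in range(200)],
    "label": ["κοινωνία"] * 100 + ["εκπαίδευση"] * 60 + ["οικονομία"] * 40,
})

# Hold out ~15% of the documents, then split the held-out part evenly into
# validation and test while preserving the label distribution in every split.
train_df, heldout_df = train_test_split(
    df, test_size=0.15, stratify=df["label"], random_state=42)
val_df, test_df = train_test_split(
    heldout_df, test_size=0.5, stratify=heldout_df["label"], random_state=42)

print(len(train_df), len(val_df), len(test_df))  # 170 15 15
```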
Category | Dataset Splits # Documents (train/val/test) | Total # Documents (%) |
---|---|---|
κοινωνία (society) | 1730/157/158 | 2045 (31.20%) |
εκπαίδευση (education) | 818/74/74 | 966 (14.78%) |
ψυχαγωγία/κουλτούρα (entertainment/culture) | 536/48/49 | 633 (9.69%) |
πολιτική (politics) | 507/46/46 | 599 (9.17%) |
τεχνολογία/επιστήμη (technology/science) | 497/45/45 | 587 (8.98%) |
οικονομία (economy) | 481/43/44 | 568 (8.60%) |
ταξίδια (travel) | 416/38/38 | 492 (7.53%) |
αθλητικά (sports) | 247/22/23 | 292 (4.47%) |
φαγητό (food) | 182/17/17 | 216 (3.31%) |
ιστορία (history) | 116/10/10 | 136 (2.08%) |
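The class distribution above is heavily skewed, with society holding roughly fifteen times as many documents as history, which is why the hyperparameter table below includes a "balanced" class-weight option. As an illustration of what that option does, the snippet below computes scikit-learn's balanced weights from the training-split counts; it is a sketch for intuition, not part of the authors' pipeline.

```python
# Balanced class weights from the GreekReddit training-split counts above.
# Illustrative only: shows how the "balanced" option reweights rare categories.
import numpy as np
from sklearn.utils.class_weight import compute_class_weight

counts = {
    "κοινωνία": 1730, "εκπαίδευση": 818, "ψυχαγωγία/κουλτούρα": 536,
    "πολιτική": 507, "τεχνολογία/επιστήμη": 497, "οικονομία": 481,
    "ταξίδια": 416, "αθλητικά": 247, "φαγητό": 182, "ιστορία": 116,
}
labels = np.concatenate([np.full(n, c) for c, n in counts.items()])

# balanced weight = n_samples / (n_classes * class_count), so rare classes
# such as "ιστορία" receive proportionally larger weights than "κοινωνία".
weights = compute_class_weight(
    class_weight="balanced", classes=np.array(list(counts)), y=labels)
print(dict(zip(counts, weights.round(2))))
```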
Model | Parameters |
---|---|
SGDC (Linear SVM + SGD) | Loss Function: hinge; Penalty: {None, L1, L2, ElasticNet}; Maximum Iterations: 1000; Class Weights: {balanced, None}; Early Stopping: True; Vectorization methods: {TF-IDF, spaCy, GREEK-BERT, MPNet-V2, XLM-R-GREEK} |
PAC | Loss Function: {hinge, squared_hinge}; Maximum Iterations: 1000; Class Weights: {balanced, None}; Early Stopping: True; Vectorization methods: {TF-IDF, spaCy, GREEK-BERT, MPNet-V2, XLM-R-GREEK} |
GBC | Loss Function: log-loss; Criterion: {squared_error, friedman_mse}; Vectorization methods: {TF-IDF, spaCy, GREEK-BERT, MPNet-V2, XLM-R-GREEK} |
MLPC (Neural Network) | Hidden Layers: 1; Hidden Layer Size: 100; Activation Function: ReLU; Learning Rate: ‘constant’; Solver: {Adam, L-BFGS, SGD}; Maximum Iterations: {200, 1000}; Early Stopping: True; Vectorization methods: {TF-IDF, spaCy, GREEK-BERT, MPNet-V2, XLM-R-GREEK} |
GREEK-BERT | Learning Rate: {1 × 10⁻⁴, 5 × 10⁻⁵, 3 × 10⁻⁵, 2 × 10⁻⁵, 1 × 10⁻⁵}; Batch Size: {8, 12, 16}; Dropout: 0.1 |
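The hyperparameter sets above (braces denote the values searched) map onto standard scikit-learn estimators. Below is a minimal sketch of one configuration, an SGDClassifier (a linear SVM trained with SGD) over TF-IDF features tuned with grid search; the grid values follow the table, but the pipeline itself is illustrative rather than the authors' exact code.

```python
# Illustrative scikit-learn setup for the SGDC row of the table:
# a linear SVM trained with SGD over TF-IDF features, tuned via grid search.
# The grid mirrors the table; this is a sketch, not the authors' exact code.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline

pipe = Pipeline([
    ("tfidf", TfidfVectorizer()),
    ("clf", SGDClassifier(loss="hinge", max_iter=1000, early_stopping=True)),
])

param_grid = {
    "clf__penalty": ["l2", "l1", "elasticnet", None],  # None requires scikit-learn >= 1.2
    "clf__class_weight": ["balanced", None],
}

search = GridSearchCV(pipe, param_grid, scoring="f1_macro", cv=5)
# search.fit(train_df["text"], train_df["label"])  # using splits like those sketched earlier
```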
Model (Vectors) | Precision | Recall | F1 | Hamming Loss |
---|---|---|---|---|
GREEK-BERT epoch 4 | 79.46 ± 1.08 | 79.37 ± 1.17 | 79.24 ± 0.83 | 20.4 ± 0.69 |
GREEK-BERT epoch 3 | 78.88 ± 0.9 | 79.03 ± 2.19 | 78.64 ± 1.29 | 20.69 ± 0.77 |
GREEK-BERT epoch 2 | 78.44 ± 1.68 | 77.76 ± 2.29 | 77.74 ± 1.25 | 21.05 ± 0.98 |
GREEK-BERT epoch 1 | 76.99 ± 2.35 | 72.57 ± 1.71 | 73.43 ± 1.65 | 23.08 ± 0.99 |
MLPC (XLM-R-GREEK) | 72.52 ± 1.17 | 69.91 ± 1.12 | 70.77 ± 0.99 | 5.77 ± 0.19 |
MLPC (MPNet-V2) | 70.99 ± 1.16 | 69.21 ± 1.29 | 69.82 ± 1.04 | 5.95 ± 0.17 |
MLPC (GREEK-BERT) | 63.22 ± 1.16 | 65.19 ± 1.11 | 63.78 ± 1.0 | 7.05 ± 0.21 |
MLPC (spaCy) | 65.13 ± 1.55 | 60.96 ± 1.38 | 62.63 ± 1.16 | 6.97 ± 0.24 |
MLPC (TF-IDF) | 48.27 ± 1.48 | 40.58 ± 0.98 | 43.58 ± 0.94 | 8.87 ± 0.17 |
SGDC (MPNet-V2) | 78.52 ± 2.57 | 64.08 ± 3.01 | 69.53 ± 1.25 | 5.57 ± 0.21 |
SGDC (XLM-R-GREEK) | 67.33 ± 1.41 | 69.02 ± 2.51 | 67.69 ± 1.51 | 6.42 ± 0.25 |
SGDC (GREEK-BERT) | 69.93 ± 2.83 | 67.24 ± 3.27 | 67.34 ± 1.7 | 6.36 ± 0.21 |
SGDC (spaCy) | 58.74 ± 2.43 | 61.57 ± 2.94 | 58.8 ± 1.33 | 8.11 ± 0.66 |
SGDC (TF-IDF) | 35.96 ± 0.94 | 45.56 ± 2.09 | 39.83 ± 1.17 | 11.76 ± 0.26 |
PAC (XLM-R-GREEK) | 69.16 ± 3.4 | 67.73 ± 3.5 | 66.49 ± 1.62 | 6.92 ± 0.43 |
PAC (MPNet-V2) | 68.43 ± 5.84 | 68.08 ± 5.13 | 64.62 ± 2.63 | 7.5 ± 1.12 |
PAC (GREEK-BERT) | 74.94 ± 4.2 | 58.54 ± 6.73 | 60.8 ± 3.96 | 7.05 ± 0.68 |
PAC (spaCy) | 62.81 ± 5.1 | 57.75 ± 6.32 | 55.96 ± 2.28 | 8.26 ± 0.89 |
PAC (TF-IDF) | 36.56 ± 1.34 | 41.22 ± 1.13 | 38.4 ± 1.02 | 11.33 ± 0.29 |
GBC (XLM-R-GREEK) | 82.1 ± 0.28 | 54.81 ± 0.57 | 65.17 ± 0.49 | 5.35 ± 0.03 |
GBC (MPNet-V2) | 80.91 ± 0.55 | 55.05 ± 0.66 | 65.11 ± 0.56 | 5.6 ± 0.02 |
GBC (GREEK-BERT) | 73.0 ± 0.82 | 34.4 ± 0.21 | 45.77 ± 0.3 | 6.76 ± 0.02 |
GBC (spaCy) | 75.75 ± 2.71 | 40.45 ± 0.5 | 51.5 ± 0.86 | 6.45 ± 0.04 |
GBC (TF-IDF) | 58.68 ± 1.93 | 20.9 ± 0.38 | 29.53 ± 0.57 | 8.29 ± 0.04 |
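The scores above are percentages with standard deviations over repeated runs. Assuming macro averaging over the ten categories (an assumption on our part, not stated in the table itself), such figures can be computed with scikit-learn as sketched below; the tiny label lists are placeholders for the gold and predicted labels of the test split.

```python
# Sketch of computing the reported metrics; y_true / y_pred are tiny
# placeholders for the gold and predicted labels of the test split, and
# macro averaging is an assumption, not something stated in the table.
from sklearn.metrics import precision_score, recall_score, f1_score, hamming_loss

y_true = ["πολιτική", "οικονομία", "φαγητό", "κοινωνία"]
y_pred = ["πολιτική", "εκπαίδευση", "κοινωνία", "κοινωνία"]

print("Precision:", 100 * precision_score(y_true, y_pred, average="macro", zero_division=0))
print("Recall:   ", 100 * recall_score(y_true, y_pred, average="macro", zero_division=0))
print("F1:       ", 100 * f1_score(y_true, y_pred, average="macro", zero_division=0))
print("Hamming loss:", 100 * hamming_loss(y_true, y_pred))
```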
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).