Hate and Fake: Tackling the Evil in Online Social Media

A special issue of Multimodal Technologies and Interaction (ISSN 2414-4088).

Deadline for manuscript submissions: closed (31 May 2021) | Viewed by 19779

Special Issue Editors


Guest Editor
Prof. Dr. Elisabetta Fersini
DISCo, University of Milano-Bicocca, Milan, Italy
Interests: machine learning; natural language processing; deep learning; hate speech detection; topic modeling; sentiment analysis

Guest Editor
Dr. Manuel Montes-y-Gómez
Instituto Nacional de Astrofísica, Óptica y Electrónica (INAOE), San Andrés Cholula, Mexico
Interests: natural language processing; text classification; authorship analysis; social media analysis; information retrieval and text mining

Guest Editor
Prof. Dr. Paolo Rosso
Departamento de Sistemas Informáticos y Computación, Universitat Politècnica de València, València, Spain
Interests: author profiling in social media; fake news detection; hate speech detection; deceptive opinion detection; stance detection; irony detection and opinion mining; plagiarism and social copying detection

Special Issue Information

Dear Colleagues,

Online platforms have become a central part of our everyday lives. People now routinely express thoughts and share facts through social media (Facebook, Twitter, Instagram), discussion websites (Reddit), messaging services (WhatsApp, Snapchat), blogs, forums, and online chats. While the web has opened up new opportunities to give people a voice, hate speech and fake news fuel social exclusion and misinformation. The interplay between hate speech and fake news keeps growing, with each fostering the other, and their potential virality and the presumed anonymity of their authors make both phenomena dangerous and hurtful. This Special Issue explores emerging machine learning and natural language processing technologies for countering the spread of hate and fake news on social media.

Two main questions underlie this Special Issue:

  • What are the differences and similarities in hate speech and fake news across languages, targets, topics, and platforms?
  • What are the main causes, intentional or unintentional, of the spread of fake news and hate on social media?

The contributions to this Special Issue will advance hate speech and fake news detection from both machine learning and linguistic perspectives. We encourage authors to submit original research articles, case studies, reviews, comparative analyses, and theoretical and critical perspectives related to, but not limited to, the following topics:

  • Machine learning and natural language processing models for detecting hate speech and fake news
  • Hate speech and fake news corpora: compilation, annotation and evaluation
  • Modelling the impact of fake news in the dissemination of hate messages
  • Cross-language and cross-cultural comparisons of hate speech and fake news
  • Explainability of hate speech and fake news detection models
  • Multi-lingual and multimodal detection of fake news and hate speech
  • Fairness, accountability, transparency, and ethics in misinformation and hate speech detection
  • Data and algorithmic bias in fake and hate speech detection
  • Fake news and hate speech spreader identification
  • Profiling of fake news and hate speech users
  • Modelling the diversity of hate speech types: racism, sexism, etc.
  • Modelling relationships and differences between hate speech and offensive/aggressive/vulgar language
  • Fact-checking and trustworthiness of online sources
  • Detection of different types of fake news: hoax, propaganda, trolling, and satire

Prof. Dr. Elisabetta Fersini
Dr. Manuel Montes-y-Gómez
Prof. Dr. Paolo Rosso
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Multimodal Technologies and Interaction is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (3 papers)


Research

13 pages, 987 KiB  
Article
Using Shallow and Deep Learning to Automatically Detect Hate Motivated by Gender and Sexual Orientation on Twitter in Spanish
by Carlos Arcila-Calderón, Javier J. Amores, Patricia Sánchez-Holgado and David Blanco-Herrero
Multimodal Technol. Interact. 2021, 5(10), 63; https://doi.org/10.3390/mti5100063 - 13 Oct 2021
Cited by 19 | Viewed by 4860
Abstract
The increasing phenomenon of “cyberhate” is concerning because of the potential social implications of this form of verbal violence, which is aimed at already-stigmatized social groups. According to information collected by the Ministry of the Interior of Spain, the category of sexual orientation and gender identity is subject to the third-highest number of registered hate crimes, ranking behind racism/xenophobia and ideology. However, most of the existing computational approaches to online hate detection simultaneously attempt to address all types of discrimination, leading to weaker prototype performances. These approaches focus on other reasons for hate—primarily racism and xenophobia—and usually focus on English messages. Furthermore, few detection models have used manually generated databases as a training corpus. Using supervised machine learning techniques, the present research sought to overcome these limitations by developing and evaluating an automatic detector of hate speech motivated by gender and sexual orientation. The focus was Spanish-language posts on Twitter. For this purpose, eight predictive models were developed from an ad hoc generated training corpus, using shallow modeling and deep learning. The evaluation metrics showed that the deep learning algorithm performed significantly better than the shallow modeling algorithms, and logistic regression yielded the best performance of the shallow algorithms.
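
To make the shallow-versus-deep comparison concrete, here is a minimal sketch contrasting a TF-IDF plus logistic-regression baseline with a small feed-forward network. It is only an illustration of the kind of pipeline the abstract describes, not the authors' code: the placeholder tweets, labels, and the use of scikit-learn's MLPClassifier as a stand-in for the deep model are all assumptions.

```python
# Minimal sketch (placeholder data, not the authors' corpus or models):
# a shallow TF-IDF + logistic-regression baseline versus a small
# feed-forward network standing in for the deep learning model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

# Invented examples: 1 = hate motivated by gender/sexual orientation, 0 = not.
texts = [
    "odio a esas personas por su orientacion",
    "que tengas un buen dia",
    "las mujeres no deberian opinar",
    "me encanta esta cancion",
]
labels = [1, 0, 1, 0]

shallow = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
neural = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    MLPClassifier(hidden_layer_sizes=(64,), max_iter=500),
)

for name, model in [("logistic regression", shallow), ("feed-forward net", neural)]:
    model.fit(texts, labels)          # a real evaluation would use held-out data
    preds = model.predict(texts)
    print(name, "F1:", f1_score(labels, preds))
```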

26 pages, 1514 KiB  
Article
Methods for Detoxification of Texts for the Russian Language
by Daryna Dementieva, Daniil Moskovskiy, Varvara Logacheva, David Dale, Olga Kozlova, Nikita Semenov and Alexander Panchenko
Multimodal Technol. Interact. 2021, 5(9), 54; https://doi.org/10.3390/mti5090054 - 4 Sep 2021
Cited by 14 | Viewed by 4919
Abstract
We introduce the first study of the automatic detoxification of Russian texts to combat offensive language. This kind of textual style transfer can be used for processing toxic content on social media or for eliminating toxicity in automatically generated texts. While much work has been done for the English language in this field, there are no works on detoxification for the Russian language. We suggest two types of models—an approach based on BERT architecture that performs local corrections and a supervised approach based on a pretrained GPT-2 language model. We compare these methods with several baselines. In addition, we provide the training datasets and describe the evaluation setup and metrics for automatic and manual evaluation. The results show that the tested approaches can be successfully used for detoxification, although there is room for improvement.
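
The "local corrections" idea mentioned above (mask a word flagged as toxic and let a masked language model propose a neutral substitute) can be sketched as follows. This is not the paper's implementation: the toxic-word list, the English example, and the multilingual BERT checkpoint are placeholders, and a system for Russian would use a Russian masked language model; the paper's second, GPT-2-based approach instead rewrites the sentence end to end.

```python
# Minimal sketch of masked-LM "local correction" detoxification
# (hypothetical word list and model choice, not the paper's code).
from transformers import pipeline

TOXIC_WORDS = {"idiotic", "stupid"}   # toy lexicon; real systems learn what to rewrite

fill_mask = pipeline("fill-mask", model="bert-base-multilingual-cased")

def detoxify(sentence: str) -> str:
    tokens = sentence.split()
    for i, token in enumerate(tokens):
        if token.lower().strip(".,!?") in TOXIC_WORDS:
            masked = tokens[:i] + [fill_mask.tokenizer.mask_token] + tokens[i + 1:]
            # Keep the masked LM's highest-scoring substitution for the toxic word.
            tokens[i] = fill_mask(" ".join(masked))[0]["token_str"]
    return " ".join(tokens)

print(detoxify("What a stupid idea this is."))
```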

10 pages, 2704 KiB  
Article
Multimodal Hate Speech Detection in Greek Social Media
by Konstantinos Perifanos and Dionysis Goutsos
Multimodal Technol. Interact. 2021, 5(7), 34; https://doi.org/10.3390/mti5070034 - 29 Jun 2021
Cited by 44 | Viewed by 7421
Abstract
Hateful and abusive speech presents a major challenge for all online social media platforms. Recent advances in Natural Language Processing and Natural Language Understanding allow for more accurate detection of hate speech in textual streams. This study presents a new multimodal approach to hate speech detection by combining Computer Vision and Natural Language processing models for abusive context detection. Our study focuses on Twitter messages and, more specifically, on hateful, xenophobic, and racist speech in Greek aimed at refugees and migrants. In our approach, we combine transfer learning and fine-tuning of Bidirectional Encoder Representations from Transformers (BERT) and Residual Neural Networks (Resnet). Our contribution includes the development of a new dataset for hate speech classification, consisting of tweet IDs, along with the code to obtain their visual appearance, as they would have been rendered in a web browser. We have also released a pre-trained Language Model trained on Greek tweets, which has been used in our experiments. We report a consistently high level of accuracy (accuracy score = 0.970, f1-score = 0.947 in our best model) in racist and xenophobic speech detection.
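
The text-image fusion described in the abstract (a fine-tuned BERT encoder combined with a ResNet) can be illustrated with a generic sketch. This is not the authors' released code: the multilingual BERT checkpoint stands in for their Greek tweet language model, the random tensor stands in for a rendered tweet screenshot, and concatenation followed by a linear head is an assumed fusion choice.

```python
# Generic BERT + ResNet fusion sketch (assumed architecture details).
import torch
import torch.nn as nn
from torchvision.models import resnet50
from transformers import BertModel, BertTokenizer

class MultimodalHateClassifier(nn.Module):
    """Concatenate a BERT sentence embedding with a ResNet image embedding."""

    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.text_encoder = BertModel.from_pretrained("bert-base-multilingual-cased")
        vision = resnet50()           # pretrained weights would normally be loaded
        vision.fc = nn.Identity()     # expose the 2048-d pooled visual feature
        self.image_encoder = vision
        self.classifier = nn.Linear(768 + 2048, num_classes)

    def forward(self, input_ids, attention_mask, images):
        text_feat = self.text_encoder(
            input_ids=input_ids, attention_mask=attention_mask
        ).pooler_output                            # [batch, 768]
        image_feat = self.image_encoder(images)    # [batch, 2048]
        return self.classifier(torch.cat([text_feat, image_feat], dim=-1))

# Toy usage: a Greek placeholder sentence and a random "screenshot" tensor.
tokenizer = BertTokenizer.from_pretrained("bert-base-multilingual-cased")
batch = tokenizer(["παράδειγμα κειμένου"], return_tensors="pt", padding=True)
images = torch.randn(1, 3, 224, 224)
model = MultimodalHateClassifier()
logits = model(batch["input_ids"], batch["attention_mask"], images)
print(logits.shape)   # torch.Size([1, 2])
```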
