2. Online Bullying and Hate on Social Media
According to Cleland (2014), social media sites have paved the way for racist opinions and rhetoric to flourish online. Similarly, Brown (2009) argues that social networking sites make it easier to spread hate by replacing outdated forms of technology and creating a new social setting online. In addition, Ben-David and Matamoros-Fernández (2016) argue that with the emergence of social media, hate groups have added platforms, such as Facebook, to their communicative networks, despite the fact that in its terms of service agreement, Facebook users agree not to post content that is hateful or violent. According to Farkas et al. (2018), research collected over the past 10 years indicates how fake identities have been disseminated through social media to promote racism. Online antagonism that takes place over social media has the potential to accelerate existing real-life racism through the dispersal of hateful discourse (Patton et al. 2017). Milner (2013) confirms that trolling practices, which often use humor, work to antagonize people from minority backgrounds, creating a “marginalized other” (p. 63). Similarly, as argued by Matamoros-Fernández (2017), hate takes on a new shape in the online environment, as demonstrated by far-right extremists who are often active on Facebook and other social networking sites, such as YouTube, Twitter, and Instagram (Al-Rawi 2017, 2020, 2021).
In this section, I survey a few previous studies that examined online hate against religious, ethnic, and racial groups, and I situate the literature within the broader discussion of the harmful content of social media, such as issues related to trolling, drug use, revenge porn, cyberbullying, abuse, public health, and negative psychological impact (Al-Rawi 2019; Baccarella et al. 2018; Cao and Sun 2018; Salo et al. 2018; Scheinbaum 2017; Smaldone et al. 2020). This study focuses on one aspect of social media that is manifested in online trolling and hate.
In their empirical research, Vidgen and Yasseri (2020) created a classification system to better understand Islamophobic hate on social media. Here, they distinguished between differing strengths of Islamophobia. Strong Islamophobia on social media is defined as “content which explicitly expresses negativity against Muslims” (p. 69), while weak Islamophobia is defined as “content which implicitly implies negativity against Muslims” (p. 69). Using the power of computational analyses, an automatic software tool was created to distinguish between strong and weak Islamophobia on social media. First, to create the dataset, the research team compiled a list of 50,000 Twitter users composed of individuals who follow at least one of the six major political parties in the UK. Tweets from these accounts were sampled between January 2017 and June 2018, creating a dataset of 140 million tweets, which was then used to create a training dataset of 4000 tweets. A total of 1000 of the 4000 tweets within the training dataset were found using the search terms “Muslim” and “Islam”. Three blind human annotators then analyzed the tweets. Next, the researchers extracted key features that were deemed important (for example, the number of swear words, the mention of Muslim names, and the mention of mosques), which were then used to test for strong and weak forms of Islamophobia. Tweets that mentioned mosques were five times more likely to be categorized as strong Islamophobia. Similarly, MacAvaney et al. (2019) stress the importance of using keyword-based approaches to track potentially hateful keywords in order to classify online hate. Hatebase, for example, is a resource both MacAvaney et al. (2019) and Vidgen and Yasseri (2020) cite as valuable for creating a classification system that can detect hate in combination with an examination of the sociopolitical context.
Further, Ben-David and Matamoros-Fernández (2016) set out to investigate hate speech on the Facebook pages of right-wing political parties in Spain. Using textual analysis, the authors compared the types of words frequently used by the political parties. Categories or clusters were created, including Spain, immigration, independence movement, insults, Islam, Moroccan, Black people, Romanians, and South Americans. The most frequently occurring words identified in the Facebook posts were stored and categorized. Additionally, for each political party, the authors selected the top 10 pictures and links with the highest amount of engagement and likes. Of the nine categories that emerged from the analysis of these links and images, the top category was labeled as anti-immigration, in which the images and links targeted immigrants as scapegoats for Spain’s problems (Ben-David and Matamoros-Fernández 2016). Using a textual method to analyze the posts, the researchers showed that the top words fell under the immigration category, and although the political parties did not overtly propagate hate speech on their channels, they repeatedly stigmatized immigrants by linking them with crime, trouble, and danger (Ben-David and Matamoros-Fernández 2016). Among the visual content, 18.71% of the images collected were linked to anti-immigrant content; similarly, 25% of the links shared by the extreme right-wing parties were placed under the anti-immigration category. The research indicated that, through textual analysis, covert discrimination was found to be perpetuated by these political parties through the continuous association of immigrants with keywords such as danger and crime.
Similarly, Sorato et al. (2020) argue that by extracting fragments of text that are semantically similar, it is possible to depict recurrent linguistic patterns in certain kinds of discourse. The authors use a technique called SSP (Short Semantic Pattern) mining, which extracts sequences of words that share a similar meaning in their word embedding representation. Sorato et al. then used the extracted patterns and phrases to identify racist discourse in their dataset of collected tweets.
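As a rough illustration of how semantically similar text fragments can be grouped via word embeddings, the sketch below averages toy word vectors for each short phrase and pairs up phrases whose vectors are close. It is not Sorato et al.’s (2020) SSP implementation; the embedding values, the similarity threshold, and the example phrases are invented purely for demonstration.

```python
import numpy as np
from itertools import combinations

# Toy word vectors standing in for real pre-trained embeddings (e.g., word2vec or GloVe);
# the numbers below are placeholders chosen only so the example runs.
EMBEDDINGS = {
    "go":      np.array([0.90, 0.10, 0.00]),
    "back":    np.array([0.80, 0.20, 0.10]),
    "home":    np.array([0.70, 0.30, 0.10]),
    "return":  np.array([0.85, 0.15, 0.05]),
    "to":      np.array([0.10, 0.10, 0.80]),
    "your":    np.array([0.20, 0.10, 0.70]),
    "country": np.array([0.75, 0.25, 0.10]),
}

def phrase_vector(words):
    """Represent a short word sequence as the mean of its word vectors."""
    vectors = [EMBEDDINGS[w] for w in words if w in EMBEDDINGS]
    return np.mean(vectors, axis=0) if vectors else None

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def similar_fragments(fragments, threshold=0.8):
    """Pair up text fragments whose averaged embeddings exceed a similarity threshold."""
    pairs = []
    for a, b in combinations(fragments, 2):
        va, vb = phrase_vector(a.split()), phrase_vector(b.split())
        if va is not None and vb is not None and cosine(va, vb) >= threshold:
            pairs.append((a, b))
    return pairs

if __name__ == "__main__":
    # The two phrases below share a meaning but little surface wording.
    print(similar_fragments(["go back home", "return to your country"]))
```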
On the other hand, investigating how the underlying algorithms of social networking sites influence human activity is key to understanding how hate spreads online. According to Suler’s (2004) work on online disinhibition, users often feel less restrained when operating online. This is also highlighted in Kilvington and Price’s (2019) examination of Kick It Out, a small UK-based soccer charity that monitors racist abuse online and offers rehabilitation training for offenders. After a series of interviews with soccer players, fans, and social media experts, Kilvington and Price (2019) noted a lack of acknowledgement of the severity of fans’ hateful remarks towards nonwhite players. As a solution, clear guidelines, policies, and resources are needed for clubs to follow and use when racist incidents occur on social media. Furthermore, researchers have investigated how specific platforms push hateful content, highlighting that the idea that these sites are “neutral” is a misconception. As argued by Van Dijck and Poell (2013), all human actions on social networking sites are influenced by the platforms’ underlying algorithms, and this is also highlighted by Matamoros-Fernández’s above-mentioned study on hate and racism.
Similarly, Matamoros-Fernández (2017) uses social media to analyze online hate in the context of Adam Goodes, a racialized Australian footballer who was met with an influx of online hate for calling out systemic racism. The research project used an issue mapping approach to capture tweets, of which 2174 tweets containing images were coded, along with 405 Facebook links and 529 YouTube links (Matamoros-Fernández 2017). Furthermore, to examine how platforms perpetuate hateful content, the author created a fake Facebook profile and liked one page found in the tweets, titled “Adam Goodes Flog of the Year”, to analyze which content appeared based on the platform’s algorithms. The research examined how platformed racism unfolded in the case of Adam Goodes, where racist members, videos, and comments were protected by the platform itself, indicating algorithmic bias in the customized dissemination of racist information (Garcia 2016).
Finally, Farkas et al. (2018) analyzed 11 Danish Facebook pages that were disguised as being run by Muslim extremists living in Denmark. By collecting posts made by these accounts, the researchers were able to highlight how social networking sites amplify stereotypical identities by connecting right-wing users to these fake pages. This, therefore, creates a hostile environment in which Facebook users tap into a reservoir of extremism and anti-Muslim hate. In this sense, hate takes on a new form within online platforms, in which online environments form and solidify identities through posts, images, accounts, and sites that can be monitored and systematically studied.
Indeed, numerous other studies make similar arguments to the sources cited above (see, for example, Aguilera-Carnerero and Azeez 2016; Awan 2014; Miller 2017; Williams et al. 2020), and I cannot list them all here due to the paper’s word limit. Previous studies generally show that despite social media public policies and moderation algorithms, there is ample evidence of online bullying directed at religions and hate against their followers. This study, however, discusses a unique case study on the use of trolling against religions, and it offers new evidence highlighting how certain online communities manage to bypass the policies followed by some social media platforms to express very violent messages through the coded use of language. It also provides a unique insight into the nature of the algorithms used on Twitter and Instagram in relation to the use of trolling hashtags and hateful emojis.
In brief, social media research offers ample opportunities to empirically examine trolling, online bullying, and hate speech, and these platforms can also be used to monitor toxic language and identify perpetrators and online communities.
This study attempts to answer the following research questions:
- RQ1. What are the major communities that troll Islam and Christianity on Twitter and Instagram?
- RQ2. What is the nature of the hashtagged and emojified discourses about Christians and Muslims?
3. Methods
Using two Python scripts, I collected all the available 16,129 Instagram posts referencing #f***allah, #f***Islam, and #f***quran, as well as all the available 2089 tweets referencing the above hashtags plus #f***muslims. All of these messages were posted between 2013 and 2020, when the search was conducted, and they represent all the posts that the Python scripts managed to retrieve. In total, I collected 18,218 social media posts and tweets referencing Islamophobic language, using English-language search terms that involve the use of the “f” word and spanning over 7 years. I focused my research on Twitter and Instagram because both allow hashtagged discussions, and I had the technical means to obtain the necessary data from these two platforms.
As regards Christianity-related hashtags, I collected 4012 tweets that referenced #f***bible, #f***thebible, #f***christ, #f***christianity, #f***christians, and #f***jesus and were posted between 2009 and 2020. Unlike the case of tweets referencing Islam, I used more keyword searches here because there are many distinct ways of referring to Christianity, whereas Islam-related terms such as Muhammed are also commonly used as names for many Muslim men. Similar to the case of Islam, the hashtag #f***Christians does not exist on Instagram because it is blocked, and the total number of Instagram posts collected was 8573, posted between 2012 and 2020.
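The collection scripts themselves are not reproduced in the paper; purely as a sketch of the filtering step, the snippet below shows how an already-downloaded archive of posts could be narrowed down by hashtag and date range. The file name, field names, and hashtags are hypothetical placeholders, and real collection additionally depends on each platform’s data access at the time of the study.

```python
import json
from datetime import datetime

# Hypothetical input: a JSON Lines archive of already-collected posts, one object per
# line with "text" and "created_at" fields; the file name and schema are placeholders.
ARCHIVE = "collected_posts.jsonl"
HASHTAGS = {"#exampletag1", "#exampletag2"}  # placeholders for the study's hashtags
START, END = datetime(2013, 1, 1), datetime(2020, 12, 31)

def matching_posts(path, hashtags, start, end):
    """Yield posts that contain at least one target hashtag and fall within the date range."""
    with open(path, encoding="utf-8") as handle:
        for line in handle:
            post = json.loads(line)
            posted = datetime.fromisoformat(post["created_at"])
            if start <= posted <= end and any(tag in post["text"].lower() for tag in hashtags):
                yield post

if __name__ == "__main__":
    kept = list(matching_posts(ARCHIVE, HASHTAGS, START, END))
    print(f"Retained {len(kept)} posts")
```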
To analyze these social media posts, I used other Python scripts to extract the most used words, hashtags, emojis, sequences of emojis, and mentions. Finally, I used a combination of quantitative and qualitative measures to explain the collected social media data. First, the quantitative measures included the extraction of the above data (e.g., most used hashtags and emojis), while the qualitative aspects consisted of conducting a qualitative content analysis using a summative approach that focuses on the latent meaning of a text. The latter method “starts with identifying and quantifying certain words or content in text with the purpose of understanding the contextual use of the words or content” (Hsieh and Shannon 2005, p. 1283).
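A minimal sketch of how such frequency counts can be produced with standard Python is shown below; the emoji matcher based on Unicode ranges is a rough approximation rather than the exact script used in this study, and the sample posts are invented.

```python
import re
from collections import Counter

HASHTAG_RE = re.compile(r"#\w+")
MENTION_RE = re.compile(r"@\w+")
WORD_RE = re.compile(r"[a-z']{2,}")
# Rough emoji matcher covering the main emoji code-point blocks; a dedicated emoji
# library would be more complete, but this approximation is enough for a sketch.
EMOJI_CLASS = "[\U0001F300-\U0001FAFF\u2600-\u27BF\U0001F1E6-\U0001F1FF]"
EMOJI_RE = re.compile(EMOJI_CLASS)
SEQUENCE_RE = re.compile(f"{EMOJI_CLASS}{{2,}}")  # two or more emojis in a row

def frequencies(posts):
    """Count the most used words, hashtags, mentions, emojis, and emoji sequences."""
    words, hashtags, mentions, emojis, sequences = (Counter() for _ in range(5))
    for text in posts:
        lower = text.lower()
        hashtags.update(HASHTAG_RE.findall(lower))
        mentions.update(MENTION_RE.findall(lower))
        words.update(WORD_RE.findall(lower))  # a real script would first strip tags/mentions
        emojis.update(EMOJI_RE.findall(text))
        sequences.update(SEQUENCE_RE.findall(text))
    return words, hashtags, mentions, emojis, sequences

if __name__ == "__main__":
    sample = ["Example post #sometag @someuser 🐷🥓", "Another post #sometag 🐷"]
    _, tags, _, emo, seq = frequencies(sample)
    print(tags.most_common(3), emo.most_common(3), seq.most_common(3))
```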
To answer the first research question, I used qualitative measures, including the identification of the main online communities and proper contextualization, by relying on the data extracted from the most mentioned users and their discussions. To answer the second research question, I followed the same approach to provide a critical qualitative interpretation of hashtagged and emojified discourses based on the samples found in Tables 2–5. The online communities were identified based on the qualitative examination of the most mentioned users who tag each other and their shared and distinctive use of words, hashtags, emojis and sequences of emojis, and bigrams (phrases made up of two words). For a complete list of emojis found on social media, please see the official Unicode website (Unicode 2022).
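As an illustration of the bigram step, the following sketch counts two-word phrases across already-tokenized posts; the tokenization and the sample tokens (echoing the hashtag discourse discussed below) are simplified assumptions rather than the study’s actual code.

```python
from collections import Counter

def bigrams(tokens):
    """Return the two-word phrases (bigrams) contained in a list of tokens."""
    return [f"{a} {b}" for a, b in zip(tokens, tokens[1:])]

def top_bigrams(tokenized_posts, n=10):
    """Count bigrams across a collection of already-tokenized posts."""
    counts = Counter()
    for tokens in tokenized_posts:
        counts.update(bigrams(tokens))
    return counts.most_common(n)

if __name__ == "__main__":
    sample = [["religionofpeace", "islamistheproblem"],
              ["religionofpeace", "islamistheproblem", "sendthemallback"]]
    print(top_bigrams(sample))
```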
Finally, I followed a basic reverse engineering approach (Butcher 2016, p. 88) in late 2020 in an attempt to understand the hashtag policies followed by Instagram and Twitter at that time in relation to attacks against Islam and Christianity and their adherents. In this respect, I searched all of the above hashtags on Twitter and Instagram and experimented with a variety of other similar hashtags and sequences of emojis to see whether they were present and widely used or blocked on the platforms. This is because social media algorithms are considered black boxes whose details are proprietary knowledge that is not disclosed to the general public (Christin 2020), and this is the only means of extracting more information to understand the operational infrastructure or algorithms (Eilam 2011). In other words, reverse engineering is used to “obtain missing knowledge, ideas, and design philosophy when such information is unavailable” (Eilam 2011, p. 1).
4. Results and Discussion
The findings of the study show that many of the most mentioned Twitter users in relation to Islam are well-known Muslim politicians, organizations, or activists who are trolled due to their Muslim or liberal backgrounds (Table 1). Some of the well-known political figures include the US Democratic congresswomen Ilhan Omar (two targeted accounts), Rashida Tlaib, and Alexandria Ocasio-Cortez, as well as the Twitter accounts of the Swedish prime minister, Stefan Löfven, and the Swedish Social Democrats. The targeted US figures represent democratic voices in the United States who often defend ethnic and religious minorities from attacks by some Republican figures and the far right. However, liberal and progressive voices such as the ones cited above belong to what is known as the Squad (Borah et al. 2022), who are themselves often trolled in mainstream media, such as Fox News, and on social media with the use of memes (Pintak et al. 2021; Al-Rawi 2021; Al-Rawi et al. 2021). Together with Löfven, these figures are the main trolling targets that often receive the worst type of hateful messages, in addition to the Twitter account curated by the US Campaign for Palestinian Rights; the only exception is related to references to a Hindutva anti-Muslim activist, for the reason mentioned below. Similarly, the most mentioned users on Instagram are mainly far-right supporters and are often referenced to consolidate the online influence and outreach of this trolling online community, similar to the case of the Hindu activist. If we examine the top 50 most mentioned users, however, we find that the majority are ultranationalist Hindu activists who repeatedly post messages such as the following: “🚩🦁⚔️🙏🏹जय हिंदुत्व 🏹🙏⚔️🦁🚩 #CAA #CAB #ISupportCAA #ISupportCAB #ISupportNRC #NarendraModi #AmitShah #YogiAdityanath #India #IndianArmy #Hindu #Hinduism #ChhatrapatiShivaji #ChhatrapatiShivajiMaharaj #MaharanaPratap #PrithvirajChauhan #BajarangDal #VishvaHinduParishad #RSS #RashtriyaSwayamsevakSangh #BJP #BhartiyaJanataParty #Rajputana #Rajput #PayalRohtagi #TigerRajaSingh #HinduRashtra #F***Islam #IslamIsShit #IslamIsJihad”. Similar to far-right groups in the West, the Instagram posts of ultranationalist Hindu communities, or Hindutva (in reference to Hindu nationalism), from India and elsewhere often tag supportive users to create a strong online community and share similar messages. It is important to note here that online support for the Hindutva ideology can be found predominantly in India but also in Western countries where pro-Modi Indian diasporic communities live, such as the USA (de Souza and Hussain 2021). Interestingly, this kind of support is also manifested in the seeming alliance between Modi’s and Donald Trump’s supporters, united by their hatred of Islam and negative attitude toward China (Singh 2021). As can be seen, the results of this study align with previous research that identified the way Hindu ultranationalist communities attack Muslims online (Gittinger 2018; Rajan and Venkatraman 2021; Amarasingam et al. 2022). However, this paper offers a unique insight into the way hashtags and emojis are used to troll Muslims.
There is obviously a trolling campaign found on the two social media platforms, which can be defined in the context of this study as coordinated and systematic online attacks against a minority religious group whose aim is to discredit its cause and/or demean it. This is evident from the most used hashtags, for they show clear divisive terms that are highly offensive and abusive towards Islam (Table 2). Twitter, for example, contains many hashtags that call for deporting Muslims from Western countries, such as #VoteThemOut, #SendThemAllBack, and #DeportAllMoslums. There are also a few hashtags that are used ironically, suggesting the opposite meaning, such as #ReligionOfPeace and #Peacefortheworld. The examination of the bigrams shows that some of the top phrases include “religionofpeace islamistheproblem”, indicating the ironic use of these terms.
Similar to the observation made above, we can see that there are a few references to non-US terms that attack Western liberals, such as #svpol (Swedish politics), or that show solidarity with anti-Muslim Hindu activists, such as #standwithmodi on Twitter, while there are more atheist hashtags on Instagram. Many conservative and far-right hashtags are used on Instagram, such as #americanasf, #Merica, #covefe, #deblorable, #pepe, and #libtards, which mostly mock liberals. These Instagram posts are often accompanied by pleas to protect freedom of speech, which is clear in the use of other hashtags, such as #freedom and #liberty. This aligns with previous research on the far right and their online strategies to attract attention and gain sympathy for their causes (Tumber and Waisbord 2021; Gounari 2021; Kamali 2022). What is disturbing, however, is the use of violent expressions in association with Muslims that seem to encourage physical violence, such as the hashtag #pewpew, which is popular on Instagram in reference to gunshots, while other associated hashtags that promote militancy include #war, #guns, #army, and #rangers.
These latter problematic hashtags are often accompanied by other coded and more nuanced nonverbal messages represented in emojis, as the following examples show.
Table 3 shows the most frequent emojis used on Twitter and Instagram, and we can clearly see differences between the two social media platforms. While the middle finger insult against Islam is dominant on Twitter (ranked number 1), it is not the same on Instagram (ranked number 20). In terms of Twitter emoji sequences, we find that the middle finger is also used in association with the mosque, the Kaaba in Mecca, and death threats with the use of the crossed swords (⚔) and human skull (💀), which are symbols of war. Emojis express far more than mere sentiments as there are clear messages associating Islam with satanic practices (😈☪) in different frequencies as well as terrorism against white people (💥🙋🏼). We can also see ultranationalistic messages by linking these insults to the flags of countries such as the US, UK, France, Australia, Israel, and Poland represented by letter symbols and highlighting the alleged threat/emergency of Islamic expansion in these countries with the following emoji sequence (🇬🇧🇺🇸🇦🇺🇵🇱🇮🇱🚨🚨) and other similar ones. Some of the other emojis attempt to offend and mark the difference between Muslim and Christian religions by repeatedly using the pig and bacon emojis (🐷🥓), while other sequences show clearer messages, such as 🇺🇸🗽📃🔫🗡⛪🐘🐖💀💪, which can be interpreted as follows: “We have to fight (🔫) in the USA (🇺🇸) for our liberty and freedom of speech (🗽) that is enshrined in the first amendment (📃) in order to protect (🗡) our homeland (⛪) and the Republican (🐘) values as well as Christian way of life (🐖). We will use force (💪) until we die or kill our enemy (💀)”. Finally, other celebratory and positive emojis on Twitter are meant to mock and welcome insults against Islam and Muslims, such as clapping, OK, and funny faces (👏, 👍, 😂, 🤣). Some direct examples that are still found on Twitter include both written and emojified hate messages, such as “People need to stop reading that silly book now it’s made up ! #fuckallah #nosurrender 👳🔫” or “@THERACISTDOCTOR the sandniggers at the bottom aren’t swedes! shame, such a beautiful country in ruins. #fuckislam👳🔫”.
As regards Instagram emojis, there are many other sequences that could not be listed in Table 3 due to its limited size, but they are presented here. First, we find that the poop symbol is more prominently used in the sequences of emojis, and there are far more aggressive and militant sequences than what is found on Twitter. For example, the gun emoji (🔫) was used 114 times on Instagram, as well as other violent emojis, such as explosion (💥) (n = 42) and skull (☠) (n = 19), in reference to threats against Muslims. In addition, there are more prominent country flags that express ultranationalistic sentiments, including the US, India, the Netherlands, the UK, Germany, and Israel, in different sequences. There are also some frequent far-right emojis, such as the OK sign (👌) (n = 85) and Pepe the Frog (🐸) (n = 17), that are used by white supremacists, as well as the Hindu OM 🕉 (n = 45). We also find the pig and bacon emojis to be very prominent, similar to Twitter, along with clear threats against Muslims (🔪⚰ or ☠☠ or 👳🏼🔫🇹🇷🚞💣💨📖🔥#koran), including Muslim men of different colors (👳🏾🔫; 👳🔫) and veiled Muslim women (🖕🏼🧕🏽💩).
As regards the findings on Christianity, the top 30 most recurrent words on Twitter include the word “atheism”, which is very frequent (n = 184), in addition to f***religion (n = 154), f***god (n = 137), f***trump (n = 128), and f***republicans (n = 107). This community seems to associate Christianity with Republican figures, such as Trump, due to his conservative views and public affiliation with the Evangelical Church (Fea 2018; Martí 2019). On Instagram, the top 30 words are related to attacking Christianity and other general atheist terms, such as f***religion (n = 1196), atheist (n = 1175), and ISIS (n = 1050). Upon examining the bigrams on Twitter, we once again find an emphasis on attacking conservative Republicans, such as “f***christians f***republicans” (n = 96) and “f***republicans f***trump” (n = 86). On Instagram, however, the focus in the top bigrams is on atheists attacking religious people, such as “f***religiouspeoples f***bible” (n = 978).
In order to answer the first research question, I identified three main groups targeting Islam based on the methodological procedures and findings presented above. There is clear coordinated activity among the most mentioned users who tag each other, in the sense that Islam is attacked while like-minded people are tagged using @username to notify them and encourage them to collaborate further. These online communities include the following: (1) a far-right and antiliberal community that always associates Islam and Muslim immigrants with terrorism, (2) atheists who attack not only Islam but all world religions, and (3) an ultranationalist Hindu community.
To answer the second research question, I found that one of the dominant themes is related to stopping the alleged expansion of Shariah law and Islam in different countries, which is presented as a satanic cult (😒👹☪ or 👹☪) and expressed in different ways, such as ❌❌🕋❌☪, 👊☪, and 🔫💣💯🐷🐽. In terms of political statements, there are other emojis that convey solidarity with Israel (✊✡) and the protection of freedom of speech against censorship in the USA (🤔🤐🤐🇺🇸🇺🇸🇺🇸). Regarding online trolling against Christianity, I identified two main online communities by following the procedures highlighted above: (1) atheists and (2) anti-Republican/conservative users. For example, the top 10 mentioned users in tweets include Pope Francis @pontifex (n = 9) and the US President Donald Trump @realdonaldtrump (n = 7), as well as a few other anti-Trump and self-proclaimed atheist users. On Instagram, however, most of the top 10 users are atheists. Further, Table 4 shows that there are many atheist-related terms on Twitter, such as #atheism, #atheist, and #nogod, as well as general attacks against Islam and Judaism. Similar to the findings presented above, the other prominent community is the anti-Republican/conservative one, which is evident from the use of hashtags such as #f***trump, #impeachandimprison, #impeachtrumpnow, and #dumptrump due to Trump’s public alignment with Christian groups, as stated above. On Instagram, however, the top hashtags are exclusively focused on the atheist community. Incidentally, this anticonservative community is largely missing in the examined datasets on Islam, especially on Twitter.
Unlike the social media posts that reference Islam, I found that the #pewpew hashtag is completely missing in the two datasets referencing Christianity. Additionally, militant emojis such as 💥 (n = 17), 🔫 (n = 15), and 💣 (n = 4) are rarely used in the entire two datasets. Table 5, for example, shows only one emoji sequence (😈⛪🔥🔫) that contains a violent message, unlike the numerous aggressive sequences of emojis found in the datasets referencing Islam. In brief, the results show that there are two main online communities that troll Christianity. The first and largest one is an atheist online group that trolls all religions, mostly targeting the Twitter account of Pope Francis. This finding closely corresponds with previous research on the increasingly important role of atheists in creating online spaces to gather and sometimes troll other religions (Al-Rawi 2017; Addington 2017; Graczyk 2020). The second is an anti-Trump online community that attacks conservative Republicans for their policies and close association with Christianity, often associating them with racism and conflict.
Aside from the discussion presented above, I followed a reverse engineering approach (Butcher 2016) to understand the policies followed by Twitter and Instagram regarding the use of some of the above hashtags. In this respect, Instagram does not allow hashtags such as #f***Christians and #f***Muslims, yet it allows similar hashtags against Islam and Christianity, such as #f***jesus, #f***christ, #f***Allah, and #f***Islam. On the other hand, Twitter allows all of these hashtags to be used. When I compared similar insults against other religions, such as Judaism and Hinduism, I found the same patterns across the Twitter and Instagram platforms, which is possibly due to the legal implications behind such policies. In this respect, many EU countries do not allow attacks against religious groups, but the laws permit criticism of religions to protect freedom of speech (European Commission 2020). The problem with this law, however, lies in the legal challenge of distinguishing between attacks against individuals and attacks against their faith. For example, Bleich stresses the “multidimensional nature of Islamophobia, and the fact that Islam and Muslims are often inextricably intertwined in individual and public perceptions” (Bleich 2012, p. 182). In other words, it is not practically possible for social media platforms to distinguish between attacks on religions and attacks on the people adhering to these religions by simply allowing or blocking certain hashtags, as more advanced moderation tools are needed.
5. Conclusions
This study offers some original insight into identifying and critically analyzing computer-mediated trolling against Christianity and Islam as well as hashtagged and emojified hate against Muslims. The findings show that the language used against Islam and Christianity is politically driven, but Islam receives far more negative content. The atheist online community is active in attacking both Islam (Al-Rawi 2017) and Christianity; however, far-right and ultranationalist Hindu groups exclusively troll Islam and Muslims using very violent expressions. On the other hand, the anticonservative online community actively targets Christianity and trolls Trump as well as other US Republicans for their politics and religious affiliations. The implications of the study suggest that the two world religions examined here do not receive equal treatment, for they are constructed differently, which could be linked to geopolitics, stereotypes, conflicts, and other historical factors that are all tied to geographical contexts. Despite the ongoing discussions of improved community guidelines, advanced moderation techniques, and online safety measures enacted to protect minorities and vulnerable groups, this study shows one aspect of the problematic content, especially content targeting Muslims, that is still thriving online. If social media platforms are serious about tackling bad actors, they need to invest much more to at least limit the amount of online hate.
In general, both Twitter and Instagram contain ample toxic content, though the former platform also allows the posting of hateful hashtags against Christians and Muslims that Instagram blocks. Additionally, both platforms provide ample avenues for white supremacists and other hate groups to express their views by using highly aggressive and militant language that encourages violence, especially against Islam. Instead of expressing direct textual threats that can be identified by other users, Islamophobic groups exploit the affordances of social media platforms by employing coded language that is communicated via emojis and onomatopoetic hashtags, such as #pewpew. This is a new online phenomenon that I call the weaponization of emojis. There is no doubt that freedom of speech must be largely protected, but when communication, even if it is packaged as funny memes or emojis, incites violence against religious and ethnic groups, then this kind of speech must be at least moderated.
Finally, this study is limited to English-language search words targeting Christianity and Islam on Twitter and Instagram, and future studies need to take into account the inclusion of other languages, which can provide more insight into possible cross-cultural and national comparisons of attacks against religions and cultural differences in the use of emojified hate. Further empirical research is also needed on the nature of trolling against other world religions, such as Judaism, and their followers on other social media outlets, such as Telegram, TikTok, and YouTube. Another avenue of hate expression is related to mobile apps, such as WhatsApp, which remains very popular in India and elsewhere.