A Survey on Troll Detection
Abstract
1. Introduction
2. Social Media and the Trolling Phenomenon
2.1. Troll Definition and Features
- Deception: within a community like Usenet, as in any other question-and-answer (Q&A) forum, a troll who wants any chance of success must keep the real intent of trolling hidden, attempting to disrupt the group while staying undercover. Indeed, it is not possible to determine with certainty whether someone is causing problems intentionally and to label that person as a troll, since they may simply be a novice user or a dissenting voice. It is often easier (for a user or for a supervisor) to identify ambiguous behaviours and then to assess whether they are maintained over time. An example is the “pseudo-naïve” behaviour, which occurs when a troll intentionally disseminates false or inaccurate advice, or pretends to ask for help, in order to provoke an emotional response in the other group members [15].
- Aggression: a troll who aims at generating a conflict can use a provocative tone towards other users. These are malicious or aggressive behaviours undertaken with the sole purpose of annoying or provoking others, using ridiculous rants, personal insults, offensive language or attempts to hijack the conversation onto a different topic.
- Disruption: the act of degrading a conversation without necessarily attacking a specific individual. Behaviour of this type includes sending senseless, irrelevant or repetitive messages aimed at seeking attention. This has also been referred to as trolling spam: related to ordinary spam, but distinct from it, as it is driven by the intention to provoke negative responses.
- Success: one of the most curious aspects of the problem is that a troll is often acclaimed by users for his success, both for the quality of his joke, i.e., for being funny, and for the way others react to it. Some responses to the provocation—whether angry, shocked or curious—are regarded as taking the “bait” of the troll’s joke or, in other words, as a demonstration that the responders were unwittingly duped by the troll’s pseudo-intent without being aware of his real goal. The group’s attention is similarly drawn even when the quality of the joke is low and everybody can understand the real intent, or when an experienced user responds to a troll’s message in a manner that avoids the prepared trap, possibly trying to unnerve the troll. So trolling, despite being a nuisance for users, may end up at the centre of the group’s attention for its real purpose rather than for its pseudo-intent. This aspect therefore concerns how the group reacts to the troll, not the troll’s modalities.
“A troll is a CMC user who constructs the identity of sincerely wishing to be part of the group in question, including professing, or conveying pseudo-sincere intentions, but whose real intention(s) is/are to cause disruption and/or to trigger or exacerbate conflict for the purposes of their own amusement. Just like malicious impoliteness, trolling can (i) be frustrated if users correctly interpret an intent to troll, but are not provoked into responding, (ii) be thwarted, if users correctly interpret an intent to troll, but counter in such a way as to curtail or neutralize the success of the troll, (iii) fail, if users do not correctly interpret an intent to troll and are not provoked by the troll, or, (iv) succeed, if users are deceived into believing the troll’s pseudo-intention(s), and are provoked into responding sincerely. Finally, users can mock troll. That is, they may undertake what appears to be trolling with the aim of enhancing or increasing affect, or group cohesion” [8].
2.2. Troll’s Damages
2.3. Coping with Trolls
3. Troll Detection Methods
3.1. Post-Based Methods
3.2. Thread-Based Methods
3.3. User-Based Methods
- Post content: word count, readability metrics, actual content, etc.
- User activity: daily number of posts, maximum number of published posts in a thread, total number of posts in response to specific comments, etc.
- Reactions of the community: votes per post, total number of indicated posts, total number of responses, etc.
- Moderator’s actions: number of removed comments, etc.
- Rating: percentage of user comments in each level of evaluation (very positive, positive, average, negative and very negative).
- Consistency of the comments with respect to the topic: cosine similarity between the comments of the same thread.
- Order of comments: number of times in which a user is among the first ones to comment on a discussion thread.
- Comments most loved/hated: number of times that a comment is among the most loved or hated in a thread (with various thresholds).
- Answers: number of responses to other comments, number of responses to other answers, etc. Other features are then generated by fusing these values with those based on votes.
- Time: number of comments made at different times of the day and on daily and weekly basis.
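Among the features above, the topic-consistency signal is the most algorithmic: each user's comments in a thread are compared pairwise with cosine similarity, and a low mean similarity suggests off-topic, potentially disruptive posting. A minimal sketch of that idea follows; the bag-of-words representation, function names and thresholds are illustrative assumptions, not taken from any of the surveyed systems (which typically use TF-IDF or richer vectorizations).

```python
import math
from collections import Counter

def cosine_similarity(a: str, b: str) -> float:
    """Cosine similarity between two comments, using plain
    bag-of-words term counts (an illustrative simplification)."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

def thread_consistency(comments: list[str]) -> float:
    """Mean pairwise cosine similarity of one user's comments in a
    thread; low values may indicate off-topic (disruptive) posting."""
    pairs = [(i, j) for i in range(len(comments))
             for j in range(i + 1, len(comments))]
    if not pairs:
        return 1.0  # a single comment is trivially self-consistent
    return sum(cosine_similarity(comments[i], comments[j])
               for i, j in pairs) / len(pairs)
```

In a full pipeline this value would be one column of a per-user feature vector, alongside the activity, reaction and timing counts listed above.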
3.4. Community Based Methods
- Negated Number of Freaks (NNF): the negated total number of “freaks” of a node, i.e., of users who declare that node a foe.
- Fans Minus Freaks (FMF): a user is called a “fan” of a friend and a “freak” of an enemy. Subtracting the two counts gives a measure of an individual’s reputation, or a possible measure of popularity.
- PageRank (PR): a measure of the tendency of a person to be central in the network, making no distinction between friends and enemies. It is therefore useful for estimating a user’s popularity.
- Signed Spectral Ranking (SR): an extension of PageRank that takes edge signs into account, aimed at measuring the popularity of a user on the network.
- Signed Symmetric Spectral Ranking (SSR): a popularity measure based on the idea that a popular user has few enemies and that negative edges are more common among unpopular users.
- Negative Rank (NR): given the high correlation between the PR and SR measures, this additional metric is obtained as their difference.
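The degree-based metrics above (NNF and FMF) reduce to simple counts over a signed social graph. The following sketch is illustrative only: the edge-list representation and function names are assumptions, and the spectral metrics (PR, SR, SSR, NR) would additionally require an iterative or eigenvector computation not shown here.

```python
# A signed network as a list of directed edges (source, target, sign),
# with sign +1 for "friend" and -1 for "foe" declarations.
Edge = tuple[str, str, int]

def fans_minus_freaks(edges: list[Edge], user: str) -> int:
    """FMF: fans (users marking `user` as friend) minus
    freaks (users marking `user` as foe)."""
    fans = sum(1 for _, t, sign in edges if t == user and sign == +1)
    freaks = sum(1 for _, t, sign in edges if t == user and sign == -1)
    return fans - freaks

def negated_number_of_freaks(edges: list[Edge], user: str) -> int:
    """NNF: the negated count of users declaring `user` a foe,
    so that more foes means a lower (worse) score."""
    return -sum(1 for _, t, sign in edges if t == user and sign == -1)
```

On a toy graph where two users befriend `u` and one declares `u` a foe, FMF(u) is 1 and NNF(u) is -1; trolls are expected to accumulate strongly negative values of both.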
4. Conclusions
Author Contributions
Funding
Conflicts of Interest
References
- Larosiliere, G.; Carter, L.; Meske, C. How does the world connect? Exploring the global diffusion of social network sites. J. Assoc. Inf. Sci. Technol. 2017, 68, 1875–1885. [Google Scholar] [CrossRef]
- Meske, C.; Stieglitz, S. Adoption and Use of Social Media in Small and Medium-sized Enterprises. In Practice Driven Research on Enterprise Transformation (PRET), Proceedings of the 6th Working Conference Lecture Notes in Business Information Processing (LNBIP), Utrecht, The Netherlands, 6 June 2013; Springer: Berlin, Heidelberg, 2013; pp. 61–75. [Google Scholar]
- Meske, C.; Wilms, K.; Stieglitz, S. Enterprise Social Networks as Digital Infrastructures - Understanding the Utilitarian Value of Social Media at the Workplace. Inf. Syst. Manag. 2019, 36, 350–367. [Google Scholar] [CrossRef] [Green Version]
- Meske, C.; Junglas, I.; Stieglitz, S. Explaining the emergence of hedonic motivations in enterprise social networks and their impact on sustainable user engagement - A four-drive perspective. J. Enterp. Inf. Manag. 2019, 32, 436–456. [Google Scholar] [CrossRef]
- Chinnov, A.; Meske, C.; Kerschke, P.; Stieglitz, S.; Trautmann, H. An Overview of Topic Discovery in Twitter Communication through Social Media Analytics. In Proceedings of the 21st Americas Conference on Information Systems (AMCIS), Fajardo, Puerto Rico, 13–15 August 2015; pp. 4096–4105. [Google Scholar]
- Stieglitz, S.; Meske, C.; Roß, B.; Mirbabaie, M. Going Back in Time to Predict the Future - The Complex Role of the Data Collection Period in Social Media Analytics. Inf. Syst. Front. 2018, 1–15. [Google Scholar] [CrossRef]
- Meske, C.; Junglas, I.; Schneider, J.; Jakoonmäki, R. How Social is Your Social Network? Toward A Measurement Model. In Proceedings of the 40th International Conference on Information Systems, Munich, Germany, 15–18 December 2019; pp. 1–9. [Google Scholar]
- Hardaker, C. Trolling in asynchronous computer-mediated communication: From user discussions to academic definitions. J. Politeness Res. Language Behav. Culture 2010, 6, 215–242. [Google Scholar] [CrossRef]
- Mihaylov, T.; Georgiev, G.; Nakov, P. Finding opinion manipulation trolls in news community forums. In Proceedings of the Nineteenth Conference on Computational Natural Language Learning, Beijing, China, 30–31 July 2015; pp. 310–314. [Google Scholar]
- Badawy, A.; Addawood, A.; Lerman, K.; Ferrara, E. Characterizing the 2016 Russian IRA influence campaign. Social Netw. Anal. Min. 2018, 9, 31. [Google Scholar] [CrossRef] [Green Version]
- Badawy, A.; Lerman, K.; Ferrara, E. Who falls for online political manipulation? In Proceedings of the Web Conference 2019—Companion of the World Wide Web Conference, San Francisco, CA, USA, 13–17 May 2019; pp. 162–168. [Google Scholar]
- Chun, S.A.; Holowczak, R.; Dharan, K.N.; Wang, R.; Basu, S.; Geller, J. Detecting political bias trolls in Twitter data. In Proceedings of the 15th International Conference on Web Information Systems and Technologies, WEBIST 2019, Vienna, Austria, 18–20 September 2019; pp. 334–342. [Google Scholar]
- Zannettou, S.; Sirivianos, M.; Caulfield, T.; Stringhini, G.; De Cristofaro, E.; Blackburn, J. Disinformation warfare: Understanding state-sponsored trolls on twitter and their influence on the web. In Proceedings of the Web Conference 2019—Companion of the World Wide Web Conference, WWW 2019, San Francisco, CA, USA, 13–17 May 2019; pp. 218–226. [Google Scholar]
- Fornacciari, P.; Mordonini, M.; Poggi, A.; Sani, L.; Tomaiuolo, M. A holistic system for troll detection on Twitter. Comput. Hum. Behav. 2018, 89, 258–268. [Google Scholar] [CrossRef]
- Donath, J.S. Identity and deception in the virtual community. In Communities in Cyberspace; Routledge: Abingdon-on-Thames, UK, 2002; pp. 37–68. [Google Scholar]
- Kirman, B.; Lineham, C.; Lawson, S. Exploring mischief and mayhem in social computing or: How we learned to stop worrying and love the trolls. In CHI’12 Extended Abstracts on Human Factors in Computing Systems; ACM: New York, NY, USA, 2012; pp. 121–130. [Google Scholar]
- Buckels, E.E.; Trapnell, P.D.; Paulhus, D.L. Trolls just want to have fun. Personal. Individ. Differ. 2014, 67, 97–102. [Google Scholar] [CrossRef]
- Morrissey, L. Trolling is an art: Towards a schematic classification of intention in internet trolling. Griffith Work. Pap. Pragmat. Intercult. Commun. 2010, 3, 75–82. [Google Scholar]
- Pfaffenberger, B. “If I Want It, It’s OK”: Usenet and the (Outer) Limits of Free Speech. Inf. Soc. 1996, 12, 365–386. [Google Scholar] [CrossRef]
- Herring, S.; Job-Sluder, K.; Scheckler, R.; Barab, S. Searching for safety online: Managing “trolling” in a feminist forum. Inf. Soc. 2002, 18, 371–384. [Google Scholar] [CrossRef]
- Galán-García, P.; Puerta, J.G.; Gómez, C.L.; Santos, I.; Bringas, P.G. Supervised machine learning for the detection of troll profiles in twitter social network: Application to a real case of cyberbullying. Log. J. IGPL 2016, 24, 42–53. [Google Scholar]
- Cambria, E.; Chandra, P.; Sharma, A.; Hussain, A. Do not Feel the Trolls; ISWC: Shanghai, China, 2010; Volume 664. [Google Scholar]
- Derczynski, L.; Bontcheva, K. Pheme: Veracity in Digital Social Networks. In Proceedings of the User Modelling and Personalisation (UMAP) Project Synergy workshop, CEUR Workshop Proceedings, Aalborg, Denmark, 7–11 July 2014; Volume 1181. [Google Scholar]
- Dellarocas, C. Strategic manipulation of internet opinion forums: Implications for consumers and firms. Manag. Sci. 2006, 52, 1577–1593. [Google Scholar] [CrossRef] [Green Version]
- King, G.; Pan, J.; Roberts, M.E. How the Chinese government fabricates social media posts for strategic distraction, not engaged argument. Am. Polit. Sci. Rev. 2017, 111, 484–501. [Google Scholar] [CrossRef] [Green Version]
- Luceri, L.; Giordano, S.; Ferrara, E. Don’t Feed the Troll: Detecting Troll Behavior via Inverse Reinforcement Learning. arXiv 2020, arXiv:2001.10570. [Google Scholar]
- Romero-Rodríguez, L.M.; de-Casas-Moreno, P.; Torres-Toukoumidis, Á. Dimensions and indicators of the information quality in digital media. Comunicar. Media Educ. Res. J. 2016, 24, 91–100. [Google Scholar] [CrossRef] [Green Version]
- Ortega, F.J.; Troyano, J.A.; Cruz, F.L.; Vallejo, C.G.; Enríquez, F. Propagation of trust and distrust for the detection of trolls in a social network. Comput. Netw. 2012, 56, 2884–2895. [Google Scholar] [CrossRef]
- Seah, C.W.; Chieu, H.L.; Chai, K.M.A.; Teow, L.N.; Yeong, L.W. Troll detection by domain-adapting sentiment analysis. In Proceedings of the 2015 18th IEEE International Conference on Information Fusion, Washington, DC, USA, 6–9 July 2015; pp. 792–799. [Google Scholar]
- Dollberg, S. The Metadata Troll Detector, Swiss Federal Institute of Technology, Zurich, Distributed Computing Group, Computer Engineering and Networks Laboratory. Tech. Rep. Semester Thesis. 2015. Available online: https://pub.tik.ee.ethz.ch/students/2014-HS/SA-2014-32.pdf (accessed on 7 January 2020).
- Younus, A.; Qureshi, M.A.; Saeed, M.; Touheed, N.; O’Riordan, C.; Pasi, G. Election trolling: Analysing sentiment in tweets during Pakistan elections 2013. In Proceedings of the 23rd International Conference on World Wide Web, Seoul, Korea, 7–11 April 2014; pp. 411–412. [Google Scholar]
- Hallman, J.; Lokk, A. Viability of sentiment analysis for troll detection on Twitter: A Comparative Study Between the Naive Bayes and Maximum Entropy Algorithms. KTH Royal Institute of Technology - School of Computer Science and Communication - Degree Project in Computing Engineering, Stockholm, Sweden. 2016. Available online: https://kth.diva-portal.org/smash/get/diva2:927326/FULLTEXT01.pdf (accessed on 7 January 2020).
- de-la-Pena-Sordo, J.; Santos, I.; Pastor-López, I.; Bringas, P.G. Filtering Trolling Comments through Collective Classification. In International Conference on Network and System Security; Springer: Berlin, Heidelberg, 2013; pp. 707–713. [Google Scholar]
- Bharati, P.; Lee, C.; Syed, R. Trolls and Social Movement Participation: An Empirical Investigation. 2018. Available online: https://pdfs.semanticscholar.org/fbd4/dc4eec69e6114cfd9011576f1f64c1bfbefc.pdf (accessed on 7 January 2020).
- Kunegis, J.; Lommatzsch, A.; Bauckhage, C. The SlashDot zoo: Mining a social network with negative edges. In Proceedings of the 18th International Conference on World Wide Web, Madrid, Spain, 20–24 April 2009; pp. 741–750. [Google Scholar]
- Dlala, I.O.; Attiaoui, D.; Martin, A.; Yaghlane, B. Trolls identification within an uncertain framework. In Proceedings of the 2014 IEEE 26th International Conference on Tools with Artificial Intelligence, Limassol, Cyprus, 10–12 November 2014; pp. 1011–1015. [Google Scholar]
- Cheng, J.; Danescu-Niculescu-Mizil, C.; Leskovec, J. Antisocial behavior in online discussion communities. In Proceedings of the Ninth International AAAI Conference on Web and Social Media, Oxford, UK, 26–29 May 2015; pp. 61–70. [Google Scholar]
- Kumar, S.; Spezzano, F.; Subrahmanian, V.S. Accurately detecting trolls in slashdot zoo via decluttering. In Proceedings of the 2014 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining, Beijing, China, 17–20 August 2014; pp. 188–195. [Google Scholar]
- Atanasov, A.; Morales, G.D.F.; Nakov, P. Predicting the Role of Political Trolls in Social Media. 2019. Available online: https://arxiv.org/pdf/1910.02001.pdf (accessed on 7 January 2020).
- Machová, K.; Kolesár, D. Recognition of Antisocial Behavior in Online Discussions. In International Conference on Information Systems Architecture and Technology; Springer: Cham, Switzerland, 2019; pp. 253–262. [Google Scholar]
- Kincaid, J.P.; Fishburne, R.P., Jr.; Rogers, R.L.; Chissom, B.S. Derivation of New Readability Formulas (Automated Readability Index, Fog Count and Flesch Reading Ease Formula) for Navy Enlisted Personnel; University of Central Florida: Orlando, FL, USA, 1975; Available online: https://stars.library.ucf.edu/cgi/viewcontent.cgi?article=1055&context=istlibrary (accessed on 7 January 2020).
- Lombardo, G.; Fornacciari, P.; Mordonini, M.; Tomaiuolo, M.; Poggi, A. A Multi-Agent Architecture for Data Analysis. Future Internet 2019, 11, 49. [Google Scholar] [CrossRef] [Green Version]
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Share and Cite
Tomaiuolo, M.; Lombardo, G.; Mordonini, M.; Cagnoni, S.; Poggi, A. A Survey on Troll Detection. Future Internet 2020, 12, 31. https://doi.org/10.3390/fi12020031