Review

Are TikTok Algorithms Influencing Users’ Self-Perceived Identities and Personal Values? A Mini Review

by Claudiu Gabriel Ionescu *,† and Monica Licu †
Medical Ethics Department, Carol Davila University of Medicine and Pharmacy, 050474 Bucharest, Romania
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Soc. Sci. 2023, 12(8), 465; https://doi.org/10.3390/socsci12080465
Submission received: 30 May 2023 / Revised: 9 August 2023 / Accepted: 16 August 2023 / Published: 21 August 2023
(This article belongs to the Special Issue Applying AI to Social Science and Social Science to AI)

Abstract: TikTok is more widely used now than ever and has a substantial impact on users’ daily lives, with self-perceived identity and personal values being topics of interest in light of algorithmically curated content. This mini-review summarizes current findings on the TikTok algorithm and its impact on self-perceived identity, personal values, and related concepts of the Self. We review the components of algorithmic literacy and emphasize its importance, along with users’ attitudes toward algorithms. In the first part of our results, we present conceptual models of algorithms such as the crystal framework, the platform spirit, and collective imaginaries. In the second part, we discuss the degree of influence a social media algorithm may exert over an individual’s sense of self, examining how the concepts of the algorithmized self and the domesticated algorithm attempt to capture the dual development of this relationship. We then draw on the concept of Personal Engagement and the role of cognitive biases to summarize current findings and discuss the questions that still need to be addressed. Performing research on social media, especially TikTok, poses ethical, cultural, and regulatory challenges for researchers. We therefore discuss the main published theoretical frameworks, the studies attached to them, their impact on current theoretical models, and the limitations of these studies. Finally, we outline further topics of interest and possible perspectives, as well as recommendations for future research in areas such as the impact on personal values and identity, cognitive biases, and algorithmic literacy.

1. Introduction

Many individuals use social media, especially TikTok, with many scoring highly on various scales of social media use. Declared the most addictive platform (Smith and Short 2022), TikTok is also the social media platform most used by Generation Z worldwide, with over 1 billion monthly users (Gu et al. 2022; Boeker and Urman 2022). Its implications for human behavior are not yet completely understood (Smith and Short 2022). It has already been shown that the personal TikTok experience is shaped by individuals’ motivations and behaviors. The values-aligned, personalized content brought by the algorithm can enhance positive experiences and promote psychological wellbeing (Ostic et al. 2021; Naslund et al. 2020). At the same time, it can lead, through cognitive biases (flaws or distortions in judgment and decision making that can produce poor outcomes), to the “echo-chamber” effect, to isolation, and to rigid policy attitudes or radicalization across various aspects of personal values and beliefs, with a major potential societal impact not yet fully addressed (Binder and Kenyon 2022; Cinelli et al. 2021; Boucher et al. 2021; Sasahara et al. 2021).
The role of human self-perceived identity and personal values has been a prominent area of theorization in relation to social media algorithms. Self-perceived identity refers to an individual’s subjective interpretation and understanding of themselves, their environment, and various aspects of their lives, and encompasses cognitive, emotional, and behavioral consistency in how they perceive and define themselves (Ersanlı and Şanlı 2015). Over time, little academic analysis has tried to deepen knowledge of this topic, which has therefore become overly simplified (Boyd et al. 2021; Silva et al. 2020; Hynes and Wilson 2016; Chen et al. 2014). Human–algorithm interaction has emerged as a pressing area in social media research, probably because algorithms curate and govern most forms of communication on these platforms. Social media algorithms are computational models for transforming data into the personalized content that populates a user’s social feeds, such as TikTok’s “ForYouPage”, but they may be influenced by users’ perceptions (Bucher 2020). TikTok has an extremely advanced and sophisticated algorithm system, especially in terms of participation, content, and types of interaction, which makes the addiction problem more severe than on the other popular social media platforms, especially among younger generations (Smith and Short 2022). Due to the opaque or “black-boxed” nature of algorithms, users experience them only through their perceptions. Because what people see on social media is largely personalized, it shapes how people see themselves and others (Bhandari and Bimo 2022) but also impacts their behavior on social media platforms (DeVito 2021). Research on human–algorithm interaction involves investigations along multiple fronts: how social media algorithms function, how individuals perceive them, and the effects of social media algorithms on the Self.
Firstly, research focused on theories on how social media algorithms operate and their relationship with marginalized identities (DeVito 2021; Simpson and Semaan 2021). This area of research suggests that identity maintenance and development are performed through the curated feed of the algorithm. It is important to note that individual differences and personal circumstances also play a significant role in developing personal values and attitudes within social media usage and may develop a sense of authenticity and coherence in one’s self-perceived identity.
With algorithms mediating communications and changing relationships and society, we see an urgent need to better understand and assess the current knowledge on their impact, not only on individual personal values and identities but also on public attitudes, polarization, developer transparency, and user privacy, and to offer cohesive points of discussion for the future. Effects on individuals will remain a topic of special interest, with emergent questions regarding algorithm access, transparency, and accountability still awaiting answers. Our study aimed to present the current state of research, give a broad overview of algorithm concepts and their impact, address questions that still need to be answered, and highlight perspectives for future studies in psychology, social ethics, and artificial intelligence research.

2. Materials and Methods

Studies included in the review consisted of published English-language peer-reviewed articles examining social media algorithms and their relationship with the Self, including our variables of interest: self-perceived identity and personal values. We utilized the following databases: Google Scholar, Elicit, PsycINFO, PubMed, and Science Direct, conducting electronic searches from March 2023 to April 2023 and examining articles published at any time prior to 25 March 2023. Searches were conducted with terms related to personal values (i.e., “personal values”, “moral values”, “self”, “ethics”, “moral attitudes”, or “ethical values”) and TikTok (i.e., “social media”, “TikTok”, “Facebook”, “Instagram”, “algorithm”, or “social networking sites”). The concepts of “self-perceived identity” and “personal values” were mentioned only scarcely throughout the literature, so we included broader concepts such as “self-representation”, “self-image”, and “self-making” to attain a broader perspective on our primary objectives. All published articles found using the above search terms and deemed related to the topic of focus were included based on their relevance.

3. How Users Understand and Relate to the Algorithm Has an Impact by Itself

3.1. The Role of Algorithm Awareness among Users

Algorithm awareness among users is of major importance to the overall question. It has been shown that the degree of algorithmic literacy shapes the overall experience and influences the personal social media experience (Taylor and Choi 2022). Due to the opaque nature of algorithms, users experience them through their perceptions as they interact with them. One study focused on how users perceive the interaction with the algorithm, following the hypothesis that users make judgments about how responsive and insensitive the algorithmically curated content is and how well the algorithm “listens” to information. All participants in this study were algorithm-aware. As a result, the novel concepts of PAR (perceived algorithm responsiveness) and PAI (perceived algorithm insensitivity) were only weakly correlated with algorithmic literacy, and PAR was a significant predictor of people’s social media enjoyment (Taylor and Choi 2022). Another study showed that 61% of users were unaware or had only low awareness of the algorithm, with a cluster analysis yielding an algorithm-awareness typology: the unaware, the uncertain, the affirmative, the neutral, the sceptic, and the critical (Gran et al. 2021). Moreover, the higher the level of education, the more negative the attitudes towards algorithms, with the digital divide being exacerbated by smart machine-learning infrastructure. This may have an important impact on informed decision-making processes in the context of data-driven arrangements and social injustice (Dencik et al. 2016, 2018, 2019). Algorithmic literacy has also been studied by identifying context-specific sense-making strategies, expectancy violations, and explicit personalization cues; users had intuitive, experience-based insights into feed personalization but were not automatically able to verbalize them (Swart et al. 2018; Swart 2021).
Personality factors and other demographic variables that may impact the degree of awareness still need to be studied (Lutz and Hoffmann 2019; Dutton and Graham 2019).

3.2. From Folk Theories to Personal Engagement and Identity Strainer

Folk theories “are intuitive, informal theories that individuals develop to explain the outcomes, effects, or consequences of technological systems, which guide reactions to and behavior towards said systems” (DeVito 2021), one of the most common being Personal Engagement Theory. According to this theory, social media feeds are curated through personal engagement metrics based on digital traces, including what content the user previously liked, commented on, and viewed the most (Eslami et al. 2016). Another theory worth mentioning, Identity Strainer Theory, holds that the algorithm curates the social feed like an identity strainer, suppressing content related to marginalized social identities based on race and ethnicity, body size and physical appearance, LGBTQ identity, and political and social justice group affiliation (Karizat et al. 2021). Conversely, the concepts of algorithmic privilege and algorithmic representational harm refer to the harm users experience when they lack algorithmic privilege and are subjected to algorithmic symbolic annihilation, leading them to shape their algorithmic identities to align with how they understand themselves and to resist the suppression of marginalized social identities (Karizat et al. 2021). In the same study, the interaction is viewed as a co-production or bidirectional relationship, which in turn can be treated as input data for the algorithmic decisions themselves. These findings were similar to those of other studies concerning LGBTQ users (DeVito 2022; Simpson and Semaan 2021).
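As a purely illustrative sketch, the engagement-based curation that Personal Engagement Theory describes can be caricatured as ranking candidate videos by a weighted sum of a user’s past engagement signals. The signal names and weights below are hypothetical and do not represent TikTok’s actual (proprietary) implementation:

```python
# Toy illustration of engagement-based feed curation, in the spirit of
# Personal Engagement Theory. Signals and weights are hypothetical; real
# platforms use proprietary, far more complex models.
WEIGHTS = {"liked_topic": 3.0, "commented_topic": 2.0, "watched_long": 1.5}

def engagement_score(video_topics, user_traces):
    """Score a video by how much the user previously engaged with its topics."""
    score = 0.0
    for topic in video_topics:
        for signal, weight in WEIGHTS.items():
            if topic in user_traces.get(signal, set()):
                score += weight
    return score

def curate_feed(candidate_videos, user_traces):
    """Return candidate videos sorted by descending engagement score."""
    return sorted(candidate_videos,
                  key=lambda v: engagement_score(v["topics"], user_traces),
                  reverse=True)

# A user who liked dance content and watched cooking videos at length
# sees those topics pushed to the top of the curated feed.
user = {"liked_topic": {"dance"}, "watched_long": {"cooking"}}
videos = [{"id": 1, "topics": ["news"]},
          {"id": 2, "topics": ["dance"]},
          {"id": 3, "topics": ["cooking"]}]
feed = curate_feed(videos, user)
print([v["id"] for v in feed])  # → [2, 3, 1]
```

Even this crude sketch makes the theory’s core claim visible: the feed is a function of the user’s own digital traces, so past behavior, not stated preference, decides what is shown.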

3.3. The Concept of the Crystal Framework

“The Crystal Framework” is a conceptual model that puts the metaphor of crystals and their properties into an analytic frame: self-concepts are reflected in ways that are both multifaceted and multidirectional, shaping perspectives on others by orienting users to recognize parts of themselves refracted in others and to experience ephemeral, diffracted connections with various groups as a virtual community. The qualitative study that proposed it identified more positive attitudes towards algorithms, especially TikTok’s (Lee et al. 2022). The main characteristics of the “crystal” were that it is reflective (parts of users’ self-concepts are reflected back to them in the feed), multifaceted, and diffractive, and that it supports a refinement strategy. It is one of the first conceptual models to point out that the algorithm shapes self-concepts, although it does not specifically address how this may happen. Nevertheless, an interesting point of the study was that when users view algorithms as exploiting them for monetary gain or manipulating their attention, they may not believe the content is an accurate reflection of themselves, even as that content changes and aligns more closely with the “crystal”. Causes of this may be internal or external; the latter result in a “broken crystal”, translated into the influences that companies, financial interests, politics, and even TikTok policy may exert.

3.4. Users’ Attitudes towards TikTok Algorithms

Regarding users’ attitudes towards algorithms, one study found that algorithms are perceived as confining, practical, reductive, intangible, and exploitative. The authors emphasized “digital irritation” as a central emotional response, with a small but significant potential to inspire future political action against datafication (Ytre-Arne and Moe 2021; Ytre-Arne 2023). In the opposite direction, users describe TikTok as a safe space where they can be themselves, feel included in a community, and engage meaningfully. Contradictorily, however, the algorithm is also perceived as harmful because it tries to manipulate and drive users towards specific videos that increase their “addiction” to the platform. Users consider some of the recommendations on the ForYou Page questionable because they aim at persuading or nudging in favor of hashtags and social causes. This contradiction may partly be explained by participants reporting their rationalizations in a performative manner to avoid feelings of dissonance while attempting to relate to their own identity (Scalvini 2020).

3.5. Additional Concepts: Algorithm Gossip, Collective Algorithmic Imaginaries, and Platform Spirit

“Algorithm gossip” addresses folk theories with a focus on content creators, who understand that informed theories and strategies pertaining to algorithms can support financial consistency and visibility and thus gain more public attention (Bishop 2019). “Collective algorithmic imaginaries” are ways of thinking about what algorithms are, what they should be, how they function, and what these imaginations in turn make possible: if users think the algorithm behaves in a particular way according to a specific logic, they tend to adapt their behavior to that belief (Siles et al. 2020; Bucher and Helmond 2018). These imaginaries also work beyond the interpretative realm of humans in the sense that they are recursively modeled back into algorithmic systems and can thus influence the algorithm’s decision-making processes. Studying this concept, we find that algorithms are perceived as meritocratic, dynamic, and unpredictable, as having an experimental attitude and a short attention span, and as “selling” information through emotions rather than facts, all of which leads to a “techno-communitarian sentiment” (Øllgard 2023).
Finally, the “Platform Spirit”, represented by the constant stream of relatable content, the sense of community among creators and users, and the ability to easily go viral and gain popularity in digital and real life, covers the addictive dimension of the algorithm. Adaptation to this spirit and algorithmic literacy, even structural theorization rather than structural knowledge of the algorithm, shape users’ understanding of their own decision-making processes as well as the algorithm’s (DeVito 2021). The answer we take from this work is that algorithmic literacy plays a mandatory role in the future influence that constantly changing social media platforms will have on their users.

4. Are Self-Perceived Identity and Personal Values Shaped by TikTok Algorithms?

4.1. Concept of Self, Algorithmized Identity, and Affective Capitalism

Personal values and self-perceived identity were represented in some studies under the broader relevant concepts of “self-representation”, “self-identification”, “self-making”, or “networked self”, referring to the act of identifying oneself as a particular kind of person (Papacharissi 2011; Strimbu and O’Connell 2019; Tiidenberg and Whelan 2017; Bamberg 2011; Thumim 2012). That is why we have chosen to use these concepts when referring to our studied variables. Another school of thought was “self-symbolizing”, with one study showing that those who publicly presented themselves on ephemeral social media internalized their portrayed personality and development over time (Choi et al. 2020). But how deeply can this process affect users’ self-perceived identity and personal values? One study describes the concept of “affective capitalism”, in which all desires, emotions, and forms of expressivity become raw materials in a wider economic infrastructure (Hearn 2017, 2019). Through aggregation, abstraction, and categorization, they become consumer profiles, which are ultimately projected back onto users, leading to an “algorithmized identity”, that is, “an identity formation that works through mathematical inferences on otherwise anonymous beings” (Cheney-Lippold 2011). The challenge arising from this concerns shaping: through what kinds of studies can we assess the “Algorithmically Shifted Self”?

4.2. Is the Self Algorithmized or the Algorithm Domesticated?

Users may vary in the extent to which they feel that the algorithm is reflective of who they truly are. They may then employ a variety of strategies to try to bring the algorithm’s recommendations into line with who they are or want to be (Lee et al. 2022). One study introduced the concept of the “Algorithmized Self” as a definition for TikTok, as an extension and complication of the previously discussed “Networked Self”; while the latter posits that the self is created through the “reflexive process of fluid associations with social circles”, the former understands the self as deriving primarily from a reflexive engagement with previous self-representations rather than with one’s social connections. The same study argues that users interact most with aspects of their own personas (Bhandari and Bimo 2022). In contrast, processes have been described such as “depuration”, in which users consciously train the algorithm to remove content from their feeds (González-Anta et al. 2021), or organic and strategic refinement (Lee et al. 2022). Moreover, as an extended view of the “Networked Self”, the Extended Mind Theory (Clark and Chalmers 1998) envisions that the environment plays an active role in how cognitive processes are driven. In this regard, non-biological systems such as algorithms may be considered part of a cognitive process in which our brain performs some operations while the algorithm also plays a causal role in governing an individual’s personal values and behavior, as an interactive system or cognitive integration (Chakravarty 2021; Menary 2010). In the light of another theory, “taken as networked and dynamic ecologies that form parts of minds, the environment and all the things in it may together function as a collection of interconnected cognitive systems. This is not only illustrated through human examples but also reinforced by behavior-based artificial intelligence” (Crippen and Rolla 2022), like social media algorithms.
The same authors highlight that situations are primary as they may change what we see on faces, but that such cases are not merely psychological projections, that is, consequences of how we represent things in our heads (Crippen 2022). The remaining question is: how will this extended socio-technological self frame situations and make decisions in our best interests regarding our own personal values and behaviors?
Users responded that they were never fully able to control their digital selves and thus to integrate them into their routine lives, as the TikTok algorithm was constantly misaligned with their personal moral economy (Simpson and Semaan 2021). In conclusion, there is a constant feedback loop between the two concepts, but the degree to which one primarily impacts the other has yet to be studied.

4.3. How May the Algorithm Influence Users’ Self-Perceived Identity and Personal Values?

4.3.1. Personal Engagement and the Role of Cognitive Biases

The TikTok algorithm does not have direct knowledge of a user’s personal values, but it makes assumptions based on users’ behavior and interactions. For example, following specific creators was the strongest factor in algorithm personalization, followed in influence by watching certain videos for a long time and liking posts (Boeker and Urman 2022). In terms of perceived influence, a study identified the follow feature as playing the strongest role. It also discussed the formation of filter bubbles on TikTok and the proliferation of problematic content in this regard (Boeker and Urman 2022; Holone 2016), with different aspects underlined for content creators (Klug et al. 2021). Exposure is thus limited to the narrow perspectives related to whatever content users follow. This may make users more prone to cognitive biases such as confirmation bias, the availability heuristic, or the bandwagon effect, which may become a topic of concern in itself in the context of problematic use. At the same time, users can shape their own experiences by not following accounts or by providing feedback through the “Not interested” button (Scalvini 2020). Here lies a key question: do cognitive biases and curated feeds indirectly shape personal values and identity, and if so, how can we analyze this?
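The narrowing dynamic described above can be illustrated with a deliberately simplified simulation (not a model of TikTok’s actual recommender): if topics a user engages with are served more often, and served topics are engaged with in proportion to existing interest, the exposure distribution drifts away from uniform. All parameters here are arbitrary choices for illustration:

```python
# Toy simulation of filter-bubble narrowing (illustrative only; not a model
# of TikTok's actual system). Each round, the "recommender" samples a topic
# proportional to inferred interest, and showing it reinforces that interest,
# forming a simple feedback loop (a Polya-urn-style process).
import random

def simulate_feedback_loop(topics, rounds=200, boost=1.0, seed=0):
    rng = random.Random(seed)
    interest = {t: 1.0 for t in topics}   # start with uniform interest
    for _ in range(rounds):
        total = sum(interest.values())
        # Sample a topic with probability proportional to current interest.
        r, acc = rng.uniform(0, total), 0.0
        for topic, weight in interest.items():
            acc += weight
            if r <= acc:
                shown = topic
                break
        interest[shown] += boost          # engagement reinforces interest
    return interest

final = simulate_feedback_loop(["dance", "news", "cooking", "politics"])
top_share = max(final.values()) / sum(final.values())
print(round(top_share, 2))  # exposure concentrates on a few topics
```

The point of the sketch is qualitative: without any knowledge of the user’s values, pure engagement feedback is enough to concentrate exposure, which is the structural precondition for the confirmation and bandwagon effects discussed above.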

4.3.2. TikTok Algorithm May Adjust Self-Beliefs and Self-Perceived Identity

We found that popular media has reported on user experiences related to thinking about the self, speculating that the videos shown to users say something about the kind of person they are, or even allow users to better understand their own social, sexual, and cultural identity and their wish to change it (Tiffany 2021; MacGowan 2020; Boseley 2021; French 2018). At the same time, changes in algorithms that conflict with users’ goals spark emotional outrage, dissatisfaction, or platform exodus (DeVito 2021). Until now, however, no studies have attempted to reveal the deeper layers of one’s identity and personal values.

5. Discussion and Conclusions

We found a few studies, most of them exploratory qualitative semi-structured interviews or focus groups with an average of 20–40 participants, that used online or live interview techniques on the general consumption and experience of TikTok and other social media platforms, highlighting concepts and models of thinking related to self-perceived identity and personal values that aligned with what our review intended to study. Most of the participants included in these studies were teenagers and young adults who were content consumers, which may represent a concern given that content creators have a different perspective.
Most of the studies agree that the algorithm acts as a fluid and dynamic gatekeeper for users accessing or sharing information about virtually any topic. While the algorithm can contribute, through Personal Engagement, to creating a filter bubble effect, it does not have direct knowledge of an individual’s values or consciously aim to shape them. There is a need for developing alternative methodologies, testing correlations, performing quantitative studies, and further developing theoretical and methodological synergies. Even if it is ultimately up to users to critically evaluate and reflect on the content they consume and make informed decisions about their personal values and self-perceived identity, more structural knowledge and critical thinking programs may help users be mindful of their media consumption habits.
Of course, academic analysis of social media algorithm ontology needs to examine and interpret users not as isolated entities but rather as dynamic and fluid nodes in a network who show and hide valuable data at the same time, like a crystal temporarily lightened and darkened in a more extensive social and cultural but opaque ecosystem that waits to be discovered. All the elements that comprise these digital spaces influence one another and co-evolve, changing the identity, personal values, and ultimately the behaviors of the users. Our aim is to deepen our understanding of each node influencing another and the whole network itself, as the possible consequences of the data invisibility or filter bubbles on radicalization, hyper-individualism, and their derived concepts of cult-like movements, post-humanitarian sensibility, or the digital savior complex, among others, may be unexpected.

Author Contributions

C.G.I. and M.L. contributed to the conception and structure of the paper and to the analysis and interpretation of the available literature. C.G.I. developed the initial draft. M.L. reviewed and critiqued the output for important intellectual content. Both authors contributed equally to the article and approved the submitted version. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Informed Consent Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Bamberg, Michael. 2011. Who am I? Narration and its contribution to self and identity. Theory & Psychology 21: 3–24. [Google Scholar]
  2. Bhandari, Aparajita, and Sara Bimo. 2022. Why’s Everyone on TikTok Now? The Algorithmized Self and the Future of Self-Making on Social Media. Social Media + Society 8: 20563051221086241. [Google Scholar] [CrossRef]
  3. Binder, Jens F., and Jonathan Kenyon. 2022. Terrorism and the internet: How dangerous is online radicalization? Frontiers in Psychology 13: 997390. [Google Scholar] [CrossRef] [PubMed]
  4. Bishop, Sophie. 2019. Managing visibility on YouTube through algorithmic gossip. New Media & Society 21: 2589–606. [Google Scholar]
  5. Boeker, Maximilian, and Aleksandra Urman. 2022. An Empirical Investigation of Personalization Factors on TikTok. Paper presented at WWW’22: The ACM Web Conference 2022, Virtual Event, Lyon, France, April 25–29; New York: Association for Computing Machinery, pp. 2298–309. [Google Scholar]
  6. Boseley, Matilda. 2021. ‘I Found My Identity’: How TikTok Is Changing the Lives of Its Popular Indigenous Creators. Available online: https://www.theguardian.com/australia-news/2021/jul/10/i-found-my-identity-how-tiktok-is-changing-thelives-of-its-popular-indigenous-creators (accessed on 25 May 2023).
  7. Boucher, Jean-Christophe, Kirsten Cornelson, Jamie L. Benham, Madison M. Fullerton, Theresa Tang, Cora Constantinescu, Mehdi Mourali, Robert J. Oxoby, Deborah A. Marshall, Hadi Hemmati, and et al. 2021. Analyzing Social Media to Explore the Attitudes and Behaviors Following the Announcement of Successful COVID-19 Vaccine Trials: Infodemiology Study. JMIR Infodemiology 1: e28800. [Google Scholar] [CrossRef]
  8. Boyd, Ryan, Steven Wilson, James Pennebaker, Michal Kosinski, David Stillwell, and Rada Mihalcea. 2021. Values in Words: Using Language to Evaluate and Understand Personal Values. Paper presented at the International AAAI Conference on Web and Social Media, Virtually, June 7–10; vol. 9, pp. 31–40. [Google Scholar] [CrossRef]
  9. Bucher, Taina. 2020. Nothing to disconnect from? Being singular plural in an age of machine learning. Media, Culture & Society 42: 610–17. [Google Scholar] [CrossRef]
  10. Bucher, Taina, and Anne Helmond. 2018. The affordances of social media platforms. In The SAGE Handbook of Social Media 1. London and New York: SAGE Publications Ltd., pp. 233–53. [Google Scholar]
  11. Chakravarty, Panya. 2021. The Mind Leak: An Analysis of Clark & Chalmers Extended Mind Hypothesis. Available online: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3794727 (accessed on 25 May 2023).
  12. Chen, Jilin, Gary Hsieh, Jalal U. Mahmud, and Jeffrey Nichols. 2014. Understanding individuals’ personal values from social media word use. Paper presented at the 17th ACM Conference on Computer Supported Cooperative Work & Social Computing—CSCW’14, Baltimore, MD, USA, February 15–19. [Google Scholar]
  13. Cheney-Lippold, John. 2011. A New Algorithmic Identity. Theory, Culture and Society 28: 164–81. [Google Scholar] [CrossRef]
  14. Choi, Sukyoung, Dmitri Williams, and Hyeok Kim. 2020. A snap of your true self: How self-presentation and temporal affordance influence self-concept on social media. New Media & Society, 1461444820977199. [Google Scholar] [CrossRef]
  15. Cinelli, Matteo, Gianmarco De Francisci Morales, Alessandro Galeazzi, Walter Quattrociocchi, and Michele Starnini. 2021. The echo chamber effect on social media. Proceedings of the National Academy of Sciences of the United States of America 118: e2023301118. [Google Scholar] [CrossRef]
  16. Clark, Andy, and David Chalmers. 1998. The Extended Mind. Analysis 58: 7–19. [Google Scholar] [CrossRef]
  17. Crippen, Matthew. 2022. Emotional Environments: Selective Permeability, Political Affordances and Normative Settings. Topoi 41: 917–29. [Google Scholar] [CrossRef]
  18. Crippen, Matthew, and Giovanni Rolla. 2022. Faces and situational Agency. Topoi 41: 659–70. [Google Scholar] [CrossRef]
  19. Dencik, Lina, Arne Hintz, and Jonathan Cable. 2016. Towards data justice? The ambiguity of anti-surveillance resistance in political activism. Big Data & Society 3: 1–12. [Google Scholar] [CrossRef]
  20. Dencik, Lina, Arne Hintz, and Zoe Carey. 2018. Prediction, pre-emption and limits to dissent: Social media and big data uses for policing protests in the United Kingdom. New Media & Society 20: 1433–50. [Google Scholar] [CrossRef]
  21. Dencik, Lina, Arne Hintz, Joanna Redden, and Emiliano Treré. 2019. Exploring data justice: Conceptions, applications and directions. Information, Communication & Society 22: 873–81. [Google Scholar]
  22. DeVito, Michael Ann. 2021. Adaptive folk theorization as a path to algorithmic literacy on changing platforms. Proceedings of the ACM Human-Computer Interaction 5: 1–38. [Google Scholar] [CrossRef]
  23. DeVito, Michael Ann. 2022. How Transfeminine TikTok Creators Navigate the Algorithmic Trap of Visibility Via Folk Theorization. Proceedings of the ACM on Human-Computer Interaction 6: 1–31. [Google Scholar] [CrossRef]
  24. Dutton, William H., and Mark Graham. 2019. Introduction. In Society and the Internet: How Networks of Information and Communication Are Changing Our Lives, 2nd ed. Edited by Mark Graham and William H. Dutton. Oxford: Oxford University Press. [Google Scholar] [CrossRef]
  25. Ersanlı, Kurtman, and Esat Şanlı. 2015. Self-perceived identity scale: A scale development study. American International Journal of Social Science 4: 184–94. [Google Scholar]
  26. Eslami, Motahhare, Karrie Karahalios, Christian Sandvig, Kristen Vaccaro, Aimee Rickman, Kevin Hamilton, and Alex Kirlik. 2016. First I “like” it, then I hide it: Folk Theories of Social Feeds. Paper presented at the 2016 CHI Conference on Human Factors in Computing Systems, San Jose, CA, USA, May 7–12. [Google Scholar]
  27. French, Megan Rebecca. 2018. Algorithmic Mirrors: An Examination of How Personalized Recommendations Can Shape Self-Perceptions and Reinforce Gender Stereotypes. Ph.D. dissertation, Stanford University, Stanford, CA, USA. Available online: https://www.proquest.com/dissertations-theses/algorithmic-mirrors-examination-how-personalized/docview/2436884514/se-2?accountid=14026 (accessed on 23 May 2023).
  28. González-Anta, Baltasar, Virginia Orengo, Ana Zornoza, Vicente Peñarroja, and Vicente Martínez-Tur. 2021. Understanding the sense of community and continuance intention in virtual communities: The role of commitment and type of community. Social Science Computer Review 39: 335–52. [Google Scholar] [CrossRef]
  29. Gran, Anne-Britt, Peter Booth, and Taina Bucher. 2021. To Be or Not to Be Algorithm Aware: A Question of a New Digital Divide? Information, Communication & Society 24: 1779–96. [Google Scholar] [CrossRef]
  30. Gu, Li, Xun Gao, and Yong Li. 2022. What drives me to use TikTok: A latent profile analysis of users’ motives. Frontiers in Psychology 13: 992824. [Google Scholar] [CrossRef]
  31. Hearn, Alison. 2017. Verified: Self-presentation, identity management, and selfhood in the age of big data. Popular Communication 15: 62–77. [Google Scholar] [CrossRef]
  32. Hearn, Alison. 2019. Verified: Self-Presentation, Identity Management, and Selfhood in the Age of Big Data. In Self-(Re)presentation Now. London: Routledge. [Google Scholar]
  33. Holone, Harald. 2016. The filter bubble and its effect on online personal health information. Croatian Medical Journal 57: 298–301. [Google Scholar] [CrossRef] [PubMed]
  34. Hynes, Niki, and Juliette Wilson. 2016. I do it, but don’t tell anyone! Personal values, personal and social norms: Can social media play a role in changing pro-environmental behaviours? Technological Forecasting and Social Change 111: 349–59. [Google Scholar] [CrossRef]
  35. Karizat, Nadia, Dan Delmonaco, Motahhare Eslami, and Nazanin Andalibi. 2021. Algorithmic Folk Theories and Identity: How TikTok Users Co-Produce Knowledge of Identity and Engage in Algorithmic Resistance. Proceedings of the ACM on Human-Computer Interaction 5: 1–44. [Google Scholar] [CrossRef]
  36. Klug, Daniel, Yiluo Qin, Morgan Evans, and Geoff Kaufman. 2021. Trick and Please. A Mixed-Method Study On User Assumptions About the TikTok Algorithm. Paper presented at the 13th ACM Web Science Conference 2021 (WebSci’21), Virtual Event, Online, June 21–25; New York: Association for Computing Machinery, pp. 84–92. [Google Scholar] [CrossRef]
  37. Lee, Angela Y., Hannah Mieczkowski, Nicole B. Ellison, and Jeffrey T. Hancock. 2022. The Algorithmic Crystal: Conceptualizing the Self through Algorithmic Personalization on TikTok. Proceedings of the ACM on Human-Computer Interaction 6: 543. [Google Scholar] [CrossRef]
  38. Lutz, Christoph, and Christian Pieter Hoffmann. 2019. How Privacy Concerns and Social Media Platform Use Affect Online Political Participation in Germany. Paper presented at Weizenbaum Conference, Berlin, Germany, May 16–17. [Google Scholar]
  39. MacGowan, Amalie. 2020. The TikTok Algorithm Knew My Sexuality Better than I Did. Available online: https://repeller.com/tiktokalgorithm-bisexual/ (accessed on 22 May 2023).
  40. Menary, Richard. 2010. Cognitive Integration and the Extended Mind. In The Extended Mind. Edited by Richard Menary. Cambridge: MIT Press. [Google Scholar] [CrossRef]
  41. Naslund, John A., Ameya Bondre, John Torous, and Kelly A. Aschbrenner. 2020. Social Media and Mental Health: Benefits, Risks, and Opportunities for Research and Practice. Journal of Technology in Behavioral Science 5: 245–57. [Google Scholar] [CrossRef]
  42. Øllgard, Daniel Moller. 2023. The Social Power of AI: Algorithmic Imaginaries & Social Media. Recording from AI Lund Lunch Seminar 23 March 2022. Available online: https://www.ai.lu.se/2022-03-23 (accessed on 27 May 2023).
  43. Ostic, Dragana, Sikandar Ali Qalati, Belem Barbosa, Syed Mir Muhammad Shah, Esthela Galvan Vela, Ahmed Muhammad Herzallah, and Feng Liu. 2021. Effects of Social Media Use on Psychological Well-Being: A Mediated Model. Frontiers in Psychology 12: 678766. [Google Scholar] [CrossRef]
  44. Papacharissi, Zizi. 2011. A networked self. In A Networked Self: Identity, Community, and Culture on Social Network Sites. Edited by Zizi Papacharissi. London: Routledge, pp. 304–18. [Google Scholar]
  45. Sasahara, Kazutoshi, Wen Chen, Hao Peng, Giovanni Luca Ciampaglia, Alessandro Flammini, and Filippo Menczer. 2021. Social influence and unfollowing accelerate the emergence of echo chambers. Journal of Computational Social Science 4: 381–402. [Google Scholar] [CrossRef]
  46. Scalvini, Marco. 2020. Negotiating morality and ethics: The post-Millennial perspective on TikTok. Advance Preprint. [Google Scholar] [CrossRef]
  47. Siles, Ignacio, Andrés Segura-Castillo, Ricardo Solís, and Mónica Sancho. 2020. Folk theories of algorithmic recommendations on Spotify: Enacting data assemblages in the global South. Big Data & Society 7: 2053951720923377. [Google Scholar] [CrossRef]
  48. Silva, Amila, Pei-Chi Lo, and Ee Peng Lim. 2020. On Predicting Personal Values of Social Media Users using Community-Specific Language Features and Personal Value Correlation. International Conference on Web and Social Media 15: 680–90. [Google Scholar] [CrossRef]
  49. Simpson, Ellen, and Bryan C. Semaan. 2021. For You, or For “You”?: Everyday LGBTQ+ Encounters with TikTok. Proceedings of the ACM on Human-Computer Interaction 4: 1–34. [Google Scholar] [CrossRef]
  50. Smith, Troy, and Andy Short. 2022. Needs affordance as a key factor in likelihood of problematic social media use: Validation, latent Profile analysis and comparison of TikTok and Facebook problematic use measures. Addictive Behaviors 129: 107259. [Google Scholar] [CrossRef] [PubMed]
  51. Strimbu, Nicole, and Michael O’Connell. 2019. The Relationship Between Self-Concept and Online Self-Presentation in Adults. Cyberpsychology, Behavior and Social Networking 22: 804–7. [Google Scholar] [CrossRef]
  52. Swart, Joëlle. 2021. Experiencing Algorithms: How Young People Understand, Feel About, and Engage With Algorithmic News Selection on Social Media. Social Media + Society 7: 20563051211008828. [Google Scholar] [CrossRef]
  53. Swart, Joëlle, Chris Peters, and Marcel Broersma. 2018. Shedding Light on the Dark Social: The Connective Role of News and Journalism in Social Media Communities. New Media & Society 20: 4329–45. [Google Scholar]
  54. Taylor, Samuel Hardman, and Mina Choi. 2022. An Initial Conceptualization of Algorithm Responsiveness: Comparing Perceptions of Algorithms Across Social Media Platforms. Social Media + Society 8: 20563051221144322. [Google Scholar] [CrossRef]
  55. Thumim, Nancy. 2012. Self-Representation and Digital Culture. Berlin and Heidelberg: Springer. [Google Scholar]
  56. Tiffany, Kaitlyn. 2021. I’m Scared of the Person TikTok Thinks I Am. Available online: https://www.theatlantic.com/technology/archive/2021/06/your-tiktok-feed-embarrassing/619257/ (accessed on 23 May 2023).
  57. Tiidenberg, Katrin, and Andrew Whelan. 2017. Sick bunnies and pocket dumps: “Not-selfies” and the genre of self-representation. Popular Communication 15: 141–53. [Google Scholar] [CrossRef]
  58. Ytre-Arne, Brita. 2023. Introduction: Media Use and Everyday Life in Digital Societies. In Media Use in Digital Everyday Life. Bingley: Emerald Publishing Limited, pp. 1–16. [Google Scholar] [CrossRef]
  59. Ytre-Arne, Brita, and Hallvard Moe. 2021. Folk theories of algorithms: Understanding digital irritation. Media, Culture & Society 43: 807–24. [Google Scholar] [CrossRef]