Early Detection of Mental Health Crises through Artificial-Intelligence-Powered Social Media Analysis: A Prospective Observational Study
Abstract
1. Introduction
- Depressive episodes: characterized by persistent feelings of sadness, hopelessness, and loss of interest in daily activities.
- Manic episodes: marked by abnormally elevated mood, increased energy, and impulsive behavior.
- Suicidal ideation: thoughts about taking one’s own life, ranging from fleeting considerations to detailed planning.
- Anxiety crises: intense periods of fear or panic that may be accompanied by physical symptoms and may significantly impact daily functioning.
Novelty and Significance of the Study
- Multimodal approach: Unlike previous studies, which primarily focused on textual analysis, our model integrates linguistic and temporal patterns, providing a more comprehensive analysis of social media data.
- Multilingual capability: We address a critical gap in existing research by developing a model that performs consistently across multiple languages (English, Spanish, Mandarin, and Arabic), enhancing its potential for global application.
- Early detection focus: While many studies have concentrated on identifying specific mental health conditions, our research emphasizes the early detection of impending crises, potentially allowing for more timely interventions.
- Detection of diverse crisis types: Our model is designed to identify a range of mental health crises, including depressive episodes, manic episodes, suicidal ideation, and anxiety crises, offering a more comprehensive tool for mental health monitoring.
- Integration of ethical considerations: We explicitly address and evaluate ethical challenges throughout the development and testing of our model, contributing to the ongoing discourse on responsible AI use in mental health contexts.
2. Materials and Methods
2.1. Detailed Data Collection Process
- Only public posts were collected, excluding any private or restricted content.
- User identifiers were immediately hashed and anonymized upon collection (a minimal sketch of this step follows this list).
- Posts were filtered to remove any identifying information, such as names or contact details.
- Data were stored in an encrypted database with access restricted to authorized research team members.
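To make the anonymization step concrete, the following minimal Python sketch shows one way the salted hashing of user identifiers and the scrubbing of identifying details could be implemented. The salt handling, field names, and regular expressions are illustrative assumptions, not the exact pipeline used in this study.

```python
import hashlib
import re

# Hypothetical project-level salt; in practice this would be stored as a secret,
# never in version control.
SALT = "replace-with-a-secret-project-salt"

def hash_user_id(user_id: str) -> str:
    """One-way salted hash so the original identifier cannot be recovered."""
    return hashlib.sha256((SALT + user_id).encode("utf-8")).hexdigest()

def scrub_text(text: str) -> str:
    """Replace common identifying patterns (mentions, emails, phone numbers, links)."""
    text = re.sub(r"@\w+", "[USER]", text)                    # platform mentions
    text = re.sub(r"\S+@\S+\.\S+", "[EMAIL]", text)           # email addresses
    text = re.sub(r"\+?\d[\d\s().-]{7,}\d", "[PHONE]", text)  # phone-like numbers
    text = re.sub(r"https?://\S+", "[URL]", text)             # URLs
    return text

def anonymize_post(post: dict) -> dict:
    """Map a raw public post to the anonymized record stored in the encrypted database."""
    return {
        "user_hash": hash_user_id(post["user_id"]),
        "text": scrub_text(post["text"]),
        "timestamp": post["timestamp"],
        "platform": post["platform"],
    }
```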
2.2. Multimodal Deep Learning Approach
- Text analysis: We used the Transformers library [27] to implement a Bidirectional Encoder Representations from Transformers (BERT) model for natural language processing. This component analyzes the linguistic content of social media posts.
- Temporal analysis: Long short-term memory (LSTM) networks were implemented using PyTorch to capture temporal patterns in posting behavior. This component analyzes the frequency, timing, and sequence of posts.
- Multi-head attention mechanism: We implemented a custom attention mechanism inspired by Vaswani et al. [28] to integrate insights from the textual and temporal analyses; a sketch of how the three components fit together follows this list.
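The sketch below, written in PyTorch with the Transformers library, illustrates how the three components can be wired together: a pretrained text encoder, an LSTM over per-post temporal features, and a multi-head attention layer that lets the text representation attend over the temporal sequence. The checkpoint name, layer sizes, number of heads, and output classes are illustrative assumptions rather than the exact architecture reported in this study.

```python
import torch.nn as nn
from transformers import AutoModel

class CrisisDetector(nn.Module):
    """Illustrative multimodal model: text encoder + temporal LSTM + attention fusion."""

    def __init__(self, text_model="bert-base-multilingual-cased",
                 temporal_dim=8, hidden_dim=768, n_classes=5):
        super().__init__()
        self.text_encoder = AutoModel.from_pretrained(text_model)   # linguistic content
        self.temporal_encoder = nn.LSTM(temporal_dim, hidden_dim,
                                        batch_first=True)           # posting frequency/timing features
        self.fusion = nn.MultiheadAttention(hidden_dim, num_heads=8,
                                            batch_first=True)       # cross-modal attention
        self.classifier = nn.Linear(hidden_dim, n_classes)          # e.g., four crisis types + "no crisis" (assumed)

    def forward(self, input_ids, attention_mask, temporal_features):
        # Pooled [CLS]-style representation of the post text, shape (batch, 1, hidden_dim)
        text_h = self.text_encoder(input_ids=input_ids,
                                   attention_mask=attention_mask).last_hidden_state[:, :1, :]
        # LSTM outputs over the sequence of temporal feature vectors, shape (batch, steps, hidden_dim)
        temporal_seq, _ = self.temporal_encoder(temporal_features)
        # The text representation queries the temporal sequence, yielding one fused vector per example
        fused, _ = self.fusion(query=text_h, key=temporal_seq, value=temporal_seq)
        return self.classifier(fused.squeeze(1))
```

Using the text representation as the attention query keeps the linguistic signal central while letting behavioral context modulate it; other fusion orders (temporal-as-query, or symmetric cross-attention) are equally plausible readings of the description above.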
2.3. Exploratory Factor Analysis
2.4. Ethical Safeguards
3. Results
3.1. Model Performance
3.1.1. Overall Performance
3.1.2. Performance by Language and Social Media Platform
3.2. Digital Markers of Mental Health Crises
Linguistic, Behavioral, and Temporal Markers
- Linguistic markers: We observed an increased use of first-person singular pronouns (e.g., “I”, “me”, “myself”) in posts preceding a crisis, consistent with previous findings linking self-focused language to depressive symptoms [31]. There was also a higher frequency of negative emotion words and a decrease in linguistic diversity, as measured by the type–token ratio (see the sketch following this list for how such markers can be quantified). Interestingly, we noted sudden changes in sentiment polarity within short time frames, often preceding manic or mixed episodes.
- Behavioral markers: Significant changes in posting frequency were strong indicators of potential crises. We found that both substantial increases (>50% from baseline) and decreases in posting activity were associated with higher crisis risk. Shifts in posting time patterns, particularly increases in late-night activity, were also notable predictors. Additionally, we observed reduced engagement with other users, manifesting as fewer replies, likes, or shares, often preceding depressive episodes.
- Temporal patterns: Our model identified cyclical patterns in mood-related language, particularly in cases later diagnosed as bipolar disorder. We also noted a gradual increase in expressions of hopelessness or worthlessness over time, often preceding major depressive episodes or suicidal ideation.
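Several of these markers reduce to simple quantitative features. The sketch below shows, under an assumed tokenization scheme and the >50% threshold mentioned above, how the first-person pronoun rate, type–token ratio, and relative change in posting frequency from a personal baseline might be computed; function names and thresholds are illustrative, not the study's exact feature definitions.

```python
import re

FIRST_PERSON = {"i", "me", "my", "mine", "myself"}

def tokenize(text: str) -> list[str]:
    """Lowercase word tokenizer (assumed; the study's tokenization may differ)."""
    return re.findall(r"[a-z']+", text.lower())

def first_person_rate(text: str) -> float:
    """Share of tokens that are first-person singular pronouns."""
    tokens = tokenize(text)
    return sum(t in FIRST_PERSON for t in tokens) / len(tokens) if tokens else 0.0

def type_token_ratio(text: str) -> float:
    """Lexical diversity: unique tokens over total tokens (lower = less diverse)."""
    tokens = tokenize(text)
    return len(set(tokens)) / len(tokens) if tokens else 0.0

def posting_frequency_change(recent_count: float, baseline_count: float) -> float:
    """Relative change in posting frequency vs. the user's own baseline;
    an absolute value above 0.5 corresponds to the >50% shift discussed above."""
    if baseline_count == 0:
        return 0.0
    return (recent_count - baseline_count) / baseline_count
```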
3.3. Model Insights
Crisis Type Detection
3.4. Error Analysis
3.4.1. False Positives
3.4.2. False Negatives
3.5. External Validation Results
4. Discussion
4.1. Summary of Key Findings
4.2. Comparison with Existing Literature
4.3. Implications for Mental Health Practice and Policy
4.4. Limitations and Ethical Considerations
- Reliance on public social media posts: Our study only analyzed publicly available posts, which may not represent all individuals experiencing mental health crises. Those who maintain private accounts or are less active on social media may be under-represented.
- Potential for bias: The model’s performance showed a slight decrease in accuracy when analyzing posts from non-Western cultural contexts, highlighting the need for more diverse training data and ongoing cultural adaptation.
- Privacy and consent: While we took measures to anonymize data, the use of social media data for mental health monitoring raises important questions about user privacy and consent. Future implementations must carefully consider balancing public health benefits with individual privacy rights.
- Risk of stigmatization: Our ethical audit revealed that 7% of flagged posts contained content that could lead to unintended stigmatization if misinterpreted. This underscores the need for human oversight and careful interpretation of AI-generated alerts.
- Interpretation and intervention challenges: While our model can detect the early signs of crises, translating these detections into effective interventions remains challenging. There is a risk of false positives leading to unnecessary interventions or false negatives missing critical cases.
4.5. Practical Applications and Intervention Strategies
- Early warning system for mental health professionals: Our model could serve as an early warning system for mental health services. When the AI detects signs of an impending crisis, it could alert designated mental health professionals. These professionals would then review the flagged content and decide whether further action is necessary. This human-in-the-loop approach ensures that clinical judgment is always part of decision making.
- Integration with existing crisis response systems: Social media platforms could integrate our AI model into their existing systems for flagging concerning content. When the AI identifies potential crisis indicators, it could trigger the platform’s standard protocols for connecting users with crisis support resources or helplines.
- Targeted public health campaigns: On a broader scale, our system could identify emerging trends in mental health crises within specific communities or demographics. This information could guide public health officials in designing and implementing targeted mental health awareness campaigns or allocating resources to the areas of greatest need.
- Opt-in monitoring for at-risk individuals: An opt-in service could be developed for individuals with a history of mental health crises. With explicit user consent, the AI could monitor their social media activity and alert designated caregivers or mental health providers if concerning patterns emerge.
- Enhanced screening in clinical settings: With patient consent, our tool could provide additional context to mental health professionals in clinical settings by analyzing a patient’s social media history. This could offer insights into patterns or triggers that may not be apparent in traditional clinical interactions.
- Self-monitoring tools: A version of our AI model could be developed into a personal mental health monitoring app. Users could voluntarily track their social media patterns and receive personalized insights and resources about their mental well-being.
4.6. Future Research Directions
- Longitudinal studies to assess the long-term impact of early AI-enabled intervention on mental health outcomes.
- Investigation of how to ethically integrate AI-detected crisis signals with existing mental health services and crisis response systems.
- Development of personalized models that can account for individual baseline behaviors and cultural contexts.
- Exploration of multimodal analysis incorporating image and video data, which were excluded from the current study.
- Research into user perspectives on AI-powered mental health monitoring, including issues of consent, privacy, and perceived benefits.
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- World Health Organization. Global Burden of Mental Disorders and the Need for a Comprehensive, Coordinated Response from Health and Social Sectors at the Country Level; WHO: Geneva, Switzerland, 2023. [Google Scholar]
- Smith, K.A.; Blease, C.; Faurholt-Jepsen, M.; Firth, J.; Van Daele, T.; Moreno, C.; Carlbring, P.; Ebner-Priemer, U.W.; Koutsouleris, N.; Riper, H.; et al. Digital mental health: Challenges and next steps. BMJ Ment. Health 2023, 26, e300670. [Google Scholar] [CrossRef] [PubMed]
- Skaik, R.; Inkpen, D. Using social media for mental health surveillance: A review. ACM Comput. Surv. (CSUR) 2020, 53, 1–31. [Google Scholar] [CrossRef]
- Pew Research Center. Social Media Use in 2023; Pew Research Center: Washington, DC, USA, 2023. [Google Scholar]
- Azucar, D.; Marengo, D.; Settanni, M. Predicting the Big 5 Personality Traits from Digital Footprints on Social Media: A Meta-analysis. Pers. Individ. Differ. 2018, 124, 150–159. [Google Scholar] [CrossRef]
- Berryman, C.; Ferguson, C.J.; Negy, C. Social media use and mental health among young adults. Psychiatr. Q. 2018, 89, 307–314. [Google Scholar] [CrossRef]
- Graham, S.; Depp, C.; Lee, E.E.; Nebeker, C.; Tu, X.; Kim, H.C.; Jeste, D.V. Artificial intelligence for mental health and mental illnesses: An overview. Curr. Psychiatry Rep. 2019, 21, 1–8. [Google Scholar]
- Babu, N.V.; Kanaga, E.G. Sentiment analysis in social media data for depression detection using artificial intelligence: A review. SN Comput. Sci. 2022, 3, 74. [Google Scholar] [CrossRef]
- Laacke, S.; Mueller, R.; Schomerus, G.; Salloch, S. Artificial Intelligence, Social Media and Depression. A New Concept of Health-Related Digital Autonomy. Am. J. Bioeth. 2021, 21, 4–20. [Google Scholar] [CrossRef]
- Owusu, P.N.; Reininghaus, U.; Koppe, G.; Dankwa-Mullan, I.; Bärnighausen, T. Artificial intelligence applications in social media for depression screening: A systematic review protocol for content validity processes. PLoS ONE 2021, 16, e0259499. [Google Scholar] [CrossRef]
- Martins, R.; Almeida, J.; Henriques, P.; Novais, P. Identifying Depression Clues using Emotions and AI. In Proceedings of the 13th International Conference on Agents and Artificial Intelligence (ICAART), Online, 4–6 February 2021; Volume 2, pp. 1137–1143. [Google Scholar]
- Spruit, M.; Verkleij, S.; de Schepper, K.; Scheepers, F. Exploring Language Markers of Mental Health in Psychiatric Stories. Appl. Sci. 2022, 12, 2179. [Google Scholar] [CrossRef]
- Di Blasi, M.; Salerno, L.; Albano, G.; Caci, B.; Esposito, G.; Salcuni, S.; Gelo, O.C.; Mazzeschi, C.; Merenda, A.; Giordano, C.; et al. A three-wave panel study on longitudinal relations between problematic social media use and psychological distress during the COVID-19 pandemic. Addict. Behav. 2022, 134, 107430. [Google Scholar] [CrossRef] [PubMed]
- Linthicum, K.P.; Schafer, K.M.; Ribeiro, J.D. Machine learning in suicide science: Applications and ethics. Behav. Sci. Law 2019, 37, 214–222. [Google Scholar] [CrossRef] [PubMed]
- Jeste, D.V.; Alexopoulos, G.S.; Bartels, S.J.; Cummings, J.L.; Gallo, J.J.; Gottlieb, G.L.; Halpain, M.C.; Palmer, B.W.; Patterson, T.L.; Reynolds, C.F.; et al. Consensus statement on the upcoming crisis in geriatric mental health: Research agenda for the next 2 decades. Arch. Gen. Psychiatry 1999, 56, 848–853. [Google Scholar] [CrossRef] [PubMed]
- Kovács, G.; Alonso, P.; Saini, R. Challenges of hate speech detection in social media: Data scarcity, and leveraging external resources. SN Comput. Sci. 2021, 2, 95. [Google Scholar] [CrossRef]
- Geetha, G.; Saranya, G.; Chakrapani, K.; Ponsam, J.G.; Safa, M.; Karpagaselvi, S. Early detection of depression from social media data using machine learning algorithms. In Proceedings of the 2020 International Conference on Power, Energy, Control and Transmission Systems (ICPECTS), Chennai, India, 10–11 December 2020; pp. 1–6. [Google Scholar]
- Smys, S.; Raj, J.S. Analysis of deep learning techniques for early detection of depression on social media network-a comparative study. J. Trends Comput. Sci. Smart Technol. (TCSST) 2021, 3, 24–39. [Google Scholar]
- Lawrence, H.R.; Schneider, R.A.; Rubin, S.B.; Matarić, M.J.; McDuff, D.J.; Bell, M.J. The opportunities and risks of large language models in mental health. JMIR Ment. Health 2024, 11, e59479. [Google Scholar] [CrossRef]
- Benrimoh, D.; Fisher, V.; Mourgues, C.; Sheldon, A.D.; Smith, R.; Powers, A.R. Barriers and solutions to the adoption of translational tools for computational psychiatry. Mol. Psychiatry 2023, 28, 2189–2196. [Google Scholar] [CrossRef]
- American Psychiatric Association. Diagnostic and Statistical Manual of Mental Disorders: DSM-5, 5th ed.; American Psychiatric Association: Washington, DC, USA, 2013. [Google Scholar]
- CrowdTangle Team. CrowdTangle; Facebook: Menlo Park, CA, USA, 2023. Available online: https://www.crowdtangle.com (accessed on 22 May 2024).
- Twitter, Inc. Twitter API v2. 2023. Available online: https://developer.twitter.com/en/docs/twitter-api (accessed on 22 May 2024).
- Reddit, Inc. Reddit API. 2023. Available online: https://www.reddit.com/dev/api/ (accessed on 10 January 2024).
- Van Rossum, G.; Drake, F.L. Python 3 Reference Manual; CreateSpace: Scotts Valley, CA, USA, 2009. [Google Scholar]
- Paszke, A.; Gross, S.; Massa, F.; Lerer, A.; Bradbury, J.; Chanan, G.; Killeen, T.; Lin, Z.; Gimelshein, N.; Antiga, L.; et al. PyTorch: An Imperative Style, High-Performance Deep Learning Library. Adv. Neural Inf. Process. Syst. 2019, 32, 8026–8037. [Google Scholar]
- Wolf, T.; Debut, L.; Sanh, V.; Chaumond, J.; Delangue, C.; Moi, A.; Cistac, P.; Rault, T.; Louf, R.; Funtowicz, M.; et al. Transformers: State-of-the-Art Natural Language Processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, Online, 16–20 November 2020; pp. 38–45. [Google Scholar]
- Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, L.; Polosukhin, I. Attention Is All You Need. Adv. Neural Inf. Process. Syst. 2017, 30, 5998–6008. [Google Scholar]
- Conneau, A.; Khandelwal, K.; Goyal, N.; Chaudhary, V.; Wenzek, G.; Guzman, F.; Grave, E.; Ott, M.; Zettlemoyer, L.; Stoyanov, V. Unsupervised Cross-lingual Representation Learning at Scale. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, Online, 5–10 July 2020; pp. 8440–8451. [Google Scholar]
- Factor Analyzer Contributors. Factor Analyzer [Computer Software]. 2023. Available online: https://pypi.org/project/factor-analyzer/ (accessed on 25 May 2024).
- Zimmermann, J.; Brockmeyer, T.; Hunn, M.; Schauenburg, H.; Wolf, M. First-person pronoun use in spoken language as a predictor of future depressive symptoms: Preliminary evidence from a clinical sample of depressed patients. Clin. Psychol. Psychother. 2017, 24, 384–391. [Google Scholar] [CrossRef]
- De Choudhury, M.; Gamon, M.; Counts, S.; Horvitz, E. Predicting depression via social media. In Proceedings of the International AAAI Conference on Web and Social Media, Cambridge, MA, USA, 8–11 July 2013; Volume 7, pp. 128–137. [Google Scholar]
- Eichstaedt, J.C.; Smith, R.J.; Merchant, R.M.; Ungar, L.H.; Crutchley, P.; Preoţiuc-Pietro, D.; Asch, D.A.; Schwartz, H.A. Facebook language predicts depression in medical records. Proc. Natl. Acad. Sci. USA 2018, 115, 11203–11208. [Google Scholar] [CrossRef] [PubMed]
- Ji, S.; Yu, C.P.; Fung, S.F.; Pan, S.; Long, G. Supervised learning for suicidal ideation detection in online user content. Complexity 2018, 2018, 6157249. [Google Scholar] [CrossRef]
- Shen, G.; Jia, J.; Nie, L.; Feng, F.; Zhang, C.; Hu, T.; Chua, T.S.; Zhu, W. Depression detection via harvesting social media: A multimodal dictionary learning solution. In Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence (IJCAI-17), Melbourne, Australia, 19–25 August 2017; pp. 3838–3844. [Google Scholar]
- Coppersmith, G.; Dredze, M.; Harman, C. Quantifying mental health signals in Twitter. In Proceedings of the Workshop on Computational Linguistics and Clinical Psychology: From Linguistic Signal to Clinical Reality, Baltimore, MD, USA, 27 June 2014; pp. 51–60. [Google Scholar]
- Bantjes, J.; Iemmi, V.; Coast, E.; Channer, K.; Leone, T.; McDaid, D.; Palfreyman, A.; Stephens, B.; Lund, C. Poverty and suicide research in low-and middle-income countries: Systematic mapping of literature published in English and a proposed research agenda. Glob. Ment. Health 2016, 3, e32. [Google Scholar] [CrossRef] [PubMed]
- Onnela, J.P.; Rauch, S.L. Harnessing smartphone-based digital phenotyping to enhance behavioral and mental health. Neuropsychopharmacology 2016, 41, 1691–1696. [Google Scholar] [CrossRef] [PubMed]