Breaking (Fake) News: No Personal Relevance Effect on Misinformation Vulnerability
Abstract
1. Introduction
2. Methods
2.1. Participants
2.2. Material
2.3. Procedure
2.4. Data Analysis
- For the reference model based on the political news data, we fitted a Bayesian multiple regression model to predict either truth discernment or overall belief from 14 candidate predictors.
- After fitting the reference model, we used predictive projection to find the smallest possible submodel that would predict belief in FN (truth discernment or overall belief) almost as well as the reference model.
- The submodel selected from the political news data was then validated on the COVID-19 news data. We considered three different scenarios [25], described below; a code sketch of the reference-model fit and submodel selection follows the list of scenarios.
- Scenario 1 (the null scenario). The model is completely accurate, that is, both the model structure (i.e., the specification of the predictors) and the parameter set θ0 are accurate. In other words, for predicting truth discernment or overall belief in the COVID-19 news data, the model uses the same subset of predictors and the same coefficients that were estimated from the political news data.
- Scenario 2 (faulty prior information on the model’s parameters). The model structure is accurate, but information on one or more of the parameters is erroneous. In other words, for predicting truth discernment or overall belief in the COVID-19 news data, the model uses the same subset of predictors that was selected from the political news data, but different coefficients.
- Scenario 3 (faulty model structure). The model structure itself is wrong, which means that the best subset of predictors selected from the political news data does not correspond to the best subset that can be selected from the COVID-19 news data.
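To make this workflow concrete, the following R sketch shows how the reference model and the projected submodel could be obtained. The references cite brms; projpred is the standard R implementation of predictive projection, but the paper does not name it, so its use here is an assumption, and the object names (`political`, `truth_discernment`) are hypothetical placeholders rather than the authors' actual code.

```r
# Minimal sketch of the reference-model fit and predictive projection,
# assuming brms (cited in the references) and projpred (an assumption).
# `political` and `truth_discernment` are hypothetical placeholder names;
# the data frame is assumed to hold only the outcome and the 14 candidates.
library(brms)
library(projpred)

candidates <- setdiff(names(political), "truth_discernment")  # 14 predictors

# Reference model: Bayesian multiple regression on all 14 candidates.
ref_fit <- brm(
  reformulate(candidates, response = "truth_discernment"),
  data = political, family = gaussian()
)

# Forward search over submodels; projection minimizes the Kullback-Leibler
# divergence from the reference model to each candidate submodel.
vs <- cv_varsel(ref_fit, method = "forward")

# Smallest submodel whose predictive performance is within 1 SE of the
# reference model (the 1 SE rule used in the paper).
n_sel    <- suggest_size(vs, stat = "elpd")
sub_proj <- project(vs, nterms = n_sel)

plot(vs, stats = "elpd")   # performance vs. submodel size (cf. Figure 1A)
```

Replacing the outcome with overall belief gives the corresponding selection reported in Section 3.2.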
3. Results
3.1. External Validation for News Truth Discernment
- For the political news data, we fitted a Bayesian multiple regression reference model to predict news truth discernment from 14 candidate predictors. For this model, we obtained a Bayesian R2 = 0.35, 95% CI [0.29, 0.41] and a model standard deviation = 0.81, 95% CI [0.76, 0.87], which indicate a moderately good predictive performance—see Supplementary Material for details.
- By using predictive projection (i.e., by minimizing the Kullback–Leibler divergence from the reference model to the projected submodel), we found that a submodel including only eight covariates produced a predictive performance similar (according to the 1 SE rule) to that of the reference model with all 14 covariates (Figure 1A). The 1 SE submodel had a Bayesian R2 of 0.33, 95% CI [0.27, 0.39] and a model standard deviation of 0.81, 95% CI [0.76, 0.87]. The predicted values of the 1 SE submodel were strongly associated with those of the reference model (Pearson r = 0.98; see Figure 1B). The eight predictors in the optimal submodel, in the order in which they entered the submodel, were age, gender, education, political orientation, RWAS conventionalism, FNSS conspiratorial beliefs, FNSS critical news consumption, and MIS.
- The crucial step of the statistical analysis was to evaluate the predictive performance of the selected submodel by external validation; a code sketch of this validation follows this list. The eight covariates selected from the political news data were used as predictors of truth discernment in the COVID-19 news data set. Under Scenario 1 (accuracy of both the model structure and the parameter set), external validation failed: the expected log predictive density computed by leave-one-out cross-validation (ELPD LOO) was substantially worse for the COVID-19 news data than for the political news data, Δ ELPD LOO = 994.8 ± 25.7 SE. Under Scenario 2 (accuracy of the model structure but not of the parameter set), by contrast, the predictive performance of the selected submodel generalized well to the new data set. For the COVID-19 news data, we found no evidence of a decrease in predictive accuracy when using the eight predictors chosen from the political news data rather than the whole set of 14 predictors, Δ ELPD LOO = −0.02 ± 3.34 SE. For the projected model (eight predictors selected from the political news data), the Bayesian R2 = 0.32, 95% CI [0.27, 0.37] and the model standard deviation = 0.75, 95% CI [0.71, 0.79] (see Figure 2). When using all 14 predictors on the COVID-19 news data, the Bayesian R2 = 0.34, 95% CI [0.29, 0.38] and the model standard deviation = 0.75, 95% CI [0.71, 0.79]. Therefore, for the COVID-19 news data, the performance measures (Δ ELPD LOO, Bayesian R2) of the projected model (predictors chosen from the political news data) are well within the uncertainty bounds of the complete model (14 predictors; see Supplementary Material for details).
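As referenced above, the two scenarios can be sketched as follows, again assuming brms/projpred and hypothetical object names (`covid` for the second data set, `selected8` for the eight selected predictor names, `candidates` for all 14). Note that the paper reports Δ ELPD LOO values; the Scenario 1 lines below compute the plain out-of-sample ELPD of the political-news fit on the COVID-19 data, which is the simpler analogue of that comparison.

```r
# External validation sketch (hypothetical object names; see the earlier sketch).
library(brms)

# Scenario 1: same eight predictors AND same coefficients (posterior draws
# from the political-news submodel), evaluated on the COVID-19 news data.
sub_fit_pol <- brm(reformulate(selected8, response = "truth_discernment"),
                   data = political, family = gaussian())
ll_covid <- log_lik(sub_fit_pol, newdata = covid)   # draws x observations
elpd_s1  <- sum(log(colMeans(exp(ll_covid))))       # out-of-sample ELPD
                                                    # (a log-sum-exp would be more stable)

# Scenario 2: same eight predictors, coefficients re-estimated on the
# COVID-19 data, compared against the full 14-predictor model on that data.
sub_fit_cov  <- brm(reformulate(selected8,  response = "truth_discernment"),
                    data = covid, family = gaussian())
full_fit_cov <- brm(reformulate(candidates, response = "truth_discernment"),
                    data = covid, family = gaussian())
loo_compare(loo(full_fit_cov), loo(sub_fit_cov))  # Delta ELPD LOO close to 0
bayes_R2(sub_fit_cov)                             # Bayesian R2 as reported above
```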
3.2. External Validation for Overall News Belief
- The Bayesian regression model with 14 covariates (see above) provided the reference model for the political news data; Bayesian R2 = 0.11, 95% CI [0.06, 0.16], model standard deviation = 1.01, 95% CI [0.95, 1.08]. The 95% CI did not include zero for four covariates: age, β = 0.10, 95% CI [0.00, 0.21]; education, β = 0.14, 95% CI [0.04, 0.24]; MIS, β = 0.15, 95% CI [0.01, 0.29]; and FNSS self-belief, β = 0.23, 95% CI [0.13, 0.32].
- By using a forward stepwise addition procedure, we identified the minimal subset of these covariates with predictive power similar to that of the reference model. For this submodel, the Bayesian R2 = 0.05, 95% CI [0.01, 0.08] and the model standard deviation = 1.02, 95% CI [0.96, 1.09]. The only covariate included in the projected model was FNSS self-belief, β = 0.22, 95% CI [0.13, 0.32] (see Figure 4).
- For the COVID-19 news data, however, the optimal submodel included three predictors (FNSS self-belief, β = 0.15, 95% CI [0.07, 0.22]; RWAS conventionalism, β = −0.20, 95% CI [−0.32, −0.09]; and RWAS aggression–submission, β = −0.16, 95% CI [−0.26, −0.06]), with an R2 of 0.05, 95% CI [0.02, 0.08] and a model standard deviation = 0.95, 95% CI [0.90, 1.00]. For comparison, the full model with 14 covariates had a Bayesian R2 of 0.10, 95% CI [0.06, 0.14] and a model standard deviation of 0.94, 95% CI [0.89, 0.99], whereas the model with FNSS self-belief as the only covariate (β = 0.13, 95% CI [0.06, 0.20]) had a Bayesian R2 = 0.02, 95% CI [0.00, 0.04] and a model standard deviation of 0.96, 95% CI [0.91, 1.01] (see Figure 5). A sketch of this per-data-set selection follows below.
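The divergence in selected structure described here is what Scenario 3 (faulty model structure) looks like in practice; it can be checked by running the forward search independently on each data set. A minimal sketch, under the same assumptions as before, with `ref_belief_pol` and `ref_belief_cov` as hypothetical names for the 14-predictor overall-belief reference models fitted to the political and COVID-19 data:

```r
# Scenario 3 check: run the forward search separately on each data set and
# compare the selected structures. `ref_belief_pol` and `ref_belief_cov` are
# hypothetical names for the two 14-predictor overall-belief reference fits.
library(projpred)

vs_pol <- cv_varsel(ref_belief_pol, method = "forward")
vs_cov <- cv_varsel(ref_belief_cov, method = "forward")

suggest_size(vs_pol, stat = "elpd")  # 1 in the paper (FNSS self-belief only)
suggest_size(vs_cov, stat = "elpd")  # 3 in the paper (plus two RWAS subscales)

# Order in which predictors enter the forward search; the accessor is called
# solution_terms() in older projpred releases and ranking() in newer ones.
solution_terms(vs_pol)
solution_terms(vs_cov)
```

That the two searches retain different covariate sets, rather than merely different coefficients, is what distinguishes Scenario 3 from Scenario 2.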
4. General Discussion
4.1. Determinants of Misinformation
4.2. Vulnerability Factors
4.3. Protective Factors
4.4. Implications of Present Results for Cognitive Interventions
4.5. Limitations
5. Conclusions
Supplementary Materials
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Shu, K.; Sliva, A.; Wang, S.; Tang, J.; Liu, H. Fake news detection on social media: A data mining perspective. Explor. Newsl. 2017, 19, 22–36. [Google Scholar] [CrossRef]
- Lazer, D.M.; Baum, M.A.; Benkler, Y.; Berinsky, A.J.; Greenhill, K.M.; Menczer, F.; Zittrain, J.L. The science of fake news. Science 2018, 359, 1094–1096. [Google Scholar] [CrossRef] [PubMed]
- Walter, N.; Cohen, J.; Holbert, R.L.; Morag, Y. Fact-checking: A meta-analysis of what works and for whom. Polit. Commun. 2020, 37, 350–375. [Google Scholar] [CrossRef]
- Pennycook, G.; Rand, D.G. The psychology of fake news. Trends Cogn. Sci. 2021, 25, 388–402. [Google Scholar] [CrossRef] [PubMed]
- Van Bavel, J.J.; Pereira, A. The partisan brain: An identity-based model of political belief. Trends Cogn. Sci. 2018, 22, 213–224. [Google Scholar] [CrossRef] [PubMed]
- Bronstein, M.V.; Pennycook, G.; Bear, A.; Rand, D.G.; Cannon, T.D. Belief in fake news is associated with delusionality, dogmatism, religious fundamentalism, and reduced analytic thinking. J. Appl. Res. Mem. Cogn. 2019, 8, 108–117. [Google Scholar] [CrossRef]
- Pennycook, G.; Rand, D.G. Who falls for fake news? The roles of bullshit receptivity, overclaiming, familiarity, and analytic thinking. J. Pers. 2020, 88, 185–200. [Google Scholar] [CrossRef] [PubMed]
- Pennycook, G.; Rand, D.G. Lazy, not biased: Susceptibility to partisan fake news is better explained by lack of reasoning than by motivated reasoning. Cognition 2019, 188, 39–50. [Google Scholar] [CrossRef]
- van der Linden, S.; Panagopoulos, C.; Roozenbeek, J. You are fake news: Political bias in perceptions of fake news. Media Cult. Soc. 2020, 42, 460–470. [Google Scholar] [CrossRef]
- Sindermann, C.; Cooper, A.; Montag, C. A short review on susceptibility to falling for fake political news. Curr. Opin. Psychol. 2020, 36, 44–48. [Google Scholar] [CrossRef]
- Petty, R.E.; Cacioppo, J.T.; Schumann, D. Central and peripheral routes to advertising effectiveness: The moderating role of involvement. J. Consum. Res. 1983, 10, 135–146. [Google Scholar] [CrossRef]
- Petty, R.E.; Cacioppo, J.T.; Goldman, R. Personal involvement as a determinant of argument-based persuasion. J. Pers. Soc. Psychol. 1981, 41, 847. [Google Scholar] [CrossRef]
- Jia, L.; Shan, J.; Xu, G.; Jin, H. Influence of individual differences in working memory on the continued influence effect of misinformation. Cogn. Psychol. 2020, 32, 494–505. [Google Scholar] [CrossRef]
- Stone, A.R.; Marsh, E.J. Belief in COVID-19 misinformation: Hopeful claims are rated as truer. Appl. Cogn. Psychol. 2023, 37, 399–408. [Google Scholar] [CrossRef]
- Rossi, A.A.; Panzeri, A.; Taccini, F.; Parola, A.; Mannarini, S. The rising of the shield hero. Development of the Post Traumatic Symptom Questionnaire (PTSQ) and assessment of the protective effect of self-esteem from trauma-related anxiety and depression. J. Child Adolesc. Trauma 2022, 16, 1–19. [Google Scholar] [CrossRef] [PubMed]
- Sharot, T.; Sunstein, C.R. How people decide what they want to know. Nat. Hum. Behav. 2020, 4, 14–19. [Google Scholar] [CrossRef] [PubMed]
- Shahsavari, S.; Holur, P.; Wang, T.; Tangherlini, T.R.; Roychowdhury, V. Conspiracy in the time of corona: Automatic detection of emerging COVID-19 conspiracy theories in social media and the news. J. Comput. Soc. Sci. 2020, 3, 279–317. [Google Scholar] [CrossRef] [PubMed]
- Rubin, V.L.; Conroy, N.; Chen, Y.; Cornwell, S. Fake news or truth? Using satirical cues to detect potentially misleading news. In Proceedings of the Second Workshop on Computational Approaches to Deception Detection, San Diego, CA, USA, 17 June 2016; pp. 7–17. [Google Scholar]
- Giuntoli, L.; Capuozzo, P.; Ceccarini, F.; Colpizzi, I.; Caudek, C. Development and validation of the Fake News Susceptibility Scale; University of Florence: Firenze, Italy, 2020; manuscript in preparation. [Google Scholar]
- Rattazzi, A.M.M.; Bobbio, A.; Canova, L. A short version of the Right-Wing Authoritarianism (RWA) Scale. Pers. Individ. Differ. 2007, 43, 1223–1234. [Google Scholar] [CrossRef]
- Sinclair, A.H.; Stanley, M.L.; Seli, P. Closed-minded cognition: Right-wing authoritarianism is negatively related to belief updating following prediction error. Psychon. Bull. Rev. 2020, 27, 1348–1361. [Google Scholar] [CrossRef]
- Eckblad, M.; Chapman, L.J. Magical ideation as an indicator of schizotypy. J. Consult. Clin. Psychol. 1983, 51, 215–225. [Google Scholar] [CrossRef]
- Martin, J.G.; Westie, F.R. The tolerant personality. Am. Sociol. Rev. 1959, 24, 521–528. [Google Scholar] [CrossRef]
- Bürkner, P.C. brms: An R package for Bayesian multilevel models using Stan. J. Stat. Softw. 2017, 80, 1–28. [Google Scholar] [CrossRef]
- Dasgupta, S.; Moore, M.R.; Dimitrov, D.T.; Hughes, J.P. Bayesian validation framework for dynamic epidemic models. Epidemics 2021, 37, 100514. [Google Scholar] [CrossRef] [PubMed]
- Pennycook, G.; Cheyne, J.A.; Barr, N.; Koehler, D.J.; Fugelsang, J.A. On the reception and detection of pseudo-profound bullshit. Judgm. Decis. Mak. 2015, 10, 549–563. [Google Scholar] [CrossRef]
- Spohr, D. Fake news and ideological polarization: Filter bubbles and selective exposure on social media. Bus. Inf. Rev. 2017, 34, 150–160. [Google Scholar] [CrossRef]
- Mourão, R.R.; Robertson, C.T. Fake news as discursive integration: An analysis of sites that publish false, misleading, hyperpartisan and sensational information. J. Stud. 2019, 20, 2077–2095. [Google Scholar] [CrossRef]
- Rizeq, J.; Flora, D.B.; Toplak, M.E. An examination of the underlying dimensional structure of three domains of contaminated mindware: Paranormal beliefs, conspiracy beliefs, and anti-science attitudes. Think. Reason. 2021, 27, 187–211. [Google Scholar] [CrossRef]
- Clarke, E.J.; Klas, A.; Dyos, E. The role of ideological attitudes in responses to COVID-19 threat and government restrictions in Australia. Pers. Individ. Differ. 2021, 175, 110734. [Google Scholar] [CrossRef]
- Sica, C.; Caudek, C.; Cerea, S.; Colpizzi, I.; Caruso, M.; Giulini, P.; Bottesi, G. Health anxiety predicts the perceived dangerousness of COVID-19 over and above intrusive illness-related thoughts, contamination symptoms, and state and trait negative affect. Int. J. Environ. Res. Public Health 2021, 18, 1933. [Google Scholar] [CrossRef]
- Sica, C.; Perkins, E.R.; Latzman, R.D.; Caudek, C.; Colpizzi, I.; Bottesi, G.; Patrick, C.J. Psychopathy and COVID-19: Triarchic model traits as predictors of disease-risk perceptions and emotional well-being during a global pandemic. Pers. Individ. Differ. 2021, 176, 110770. [Google Scholar] [CrossRef]
- Greene, C.M.; Murphy, G. Quantifying the effects of fake news on behavior: Evidence from a study of COVID-19 misinformation. J. Exp. Psychol. 2021, 27, 773. [Google Scholar] [CrossRef] [PubMed]
- Rossi, A.A.; Marconi, M.; Taccini, F.; Verusio, C.; Mannarini, S. From fear to hopelessness: The buffering effect of patient-centered communication in a sample of oncological patients during COVID-19. Behav. Sci. 2021, 11, 87. [Google Scholar] [CrossRef] [PubMed]
- Vlachos, A.; Riedel, S. Fact checking: Task definition and dataset construction. In Proceedings of the ACL 2014 Workshop on Language Technologies and Computational Social Science, Baltimore, MD, USA; pp. 18–22.
- Klayman, J. Varieties of confirmation bias. Psychol. Learn. Motiv. 1995, 32, 385–418. [Google Scholar]
- Altemeyer, B. The other “authoritarian personality”. Adv. Exp. Soc. Psychol. 1998, 30, 47–92. [Google Scholar]
- Garzitto, M.; Picardi, A.; Fornasari, L.; Gigantesco, A.; Sala, M.; Fagnani, C.; Stazi, M.A.; Ciappolino, V.; Fabbro, F.; Altamura, A.C.; et al. Normative data of the Magical Ideation Scale from childhood to adulthood in an Italian cohort. Compr. Psychiatry 2016, 69, 78–87. [Google Scholar] [CrossRef] [PubMed]
- Chapman, L.J.; Chapman, J.P. The search for symptoms predictive of schizophrenia. Schizophr. Bull. 1987, 13, 497–503. [Google Scholar] [CrossRef] [PubMed]
- Carlucci, L.; Tommasi, M.; Saggino, A. Factor structure of the Italian version of the religious fundamentalism scale. Psychol. Rep. 2013, 112, 6–13. [Google Scholar] [CrossRef] [PubMed]
- R Core Team. R: A Language and Environment for Statistical Computing; R Foundation for Statistical Computing: Vienna, Austria, 2021; Available online: https://www.R-project.org/ (accessed on 19 October 2020).
- Wickham, H.; Averick, M.; Bryan, J.; Chang, W.; McGowan, L.D.; François, R.; Grolemund, G.; Hayes, A.; Henry, L.; Hester, J.; et al. Welcome to the tidyverse. J. Open Source Softw. 2019, 4, 1686. [Google Scholar] [CrossRef]
- Bürkner, P.-C. Advanced Bayesian multilevel modeling with the R package brms. R J. 2018, 10, 395–411. [Google Scholar] [CrossRef]
- Green, D.M.; Swets, J.A. Signal Detection Theory and Psychophysics; Wiley: New York, NY, USA, 1966. [Google Scholar]
- Metzger, M.J.; Hartsell, E.H.; Flanagin, A.J. Cognitive dissonance or credibility? A comparison of two theoretical explanations for selective exposure to partisan news. Commun. Res. 2020, 47, 3–28. [Google Scholar] [CrossRef]
- Ross, R.M.; Rand, D.G.; Pennycook, G. Beyond “fake news”: Analytic thinking and the detection of false and hyperpartisan news headlines. Judgm. Decis. Mak. 2021, 16, 484–504. [Google Scholar] [CrossRef]
- Zrnec, A.; Poženel, M.; Lavbič, D. Users’ ability to perceive misinformation: An information quality assessment approach. Inf. Process. Manag. 2022, 59, 102739. [Google Scholar] [CrossRef]
- Gelman, A.; Goodrich, B.; Gabry, J.; Vehtari, A. R-squared for Bayesian regression models. Am. Stat. 2019, 73, 307–309. [Google Scholar] [CrossRef]
- Stagnaro, M.; Pennycook, G.; Rand, D.G. Performance on the cognitive reflection test is stable across time. Judgm. Decis. Mak. 2018, 13, 260–267. [Google Scholar] [CrossRef]
- Engelhardt, A.M.; Feldman, S.; Hetherington, M.J. Advancing the measurement of authoritarianism. Polit. Behav. 2021, 45, 1–24. [Google Scholar] [CrossRef]
- Meyer, T.D.; Hautzinger, M. Two-year stability of psychosis proneness scales and their relations to personality disorder traits. J. Pers. Assess. 1999, 73, 472–488. [Google Scholar] [CrossRef] [PubMed]
- Lai, K.; Xiong, X.; Jiang, X.; Sun, M.; He, L. Who falls for rumor? Influence of personality traits on false rumor belief. Pers. Individ. Differ. 2020, 152, 109520. [Google Scholar] [CrossRef]
- Brashier, N.M.; Schacter, D.L. Aging in an era of fake news. Curr. Dir. Psychol. Sci. 2020, 29, 316–323. [Google Scholar] [CrossRef]
- Pennycook, G.; Cheyne, J.A.; Koehler, D.J.; Fugelsang, J.A. On the belief that beliefs should change according to evidence: Implications for conspiratorial, moral, paranormal, political, religious, and science beliefs. Judgm. Decis. Mak. 2020, 15, 476–498. [Google Scholar] [CrossRef]
- Alper, S.; Bayrak, F.; Yilmaz, O. Psychological correlates of COVID-19 conspiracy beliefs and preventive measures: Evidence from Turkey. Curr. Psychol. 2020, 40, 1–10. [Google Scholar] [CrossRef]
- Piironen, J.; Paasiniemi, M.; Vehtari, A. Projective inference in high-dimensional problems: Prediction and feature selection. Electron. J. Stat. 2020, 14, 2155–2197. [Google Scholar] [CrossRef]
- Vehtari, A.; Gelman, A.; Gabry, J. Practical Bayesian model evaluation using leave-one-out cross-validation and WAIC. Stat. Comput. 2017, 27, 1413–1432. [Google Scholar] [CrossRef]
- Gelman, A.; Hill, J.; Vehtari, A. Regression and Other Stories; Cambridge University Press: Cambridge, UK, 2020. [Google Scholar]
- Skitka, L.J.; Mullen, E.; Griffin, T.; Hutchinson, S.; Chamberlin, B. Dispositions, scripts, or motivated correction? Understanding ideological differences in explanations for social problems. J. Pers. Soc. Psychol. 2002, 83, 470–487. [Google Scholar] [CrossRef]