1. Introduction
The coronavirus disease (COVID-19) pandemic has triggered a “misinfo-demic” that warrants continuous research efforts [1]. Misinformation involving various topics has emerged and posed harm to people’s lives [2,3]. Notably, social media has facilitated the spread of misinformation in this unprecedented pandemic [4,5]. Over a quarter of social media information about COVID-19 has been found to contain medical falsehoods and unverified, low-quality content [6]. Many mechanisms afforded by social media platforms have been argued to support such a misinformation pandemic [7,8].
In this study, we aim to offer a new theoretical framework to explain the misinfo-demic by focusing on internal sources of misinformation and on factors that facilitate misinformation spread on social media. First, we theorize the rapid dissemination of preliminary scientific evidence on social media as a context in which scientific misinformation can arise from individuals’ naïve understanding of science. For example, the release of evidence that a pet dog tested “weak positive” fueled the widespread misinformation that pets can transmit COVID-19 [9]. Though this issue is not new, the COVID-19 pandemic has amplified the adverse consequences of such a hasty science communication process on social media. Researchers have expressed worry about the surge in media outlets’ use of preliminary COVID-19-related evidence and its role in driving the ongoing COVID-19 discourse [10].
Second, we aim to examine how communicating uncertainty about preliminary evidence affects the spread of COVID-19 misinformation and its refutations. Given the fast-developing nature of science during the pandemic, the communication of preliminary evidence often evolves swiftly: from communication with no evidence, to evidence first released with uncertainty about its interpretation, to evidence that has reached some consensus in the scientific community. In addition, scientists often approach preliminary evidence with caution, which may signal uncertainty about the evidence. It is therefore essential to investigate whether and how the uncertainty communicated surrounding preliminary evidence facilitates or inhibits the spread of misinformation.
1.1. Misinformation Generation and Spreading
Research has identified different sources of misinformation. Lewandowsky et al. extensively discussed four sources of misinformation: rumors and fiction, governments and politicians, vested interests, and the media [11]. In the COVID-19 pandemic, low-quality preprints have also been found to be an important driver of the misinfo-demic [10]. While these studies focused largely on external sources in the environment that disseminate false information, Coronel, Poulsen, and Sweitzer reveal that memory biases and distortions of accurate information are an internal source of misinformation [12], in which misinformation is generated through individuals’ memory processes after their exposure to factually accurate information [13].
Studies also reveal various content, sender, and network factors that can fuel misinformation spread. For example, content topic and novelty facilitate the spread of false news online [14]. Senders’ feelings of uncertainty, anxiety, trust in the information source, perceived importance of the information, cultural beliefs, and motivations are all associated with misinformation spreading, corresponding to and extending Allport and Postman’s “basic law” of rumor transmission [15,16] (see a review in [11]). Research also finds that echo chambers play an important role in misinformation diffusion networks [17,18].
While the contributions of existing scholarship are essential, it suffers from two significant gaps. First, research on internal sources of misinformation is scarce. Though Coronel et al. [12] reveal memory biases as an internal source of misinformation, they assume a context where people are exposed to factually accurate information from the environment. However, in health crises such as the COVID-19 pandemic, much information is preliminary and uncertain when it is first communicated to the public, and misinformation can naturally arise from this preliminary information. For example, when Hong Kong released the scientific evidence that a dog tested “weak positive” for the virus without confirmation of its infection, the misinformation that pets could transmit COVID-19 to humans rapidly spread and triggered irrational actions such as abandoning or killing pets as a precaution [9]. In this example, the information came from an authoritative external source and was factually accurate but preliminary, and the misinformation is unlikely to have derived from individuals’ memory biases. The current misinformation scholarship has yet to explain what internal processes can cause such a phenomenon and, thereby, to offer corresponding solutions.
Second, and relatedly, as most studies assume that sources of misinformation are external, they focus on attributes of misinformation and individual characteristics that facilitate its spread. As such, there is a lack of answers to the question of what contextual prerequisites lead individuals to produce and spread misinformation. Though Rosnow pointed out as early as 1988 that it is vital “to decipher the experiential contexts that ‘invite’ or ‘allow’ rumors to flourish” [19] (p. 16), this proposal has not attracted much research attention thus far in the misinformation literature. The proposal is nonetheless fundamental, as it directs a new research approach toward strategies for building a communication environment that reduces internal sources of misinformation. Such an approach can prevent misinformation from being generated and spread, thereby complementing the current correction-based approach, which acts only after misinformation has begun spreading. This prevention-based approach is significant for health policymakers and crisis communicators, as it deploys communication efforts before misinformation emerges and spreads and can reduce misinformation’s potential detrimental impacts on society.
To fill the above gaps, this study focuses on another internal source of misinformation: instances in which individuals are exposed to preliminary scientific evidence from authorities, but a naïve understanding of science causes them to misinterpret the preliminary information. We focus on the spread of this type of misinformation in the COVID-19 pandemic in particular, where individuals lack prior knowledge of the disease and are motivated to reduce uncertainty. In the following sections, we outline the theoretical foundation of how preliminary evidence can prompt individuals to generate misinformation through internal processes and what information context can facilitate such generation and spreading of misinformation.
1.2. Misinterpreting Preliminary Evidence Based on Naïve Theories of Science
We theorize the phenomenon of misinformation arising from preliminary scientific evidence from a socio-cognitive perspective on the public understanding of science. According to the concept of epistemic cognition, intuitive and naïve theories of science provide a basic orientation toward scientific information [20]. Specifically, the general public can often process only one-sided evidence and lacks an understanding of how much evidence is needed to justify a scientific claim [21,22]. As such, when a piece of preliminary evidence emerges, the public may infer scientific claims that are unjustified by the evidence. This type of inferred misinformation might be particularly prominent during a novel health crisis such as the COVID-19 pandemic, as people rely more on general epistemic beliefs in science when their understanding of the subject is lacking [23,24]. Though the inferred information may not always be false, communication about early science and preliminary research can form a context where scientific misinformation arises and thus demands urgent research attention.
In addition, refuting the inferred misinformation is often challenging. Messages debunking evidence-inferred misinformation often require detailed explanations of mechanisms that are still unclear and under investigation. Additionally, as the misinformation is often an unjustified scientific claim, it cannot be truly refuted as false; on most occasions, science can only argue that there is no support for such a claim. Furthermore, the refutation easily elicits the audience’s negative feelings, as it unavoidably interrupts a logical and coherent inference based on lay beliefs in science [25].
The above highlights a paradox of science communication in the pandemic: when preliminary science is communicated, audiences will potentially infer logically sound (at least from the epistemic beliefs of the general public) but unjustified scientific misinformation to guide future actions. As such, an initial investigation of whether and how the communication of preliminary evidence can motivate people to generate and spread the inferred misinformation offers valuable insight into strategies for tackling this type of misinformation.
1.3. Evidence Uncertainty as an Information Context for Misinformation Spread
As mentioned, few studies have examined what information context can facilitate the generation and spreading of misinformation. We propose that evidence uncertainty is one possible facilitator. Uncertainty “exists when details of situations are ambiguous, complex, unpredictable, or probabilistic; when information is unavailable or inconsistent; and when people feel insecure about their own state of knowledge or the stage of knowledge in general” [26] (p. 478). Crisis communication research has emphasized the importance of uncertainty reduction and timely communication [27,28]. However, balancing certainty with urgency is challenging. This is particularly the case in the COVID-19 pandemic, as it is common, and often expected, to communicate timely, preliminary evidence that has not gained scientific consensus and certainty [29,30].
Here, at least two types of uncertainty are involved in the communication of preliminary findings. First, preliminary evidence often lacks sufficient data and has limitations in addressing a novel crisis phenomenon, and it thus signals deficient uncertainty about a known gap [31,32]. Second, scientists and other stakeholders may not reach agreement in interpreting the evidence when it is first released, which creates consensus uncertainty. Past literature has found that the communication of deficient uncertainty and consensus uncertainty can affect people’s trust in science and intention to follow recommendations [29,33]. However, no studies have investigated how the communication of evidence uncertainty relates to the spread of misinformation.
According to the concept of motivated reasoning [34,35], ambiguous risk messages motivate people to access and construct information in a heuristic way to reduce uncertainty. Research has supported this notion by showing that the communication of evidence uncertainty accentuates reliance on people’s own experiences, heuristics, and feelings about risk, and the disregard of institutional assessments [36,37]. For example, conflicting interpretations of research evidence regarding cancer risks were found to trigger people’s dispositional beliefs in cancer fatalism [38]. In addition, a recent systematic review of 48 experimental studies revealed that, in most risk communication research, communicating uncertainty about the evidence’s deficiency and consensus yielded adverse effects by decreasing belief in, perceived credibility of, or intentions to follow the recommendations of the message [39]. Importantly, the crisis environment may further provoke people’s inclination to “seize” and “freeze” on the certainty developed through their heuristic cognition [25]. Anxiety and aversion induced by the environment can heighten people’s desire to form and maintain a quick, clear-cut judgment that can protect them from the crisis, even when that judgment may be false [40,41].
1.4. Research Framework
Based on the above review, we propose a new framework: the uncertainty communicated with preliminary evidence can promote internal motivated reasoning, based on a naïve understanding of science, that produces misinformation inferred from the evidence. That is, given that people rely more on heuristic reasoning when facing uncertainty in a crisis, communicating preliminary evidence with uncertainty should motivate people to interpret the evidence based on their naïve beliefs, likely resulting in inferred misinformation. As discussed, refutations of the inferred misinformation often contain detailed explanations that require high-level processing effort. Additionally, refuting messages may even introduce more uncertainty, as the mechanisms underlying the evidence may still be unknown. In comparison, inferring (mis)information from the evidence based on epistemic beliefs requires less cognitive effort, which may be an easier way for people to achieve some degree of certainty in a novel crisis. Therefore, people may favor misinformation and be averse to its refutations in contexts where the uncertainty of preliminary evidence is communicated.
1.5. Research Hypotheses
To test our theoretical framework, we use social media data to examine whether the communication of preliminary evidence prompts individuals’ internal processes to generate misinformation and whether communicating the uncertainty of the evidence facilitates misinformation spread. First of all, if the communication of preliminary evidence indeed prompts individuals to generate misinformation, social media users’ attitudes toward the evidence should be associated with their attitudes toward the inferred misinformation. Therefore, we hypothesize that:
Hypothesis 1 (H1). Users’ attitudes toward preliminary evidence will be associated with their attitudes toward misinformation.
In addition, we examine the communication of evidence uncertainty in two manifestations. First, we test whether social media users’ attitude ambiguity toward the evidence is associated with the spread of the inferred misinformation. Scientists and health professionals often approach preliminary evidence with caution and thereby demonstrate attitude ambiguity when it is first released; this is especially the case during the COVID-19 pandemic. However, it is unknown how such attitude ambiguity toward the evidence may affect users’ responses to the inferred misinformation and its refutations. Based on the literature reviewed above, we hypothesize that:
Hypothesis 2 (H2). Attitude ambiguity on preliminary evidence will be associated with users’ preferences for misinformation messages and aversion to refutations.
Second, we compare social media users’ responses to misinformation and refutation messages across different evidence communication stages. Notably, preliminary evidence is often released with cautions about its interpretation by authorities or scientific professionals; uncertainty is thus inherent in preliminary evidence when it is first communicated. Only through further investigation or recognition from the scientific community can preliminary evidence gain some scientific consensus. In this case, early science communication naturally unfolds across three stages: no evidence, uncertain evidence, and evidence consensus. We hypothesize that:
Hypothesis 3 (H3). The uncertain-evidence stage will be associated with users’ preferences for misinformation messages and aversion to refutations.
Third, we explore how the two manifestations of evidence uncertainty communication interact to affect users’ responses to misinformation and refutation messages. We do not assume that the two forms of evidence uncertainty have the same effect, as they may indicate different levels and types of uncertainty. Thus, we ask a research question (RQ):
RQ: How do the two forms of evidence uncertainty communication interact to affect people’s responses to misinformation and refutation messages on social media?
We tested the above hypotheses using social media data from Weibo. Social media offers a valuable data source in which the generation and spread of misinformation unfold naturally. For our investigation, we focus on the scientific misinformation that pets can transmit COVID-19 to people. To date, no evidence has been found to support this misinformation. However, evidence that pets could be infected with the virus accumulated during the early stage of the pandemic and fueled misinformation and irrational public panic. In particular, Hong Kong reported the first case in which a COVID-19 patient’s dog tested “weakly positive” for the virus on 28 February 2020. Notably, when this news was first released, the Hong Kong scientists emphasized that they were still unsure whether the dog was actually infected or merely contaminated by the environment. It was only on 4 March that scientists from the WHO concluded that it was a case of human-to-animal transmission of the virus. This context manifests the communication process of early science, from communication with no evidence (i.e., before 28 February), to uncertain evidence (i.e., 28 February to 3 March), to evidence consensus (i.e., 4 March onward). Thus, it offers an appropriate research setting for our examination.
In particular, we examine users’ responses in terms of their liking and reposting of misinformation posts. The numbers of likes and reposts have been found to be associated with rumor-spreading behaviors on social media platforms such as Twitter. Alhabash and McAlister defined the number of retweets as a manifestation of a message’s viral reach and the number of likes as a manifestation of its affective evaluation [42]. Both indexes can serve as normative cues that increase a given rumor’s perceived credibility and induce users’ intention to share the rumor [43]. In addition, we examine users’ attitudes toward misinformation in a repost as an indicator of misinformation responses. Past misinformation literature focused predominantly on the number of reposts regardless of the reposting authors’ attitudes toward the misinformation [43]. However, a high number of reposts debunking the misinformation can instead help rebut it.
4. Discussion
We set out to examine a new theoretical framework: that the uncertainty communicated with preliminary evidence can promote internal motivated reasoning, based on a naïve understanding of science, that produces misinformation inferred from the evidence. To fulfil this aim, we used social media data to test whether the communication of preliminary evidence prompts individuals’ internal processes to generate misinformation and whether communicating the uncertainty of the evidence facilitates misinformation spread. We examined evidence uncertainty communication in two forms: users’ ambiguous attitudes toward the evidence and the stage at which the evidence was communicated with uncertainty. This study contributes to the literature on science communication and misinformation in several important ways.
First of all, we gained empirical support for our theoretical framework that the uncertainty surrounding preliminary evidence can promote the generation and transmission of misinformation inferred from the evidence. The finding that users’ attitudes toward the evidence and the misinformation corresponded suggests that users indeed perceived an inherent link between the evidence and the misinformation based on their naïve understanding of science. Importantly, users’ ambiguous attitudes toward the evidence and the uncertain-evidence stage resulted in more likes and reposts of the misinformation and/or fewer likes and reposts of the refutations. This further indicates that the uncertainty signaled in the posts strengthens individuals’ beliefs in such an inherent link, likely because the uncertainty prompts individuals to seize and freeze on an available shortcut to reduce it. However, the current data could not demonstrate this mechanism, and future experimental studies are required. Nevertheless, this study indicates that, for the purpose of misinformation control, health policymakers should at least regulate the hasty communication of emerging evidence with inherent uncertainty during a novel health crisis.
Second, our study highlights and extends Davis and Loftus’ framework [13] on internal sources of misinformation, which assumes exposure to accurate information, by focusing on exposure to preliminary information. Future studies should continue this line of research by examining other possible internal sources of misinformation. For example, Lu found concurrence between the announcement of the Wuhan lockdown in early 2020 and the rise of fake news about government quarantine policies in China [2]. This suggests that another, as-yet-unexamined internal cognitive process generates misinformation after individuals’ exposure to the factual information of the city lockdown.
Third, this study revisits Rosnow’s proposal [19] of examining the information contexts that feed and fuel misinformation. Our data supported that, besides misinformation and individual characteristics, contextual factors such as evidence uncertainty can promote misinformation. In line with the proposal, this study suggests future research on misinformation prevention strategies. This suggestion goes beyond the corrections extensively studied in the current literature by advocating the building of a misinformation-unfriendly context through means such as strategic social media use and evidence framing, some of which are discussed below.
Extending previous research, our investigation revealed that Weibo served as a platform that promoted misinformation and inhibited the propagation of refutations when the emerging evidence was communicated with uncertainty. In particular, we found that authors endorsing the misinformation while expressing ambiguous attitudes toward the evidence in their posts received more likes and reposts than those rejecting or endorsing the evidence. In addition, users tended not to share refutation posts when the evidence was first released and had not gained any consensus. These findings suggest that Weibo-like social media platforms may not be suitable for communicating uncertainty about early evidence during novel crises. They warrant policy attention, as studies have suggested that digital media outlets frequently highlight the scientific uncertainty associated with COVID-19 preprints [10]. In addition, the interaction analysis between attitude ambiguity and evidence stage further revealed that posts that did not mention the evidence received sizeable numbers of refutation reposts across the different evidence stages. This implies that when preliminary evidence is released, an appropriate communication practice is not to express uncertainty about the evidence but to focus only on debunking the inferred misinformation.
Interestingly, and in contrast, we found that Weibo may be well suited to communicating scientific consensus. The interaction analysis between attitude ambiguity and evidence stage revealed that users tended to debunk the misinformation in their reposts when the original posts supported the evidence consensus. In comparison, if the original posts signaled uncertainty about the evidence consensus, users tended to spread the misinformation instead. These findings are consistent with the well-documented knowledge that communicating consensus uncertainty leads to the endorsement of one’s heuristic beliefs in the inferred (mis)information [38]. Nevertheless, contrary to previous experimental findings that consensus uncertainty could also lead people to disregard authorities’ recommendations, we observed sizable refutations following such communication of consensus uncertainty. A closer examination revealed that these refutations were trying to restore the consensus and fight the misinformation. This suggests that building public consensus on scientific evidence may help tackle relevant misinformation on social media [48].
In general, we found that expressing attitude ambiguity toward the evidence at different evidence stages was associated with different patterns in reposts of misinformation and refutation messages. Our analyses revealed that attitude ambiguity toward the evidence suppressed the dissemination of refutations only at the uncertain-evidence stage, not at the evidence-consensus stage. In contrast, such ambiguity promoted misinformation to a greater extent at the latter stage than at the former. A possible explanation is that attitude ambiguity at different evidence stages may signal different levels and types of evidence uncertainty. For example, attitude ambiguity toward a piece of uncertain evidence may signal strong deficient uncertainty about a known gap, making the evidence less convincing as grounds for rebuttal [31]. In comparison, attitude ambiguity toward the evidence consensus should signal consensus uncertainty and induce the adoption of misinformation [32]. Future research should explore the social–cognitive mechanisms underlying these findings.
In this study, we included data from before any evidence emerged in the analysis. At first glance, these data may seem less relevant to our research question; however, we think they are of both theoretical and practical significance. Theoretically, these data provided a baseline for demonstrating, by comparison, the effect of communicating emerging evidence related to misinformation. We showed that original posts rejecting the misinformation received fewer likes after the evidence emerged, and this decrease was greater at the uncertain-evidence stage than at the evidence-consensus stage. In addition, those original posts received a decrease in rejection reposts of the misinformation only at the uncertain-evidence stage, not at the evidence-consensus stage. These findings suggest that the effect was associated with evidence uncertainty rather than evidence consensus.
Practically, analyses of data from the no-evidence stage showed that posts that induced uncertainty about a piece of “fake” evidence suppressed the dissemination of misinformation rebuttals and promoted ambiguous beliefs about the misinformation. This finding provides empirical support for how misinformation can be spread with groundless evidence [49]. Our analyses also revealed that refuting such groundless evidence can be a good way to tackle misinformation spread, as it can drive the propagation of debunking messages.
This study has several limitations. First, we were not able to measure users’ cognitive processes using social media data. Second, we examined our hypotheses with only one instance of misinformation, which may constrain the generalizability of the results. Third, limited by the ART analysis method, we were not able to include covariates to control for the potential impacts of other relevant factors, such as the emotional tone of the posts and account attributes. Finally, Weibo is a Twitter-like social media platform where users’ relationships are asymmetrical and information is open and abundant. It is quite different from Facebook and WeChat, where social interactions occur mainly among closed relationships and information is often private and exclusive. Therefore, comparisons between Weibo and other platforms are needed.