1. Introduction
At present, there is an active scholarly and public debate regarding the future prospect of artificial superintelligence (henceforth just superintelligence), which is artificial intelligence (AI) that is significantly more intelligent than humans in all major respects. While much of the issue remains unsettled, some specific arguments are clearly incorrect, and as such can qualify as misinformation. (As is elaborated below, arguments can qualify as misinformation even when the issues are unsettled.) More generally, misinformation can be defined as “false or inaccurate information” [1], or as “information that is initially presented as true but later found to be false” [2] (p. 1). This paper addresses the question of what can be done to reduce the spread of and belief in superintelligence misinformation.
While any misinformation is problematic, superintelligence misinformation is especially worrisome due to the high stakes involved. If built, superintelligence could have transformative consequences, which could be either massively beneficial or catastrophic. Catastrophe is more likely to come from a superintelligence built on the wrong ideas. It could also come from failing to build a superintelligence based on the right ideas: a well-designed superintelligence could prevent other types of catastrophe, so abstaining from building one could itself result in catastrophe. Thus, the very survival of the human species could depend on avoiding or rejecting superintelligence misinformation. Furthermore, the high stakes of superintelligence have the potential to motivate major efforts to build it or to prevent others from doing so. Such efforts could include massive investments, restrictive regulations on research and development (R&D), or plausibly even international conflict. It is important for these sorts of efforts to be based on the best available understanding of superintelligence.
Superintelligence is also an issue that attracts a substantial amount of misinformation. The abundance of misinformation may be due to the many high-profile portrayals of superintelligence in science fiction, the tendency for popular media to circulate casual comments about superintelligence made by various celebrities, and the relatively low profile of more careful scholarly analyses. Whatever the cause, experts and others often find themselves responding to some common misunderstandings [3,4,5,6,7,8,9].
There is also potential for superintelligence disinformation: misinformation spread with the intent to deceive. There is a decades-long history of private industry and anti-regulation ideologues promulgating falsehoods about socio-technological issues in order to avoid government regulations. This practice was pioneered by the tobacco industry in the 1950s and has since been adopted by other industries, including fossil fuels and industrial chemicals [10,11]. AI is increasingly important for corporate profits and thus could be a new area of anti-regulatory disinformation [12]. The history of corporate disinformation and the massive amounts of profit potentially at stake suggest that superintelligence disinformation campaigns could be funded at a large scale and could be a major factor in the overall issue. Superintelligence disinformation could potentially come from other sources as well, such as governments or even concerned citizens seeking to steer superintelligence debates and practices in particular directions.
Finally, there is the subtler matter of the information that has not yet been established as misinformation, but is nonetheless incorrect. This misinformation is the subject of ongoing scholarly debates. Active superintelligence debates consider whether superintelligence will or will not be built, whether it will or will not be dangerous, and a number of other conflicting possibilities. Clearly, some of these positions are false and thus can qualify as misinformation. For example, claims that superintelligence will be built and that it will not be built cannot both be correct. However, it is not presently known which positions are false, and there is often no expert consensus on which positions are likely to be false. While the concept of misinformation is typically associated with information that is more obviously false, it nonetheless applies to these subtler cases, which can indeed be “information that is initially presented as true but later found to be false”. Likewise, countering misinformation presents a similar challenge regardless of whether the misinformation is spread before or after expert consensus is reached (though, as discussed below, expert consensus can be an important factor).
In practical terms, the question then is what to do about it. There have been a number of attempts to reply to superintelligence misinformation in order to set the record straight [3,4,5,6,7,8,9]. However, to the best of the present author’s knowledge, aside from a brief discussion in [12], there have been no efforts to examine the most effective ways of countering superintelligence misinformation. Given the potential importance of the matter, a more careful examination is warranted. That is the purpose of this paper. The paper’s discussion is relevant to public debates about superintelligence, to AI education programs (e.g., in university computer science departments), and to efforts to build expert consensus about superintelligence.
In the absence of dedicated literature on superintelligence misinformation, this paper draws heavily on the more extensive research literature studying misinformation about other topics, especially global warming (e.g., [10,13,14]), as well as the general literature on misinformation in psychology, cognitive science, political science, sociology, and related fields (for reviews, see [2,15]). This paper synthesizes insights from these literatures and applies them to the particular circumstances of superintelligence. The paper is part of a broader effort to develop the social science of superintelligence by leveraging insights from other issues [12,16].
The paper is organized as follows. Section 2 presents some examples of superintelligence misinformation, in order to further motivate the overall discussion. Section 3 surveys the major actors and audiences (i.e., the senders and receivers) of superintelligence misinformation, in order to provide some strategic guidance. Section 4 presents several approaches for preventing the spread of superintelligence misinformation. Section 5 presents approaches for countering superintelligence misinformation that has already spread. Section 6 concludes.
2. Examples of Superintelligence Misinformation
It is often difficult to evaluate which information about superintelligence is false. This is because superintelligence is a possible future technology that may be substantially different from anything that currently exists, and because it is the subject of a relatively small amount of study. For comparison, other studies of misinformation have looked at such matters as whether Barack Obama was born in the United States, whether childhood vaccines cause autism, and whether Listerine prevents colds and sore throats [17]. In each of these cases, there is clear and compelling evidence pointing in one direction or the other (the evidence clearly indicates that Obama was born in the US, that vaccines do not cause autism, and that Listerine does not prevent colds or sore throats, despite many claims to the contrary in all three cases). Therefore, an extra degree of caution is warranted when considering whether a particular claim about superintelligence qualifies as misinformation.
That said, some statements about superintelligence are clearly false. For example, consider this statement from Steven Pinker: “As far as I know, there are no projects to build an AGI, not just because it would be commercially dubious, but also because the concept is barely coherent” [18]. The acronym AGI stands for artificial general intelligence, which is a form of AI closely associated with superintelligence. Essentially, AGI is AI that is capable of reasoning across a wide range of domains. AGI may be difficult to build, but the concept is very much coherent. Indeed, it has a substantial intellectual history and ongoing study [19], including a dedicated research journal (Journal of Artificial General Intelligence) and professional society (the Artificial General Intelligence Society). Furthermore, there are indeed projects to build AGI—one recent survey identifies 45, spread across many countries and institutions, including many for-profit corporations; among the largest are DeepMind, acquired by Google in 2014 for £400 million, the Human Brain Project, an international project with $1 billion in funding from the European Commission, and OpenAI, a nonprofit with $1 billion in pledged funding [20]. (DeepMind and OpenAI explicitly identify as working on AGI. The Human Brain Project does not, but it is working on simulating the human brain, which is considered to be a subfield of AGI [19].) There is even an AGI project at Pinker’s own university: Pinker and the AGI project MicroPsi [21] are both at Harvard University. Therefore, in the quoted statement, the “as far as I know” part may well be true, but the rest is clearly false. This particular point of misinformation is significant because it conveys the false impression that AGI (and superintelligence) is a nonissue, when in fact it is a very real and ongoing subject of R&D.
A more controversial matter is the debate on the importance of consciousness to superintelligence. Searle [22] argues that computers cannot be conscious and therefore, at least in a sense, cannot be intelligent, and likewise cannot have motivation to destroy humanity. Similar arguments have been made by Logan [23], for example. A counterargument is that the important part is not the consciousness of a computer but its capacity to affect the world [4,24,25]. It has also been argued that AI could be harmful to humanity even if it is not specifically motivated to do so, because the AI could assess humanity as being in the way of it achieving some other goal [25,26]. The fact that AI has already shown the capacity to outperform humans in some domains is suggestive of the possibility for it to outperform humans in a wider range of domains, regardless of whether the AI is conscious. However, this is an ongoing area of debate, and indeed Chalmers [24] (p. 16) writes “I do not think the matter can be regarded as entirely settled”. Regardless, there must be misinformation on one side or the other: computers either can be conscious or they cannot, and consciousness either matters for superintelligence or it does not. Additionally, many parties to the debate maintain that those who believe that consciousness or conscious motivation matter are misinformed [4,5,7,8,9], though it is not the purpose of this paper to referee this debate.
There are even subtler debates among experts who believe in the prospect of superintelligence. For example, Bostrom [25] worries that it would be difficult to test the safety of a superintelligence because it could trick its human safety testers into believing it is safe (the “treacherous turn”), while Goertzel [27] proposes that the safety testing for a superintelligence would not be so difficult because the AI could be tested before it becomes superintelligent (the “sordid stumble”; the term is from [28]). Essentially, Bostrom argues that an AI would become capable of deceiving humans before humans realize it is unsafe, whereas Goertzel argues the opposite. Only one of these views can be correct; the other would qualify as misinformation. More precisely, only one of these views can be correct for a given AI system—it is possible that some AI systems could execute a treacherous turn while others would make a sordid stumble. Which view is more plausible is a matter of ongoing study [28,29]. This debate is important because it factors significantly into the riskiness of attempting to build a superintelligence.
Many additional examples could be presented, such as on the dimensionality of intelligence [3], the rate of progress in AI [7,8], the structure of AI goals [6,7,8], and the relationship between human and AI styles of thinking [6,8]. However, this is not the space for a detailed survey. Instead, the focus of this paper is on what to do about the misinformation. Likewise, this paper does not take positions on open debates about superintelligence. Some positions may be more compelling than others, but arguments for or against them are tangential to this paper’s aim of reducing the prevalence of misinformation. In other words, this paper strives to be largely neutral on which information about superintelligence happens to be true or false. The above remarks by Pinker will occasionally be used as an example of superintelligence misinformation because they are so clearly false, whereas the falsity of other claims is more ambiguous.
The above examples suggest two types of superintelligence misinformation: information that is already clearly false and information that may later be found to be false. In practice, there may be more of a continuum of how clearly true or false a piece of information is. Nonetheless, this distinction can be a useful construct for efforts to address superintelligence misinformation. The clearly false information can be addressed with the same techniques that are used for standard cases of misinformation, such as Obama’s place of birth. The not-yet-resolved information requires more careful analysis, including basic research about superintelligence, but it can nonetheless leverage some insights from the misinformation literature.
The fact that superintelligence is full of not-yet-resolved information is important in its own right, and it has broader implications for superintelligence misinformation. Specifically, the extent of expert consensus is an important factor in the wider salience of misinformation. This matter is discussed in more detail below. Therefore, while this paper is mainly concerned with the type of misinformation that is clearly false, it will consider both types. With that in mind, the paper now starts to examine strategies for countering superintelligence misinformation.
3. Actors and Audiences
Some purveyors of superintelligence misinformation can be more consequential than others. Ditto for the audiences for superintelligence misinformation. This is important to bear in mind because it provides strategic direction to any efforts to counter the misinformation. Therefore, this section reviews who the important actors and audiences may be.
Among the most important are the R&D groups that may be building superintelligence. While they can be influential sources of ideas about superintelligence, they may be especially important as audiences. For example, if they are misinformed regarding the treacherous turn vs. the sordid stumble, then they could fail to correctly assess the riskiness of their AI system.
Also important are the institutions that support the R&D. At present, most AGI R&D groups are based in either for-profit corporations or universities, and some also receive government funding [20]. Regulatory bodies within these institutions could ensure that R&D projects are proceeding safely, such as via university research review boards [30,31]. Successful regulation depends on being well-informed about the nature of AGI and superintelligence and their prospects and risks. The same applies to R&D funding decisions by institutional funders, private donors, and others. Additionally, while governments are not presently major developers of AGI, except indirectly as funders, they could become important developers should they later decide to do so, and they meanwhile can play important roles in regulation and in facilitating discussion across R&D groups.
Corporations are of particular note due to their long history of spreading misinformation about their own technologies, in particular to convey the impression that the technologies are safer than they actually are [10]. These corporate actors often wield enormous resources and have a correspondingly large effect on the overall issue, either directly or by sponsoring industry-aligned think tanks, writers, and other intermediaries. At this time, there are only hints of such behavior by AI corporations, but the profitability of AI and other factors suggest the potential for much more [12].
Thought leaders on superintelligence are another significant group. In addition to the aforementioned groups, this also includes people working on other aspects of superintelligence, such as safety and policy issues, as well as people working on other (non-superintelligence) forms of AI, and public intellectuals and celebrities. These are all people who can have outsized influence when they comment on superintelligence. That influence can be on the broader public, as well as in quieter conversations with AGI/superintelligence R&D groups, would-be regulators, and other major decision-makers.
Finally, there is the lay public. The role of the public in superintelligence may be reduced due to the issue being driven by technology R&D that (for now at least) occurs primarily in the private sector. However, the public can play roles as citizens of governments that might regulate the R&D and as consumers of products of the corporations that host the R&D. The significance of the public for superintelligence is not well established at this time.
While the above groups are presented in approximate order of importance, it would not be appropriate to formally rank them. What matters is not the importance of the group but the quality of the opportunity that one has to reduce misinformation. This will tend to vary heavily by the circumstances of whoever is seeking to reduce the extent of superintelligence misinformation.
With that in mind, the paper now turns to strategies for reducing superintelligence misinformation.
4. Preventing Superintelligence Misinformation
The cliché “an ounce of prevention is worth a pound of cure” may well be an understatement for misinformation. An extensive empirical literature finds that once misinformation enters into someone’s mind, it can be very difficult to remove.
Early experiments showed that people can even make use of information that they acknowledge to be false. In these experiments, people were told a story and then informed that some of the information in the story was false. When asked, subjects would correctly acknowledge the information to be false, but they would also use it in retelling the story as if it were true. For example, the story could be about a fire caused by volatile chemicals, with a later correction explaining that no volatile chemicals were present. Subjects would acknowledge that the volatile chemicals were absent but then cite them as the cause of the fire. This is logically incoherent. The fact that people do this speaks to the cognitive durability that misinformation can have [32,33].
The root of the matter appears to be that human memory does not simply write and overwrite like computer memory. Corrected misinformation does not vanish. Ecker et al. [15] trace this to the conflicting needs for memory stability and flexibility:
Human memory is faced with the conundrum of maintaining stable memory representations (which is the whole point of having a memory in the first place) while also allowing for flexible modulation of memory representations to keep up-to-date with reality. Memory has evolved to achieve both of these aims, and hence it does not work like a blackboard: Outdated things are rarely actually wiped out and over-written; instead, they tend to linger in the background, and access to them is only gradually lost.
There are some techniques for reducing the cognitive salience of misinformation; these are discussed in detail below. However, in many cases, it would be highly desirable to simply avoid the misinformation in the first place. Therefore, this section presents some strategies for preventing superintelligence misinformation.
The ideas for preventing superintelligence misinformation are inevitably more speculative than those for correcting it. There are two reasons for this. One is that the correction of misinformation has been the subject of a relatively extensive literature, while the prevention of misinformation has received fairly little scholarly attention. (Rare examples of studies on preventing misinformation are [34,35].) The other reason is that the correction of misinformation is largely cognitive and thus conducive to simple laboratory experiments, whereas the prevention of misinformation is largely sociological and thus requires a more complex and case-specific analysis. Nonetheless, given the importance of preventing superintelligence misinformation, it is worth considering potential strategies for doing so.
5. Correcting Superintelligence Misinformation
Correcting misinformation is sufficiently difficult that it will often be better to prevent it from spreading in the first place. However, when superintelligence misinformation cannot be prevented, there are strategies available for correcting it in the minds of those who are exposed to it. Correcting misinformation is the subject of a fairly extensive literature in psychology, political science, and related fields [2,15,33,49]. For readers unfamiliar with this literature, Cook et al. [2] provide an introductory overview accessible to an interdisciplinary readership, while Ecker et al. [15] provide a more detailed and technical survey. This section applies this literature to the correction of superintelligence misinformation.
5.2. Address Pre-Existing Motivations for Believing Misinformation
The human mind tends not to process new information in isolation, but instead processes it in relation to wider beliefs and understandings of the world. This can be very valuable, enabling us to understand the context behind new information and relate it to existing knowledge. For example, people would typically react with surprise and confusion upon seeing an object rise up to the ceiling instead of fall down to the floor. This new information is related to a wider understanding of the fact that objects fall downwards. People may even struggle to believe their own eyes unless there is a compelling explanation (for example, perhaps the object and the ceiling are both magnetized). Additionally, if people did not see it with their own eyes, but instead heard it reported by someone else, they may be even less likely to believe it. In other words, they are motivated to believe that the story is false, even if it is true. This phenomenon is known as motivated reasoning.
While generally useful, motivated reasoning can be counterproductive in the context of misinformation, prompting people to selectively believe misinformation over correct information. This occurs in particular when the misinformation accords better with preexisting beliefs than the correct information. In the above example, misinformation could be that the object fell down to the floor instead of rising to the ceiling.
Motivated reasoning is a major factor in the belief of misinformation about politically contentious issues such as climate change. The climate science consensus is rejected mainly by people who believe that government regulation of industry is generally a bad thing [14,59]. In principle, belief that humans are warming the planet should have nothing to do with belief that government regulations are harmful. It is logically coherent to believe in global warming yet argue that carbon emissions should not be regulated. However, in practice, the science of global warming often threatens people’s wider beliefs about regulations, and so they find themselves motivated to reject the science.
Motivated reasoning can also be a powerful factor in beliefs about superintelligence. A basic worldview is that humans are in control. Per this worldview, human technology is a tool; the idea that it could rise up against humanity is a trope of science fiction, not something to be taken seriously. The prospect of superintelligence threatens this worldview, predisposing people not to take superintelligence seriously. In this context, it may not help that media portrayals of the scholarly debate about superintelligence commonly include references to science fiction, such as pictures of the Terminator. As one expert who is concerned about superintelligence states, “I think that at this point all of us on all sides of this issue are annoyed with the journalists who insist on putting a picture of the Terminator on every single article they publish of this topic” [60].
Motivated reasoning has been found to be linked to people’s sense of self-worth. As one study puts it, “the need for self-integrity—to see oneself as good, virtuous, and efficacious—is a basic human motivation” [61] (p. 415). When correct information threatens people’s self-worth, they are more motivated to instead believe misinformation, so as to preserve their self-worth. Furthermore, motivated reasoning can be reduced by having people consciously reaffirm their own self-worth, such as by recalling to themselves ways in which they successfully live up to their personal values [61]. Essentially, with their sense of self-worth firmed up, they become more receptive to information that would otherwise threaten their self-worth.
As a technology that could outperform humans, superintelligence could pose an especially pronounced threat to people’s sense of self-worth. It may be difficult for people to feel good and efficacious if they would soon be superseded by computers. For at least some people, this could be a significant reason to reject information about the prospect of superintelligence, even if that information is true. Accordingly, it may be valuable for messages about superintelligence to be paired with messages of affirmation.
Another important set of motivations comes from the people active in superintelligence debates. Many people in the broader computer science field of AI have been skeptical of claims about superintelligence. These people may be motivated by a desire to protect the reputation and funding of the field of AI, and in turn protect their self-worth as AI researchers. AI has a long history of boom-bust cycles in which hype about superintelligence and related advanced AI falls flat and contributes to an “AI winter”. Peter Bentley, an AI computer scientist who has spoken out against contemporary claims about superintelligence, is explicit about this:
“Large claims lead to big publicity, which leads to big investment, and new regulations. And then the inevitable reality hits home. AI does not live up to the hype. The investment dries up. The regulation stifles innovation. And AI becomes a dirty phrase that no-one dares speak. Another AI Winter destroys progress” [62] (p. 10). “Do not be fearful of AI—marvel at the persistence and skill of those human specialists who are dedicating their lives to help create it. And appreciate that AI is helping to improve our lives every day” (p. 11).
While someone’s internal motivations can only be inferred from such text, the text is at least suggestive of motivations to protect self-worth and livelihood as an AI researcher, as well as a worldview in which AI is a positive force for society.
To take another example, Torres [9] proposes that Pinker’s dismissal of AGI and superintelligence is motivated by Pinker’s interest in promoting a narrative in which science and technology bring progress—a narrative that could be threatened by the potential catastrophic risk from superintelligence.
Conversely, some people involved in superintelligence debates may be motivated to believe in the prospect of superintelligence. For example, researcher Jürgen Schmidhuber writes on his website that “since age 15 or so, the main goal of professor Jürgen Schmidhuber has been to build a self-improving Artificial Intelligence (AI) smarter than himself, then retire” [63]. Superintelligence is also sometimes considered the “grand dream” of AI [64]. Other common motivations include a deep interest in transformative future outcomes [65] and a deep concern about extreme catastrophic risks [4,66,67]. People with these worldviews may be predisposed to believe certain types of claims about superintelligence. If it turns out that superintelligence will not be built, or would not have transformative or catastrophic effects, then this can undercut people’s deeply held beliefs in the importance of superintelligence, transformative futures, and/or catastrophic risks.
For each of these motivations for interest in superintelligence, there can be information that is rejected because it cuts against the motivations and misinformation that is accepted because it supports the motivations. Therefore, in order to advance superintelligence debates, it can be valuable to affirm people’s motivations when presenting conflicting information. For example, one could affirm that AI computer scientists are making impressive and important contributions to the world, and then explain reasons why superintelligence may nonetheless be a possibility worth considering. One could affirm that science and technology are bringing a great deal of progress, and then explain reasons why some technologies could nonetheless be dangerous. One could affirm that superintelligence is indeed a worthy dream, or that transformative futures are indeed important to pay attention to, and then explain reasons why superintelligence might not be built. Finally, one could affirm that extreme catastrophic risks are indeed an important priority for human society, and then explain reasons why superintelligence may not be such a large risk after all. These affirming messaging strategies could predispose participants in superintelligence debates to consider a wider range of possibilities and make more progress on the issue, including progress towards expert consensus.
Another strategy is to align motivations with accurate beliefs about superintelligence. For example, some AI computer scientists may worry that belief in the possibility of superintelligence could damage reputation and funding. However, if belief in the possibility of superintelligence would bring reputational and funding benefits, then the same people may be more comfortable expressing such belief. Reputational benefits could be created, for example, via slots in high-profile conferences and journals, or by association with a critical mass of reputable computer scientists who also believe in the possibility of superintelligence. Funding could likewise be made available. Because funding and space in conferences and journals are often scarce resources, it could be advantageous to target these resources at least in part toward shifting the motivations of important actors in superintelligence debates. This example of course assumes that it is correct to believe in the possibility of superintelligence. The same general strategy of aligning motivations may likewise be feasible for other beliefs about superintelligence.
The above examples—concerning the reputations and funding of AI computer scientists, the possibility of building superintelligence, and the importance of transformative futures and catastrophic risks—all involve experts or other communities that are relatively attentive to the prospect of superintelligence. Other motivations could be significant for the lay public, policy makers, and other important actors. Research on the public understanding of science finds that cultural factors, such as political ideology, can factor significantly in the interpretation of scientific information [68,69]. Kahan et al. [69] (p. 79) propose to “shield” scientific evidence and related information “from antagonistic cultural information”. For superintelligence, this could mean attempting to frame superintelligence (or, more generally, AI) as a nonpartisan social issue. At least in the US, if an issue becomes politically partisan, legislation typically becomes substantially more difficult to pass. Likewise, discussions of AI and superintelligence should, where reasonably feasible, attempt to avoid close association with polarizing ideologies and cultural divisions.
The fact that early US legislation on AI has been bipartisan is encouraging. For example, H.R.4625, the FUTURE of Artificial Intelligence Act of 2017, was sponsored by John Delaney (Democrat) and co-sponsored by Pete Olson (Republican), and H.R.5356, the National Security Commission Artificial Intelligence Act of 2018, was sponsored by Elise Stefanik (Republican) and co-sponsored by James Langevin (Democrat). This is a trend that should be praised and encouraged to continue.
5.4. Explain Misinformation and Corrections
When people are exposed to misinformation, it can be difficult to correct, as first explained in Section 4. This phenomenon has been studied in great depth, with the terms “continued influence” and “belief perseverance” used for cases in which debunked information continues to influence people’s thinking [72,73]. There is also an “illusion of truth”, in which information explained to be false is later misremembered as true—essentially, the mind remembers the information but forgets its falsity [74]. The difficulty of correcting misinformation is why this paper has emphasized strategies to prevent misinformation from spreading in the first place.
Adding to the challenge is the fact that attempts to debunk misinformation can inadvertently reinforce it. This phenomenon is known as the “backfire effect” [74]. Essentially, when someone hears “X is false”, it can strengthen their mental representation of X, thereby reinforcing the misinformation. This effect has been found to be especially pronounced among the elderly [74]. One explanation is that correcting the misinformation (i.e., successfully processing “X is false”) requires the use of strategic memory, but strategic memory requires dedicated mental effort and is less efficient among the elderly [15]. Unless enough strategic memory is allocated to processing “X is false”, the statement can end up reinforcing belief in X.
These findings about the backfire effect have important consequences for superintelligence misinformation. Fortunately, many important audiences for superintelligence misinformation are likely to have strong strategic memories. Among the prominent actors in superintelligence debates, relatively few are elderly, and many of them have intellectual pedigrees that may endow them with strong strategic memories. On the other hand, many of the prominent actors are busy people with limited mental energy available for processing corrections about superintelligence information. As a practical matter, people attempting to debunk superintelligence misinformation should generally avoid “X is false” messages, especially when their audience may be paying limited attention.
One technique that has been particularly successful at correcting misinformation is the use of refutational text, which provides detailed explanations of why the misinformation is incorrect, what the correct information is, and why it is correct. Refutational text has been used mainly as a classroom tool for helping students overcome false preexisting beliefs about course topics [75,76]. Refutational text has even been used to turn misinformation into a valuable teaching tool [77]. A meta-analysis found refutational text to be the most effective technique for correcting misinformation in the context of science education—that is, for enabling students to overcome preexisting misconceptions about science topics [78].
A drawback of refutational text is that it can require more effort and attention than simpler techniques. Refutational text may be a valuable option in classrooms or other settings in which one has an audience’s extended attention. Such settings include many venues of scholarly communication, which can be important for superintelligence debates. However, refutational texts may be less viable in other settings, such as social media and television news program interviews, in which one can often only get in a short sound bite. Therefore, refutational text may be relatively well-suited for interactions with experts and other highly engaged participants in superintelligence debates, and relatively poorly suited for much of the lay public and others who may only hear occasional passing comments about superintelligence. That said, it may still be worth producing and disseminating extended refutations for lay public audiences, such as in long-format videos and articles for television, magazines, and online. These may tend to only reach the most motivated segments of the lay public, but they can nonetheless be worthwhile.
6. Conclusions
Superintelligence is a high-stakes potential future technology as well as a highly contested socio-technological issue. It is also fertile terrain for misinformation. Making progress on the issue requires identifying and rejecting misinformation and accepting accurate information. Some progress will require technical research to clarify the nature of superintelligence. However, a lot of progress will likely also require the sorts of sociological and psychological strategies outlined in this paper. The most progress may come from interdisciplinary projects connecting computer science, social science, and other relevant fields. Computer science is a highly technical field, but as with all fields, it is ultimately composed of human beings. By appreciating the nuances of the human dimensions of the field, it may be possible to make better progress towards understanding superintelligence and acting responsibly about it.
As the first dedicated study of strategies for countering superintelligence misinformation, this paper has taken a broad view, surveying a range of options. Despite this breadth, there may still be additional options worth further attention. Indeed, this paper has only mined a portion of the insights contained within the existing literature on misinformation. There may also be compelling options that go beyond the literature. Likewise, because of this paper’s breadth, it has given relatively shallow treatment to each of the options. More detailed attention to the various options would be another worthy focus of future research.
An especially valuable focus would be the proposed strategies for preventing superintelligence misinformation. Because misinformation can be so difficult to correct, preventing it may be the more effective strategy. There is also less prior research on the prevention of misinformation. For these reasons, there is likely to be an abundance of important research opportunities on the prevention of misinformation, certainly for superintelligence misinformation and perhaps also for misinformation in general.
For the prevention of superintelligence misinformation, a strategy that may be particularly important to study further is dissuading AI corporations from using their substantial resources to spread superintelligence misinformation. The long history of corporations engaging in such tactics, with a major impact on the surrounding debates, suggests that this could be a highly important factor for superintelligence [12]. It may be especially valuable to study this at an early stage, before such tactics are adopted.
For the correction of superintelligence misinformation, a particularly promising direction is research on the motivations and worldviews of prominent actors and audiences in superintelligence debates. Essentially, what are people’s motivations with respect to superintelligence? Are AI experts indeed motivated to protect their field? Are superintelligence developers motivated by the “grand dream”? Are others who believe in the prospect of superintelligence motivated by beliefs about transformative futures or catastrophic risks? Can attention to these sorts of motivations help people overcome their divergent worldviews and make progress towards consensus on the topic? Finally, are people in general motivated to retain their sense of self-worth in the face of a technology that could render them inferior?
Most important, however, is not the research on superintelligence misinformation, but the efforts to prevent and correct it. It can often be stressful and thankless work, especially amidst the heated debates, but it is essential to ensuring positive outcomes. This paper is one effort towards helping this work succeed. Given the exceptionally high potential stakes, it is vital that decisions about superintelligence be well-informed.