1. Introduction
Attention to AI technologies and accompanying societal issues commonly clusters into groups focusing on either near-term or long-term AI, with some acrimonious debate between them over which is more important. Following Baum [1], the near-term camp may be called “presentists” and the long-term camp “futurists”.
The current state of affairs suggests two reasons for considering the intermediate period between the near and long terms. First, the medium term (or, interchangeably, the intermediate term or mid term) has been neglected relative to its inherent importance. If there are important topics involving near-term and long-term AI, then perhaps the medium term has important topics as well. Second, the medium term may provide common ground between presentists and futurists. Insofar as both sides consider the medium term to be important, it could offer a constructive topic for channeling energy that might otherwise be spent hashing out disagreements.
Rare examples of previous studies with dedicated attention to medium-term AI are Parson et al. [2,3]. (There is a lot of work that touches on medium-term AI topics, some of which is cited in this paper. However, aside from Parson et al. [2,3], I am not aware of any publications that explicitly identify medium-term AI as a topic warranting dedicated attention.) Both studies [2,3] recognize medium-term AI as important and neglected. Parson et al. [2] acknowledges that some prior work in AI covers topics that are important across all time periods and thus are also relevant to the medium term; it provides a definition of medium-term AI, discussed further below, along with some analysis of medium-term AI topics. Parson et al. [3] posits that the neglect of the medium term may derive in part from the academic disciplines and methodologies of AI researchers, which may point researchers toward either the near term or the long term but not the medium term. The present paper extends Parson et al.’s [2] work on definitions and presents original analysis of a different mix of medium-term AI topics. It also explores the medium term as a potential point of common ground between presentists and futurists.
Several previous attempts have been made to bridge the presentist–futurist divide [1,4,5]. An overarching theme in this literature is that the practical steps needed to make progress are often (though not always) the same for both near-term and long-term AI. Instead of expending energy debating the relative importance of near-term and long-term AI, it may often be more productive to focus attention on the practical steps that both sides of the debate agree are valuable. This practical synergy can arise for two distinct reasons, both with implications for medium-term AI.
First, certain actions may improve near-term AI and the near-term conversation about long-term AI. Such actions will often also improve the near-term conversation about mid-term AI. For example, efforts to facilitate dialog between computer scientists and policymakers can improve the quality of policy discussions for near-, mid-, and long-term AI. Additionally, efforts encouraging AI developers to take more responsibility for the social and ethical implications of their work can influence work on near-, mid-, and long-term AI. For example, the ethics principles that many AI groups have recently established [6] are often quite general and can apply to work on near-term and long-term AI, as can analyses of the limitations of these principles [7]. Here it should be explained that there is near-term work aimed at developing systems that may only become operational over the mid or long term, especially work consisting of basic research toward major breakthroughs in AI capabilities.
Second, certain actions may improve near-term AI and, eventually, long-term AI. These actions may often also eventually improve mid-term AI. For example, some research on how to design near-term AI systems more safely may provide a foundation for also making mid- and long-term AI systems safer. This is seen in the AI safety study of Amodei et al. [8], which is framed in terms of near-term AI; lead author Amodei describes the work as also being relevant for long-term AI [9]. Additionally, AI governance institutions established over the near term may persist into the mid and long term, given the durability of many policy institutions. Of course, AI system designs and governance institutions that persist from the near term to the long term would also be present throughout the mid term. Furthermore, evaluating their long-term persistence may require understanding what happens during the mid term.
Dedicated attention to the medium term can offer another point of common ground between presentists and futurists: both sides may consider the medium term to be important. Presentists may find the medium term to be early enough for their tastes, while futurists find it late enough for theirs. As elaborated below, the reasons presentists give for favoring near-term AI are of a different type than the reasons futurists give for favoring long-term AI. Presentists tend to emphasize immediate feasibility, certainty, and urgency, whereas futurists tend to emphasize extreme AI capabilities and consequences. Potentially, the medium term features a widely appealing mix of feasibility, certainty, urgency, capabilities, and consequences. Or not: it is also possible that the medium term would sit in a “dead zone”, being too opaque to merit presentist interest and too insignificant to merit futurist interest. This matter will be a running theme throughout the paper and is worth expressing formally:
The medium-term AI hypothesis: There is an intermediate time period in which AI technology and accompanying societal issues are important from both presentist and futurist perspectives.
The medium-term AI hypothesis can be considered in either empirical or normative terms. As an empirical hypothesis, it proposes that presentists and futurists actually consider the medium term to be important, or that they would tend to agree that the medium term is important if given the chance to reflect on it. As a normative hypothesis, it proposes that presentists and futurists should agree that the medium term is important, given the value commitments of their respective perspectives. Given the practical goal of bridging the presentist–futurist divide, the empirical form is ultimately more important: what matters is whether the specific people on opposite sides of the divide would, upon consideration, find common ground in the medium term. (It is unlikely that they currently do find common ground in the medium term, due to lack of attention to it.) Empirical study of presentist and futurist reactions to the medium term is beyond the scope of the present paper. Instead, the aim here is to clarify the nature of the presentist and futurist perspectives in terms of the attributes of the medium term that they should consider important, and then to examine whether the medium term is likely to possess these attributes. The paper therefore proceeds mainly in normative terms, though grounded in empirical observation of the perspectives articulated by actual presentists and futurists.
More precisely, the medium-term AI hypothesis proposes that the perspectives underlying both groups should rate the medium term as important. This presumes that “perspectives” can rate things as important even when detached from the people who hold them. Such detachment is permitted here simply so that the analysis can proceed without going through the more involved (but ultimately important) process of consulting with the people who hold presentist and futurist perspectives.
Evaluating the medium-term AI hypothesis is one aim of this paper. First, though, more needs to be said on how the medium term is defined.
2. Defining the Medium Term
The medium term is, of course, the period of time between the near term and the long term. However, discussions of near-term and long-term AI often do not precisely specify what constitutes near-term and long-term. Some ambiguity is inevitable due to uncertainty about future developments in AI. Additionally, different definitions may be appropriate for different contexts and purposes—for example, what qualifies as near-term may be different for a programmer than for a policymaker. Nonetheless, it is worth briefly exploring how the near, mid, and long terms can be defined for AI. Throughout, it should be understood that the near, mid, and long terms are all defined relative to the vantage point of the time of this writing (2019–2020). As time progresses, what classifies as near-, mid-, and long-term can shift.
The first thing to note is that near- vs. mid- vs. long-term can be defined along several dimensions. The first is chronological: the near term goes from year A to year B, the mid term from year B to year C, and the long term from year C to year D. The second is in terms of the feasibility or ambitiousness of the AI: the near term is what is already feasible, the long term is the AI that would be most difficult to achieve, and the mid term is somewhere in between. Third, and related to the second, is the degree of certainty about the AI: the near term is what clearly can be built, the long term is the most uncertain and speculative, and the mid term is somewhere in between. Fourth is the degree of sophistication or capability of the AI: the near term is the least capable, the long term is the most capable, and the mid term is somewhere in between. Fifth, and related to the fourth, is with respect to impacts: the near term has (arguably; see below) the mildest impacts on human society and the world at large, the long term has the most extreme impacts, and the mid-term is somewhere in between. Sixth is urgency: the near term is (arguably) the most urgent, the long term the least urgent, and the mid term is somewhere in between.
The dimension of impacts is somewhat complex and worth briefly unpacking. Near-term AI may have the mildest impacts in the sense that, if AI continues to grow more capable and to be used more widely and in more consequential settings, it will tend to have progressively greater impacts on the human society that exists at the time. Put differently, if A = the impacts of near-term AI on near-term society, B = the impacts of mid-term AI on mid-term society, and C = the impacts of long-term AI on long-term society, then (it is supposed) A < B < C. There are, however, alternative ways of conceptualizing impacts. One could take a certain presentist view and argue that only present people matter for purposes of moral evaluation, as discussed by Arrhenius [10], or that future impacts should be discounted, as in many economic cost–benefit evaluations. In these cases, near-term AI may be evaluated as having the largest impacts because the impacts of mid- and long-term AI matter less or not at all. Or, one could consider the impacts of a period's AI on all subsequent periods: the impacts of near-term AI on the near, mid, and long terms, the impacts of mid-term AI on the mid and long terms, and the impacts of long-term AI on the long term. This perspective recognizes the potential for durable impacts of AI technology and would tend to increase the evaluated size of the impacts of near- and mid-term AI. While recognizing the merits of these alternative conceptions of impacts, this paper uses the first conception, comparing A vs. B vs. C.
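These conceptions can be stated slightly more formally. The following is an illustrative formalization only; the notation I(x, t) and the discount factor d are introduced here for exposition and are not notation from the cited works. Let I(x, t) denote the impact of period-x AI on period-t society, with periods N (near), M (mid), and L (long).

```latex
% Conception used in this paper: each period's AI is evaluated against its own society.
\[ A = I(N,N), \qquad B = I(M,M), \qquad C = I(L,L), \qquad \text{with } A < B < C \text{ supposed.} \]

% Discounted (or present-people-only) conception, treating the periods as equally spaced
% for illustration; setting d = 0 recovers the view that only present people matter morally.
\[ A' = I(N,N), \qquad B' = d\,I(M,M), \qquad C' = d^{2}\,I(L,L), \qquad 0 \le d < 1. \]

% All-periods conception: each period's AI is credited with its durable downstream impacts.
\[ A'' = I(N,N) + I(N,M) + I(N,L), \qquad B'' = I(M,M) + I(M,L), \qquad C'' = I(L,L). \]
```

Under the discounted conception, A′ can exceed B′ and C′ even when A < B < C, which is the sense in which near-term AI may be evaluated as having the largest impacts.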
There may be no one correct choice of dimensions for defining the near/mid/long term. Different circumstances may entail different definitions. For example, Parson et al. [2] are especially interested in societal impacts and implications for governance, and thus use definitions rooted primarily in impacts. They propose that, relative to near-term AI, medium-term AI has “greater scale of application, along with associated changes in scope, complexity, and integration” [2] (pp. 8–9), and, relative to long-term AI, medium-term AI “is not self-directed or independently volitional, but rather is still to a substantial degree developed and deployed under human control” [2] (p. 9). (One can quibble with these definitions. Arguably, near-term AI is already at a large scale of application, and there may be no clear demarcation in scale between near- and mid-term AI. Additionally, while it is proposed that long-term AI could escape human control, that would not necessarily be the case. Indeed, discussions of long-term AI sometimes focus specifically on the question of how to control such an AI [11].) In this account, the medium term is a period with substantially greater use of AI in decision-making, potentially to the point at which “the meaning of governance” is challenged [2] (p. 9), but humans remain ultimately in control. This is a reasonable definition of medium-term AI, especially for impacts and governance purposes.
The present paper is more focused on the presentist/futurist debate, and so it is worth considering the definitions used in that debate. Elements of each of the six dimensions can be found, but they are not found uniformly. Presentists often emphasize feasibility and degree of certainty. Computer scientist Andrew Ng memorably likened attention to long-term AI to worrying about “overpopulation on Mars” [12], by which Ng meant that it might eventually be important, but it is too opaque and disconnected from current AI to be worth current attention. Another presentist theme is urgency, especially with respect to the societal implications of near-term AI. Legal scholar Ryan Calo [13] (p. 27) argues that “AI presents numerous pressing challenges to individuals and society in the very short term” and therefore commands attention relative to long-term AI. For their part, futurists often emphasize capability and impacts. Commonly cited is the early remark of I.J. Good [14] (p. 33) that “ultraintelligent” AI (AI with intelligence significantly exceeding that of humans) could be “the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control”. Chronological definitions are less common. One exception is Etzioni [15], who downplays long-term AI on the grounds that it is unlikely to occur within 25 years. (In reply, futurists Dafoe and Russell [16] argue that potential future events can still be worth caring about even if they will not occur within the next 25 years.)
Taking the above into account, this paper will use a feasibility definition for near-term AI and a capability definition for long-term AI. The paper defines near-term AI as AI that already exists or is actively under development with a clear path to being built and deployed. Per this definition, near-term AI does not require any major research breakthroughs, but instead consists of straightforward applications of existing techniques. The terms “clear”, “major”, and “straightforward” are vague, and it may be reasonable to define them in different ways in different contexts. (This vagueness is relevant for the medium-term AI hypothesis; more on this below.) Nonetheless, this definition points to current AI systems plus the potential future AI systems that are likely to be built soon and do not depend on research breakthroughs that might or might not manifest.
The paper defines long-term AI as AI that has at least human-level general intelligence. Interest in long-term AI often focuses on human-level artificial intelligence (HLAI), artificial general intelligence (AGI), strong AI, and artificial superintelligence (ASI). However, there may be narrow AI systems that are appropriate to classify as long-term. For example, Cave and ÓhÉigeartaigh [4] (p. 5) include “wide-scale loss of jobs” as a long-term AI issue separately from the prospect of superintelligence. (Note that the most widespread loss of jobs may require AGI. For example, Ford [17] (p. 3) writes, “If, someday, machines can match or even exceed the ability of a human being to think and to conceive new ideas—while at the same time enjoying all the advantages of a computer in areas like computational speed and data access—then it becomes somewhat difficult to imagine just what jobs might be left for even the most capable human workers”.) A plausible alternative definition of long-term AI is AI that achieves major intellectual milestones and/or has large and transformative effects. This is more of a catch-all definition that could include sufficiently important narrow AI systems, such as those involved in job loss. In this definition, the terms “major”, “large”, and “transformative” are vague; indeed, current AI systems arguably meet it. The paper will therefore define long-term AI in terms of HLAI, while noting the case for the alternative definition.
The paper’s use of a feasibility definition for near-term and a capability definition for long-term may be consistent with common usage in AI discussions. However, the use of a different dimension for near-term (feasibility) than for long-term (capability) can induce some chronological blurring in two important respects.
First, AI projects that are immediately practical may have long time horizons. This may be especially common for projects in which AI is only one component of a more complex and durable system. Military systems are one domain with long lifespans: a 2016 report found that some US nuclear weapon systems were still using 1970s-era 8-inch floppy disks [18]. AI is currently being used and developed for a wide variety of military systems [19]. Some of these could conceivably persist for many decades into the future, perhaps in the B-52H bomber, which was built in the 1960s and is planned to remain in service through the 2050s [20]. (AI is used in bombers, for example, to improve targeting [21]. AI is used more extensively in fighters, which execute complex aerial maneuvers at rapid speeds and can gain substantial tactical advantage from increased computational power and autonomy from human pilots [22].) One can imagine the B-52H being outfitted with current AI algorithms and retaining these algorithms into the 2050s, just as the 8-inch floppy disks have been retained in other US military systems. Per this paper’s definitions, this B-52H AI would classify as near-term AI that happens to remain in use over a long time period, well beyond the 25 years that Etzioni [15] treats as the “foreseeable horizon” worthy of attention.
Second, AI systems with large and transformative effects, including AGI, could potentially be built over relatively short time scales. When AGI and related forms of AI will be built is a matter of considerable uncertainty and disagreement. Several studies have asked AI researchers (predominantly computer scientists) when they expect AI with human or superhuman capacity to be built [23,24,25,26]. (Note that these studies are generally framed as surveys of experts, but it is not clear that the survey participants are expert in the question of when AGI will be built. Earlier predictions about AI have often been unreliable [27]. This may be a topic for which there are no experts; on this issue, see Morgan [28].) The surveyed researchers give estimates spanning many decades, with some estimates being quite soon.
Figure 1 presents median estimates from these studies. Median estimates conceal the range of estimates across survey participants, but the full range could not readily be presented in Figure 1 because, unfortunately, only Baum et al. [23] included the full survey data. If the early estimates shown in Figure 1 are correct, then, by this paper’s definitions, long-term AI may appear fairly soon, potentially within the next 25 years.
3. The Medium-Term AI Hypothesis
With the above definitions in mind, it is worth revisiting the medium-term AI hypothesis. If presentists are, by definition, only interested in the present, then they would not care at all about the medium term. However, the line between the near term and the medium term is blurry. As defined above, near-term AI must have a clear path to being built and deployed, but “clearness” is a matter of degree. As the path to being built and deployed becomes less and less clear, the AI transitions from near-term to medium-term, and presentists may have less and less interest in it. From this standpoint, presentists may care somewhat about the medium term, especially the earlier portions of it, but not to the same extent as they care about the near term.
Alternatively, presentists might care about the medium term because the underlying things they care about also arise in the medium term. Some presentists are interested in the implications of AI for social justice, or for armed conflict, or for transportation, and so on. Whereas it may be difficult to think coherently about the implications of long-term AI for these matters, it may not be so difficult for medium-term AI. For example, a major factor in debates about autonomous weapons (machines that use AI to select and fire upon targets) is whether these weapons could adequately discriminate between acceptable and unacceptable targets (e.g., enemy combatants vs. civilians) [29,30]. Near-term AI cannot adequately discriminate; medium-term AI might be able to. Therefore, presentists concerned about autonomous weapons have reason to be interested in medium-term AI. Whether this interest extends to other presentist concerns (social justice, transportation, etc.) must be considered on a case-by-case basis.
For futurists, the medium term may be important because it precedes and influences the long term. If the long term begins with the advent of human-level AGI, then this AI will be designed and built during the medium term. Some work on AGI is already in progress [31], but it may be at a relatively early stage. Figure 1 illustrates the uncertainty: the earliest estimates for the onset of AGI (and similar forms of AI) may fall within the near term, whereas the latest estimates fall much, much later. Futurists may tend to be most interested in the period immediately preceding the long term because it has the most influence on AGI. Their interest in earlier periods may depend on the significance of those periods' causal influence on AGI.
It follows that there are two bases for assessing the medium-term AI hypothesis. First, the hypothesis could hold if medium-term AI technology both resembles near-term AI and influences long-term AI. In that case, the technology itself may be of interest to both presentists and futurists. Alternatively, the hypothesis could hold if the societal implications of medium-term AI raise similar issues as near-term AI, and if the medium-term societal context also influences long-term AI. For example, medium-term autonomous weapon technology could raise similar target discrimination issues as are found for near-term technology, and it could also feed arms races for long-term AI. (To avoid confusion, it should be understood that discussions of long-term AI sometimes use the term “arms race” to refer to general competition to be the first to build long-term AI, without necessarily any connection to military armaments [32]. Nonetheless, military arms races for long-term AI are sometimes posited [33].)
Both of the above derive from some measure of continuity between the near, mid, and long terms. Continuity can be defined in terms of the extent of change in AI systems and related societal issues. If near-term AI techniques and societal dimensions persist to a significant extent through the end of the medium term (when long-term AI is built), then the medium-term AI hypothesis is likely to hold.
The chronological duration of the medium term may be an important factor. Figure 1 includes a wide range of estimates for the start of the long term. If the later estimates prove correct, then the medium term could be quite long. A long duration would likely tend to mean less continuity across the near, mid, and long terms, and therefore less support for the medium-term AI hypothesis. That is not necessarily the case. One can imagine, for example, that AI needs just one additional technical breakthrough to go from current capabilities to AGI, and that it will take many decades for this breakthrough to be made. One can also imagine that the issues involving AI will remain fairly constant until this breakthrough is made. In that case, near-term techniques and issues would persist deep into the medium term. However, it is more likely that a long-lasting medium term would have less continuity and a larger dead-zone period with no interest from either presentists or futurists. If AGI will not be built for, say, another 500 years, presentists are unlikely to take an interest in the period immediately preceding it.
Figure 2 presents two sketches of the degree of interest that presentists and futurists may hold in the medium term. Figure 2a shows a period of overlap in which both presentists and futurists have some interest; here, the medium-term AI hypothesis holds. Figure 2b shows a dead zone with no overlap of interest; here, the medium-term AI hypothesis does not hold. Figure 2 is presented strictly for illustrative purposes and does not indicate any rigorously derived estimate of actual presentist or futurist interests. It serves to illustrate how presentists’ degree of interest could decline over time and futurists’ degree of interest could increase over time, with implications for the medium-term AI hypothesis. Figure 2 shows presentist interest decreasing, and futurist interest increasing, approximately exponentially over time; there is no particular basis for this, and the curves could just as easily have been drawn differently.
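The logic of Figure 2 can also be expressed as a small computational sketch. The following is purely illustrative and all of its parameters are assumptions introduced here, not estimates from the paper: the exponential form of the curves, the 0.15 decay and growth rates, the 0.2 importance threshold, and the 15- and 60-year horizons for the arrival of long-term AI. It simply checks whether, under those assumptions, any period would be rated as important by both presentists and futurists.

```python
# Illustrative sketch only; all parameters are hypothetical, not estimates from the paper.
import math

def presentist_interest(year, decay=0.15):
    """Presentist interest declines the further a period lies beyond the present."""
    return math.exp(-decay * year)

def futurist_interest(year, horizon, growth=0.15):
    """Futurist interest rises as the period approaches the (assumed) advent of
    long-term AI at `horizon` years from now."""
    return math.exp(-growth * max(horizon - year, 0))

def shared_interest_years(horizon, threshold=0.2):
    """Years (from now) that both perspectives rate above the importance threshold."""
    return [y for y in range(horizon + 1)
            if presentist_interest(y) >= threshold
            and futurist_interest(y, horizon) >= threshold]

if __name__ == "__main__":
    for horizon in (15, 60):  # earlier vs. later assumed arrival of long-term AI
        years = shared_interest_years(horizon)
        if years:
            print(f"horizon={horizon}: overlap in years {years[0]}-{years[-1]}"
                  " (medium-term AI hypothesis holds, as in Figure 2a)")
        else:
            print(f"horizon={horizon}: dead zone, no shared interest"
                  " (hypothesis fails, as in Figure 2b)")
```

With these particular assumptions, the shorter horizon yields an overlapping window of shared interest (as in Figure 2a), while the longer horizon yields a dead zone (as in Figure 2b), consistent with the point above that a long-lasting medium term tends to weaken support for the hypothesis.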
To sum up, assessing the medium-term AI hypothesis requires examining what medium-term AI techniques and societal dimensions may look like, and the extent of continuity between the near-, mid-, and long-term periods.