Commentary

The Singularity May Be Near

by Roman V. Yampolskiy
Department of Computer Engineering and Computer Science, Speed School of Engineering, University of Louisville, Louisville, KY 40292, USA
Information 2018, 9(8), 190; https://doi.org/10.3390/info9080190
Submission received: 5 July 2018 / Revised: 24 July 2018 / Accepted: 25 July 2018 / Published: 27 July 2018
(This article belongs to the Special Issue AI and the Singularity: A Fallacy or a Great Opportunity?)

Abstract

Toby Walsh, in “The Singularity May Never Be Near”, gives six arguments supporting his view that a technological singularity may happen, but is unlikely to. In this paper, we analyze each of his arguments and arrive at similar conclusions, but give more weight to the “likely to happen” prediction.

1. Introduction

In February of 2016, Toby Walsh presented his paper “The Singularity May Never Be Near” at AAAI-16 [1]; it was archived on 20 February 2016. In it, Walsh analyzes the concept of the technological singularity. He does not argue that Artificial Intelligence (AI) will fail to achieve super-human intelligence; rather, he suggests that it may not lead to runaway exponential growth. Walsh notes that there is a lot of both optimism and pessimism in the field of AI. Optimists are investing billions of dollars in AI. Pessimists, on the other hand, expect AI to end many things: jobs, wars, and even humanity. Both optimists and pessimists often turn to the idea of the technological singularity: the time when AI becomes able to take over AI research, and a new, much more intelligent species begins to populate the world. If the optimists are right, it will be a moment that fundamentally changes our economy and society. If the pessimists are right, it will also be a moment that significantly changes our economy and society. It is, therefore, worthwhile to invest some time in deciding whether either of them may be right [1]. Walsh defends his view via six different arguments.
Almost exactly a year earlier, on 23 February 2015, Roman Yampolskiy archived his paper “From Seed AI to Technological Singularity via Recursively Self-Improving Software” [2], which was subsequently published as two peer-reviewed papers at AGI-15 [3,4]. In it, Yampolskiy makes arguments similar to those made by Walsh, but also considers evidence in favor of an intelligence explosion. Yampolskiy likewise concludes that the singularity may not happen, but he leans more toward it happening. In the next section, we present arguments from the original paper by Yampolskiy mapped to each of the six arguments given by Walsh in his work.

2. Contrasting Yampolskiy’s and Walsh’s Arguments

To make it easier to contrast the arguments derived from “On the Limits of Recursively Self-Improving Artificially Intelligent Systems” [2], we use Walsh’s naming of the arguments, even if our analysis does not rely on the same example (e.g., no dog).

2.1. Fast-Thinking Dog

Walsh argues: “… speed alone does not bring increased intelligence”. Yampolskiy says: “In practice, the performance of almost any system can be trivially improved by the allocation of additional computational resources such as more memory, higher sensor resolution, faster processor, or greater network bandwidth for access to information. This linear scaling does not fit the definition of recursive improvement, as the system does not become better at improving itself. To fit the definition, the system would have to engineer a faster type of memory, not just purchase more memory units of the type it already has access to. In general, hardware improvements are likely to speed up the system, while software improvements (novel algorithms) are necessary for achievement of meta-improvements.” It is clear from the original paper that performance in this context means the same as intelligence, and since most of our intelligence-testing tools (IQ tests) are time-based, increased speed would in fact lead to a higher Intelligence Quotient, at least in terms of how we currently assess intelligence.

2.2. Anthropocentric

Walsh argues that “human intelligence is itself nothing special”, meaning it is not a point that, “once passed, allows for rapid increases in intelligence”. Yampolskiy says: “We still do not know the minimum intelligence necessary for commencing the RSI (Recursive Self-Improvement) process, but we can argue that it would be on par with human intelligence, which we associate with universal or general intelligence [5], though in principle, a sub-human level system capable of self-improvement cannot be excluded [6]. One may argue that even human-level capability is not enough, because we already have programmers (people or their intellectual equivalence formalized as functions [7], or Human Oracles [8,9]) who have access to their own source code (DNA), but who fail to understand how DNA (nature) works to create their intelligence. This does not even include the additional complexity in trying to improve on existing DNA code or complicating factors presented by the impact of the learning environment (nurture) on the development of human intelligence. Worse yet, it is not obvious how much above human ability an AI needs to be to begin overcoming the ‘complexity barrier’ associated with self-understanding.”

2.3. Meta-Intelligence

Walsh argues: “…strongest arguments against the idea of a technological singularity in my view is that it confuses intelligence to do a task with the capability to improve your intelligence to do a task”, and quotes Chalmers [6] as an example: “If we produce an AI by machine learning, it is likely that soon after, we will be able to improve the learning algorithm and extend the learning process, leading to AI+”. Yampolskiy says: “Chalmers [6] uses logic and mathematical induction to show that if an AI0 system is capable of producing an only slightly more capable AI1 system, a generalization of that process leads to a super-intelligent performance in AIn after n generations. He articulates that his proof assumes that the proportionality thesis, which states that increases in intelligence lead to proportionate increases in the capacity to design future generations of AIs, is true.”
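Chalmers’ induction is easy to state formally. As a sketch (our notation, not Chalmers’), write I(AI_k) for the intelligence of the k-th generation and read the proportionality thesis as the assumption that each generation designs a successor that is better by at least a fixed factor:

\[
I(\mathrm{AI}_{k+1}) \ge (1+\delta)\, I(\mathrm{AI}_{k}) \quad \text{for some fixed } \delta > 0,
\]

which by induction gives

\[
I(\mathrm{AI}_{n}) \ge (1+\delta)^{n}\, I(\mathrm{AI}_{0}) \to \infty \quad \text{as } n \to \infty.
\]

Note that the argument hinges on δ being fixed: if the per-generation gains δ_k shrink fast enough that the product \(\prod_k (1+\delta_k)\) converges, intelligence keeps improving forever yet remains bounded, which is exactly the diminishing-returns objection of Section 2.4.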

2.4. Diminishing Returns

Walsh argues: “There is often lots of low hanging fruit at the start, but we then run into great difficulties to improve after this. … An AI system may be able to improve itself an infinite number of times, but the extent to which its intelligence changes overall could be bounded.” Yampolskiy says, “… the law of diminishing returns quickly sets in, and after an initial significant improvement phase, characterized by discovery of ‘low-hanging fruit’, future improvements are likely to be less frequent and less significant, producing a bell curve of valuable changes.”

2.5. Limits of Intelligence

Walsh argues: “There are many fundamental limits within the universe”. Yampolskiy outlines such limits in great detail: “First of all, any implemented software system relies on hardware for memory, communication, and information processing needs, even if we assume that it will take a non-Von Neumann (quantum) architecture to run such software. This creates strict theoretical limits to computation, which despite hardware advances predicted by Moore’s law will not be overcome by any future hardware paradigm. Bremermann [10], Bekenstein [11], Lloyd [12], Sandberg [13], Aaronson [14], Shannon [15], Krauss [16], and many others have investigated the ultimate limits to computation in terms of speed, communication, and energy consumption, with respect to such factors as speed of light, quantum noise, and gravitational constant.” “In addition to limitations endemic to hardware, software-related limitations may present even bigger obstacles for RSI systems. Intelligence is not measured as a standalone value, but with respect to the problems it allows one to solve. For many problems, such as playing checkers [17], it is possible to completely solve the problem (provide an optimal solution after considering all possible options), after which no additional performance improvement would be possible [18].”
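For a sense of scale, here is a back-of-the-envelope illustration (ours, not from either paper) consistent with the limits Lloyd [12] derives: the Margolus–Levitin theorem caps the number of elementary operations per second available to a computer of mass m at roughly

\[
N_{\max} \approx \frac{2E}{\pi\hbar} = \frac{2mc^{2}}{\pi\hbar} \approx \frac{2 \times (1\ \mathrm{kg}) \times (3\times 10^{8}\ \mathrm{m/s})^{2}}{\pi \times 1.055\times 10^{-34}\ \mathrm{J\,s}} \approx 5\times 10^{50}\ \mathrm{ops/s}
\]

for one kilogram of matter. The number is astronomically large, but finite, which is the point of the quoted passage: hardware scaling alone cannot continue indefinitely.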

2.6. Computational Complexity

Walsh argues: “… no amount of growth in performance will make undecidable problems decidable” and Yampolskiy says, “Other problems are known to be unsolvable regardless of level of intelligence applied to them [19]. Assuming separation of complexity classes (such as P vs. NP) holds [20], it becomes obvious that certain classes of problems will always remain only approximately solvable and any improvements in solutions will come from additional hardware resources, not higher intelligence.”
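The undecidability claim rests on Turing’s [19] diagonal argument, which can be sketched in a few lines of Python. The function halts below is hypothetical; the construction shows that no correct, total implementation of it can exist, however intelligent its author:

def halts(program, argument):
    # Hypothetical oracle: return True iff program(argument) halts.
    # Turing's theorem is precisely that no such total, correct
    # function exists; it is assumed here only to derive a contradiction.
    raise NotImplementedError

def diagonal(program):
    # Do the opposite of whatever the oracle predicts about
    # running `program` on its own source.
    if halts(program, program):
        while True:   # oracle says "halts", so loop forever
            pass
    return None       # oracle says "loops", so halt immediately

# Does diagonal(diagonal) halt? If halts(diagonal, diagonal) returns True,
# then diagonal(diagonal) loops forever; if it returns False, it halts.
# Either way the oracle is wrong, so `halts` cannot be implemented, and
# no growth in speed or intelligence changes this.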

3. Response to Walsh’s Arguments

In this section, we provide a novel analysis of all six arguments presented by Walsh and, via the mapping provided in the previous section, revisit and critically analyze the arguments made by Yampolskiy.

3.1. Fast-Thinking Dog

This argument intuitively makes sense, since nobody has ever managed to train a dog to play chess. However, intuition is no match for a scientific experiment. Animals have successfully been trained to understand and even use human (sign) language and to do some basic math. People with mental and learning disabilities, who were long considered a “lost cause”, have been successfully trained to perform very complex behaviors via alternative teaching methods and longer training spans. It is entirely possible that if one had thousands of years to train a dog, it would learn to play a decent game of chess; after all, it has a neural network very similar to the one used by humans and by deep-learning AI. It may be argued that there is considerable evidence that language and other capabilities are functions of specific brain structures that are largely absent from a dog; thus, thousands of years of training would not suffice, and one would need millions of years of evolution to get a human-level intelligent dog. However, some recent research has documented that people missing most of their brain can have near-normal cognitive capacity [21], and even significant damage to parts of the brain can be overcome thanks to neuroplasticity [22], suggesting that brain structures are much more general. To transfer the analogy to another domain: an Intel 286 processor is not fast enough to perform live speech recognition, but if you speed it up enough, it is. Until an actual experiment can be performed on an accurately simulated digital dog, this argument will remain speculative.

3.2. Anthropocentric

The reason some experts believe ([23], p. 339; [24], Chapter 3) that the human level of intelligence is special is not an anthropocentric bias, but rather the Church-Turing Thesis (CTT). The CTT states that a function over the natural numbers is computable by a prototypical human being if and only if it is computable by some Turing Machine (TM), assuming that such a theoretical human has unlimited computational resources, similar to the infinite tape available to a TM. This creates an equivalence between human-level intelligence and a Universal Turing Machine, which is a very special machine in terms of its capabilities. However, it is important to note that the debate regarding the provability of the CTT remains open [25,26].
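Stated compactly (a standard formulation, with M(n)↓ read as “M halts on input n with output”):

\[
f:\mathbb{N}\to\mathbb{N}\ \text{is effectively calculable} \iff \exists\ \text{TM}\ M\ \text{such that}\ \forall n \in \mathbb{N},\ M(n){\downarrow} = f(n).
\]

Since a single Universal Turing Machine can simulate every such M, this reading places idealized human computation exactly at the universality threshold, which is why crossing the human level might matter in a way that crossing other levels does not.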

3.3. Meta-Intelligence

If a system is superior to human performance in all domains, as required by the definition of superintelligence, it would also be superior in the domains of engineering, computer science, and AI research. Potentially, it would be capable of improving the intelligence of its successor up to any theoretical and physical limits, which might represent an upper bound on optimization power. In other words, if it were possible to improve intelligence, a super-intelligent system would do so; but as that possibility remains speculative, this is probably the strongest of all the presented objections to an intelligence explosion.

3.4. Diminishing Returns

It is a mathematical fact that many functions continue to diverge even while providing diminishing returns. For example, the divergence of the harmonic series (1 + 1/2 + 1/3 + 1/4 + 1/5 + … = ∞) is highly counterintuitive, yet it is a proven mathematical fact. Additionally, as the system itself would be continuously improving, it is possible that the discoveries it makes with respect to future improvements would also improve in terms of their impact on the overall intelligence of the system. Thus, while it is possible that diminishing returns would be encountered, it is just as possible that returns would not diminish.
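Concretely (a standard result, included here for illustration), the partial sums of the harmonic series satisfy

\[
H_{n} = \sum_{k=1}^{n} \frac{1}{k} = \ln n + \gamma + o(1) \to \infty,
\]

where γ ≈ 0.577 is the Euler–Mascheroni constant: each new term 1/(n+1) tends to zero, yet the total grows without bound. Shrinking per-step gains therefore do not by themselves imply a bounded total; whether the returns of self-improvement behave like a convergent or a divergent series is precisely the open question.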

3.5. Limits of Intelligence

While physical and theoretical limits to intelligence certainly exist, they may be far beyond our capacity to reach in practice, in which case they would have no impact on our perception of machine intelligence appearing to undergo an intelligence explosion. It is also possible that physical constants are not permanently set but change dynamically, as has been argued for some such physical “constants”, and that the speed of improvement in intelligence stays below the speed with which such constants change. To take an example from another domain: our universe can be said to be expanding faster than the speed of light, with respect to the distance between some selected regions, so even travelling at the maximum theoretical speed (that of light), we would never reach those regions. Therefore, this is another open question, and a limit may or may not be encountered in the process of self-improvement.

3.6. Computational Complexity

While it is certainly true that undecidable problems remain undecidable, this is not a limitation on an intelligence explosion, as solving them is not a requirement for qualifying as super-intelligent. Moreover, plenty of solvable problems exist at all levels of difficulty. Walsh correctly points out that most limitations associated with computational complexity are only problems for our current models of computation, and they may be avoided by switching to different paradigms of computation, such as quantum computing or some, perhaps not yet discovered, implementation of hypercomputation.

4. Conclusions

Careful side-by-side analysis of the papers by Walsh and Yampolskiy shows an almost identical set of arguments against the possibility of a technological singularity. This level of successful replication in analysis is an encouraging fact in science and gives additional weight to the shared conclusions. Nevertheless, in this paper we provide a novel analysis of Walsh’s/Yampolskiy’s arguments which shows that they may not be as strong as they might initially appear. Future productive directions of analysis may concentrate on a number of inherent advantages which may permit AI to recursively self-improve [27] and possibly succeed in this challenging domain: the ability to work uninterruptedly (no breaks, sleep, vacation, etc.), omniscience (complete and cross-disciplinary knowledge), greater speed and precision (brain vs. processor, human memory vs. computer memory), intersystem communication speed (chemical vs. electrical), duplicability (intelligent software can be copied), editability (source code, unlike DNA, can be quickly modified), near-optimal rationality (if not relying on heuristics) [28], advanced communication (the ability to share cognitive representations of complex concepts), new cognitive modalities (sensors for source code), the ability to analyze low-level hardware (e.g., individual registers), and the addition of hardware (the ability to add new memory, processors, etc.) [29]. The debate regarding the possibility of a technological singularity will continue. Interested readers are advised to read the full paper by Yampolskiy [2], as well as a number of excellent relevant chapters in Singularity Hypotheses [30], which address many arguments not considered in this paper.

Funding

This research received no external funding.

Acknowledgments

The author wishes to thank Toby Walsh for encouraging and supporting work on this paper, as well as the reviewers who provided feedback on an early draft and, by doing so, made the arguments presented in the paper much stronger.

Conflicts of Interest

The author declares no conflict of interest.

References

1. Walsh, T. The Singularity May Never Be Near. In Proceedings of the 2nd International Workshop on AI, Ethics and Society (AIEthicsSociety2016) & 30th AAAI Conference on Artificial Intelligence (AAAI-2016), Phoenix, AZ, USA, 12–13 February 2016.
2. Yampolskiy, R.V. From Seed AI to Technological Singularity via Recursively Self-Improving Software. arXiv 2015, arXiv:1502.06512.
3. Yampolskiy, R.V. Analysis of Types of Self-Improving Software. In Proceedings of the Artificial General Intelligence: 8th International Conference (AGI 2015), Berlin, Germany, 22–25 July 2015; Volume 9205, p. 384.
4. Yampolskiy, R.V. On the Limits of Recursively Self-Improving AGI. In Proceedings of the Artificial General Intelligence: 8th International Conference (AGI 2015), Berlin, Germany, 22–25 July 2015; Volume 9205, p. 394.
5. Loosemore, R.; Goertzel, B. Why an Intelligence Explosion Is Probable. In Singularity Hypotheses; Springer: Berlin/Heidelberg, Germany, 2012; pp. 83–98.
6. Chalmers, D. The Singularity: A Philosophical Analysis. J. Conscious. Stud. 2010, 17, 7–65.
7. Shahaf, D.; Amir, E. Towards a theory of AI completeness. In Proceedings of the 8th International Symposium on Logical Formalizations of Commonsense Reasoning (Commonsense 2007), Stanford, CA, USA, 26–28 March 2007.
8. Yampolskiy, R. Turing Test as a Defining Feature of AI-Completeness. In Artificial Intelligence, Evolutionary Computing and Metaheuristics; Yang, X.-S., Ed.; Springer: Berlin/Heidelberg, Germany, 2013; pp. 3–17.
9. Yampolskiy, R.V. AI-Complete, AI-Hard, or AI-Easy–Classification of Problems in AI. In Proceedings of the 23rd Midwest Artificial Intelligence and Cognitive Science Conference, Cincinnati, OH, USA, 21–22 April 2012.
10. Bremermann, H.J. Quantum noise and information. In Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability, Berkeley, CA, USA, 27 December 1966–7 January 1967; pp. 15–22.
11. Bekenstein, J.D. Information in the holographic universe. Sci. Am. 2003, 289, 58–65.
12. Lloyd, S. Ultimate Physical Limits to Computation. Nature 2000, 406, 1047–1054.
13. Sandberg, A. The physics of information processing superobjects: Daily life among the Jupiter brains. J. Evol. Technol. 1999, 5, 1–34.
14. Aaronson, S. Guest column: NP-complete problems and physical reality. ACM SIGACT News 2005, 36, 30–52.
15. Shannon, C.E. A Mathematical Theory of Communication. Bell Syst. Tech. J. 1948, 27, 379–423.
16. Krauss, L.M.; Starkman, G.D. Universal limits on computation. arXiv 2004, arXiv:astro-ph/0404510.
17. Schaeffer, J.; Burch, N.; Bjornsson, Y.; Kishimoto, A.; Muller, M.; Lake, R.; Lu, P.; Sutphen, S. Checkers Is Solved. Science 2007, 317, 1518–1522.
18. Mahoney, M. Is There a Model for RSI? SL4, 20 June 2008. Available online: http://www.sl4.org/archive/0806/19028.html (accessed on 26 July 2018).
19. Turing, A. On computable numbers, with an application to the Entscheidungsproblem. Proc. Lond. Math. Soc. 1936, 2, 230–265.
20. Yampolskiy, R.V. Construction of an NP Problem with an Exponential Lower Bound. arXiv 2011, arXiv:1111.0305.
21. Feuillet, L.; Dufour, H.; Pelletier, J. Brain of a white-collar worker. Lancet 2007, 370, 262.
22. Johansson, B.B. Brain plasticity and stroke rehabilitation: The Willis Lecture. Stroke 2000, 31, 223–230.
23. Bostrom, N. Superintelligence: Paths, Dangers, Strategies; Oxford University Press: Oxford, UK, 2014.
24. Kurzweil, R. The Age of Intelligent Machines; MIT Press: Cambridge, MA, USA, 1990.
25. Smith, P. An Introduction to Gödel’s Theorems; Cambridge University Press: Cambridge, UK, 2013.
26. Bringsjord, S.; Arkoudas, K. On the Provability, Veracity, and AI-Relevance of the Church–Turing Thesis. In Church’s Thesis after 70 Years; Ontos GmbH: Leipzig, Germany, 2006; Volume 1, p. 66.
27. Sotala, K. Advantages of artificial intelligences, uploads, and digital minds. Int. J. Mach. Conscious. 2012, 4, 275–291.
28. Muehlhauser, L.; Salamon, A. Intelligence Explosion: Evidence and Import. In Singularity Hypotheses; Springer: Berlin/Heidelberg, Germany, 2012; pp. 15–42.
29. Yudkowsky, E. Levels of Organization in General Intelligence. In Artificial General Intelligence; Springer: Berlin/Heidelberg, Germany, 2007; pp. 389–501.
30. Eden, A.H.; Moor, J.H.; Soraker, J.H.; Steinhart, E. Singularity Hypotheses: A Scientific and Philosophical Assessment; Springer Science & Business Media: New York, NY, USA, 2013.
