Metacomputable
Abstract
Goals:
- A.
- If the consciousness-stream is metacomputable, it may be incomputable, yet projectors of consciousness may still be computable. In particular, those projectors must be effectively computable in operation (to use a broad analogy, they may be like projectors of holograms instantiated in brain matter or, perhaps, in something else).
- B.
- If the consciousness-stream is meta-metacomputable, then it can be non-computable, and projectors of consciousness (such as brains) may be non-computable as well; yet machines able to produce projectors of consciousness can still be computable. For instance, such machines could be designed in AutoCAD.
- C.
- Consciousness can be viewed in the following manner: (a) functional consciousness is a higher functional level of cognition; and (b) first-person phenomenal consciousness can be grasped solely from the epistemic first-person viewpoint. This view of consciousness seems to avoid the collapse of non-reductive naturalism (physicalism) into Cartesian dualism. Introduction of first-person epistemic consciousness may be facilitated by early Russellian monism, viewed as an approach complementary to the ontic and epistemic viewpoints; this monism was later toyed with by Tom Nagel in The View from Nowhere and in an even earlier essay. It is important to be clear that our point is not to explain away non-reductive consciousness (what Chalmers calls the Hard Problem of Consciousness); the point is to provide a naturalistic account of how we can design a machine, very broadly understood, able to produce first-person consciousness. The notion of the metacomputable makes this distinction clearer, thus elucidating the Engineering Thesis in Machine Consciousness, to which we proceed in the last section of the article.
- D.
- The Engineering Thesis in Machine Consciousness is the argument that, if we knew how first-person consciousness is generated in the brain, we should be able to engineer it. In this article it is reformulated in the context of metacomputability (points A and B above) and in the context of early Russellian monism.
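The layering in points A and B above can be sketched, purely as a toy illustration, in Python. All names here (design_projector, build_projector, the blueprint fields) are hypothetical, and, since everything a program does is computable, the sketch can only illustrate the structural layering of designer, projector, and stream, not incomputability itself.

```python
# Toy sketch of the three-level hierarchy in points A and B.
# Level 2: a computable "designer" (like a CAD program) that outputs
#          a specification for a projector.
# Level 1: a "projector" built from that specification.
# Level 0: the stream the projector emits.
# In real code every level is computable; only the layering is illustrated.

def design_projector(seed: int) -> dict:
    """Level 2: an effectively computable designer producing a blueprint."""
    return {"kind": "projector", "phase": seed % 7, "gain": 1 + seed % 3}

def build_projector(blueprint: dict):
    """Level 1: instantiate a projector from the blueprint."""
    phase, gain = blueprint["phase"], blueprint["gain"]
    def projector(t: int) -> int:
        # Level 0: one element of the emitted "stream"
        return gain * ((t + phase) % 10)
    return projector

blueprint = design_projector(seed=42)
projector = build_projector(blueprint)
stream = [projector(t) for t in range(5)]
print(stream)  # prints [0, 1, 2, 3, 4]
```

On the article's view, the philosophically interesting cases are those where level 0 (and, for point B, level 1) is not itself computable even though the level above it is; the code can only gesture at that distinction.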
1. Introduction
- Assumptions: AI should eventually be able to replicate the functionalities of a conscious human being. Within non-reductive physicalism, first-person consciousness is one such functionality. AI should also be able to replicate the content of conscious experience: to produce a movie of what one sees, feels, and thinks. This is often identified, by philosophers, with phenomenal consciousness. However, if we record the content of one’s visual experiences on tape, e.g., based on fMRI of one’s visual cortex [1,2] and other parts of the brain, this replicates the content of one’s inner experiences without facing the so-called hard problem of consciousness [3].
- A philosophical claim: Since phenomenal content can be read from the visual cortex (thus undermining the so-called privileged-access claim, important to some philosophers), the problem of privileged access is not the gist of the hard problem of consciousness. Moreover, contra David Chalmers, the creator of the notion of the Hard Problem [3,4], the Hard Problem is not the problem of experience and its phenomenal qualities. The gist of the actual hard problem of consciousness, which lends plausibility to Chalmers’ broadly appreciated argument, is the problem of the epistemic subject to whom those experiences appear. The very possibility of the epistemic subject is the gist of the hard problem, not the specificity of phenomenal qualia, which are (features of) internal objects in one’s mind. Those internal objects, as Nagel, Husserl, and classical German philosophy observed, are necessary elements co-constituting the epistemic relationship of subject and object [5].
- The claim that consciousness is more like hardware than software points out that the carrier of the content of consciousness is not more content; it is more like a carrier wave. In this sense, it is more like a piece of hardware than software; such hardware constitutes a condition of first-person epistemicity.
- The possibility of non-reductive physicalism: Due to feuding philosophical approaches to first-person consciousness, ranging from eliminative materialism [6] to substance dualism [7], the issue of first-person consciousness, and its locus, has not been formulated well enough for AI to tackle. Many philosophers do not distinguish between dualism and the non-reducibility of the subject to the object, a deficiency which would make non-reductive physicalism conceptually impossible even to formulate.
- A brief version of the Engineering Thesis in Machine Consciousness argument: Having adopted the framework of broad non-reductive physicalism, I argue the following:
- If we understood how the stream of first-person consciousness is generated in animals, including humans, we would be able to present the process generating it at the level of mathematical equations [8,9]. Contra most forms of computationalism, consciousness is not just such equations, just as a weather front is not just its computational simulation [10]. Yet, once we have equations that correctly and sufficiently describe the way first-person consciousness gets generated, we would be able to use them to engineer a stream of consciousness [8,9,11,12]. As in the engineering of an artificial heart, whose function we already understand, we need to inlay those mathematically grasped functionalities in the right kind of matter [8], which is not identical to running them in a program. This last point marks the difference between the simulation of a process (e.g., running a computational simulation of a nuclear explosion) and the actual physical process (producing a nuclear explosion). In order for a process to be designed, either as a mathematical simulation or as an experiment in nature, there needs to be a way to establish some kind of effectively computable conditions for such a process to occur [13]. Once we have such a computation, and materials science allows us to inlay the program for first-person consciousness in physical substance, we should be able to design a projector of first-person non-reductive consciousness. Such a projector may, or may not, be inlayable only in organic matter. This naturalistic view of the subject of first-person consciousness seems to run counter to the point that the subject of consciousness is something akin to a soul, or at least the gist of a conscious living being [14]. My understanding is that the identification of the gist of an animal, or a human being, with the locus of consciousness is unfortunate and unnecessary.
Maybe there is a gist of a living being, or actually a soul, but its identification with any specific functionality, such as having a beating heart, breathing, thinking, speaking, or perceiving phenomenal qualia, is unnecessary and unfortunate. Those functionalities, if properly and narrowly defined, may be necessary for a living being like us, but they can also be engineered outside such beings. They may also function there, should such an artificial environment allow for their functioning. Having first-person consciousness, which we share with rats and frogs, seems to be one such functionality. One has to have some capacities to make any use of it, but there is no obvious need to assume that it is the gist of one’s existence, whatever this last term would mean.
2. Part I
2.1. Metacomputable
2.1.1. Metacomputable Holograms
2.1.2. Metacomputable Consciousness
2.1.3. Ontological Implications of Metacomputable Revisited
2.1.4. Conclusions of Part I and Heads Up to Part II
- The issue provides an interesting application of the notion of the metacomputable. In fact, my work on metacomputable systems originated from a discussion of non-reductive consciousness; and
- The topic of non-reductive consciousness is complex, and quick and easy attempts to cover it lead to confusion. Thus, we try to explain our terms well enough to avoid any major misunderstandings.
3. Part II
3.1. Stream of Consciousness and Complementarity of the Subject and Object
3.1.1. The Stream of Consciousness
3.1.2. Subject and Object
3.1.3. Complementary Monism: Russelian Analysis of Mind
3.1.4. Summary of Section 3
4. The Engineering Thesis in Machine Consciousness Revisited
4.1. Consciousness as the Epistemic Locus
The Engineering Thesis
- Step I.
- If (1) humans have non-reductive consciousness, and if (2) science can, in principle, explain the world, then (3) science should, in principle, be able to explain non-reductive consciousness.
- Step II.
- To explain some process scientifically, in a strict sense, means to provide a mathematical description of that process (4). As it happens, such a description constitutes an engineering blueprint for creating that process (5).
- Step III.
- Hence, if someday science explains how non-reductive consciousness is generated, it will thereby provide an engineering blueprint of non-reductive consciousness (6). It would then be a matter of standard engineering to turn such a blueprint into an engineering product (7).
- Step IV.
- Engineering non-reductive machine consciousness does not solve, or butcher, the hard problem of consciousness; this is because we assume the “black box” approach. We should be able to build projectors of consciousness even if we are unable to explain consciousness in an ontological sense; this is a practical advantage of the engineering approach over more philosophical attempts (8). The ontological status of non-reductive consciousness (whatever this means) is distinct from the way it can be engineered. If it is engineered properly, the ontological status of engineered first-person consciousness would be relevantly similar to that of ‘natural’ consciousness, though its genesis and social role may differ from those of a biological and social being.
- Step V.
- People raise an epistemic problem: how would we know that a being, e.g., a machine, has non-reductive consciousness? This is a problem since there are reasons to believe that functional consciousness can be engineered without first-person non-reductive phenomenal consciousness. This may be answered in terms of hard and soft AI, and even more clearly by the strong physical interpretation of the Church-Turing thesis [15]. Yet the main answer is simpler: this is a special version of the problem of other minds. Few, even among philosophers, seriously doubt the existence of other minds anymore. If we have a good engineering blueprint, we should have decent epistemic reasons to believe that, by following it, we would attain first-person consciousness. A philosophically more interesting answer would be based on a version of Chalmers’ ‘dancing qualia’ argument [51]. Having established what functionality makes a human being first-person aware, a future scientist surgically removes the relevant part (say, a thalamus) and replaces it with an artificial one, then records behavior, which includes self-reporting and neural correlates in the rest of the CNS. Finally, the experimenter removes the artificial implant and re-implants the natural part of the brain. If there is no significant difference in self-reporting, behavior, the main measurements of neural correlates, and the feel, based on memory, of the periods when the natural and the artificial generators of consciousness were in use, then there are good reasons to believe that the experiment of creating a generator of consciousness has succeeded. This point is valid provided that the shift among the centers of first-person perspective in the revised dancing qualia thought experiment does not cause changes in the memory of events lived with the different projectors of consciousness.
4.2. Stream of First-Person Consciousness and Computability
4.3. Consciousness as Hardware and Epistemicity
5. Conclusions
Acknowledgments
Conflicts of Interest
References
- Nishimoto, S.; Vu, A.T.; Naselaris, T.; Benjamini, Y.; Yu, B.; Gallant, J.L. Reconstructing Visual Experiences from Brain Activity Evoked by Natural Movies. Curr. Biol. 2011, 21, 1641–1646.
- Kay, K.N.; Naselaris, T.; Prenger, R.J.; Gallant, J.L. Identifying natural images from human brain activity. Nature 2008, 452, 352–355.
- Chalmers, D. Facing Up to the Problem of Consciousness. J. Conscious. Stud. 1995, 2, 200–219.
- Chalmers, D. Moving Forward on the Problem of Consciousness. J. Conscious. Stud. 1997, 4, 3–46.
- Boltuc, P. Reductionism and Qualia. Epistemologia 1998, 4, 111–130.
- Wilkes, K. Losing Consciousness. In Consciousness and Experience; Metzinger, T., Ed.; Ferdinand Schöningh: Meppen, Germany, 1995.
- Lowe, E.J. Non-Cartesian substance dualism and the problem of mental causation. Erkenntnis 2006, 65, 5–23.
- Boltuc, N.; Boltuc, P. Replication of the Hard Problem of Consciousness. In AI and Consciousness: Theoretical Foundations and Current Approaches; Chella, A., Manzotti, R., Eds.; AAAI Press: Menlo Park, CA, USA, 2007; pp. 24–29.
- Boltuc, P. The Philosophical Issue in Machine Consciousness. Int. J. Mach. Conscious. 2009, 1, 155–176.
- Piccinini, G. The Resilience of Computationalism. Philos. Sci. 2010, 77, 852–861.
- Boltuc, P. The Engineering Thesis in Machine Consciousness. Techne Res. Philos. Technol. 2012, 16, 187–207.
- Boltuc, P. A Philosopher’s Take on Machine Consciousness. In Philosophy of Engineering and the Artifact in the Digital Age; Guliciuc, V.E., Ed.; Cambridge Scholars Press: Cambridge, UK, 2010; pp. 49–66.
- Copeland, J.; Boltuc, P. Three Senses of ‘Effective’. 2017; in press.
- Searle, J. Mind, a Brief Introduction; Oxford University Press: Oxford, UK, 2004.
- Deutsch, D. Quantum Theory, the Church-Turing Principle and the Universal Quantum Computer. Proc. R. Soc. Ser. A 1985, 400, 97–117.
- Chihara, C. The Semantic Paradoxes: A Diagnostic Investigation. Philos. Rev. 1979, 88, 590–596.
- Barker, J. Truth and Inconsistent Concepts. APA Newslett. Philos. Comput. 2013, 2–12.
- Russell, B. The Analysis of Mind; Allen and Unwin: London, UK, 1921.
- Nagel, T. The View from Nowhere; Oxford University Press: Oxford, UK, 1986.
- Block, N. On a Confusion about a Function of Consciousness. Behav. Brain Sci. 1995, 18, 227–287.
- Price, H.H. Perception; Methuen & Company Limited: London, UK, 1932.
- Parfit, D. Reasons and Persons; Clarendon Press: Oxford, UK, 1984.
- Descartes, R. Discourse on the Method, Optics, Geometry and Meteorology, Revised edition; Olscamp, P.J., Ed.; Hackett: Indianapolis, IN, USA, 2001.
- Descartes, R. Meditations on First Philosophy; Cottingham, J., Ed.; Cambridge University Press: Cambridge, UK, 1996.
- Fodor, J. The Modularity of Mind; MIT Press: Cambridge, MA, USA, 1983.
- Franklin, S.; Baars, B.; Ramamurthy, U. A Phenomenally Conscious Robot. APA Newslett. 2008, 8, 2–4.
- Shalom, A. Body/Mind Conceptual Framework and the Problem of Personal Identity: Some Theories in Philosophy, Psychoanalysis and Neurology; Prometheus Books: Amherst, NY, USA, 1989.
- Berkeley, G. Treatise Concerning the Principles of Human Knowledge; Jacob Tonson: London, UK, 1734.
- The Philosophical Works of Leibnitz; Duncan, G.M., Ed.; Tuttle, Morehouse & Taylor: New Haven, CT, USA, 1890.
- Kant, I. Critique of Pure Reason; Cambridge University Press: Cambridge, UK, 1781.
- Fichte, J.G. The Science of Knowledge: With the First and Second Introductions; Heath, P., Lachs, J., Eds.; Cambridge University Press: Cambridge, UK, 1970.
- Siemek, M.J. Die Idee des Transzendentalismus bei Fichte und Kant; Felix Meiner Verlag: Hamburg, Germany, 1984.
- Husserl, E. Ideas: General Introduction to Pure Phenomenology; Routledge: Abingdon, UK, 1931.
- Ingarden, R. Der Streit um die Existenz der Welt: Existentialontologie; Niemeyer, M., Ed.; The University of California: Oakland, CA, USA, 1964.
- Boltuc, P. First-Person Consciousness as Hardware. APA Newslett. Philos. Comput. 2015, 14, 11–15.
- Damasio, A. Self Comes to Mind: Constructing the Conscious Brain; Vintage Books: New York, NY, USA, 2010.
- Russell, B. The Analysis of Matter; Spokesman Books: Nottingham, UK, 1927.
- Boltuc, P. Ideas of the Complementary Philosophy (Pol.: Idee Filozofii Komplementarnej); Warsaw University: Warsaw, Poland, 1984; pp. 4–8.
- Boltuc, P. Introduction to the Complementary Philosophy (Wprowadzenie do filozofii komplementarnej). Colloquia Communia 1987, 4, 221–246.
- Spinoza, B. Ethics; Curley, E., Ed.; Penguin Classics: London, UK, 2005.
- Armstrong, D. The Nature of Mind; University of Queensland Press: Brisbane, Australia, 1966; pp. 37–48.
- Buber, M. I and Thou; Kaufmann, W., Ed.; Charles Scribner and Sons: New York, NY, USA, 1970.
- Boltuc, P. Is There an Inherent Moral Value in the Second-Person Relationships? In Inherent and Instrumental Value; Abbarno, G.J., Ed.; University Press of America: Lanham, MD, USA, 2014; pp. 45–61.
- Tully, R. Russell’s Neutral Monism. J. Bertrand Russell Stud. 1988, 8, 209–224.
- Kotarbiński, T. Elementy teorii poznania, logiki formalnej i metodologii nauk; Ossolineum: Lwów, Poland, 1929; English translation (with several appendixes concerning reism): Gnosiology: The Scientific Approach to the Theory of Knowledge; Wojtasiewicz, O., Trans.; Pergamon Press: Oxford, UK, 1966.
- Quinton, A. The Nature of Things; Routledge: London, UK, 1973.
- Wittgenstein, L. Logisch-Philosophische Abhandlung. In Annalen der Naturphilosophie; Wilhelm, O., Ed.; Verlag von Veit & Comp.: Leipzig, Germany, 1921.
- Samsonovich, A.V. On a roadmap for the BICA Challenge. Biol. Inspired Cogn. Archit. 2012, 1, 100–107.
- Goertzel, B.; Ikle, M.J.; Wigmore, J. The Architecture of Human-Like General Intelligence. In Theoretical Foundations of Artificial General Intelligence; Wang, P., Goertzel, B., Eds.; Atlantis Press: Paris, France, 2012.
- Goertzel, B. Mapping the Landscape of Human-Level Artificial General Intelligence; AI Magazine: Menlo Park, CA, USA, 2015.
- Chalmers, D. The Conscious Mind: In Search of a Fundamental Theory; Oxford University Press: Oxford, UK, 1996.
- Boltuc, P. Church-Turing Lovers. In Robot Ethics 2.0: From Autonomous Cars to Artificial Intelligence; Lin, P., Abney, K., Jenkins, R., Eds.; Oxford University Press: Oxford, UK, 2017; pp. 214–228.
- Baars, B. A Cognitive Theory of Consciousness; Cambridge University Press: Cambridge, UK, 1993.
- Tononi, G. Integrated information theory of consciousness: An updated account. Arch. Ital. Biol. 2012, 150, 290–326.
- Darmos, S. Quantum Gravity and the Role of Consciousness. In Physics: Resolving the Contradictions between Quantum Theory and Relativity; CreateSpace Independent Publishing Platform: Hong Kong, China, 2014.
- Nagel, T. Mind and Cosmos; Oxford University Press: Oxford, UK, 2012.
- Guarini, M. Carebots and the Ties that Bind. APA Newslett. Philos. Comput. 2016, 16, 38–43.
- Dennett, D. Quining Qualia. In Consciousness in Modern Science; Marcel, A., Bisiach, E., Eds.; Oxford University Press: Oxford, UK, 1988; pp. 42–77.
- Harman, G. Change in View: Principles of Reasoning; M.I.T. Press/Bradford Books: Cambridge, MA, USA, 1986.
- Harman, G. Can Science Understand the Mind? In Conceptions of the Mind: Essays in Honor of George A. Miller; Harman, G., Ed.; Lawrence Erlbaum: Hillside, NJ, USA, 1993; pp. 111–121.
- Harman, G. Explaining an Explanatory Gap. Am. Philos. Assoc. Newslett. Philos. Comput. 2007, 2–3.
- Harman, G. More on Explaining a Gap. Am. Philos. Assoc. Newslett. Philos. Comput. 2008, 8, 4–6.
- Evans, R. A Kantian Cognitive Architecture. Philos. Stud. 2017, in press.
- Evans, R. Kant on Constituted Mental Activity. APA Newslett. Philos. Comput. 2017, 41–54.
© 2017 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Bołtuć, P. Metacomputable. Entropy 2017, 19, 630. https://doi.org/10.3390/e19110630