The Difficulties in Symbol Grounding Problem and the Direction for Solving It
Abstract
1. Introduction
2. Main Solutions of the Symbol Grounding Problem and Their Problems
2.1. Harnad’s Hybrid Symbolic/Sensorimotor System
- (1) Iconic representations: They are the projections of distal objects onto proximal perceptual organs, such as images or sounds. For example, the data could be “the many shapes of an apple projected onto our retina.” These raw data are further processed and abstracted into categorical representations.
- (2) Categorical representations: They are features extracted from iconic representations; that is, the perceptual features shared by all members of a category. For example, a red or green color and an approximately round shape are common to all apples. Categorical representations are the basic units of meaning, and the names of these categories are the basic symbols of the symbol system.
- (3) Symbolic representations: They are composed of basic symbols that designate various categories. For example, a symbolic representation such as “zebra” is composed of two basic symbols, “horse” and “stripes,” whose meanings are derived from iconic and categorical representations.
2.2. Physical Symbol Grounding
2.3. Floridi and Taddeo’s “Zero Semantical Commitment”
Unfortunately, the hybrid model does not satisfy the Z condition. The problem concerns the way in which the hybrid system is supposed to find the invariant features of its sensory projections that allow it to categorize and identify objects correctly…Neural networks can be used to find structures (if they exist) in the data space, such as patterns of data points. However, if they are supervised, e.g., through back propagation, they are trained by means of a pre-selected training set and repeated feedback, so whatever grounding they can provide is entirely extrinsic. If they are unsupervised, then the networks implement training algorithms that do not use desired output data but rely only on input data to try to find structures in the data input space. Units in the same layer compete with each other to be activated. However, they still need to have built-in biases and feature-detectors in order to reach the desired output.[8] (p. 423)
Suppose we have a set of finite strings of signs—e.g., 0s and 1s—elaborated by an AA. The strings may satisfy the semiotic definition—they may have a form, a meaning and a referent—only if they are interpreted by an AA that already has a semantics for that vocabulary. This was also Peirce’s view. Signs are meaningful symbols only in the eyes of the interpreter. But the AA cannot be assumed to qualify as an interpreter without begging the question. Given that the semiotic definition of symbols is already semantically committed, it cannot provide a strategy for the solution of the SGP.[8] (p. 435)
Unfortunately, as Vogt himself acknowledges, the guess game cannot and indeed it is not meant to ground the symbols. The guess game assumes that the AAs manipulate previously grounded symbols, in order to show how two AAs can come to make explicit and share the same grounded vocabulary by means of an iterated process of communication. Using Harnad’s example, multiplying the number of people who need to learn Chinese as their first language by using only a Chinese-Chinese dictionary does not make things any better.[8] (pp. 435–436)
Usually, the symbols constituting a symbolic system neither resemble nor are causally linked to their corresponding meanings. They are merely part of a formal, notational convention agreed upon by its users. One may then wonder whether an AA (or indeed a population of them) may ever be able to develop an autonomous, semantic capacity to connect its symbols with the environment in which the AA is embedded interactively. This is the SGP.[8] (p. 420)
3. The Problem of Consciousness in the Symbol Grounding Problem
3.1. Harnad’s Paradox
Cognitive science typically postulates unconscious mental phenomena, computational or otherwise, to explain cognitive capacities. The mental phenomena in question are supposed to be inaccessible in principle to consciousness. I try to show that this is a mistake, because all unconscious intentionality must be accessible in principle to consciousness; we have no notion of intrinsic intentionality except in terms of its accessibility to consciousness.[26] (p. 585)
If both tests are passed, then the semantic interpretation of its symbols is “fixed” by the behavioral capacity of the dedicated symbol system … the symbol meanings are accordingly not just parasitic on the meanings in the head of the interpreter, but intrinsic to the dedicated symbol system itself.[4] (p. 345)
The problem of meaning is in turn related to the problem of consciousness, or how it is that mental states are meaningful … But whether its symbols would have meaning rather than just grounding is something that even the robotic Turing Test—hence cognitive science itself—cannot determine, or explain.[27]
It is logically possible that an ungrounded symbol system has intrinsic meanings or that a grounded symbol system fails to have them. I’m merely betting (probabilistically, but with reasons) that T3-capacity is sufficient for having a mind and meaning.[28]
Sensory-motor robotic capacities are necessary to ground some, at least, of the model’s words, in what the robot can do with the things in the world that the words are about. But even grounding is not enough to guarantee that—nor to explain how and why–the model feels (if it does). That problem is much harder to solve (and perhaps insoluble).[20]
3.2. The Arguments of Others
Several solutions have then been proposed, with a very promising one by Steels claiming that none of these really solved Harnad’s problem. Taddeo and Floridi introduced the Z condition—concretizing the SGP. Finally, Müller and Fields showed that it is unsolvable, and that it can be delegated to the hard problem of consciousness.[25] (p. 260)
4. The Denial of Intrinsic Intentionality and the New Direction of SGP
4.1. The Theories of Naturalizing Intentionality
4.2. The Evolutionary Robotics Research
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
1. Searle, J. Minds, Brains, and Programs. Behav. Brain Sci. 1980, 3, 417–424.
2. Searle, J. Why Dualism (and Materialism) Fail to Account for Consciousness. In Questioning Nineteenth Century Assumptions about Knowledge; Lee, R., Ed.; SUNY Press: New York, NY, USA, 2010; Volume III, pp. 5–48.
3. Searle, J. Minds and Brains without Programs. In Mindwaves; Basil Blackwell: Oxford, UK, 1987; pp. 209–223.
4. Harnad, S. The Symbol Grounding Problem. Phys. D 1990, 42, 335–346.
5. Fodor, J. Psychosemantics: The Problem of Meaning in the Philosophy of Mind; MIT Press: Cambridge, MA, USA, 1987.
6. Dretske, F. Knowledge and the Flow of Information; MIT Press: Cambridge, MA, USA, 1981.
7. Harnad, S. Minds, Machines and Searle. J. Exp. Theor. Artif. Intell. 1989, 1, 5–25.
8. Taddeo, M.; Floridi, L. Solving the Symbol Grounding Problem: A Critical Review of Fifteen Years of Research. J. Exp. Theor. Artif. Intell. 2005, 17, 419–445.
9. Steels, L. The Symbol Grounding Problem Has Been Solved, so What’s Next? In Symbols and Embodiment: Debates on Meaning and Cognition; Oxford University Press: New York, NY, USA, 2008; pp. 223–244.
10. Bringsjord, S. The Symbol Grounding Problem Remains Unsolved. J. Exp. Theor. Artif. Intell. 2015, 27, 63–72.
11. Chalmers, D. Facing up to the Problem of Consciousness. J. Conscious. Stud. 1995, 2, 200–219.
12. Harnad, S. Doing, Feeling, Meaning and Explaining. On the Human. 2011. Available online: https://eprints.soton.ac.uk/272243/ (accessed on 24 September 2022).
13. Vogt, P. The Physical Symbol Grounding Problem. Cogn. Syst. Res. 2002, 3, 429–457.
14. Brooks, R.A. Elephants Don’t Play Chess. Robot. Auton. Syst. 1990, 6, 3–15.
15. Chandler, D. Semiotics: The Basics, 3rd ed.; Routledge: New York, NY, USA, 2017.
16. Bielecka, K. Symbol Grounding Problem and Causal Theory of Reference. New Ideas Psychol. 2016, 40, 77–85.
17. Nöth, W. Handbook of Semiotics; Indiana University Press: Bloomington, IN, USA, 1990.
18. Raczaszek-Leonardi, J.; Deacon, T. Ungrounding Symbols in Language Development: Implications for Modeling Emergent Symbolic Communication in Artificial Systems. In Proceedings of the 2018 Joint IEEE 8th International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob), Tokyo, Japan, 17–20 September 2018; pp. 232–237.
19. Deacon, T. The Symbolic Species; W.W. Norton: New York, NY, USA, 1997.
20. Harnad, S. Alan Turing and the “Hard” and “Easy” Problem of Cognition: Doing and Feeling. Turing100: Essays in Honour of Centenary Turing Year. 2012. Available online: https://arxiv.org/abs/1206.3658 (accessed on 24 September 2022).
21. Taddeo, M.; Floridi, L. A Praxical Solution of the Symbol Grounding Problem. Minds Mach. 2007, 17, 369–389.
22. Müller, V. Which Symbol Grounding Problem Should We Try to Solve? J. Exp. Theor. Artif. Intell. 2015, 27, 73–78.
23. Müller, V. The Hard and Easy Grounding Problems. Int. J. Signs Semiot. Syst. 2011, 1, 70–73.
24. Rodríguez, D.; Hermosillo, J.; Lara, B. Meaning in Artificial Agents: The Symbol Grounding Problem Revisited. Minds Mach. 2012, 22, 25–34.
25. Cubek, R.; Ertel, W.; Palm, G. A Critical Review on the Symbol Grounding Problem as an Issue of Autonomous Agents. In Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2015; pp. 256–263.
26. Searle, J. Consciousness, Explanatory Inversion, and Cognitive Science. Behav. Brain Sci. 1990, 13, 585–596.
27. Harnad, S. Symbol Grounding Problem. Scholarpedia 2007, 2, 2373.
28. Harnad, S. Symbol Grounding Is an Empirical Problem: Neural Nets Are Just a Candidate Component. 1993. Available online: http://cogprints.org/1588/1/harnad93.cogsci.html (accessed on 24 September 2022).
29. Davidson, P. Toward a General Solution to the Symbol Grounding Problem: Combining Machine Learning and Computer Vision. In Proceedings of the AAAI Fall Symposium Series, Machine Learning in Computer Vision: What, Why and How, Lund, Sweden, 22–24 October 1993; pp. 157–161.
30. Menant, C. Turing Test, Chinese Room Argument, Symbol Grounding Problem. Meanings in Artificial Agents. Am. Philos. Assoc. Newsl. Philos. Comput. 2013, 13, 30–34.
31. Bielecka, K. Why Taddeo and Floridi Did Not Solve the Symbol Grounding Problem. J. Exp. Theor. Artif. Intell. 2015, 27, 138.
32. Dennett, D. The Intentional Stance; MIT Press: Cambridge, MA, USA, 1987.
33. Brentano, F. Psychology from an Empirical Standpoint; Routledge: London, UK, 2012.
34. Millikan, R. Varieties of Meaning; MIT Press: Cambridge, MA, USA, 2004.
35. Neander, K. A Mark of the Mental: In Defense of Informational Teleosemantics; MIT Press: Cambridge, MA, USA, 2017.
36. Holland, J. Outline for a Logical Theory of Adaptive Systems. J. ACM 1962, 9, 297–314.
37. Floreano, D.; Mondada, F. Evolution of Homing Navigation in a Real Mobile Robot. IEEE Trans. Syst. Man Cybern. Part B Cybern. 1996, 26, 396–407.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Citation: Li, J.; Mao, H. The Difficulties in Symbol Grounding Problem and the Direction for Solving It. Philosophies 2022, 7, 108. https://doi.org/10.3390/philosophies7050108