Entry

Higher Cognition: A Mechanical Perspective

by
Robert Friedman
Department of Biological Sciences, University of South Carolina, Columbia, SC 29208, USA
Retired.
Encyclopedia 2022, 2(3), 1503-1516; https://doi.org/10.3390/encyclopedia2030102
Submission received: 2 July 2022 / Revised: 18 August 2022 / Accepted: 22 August 2022 / Published: 22 August 2022
(This article belongs to the Section Social Sciences)

Definition

Cognition is the acquisition of knowledge by the mechanical process of information flow in a system. In cognition, input is received by the sensory modalities, and the output may occur as a motor or other response. The sensory information is internally transformed into a set of representations, which is the basis for downstream cognitive processing. This contrasts with the traditional definition based on mental processes, a phenomenon of the mind rooted in past philosophical ideas.

1. Definition of Cognition

1.1. A Scientific Definition of Cognition

Dictionaries commonly refer to cognition as a set of mental processes for acquiring knowledge [1,2]. However, this view originates from the assignment of mental processes to the act of thinking and is anchored in philosophical descriptions of the mind, including the concepts of consciousness and intentionality [1,3,4]. It also presumes that objects of nature are reflections of true and determined forms, and it creates a division between the substance of matter and that of the mind.
Instead, a material description of cognition is restricted to the physical processes available to nature. An example is the study of primate face recognition, where measurements of facial features serve as the basis of object recognition [5]. This perspective also excludes the concept of innate and prior knowledge of objects; therefore, cognition must form a representation of objects from their constituent parts [6,7]. Likewise, there is no expectation that the physical processes of cognition are functionally deterministic.
The following sections on cognition focus on an informational perspective. Information flow as a physical process is a fundamental cause of cognition, so this scale of interest is informative for forming expectations about cognitive processes. These expectations do not exclude the other levels of the biological hierarchy that yield insight into brain function, such as the action of individual neurons in regions of the primate brain and their effect on motor function [8,9].

1.2. Mechanical Perspective of Cognition

Scientific work generally acknowledges a mechanical description of information and the physical processes as drivers of cognition. However, a perspective based on the duality of physical and mental processing is retained to a small degree in the academic world. For example, there is a conjecture about the relationship between the human mind and a simulation of it [10]. This idea is based on assumptions about intentionality and the act of thinking. In contrast with this view, a physical process of cognition is defined by the generation of an action in neuronal cells without dependence on non-material processes [11].
Another consequence of the physical limits on cognition is observed in the intention to move a limb, such as a person reaching for an object across a table. Studies have replaced the assignment of intentionality with a material interpretation of this action, and have shown that the relevant neural activity occurs before awareness of the associated motor action [12].
Across the natural sciences, the neural system has been studied at various biological scales, from the molecular level up to the level of information processing [13,14]. At this higher-level perspective, neural systems are functionally analogous to the deep learning models of computer science, as both are based on information and its flow [15,16]. This allows a comparative approach for understanding cognitive processes. However, at the lower scale, the artificial neural system depends on an abstract model of neurons and the network, so at that scale the animal neural system is not likely comparable.

1.3. Scope of this Definition

The definition of cognition as used here is restricted to a set of mechanical processes [1]. Moreover, cognitive processing is described from a broad perspective, with some examples from the visual system along with insights from the deep learning approaches in computer science.
This is an informational perspective of cognition, since this scale has explanatory power for the causes of knowledge. The other scales, both the large and the small, seem less tractable for constructing explanations of cognition. At the larger scale, if we consider mental processes as occurrences of the mind, then the phenomena of cognition are subject to mere interpretation, guided by perception and impression, and are not restricted to the true designs and processes of nature. At the lower scale, modeling cognition as an activity at the level of individual neuronal cells is less tractable, given the complexity of the corresponding experiments. These notions of scale and perspective are pivotal in finding explanations for the phenomena of nature [17].
There are also insights from other perspectives, but the definition that follows is not a systematic review of studies of the mind or a broad survey of empirical knowledge across the cognitive sciences; instead, it is a narrow survey of the physical processes and the phenomena of information in higher cognition. This definition is also intended for a general audience and for academic workers outside the science of cognition. However, within the practice of cognitive science, the technical terms may be defined in a different context, consistent with the stricter definitions as recommended by this entry [8,18].

1.4. Organization of Cognition as a Science

The following sections represent the categories of cognition and its processes. However, they do not reflect the true divisions in cognition, since the cognitive processes are not fully understood at a mechanistic level. Instead, the divisions are based on commonly used boundaries in thinking about cognition, such as in the division between sensory perception and higher reasoning regarding concepts.
The last section on conceptual knowledge is a synthesis of ideas from the previous sections, and serves the purpose of yielding an insight into the general properties of higher cognition. The overarching theme of the sections is that the neural network and its information flow are the foundation for a deeper understanding of the cognitive processes.

1.5. Definition of the Terminology

This entry uses terminology from science and engineering that requires further clarification. An example is a (mental) representation. A representation is commonly defined as information that corresponds to an idea or image. This is a particular case where there is reference to the mind, but the term is also a reminder that these phenomena originate in the brain itself, encoded in its neural network.
Another term is “probabilistic”, as a description of a process. This refers to a process that is expected to vary and potentially lead to different outcomes. Representations are expected to occur by this process, so their properties will vary among individuals.
The reference to deep learning originates in the field of engineering. This is a many-layered neural network that is particularly suited for learning about the abovementioned representations. These artificial neural networks share a network-like organization with that of the brain in animals. The other terms and their use are expected to follow their commonly accepted meanings as found in a dictionary of scientific or common words. For example, the biosphere of the Earth refers to that portion of the planet shaped by biological and geological processes. These processes are dynamic, so they have changed over time and across the surface of the Earth.
Lastly, the informational processes have been described as physical processes because information flow is a phenomenon of the physical world. Therefore, matter and energy are required for this phenomenon to occur. The proximate mechanism of the flow of information in the brain is in the electrochemical dynamics that occur among neuronal cells, involving the movement of the ions of chemical elements that generate an electromotive force (voltage), and the diffusion of molecular-level neurotransmitters. The neural system is also influenced by humoral factors, such as the chemical messengers known as hormones.

2. Visual Perception

2.1. Evolution and Probabilistic Processes

The processes of vision occupy about one-half of the cerebral cortex of the human brain [19]. Similar to the many sensory forms of language processing in humans, vision is a major source of input and recognition of the outside world. The complexity of the sensory systems reveals an important aspect of the evolutionary process, as observed across cellular life with its countless forms and novelties. Evolution depends on physical processes, such as mutation and exponential population growth, along with a dependence on geological time scales for building biological complexity, as observed at all scales of life. These effects have also formed and shaped the biosphere of the Earth.
This vast complexity across living organisms is revealed by deconstruction of the camera eye in animals. This novel form emerged from a simpler one, such as an eye spot, and depended on a sequence of adaptations over time [20,21]. These rare and unique events did not hinder the independent formation of the camera eye, which occurs in both the lineage of vertebrates and the unrelated lineage of cephalopods. This is an example of evolution as a powerful generator of change in physical traits, although counterforces restrict evolution from searching across an infinite number of possible novelties, including constraints found in the genetic code and in the physical processes that shape these traits.
The evolution of cognition and neural systems is expected to occur by a probabilistic process similar to that theorized in the origin and design of the camera eye. An alternative to this bottom-up design in nature is to suggest a set of non-probabilistic processes and a top-down design consistent with determinism. For the hypothesis of determinism in nature, there is an expectation of true and perfect forms, as Plato theorized, but this hypothesis is not favorable for descriptions of activity in the brain.
Therefore, with the probabilistic view of evolution and the force of natural selection, the neural systems are expected to show a large degree of optimality in their design, as observed across the other biological systems [22]—especially since neural systems co-adapt with the sensory systems. However, this optimality is also constrained by the limits of molecular, cellular, and population processes [23]. This is not an assertion that biological systems are perfectly optimal, but that they are reasonably efficient in their structure and function. This view is particularly supported by observations of anatomical features across vertebrate species, and their adaptations for specific environments, such as those observed in the skeletal design of whales versus horses.

2.2. Abstract Encoding of Sensory Input

“The biologically plausible mechanism of cognition originates from the high-dimensional information in the outside world. In the case of vision, the sensory data consist of reflected light rays that are absorbed across a two-dimensional surface, the retinal cells of the eye. These light rays may range across the electromagnetic spectra, but the retinal cells are specific to a small subset of these light rays” [3].
Figure 1 shows the above view, in abstract form, as a sheet of neuronal cells that receive sensory input from the outside world. The input is processed by cell surface receptors and communicated downstream for neural system processing. The sensory neurons and their receptors can be imagined as a set of activation values that are undergoing change over time, and abstractly described as a dynamic system, in which change occurs in the dimensions of space and time.
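As a toy illustration of this dynamic-system view (a sketch for this entry, not a model from the cited work), the code below treats the sensory sheet as a two-dimensional array of activation values updated over time; the grid size, decay rate, and random stimulus are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
sheet = rng.random((16, 16))  # activation value of each receptor cell

def step(sheet, stimulus, decay=0.9):
    """One time step: activations decay and are driven by new input."""
    return decay * sheet + (1.0 - decay) * stimulus

for t in range(5):
    stimulus = rng.random((16, 16))  # stand-in for incoming light rays
    sheet = step(sheet, stimulus)
print(sheet.mean())  # a summary statistic of the evolving state
```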
The information processing of the sensory organs is tractable for scientific study, but the downstream cognitive processes are less understood at a mechanistic level. The cognitive processes include the generalizing of knowledge, also referred to as transfer learning, which is a higher level of organization than that constructed from the sensory input [7,24,25]. Transfer learning is dependent on segmentation (division) of the sensory world and identification of sensory objects (such as visual or auditory) with resistance to variation in viewpoint or perspective (Figure 2) [26].
In computer science, there is a model [6] designed for the segmentation and robust recognition of objects. This approach includes sampling of the sensory input, identification of the parts of sensory objects, and encoding of the information in an abstract form for presentation to the downstream neural processes. The encoding scheme is expected to include a set of discrete representational levels of unlabeled (unidentified) objects and then uses a probabilistic approach for matching these representations to known objects in the memory. Without a labeled memory that describes an object, there is no opportunity for knowledge of that object, nor a basis for knowledge in general.
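A minimal sketch of such probabilistic matching follows; the cosine similarity and softmax are choices of this sketch, not details of the cited model [6].

```python
import numpy as np

def match_to_memory(representation, memory, temperature=1.0):
    """Match an unlabeled representation to stored object codes.

    Cosine similarities are converted to a probability distribution
    over the known objects by a softmax.
    """
    m = memory / np.linalg.norm(memory, axis=1, keepdims=True)
    r = representation / np.linalg.norm(representation)
    scores = np.exp(m @ r / temperature)
    return scores / scores.sum()

rng = np.random.default_rng(1)
memory = rng.normal(size=(5, 8))               # five known objects, 8-D codes
probe = memory[2] + 0.1 * rng.normal(size=8)   # a noisy view of object 2
print(match_to_memory(probe, memory))          # highest probability at index 2
```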
Information is the proximate cause of cognition, and the laws of thermodynamics determine how information flows in any physical system, whether in a biological context or an artificial analog [27]. At other spatial scales, the physical processes in the brain are not homologous with an artificial neural network, such as at the level of neurons, where the intricacies of cellular processes are not shared with an artificial one. However, our history is filled with examples of engineers replicating the large-scale designs of nature, including lakes, bridges, and the construction of underwater vessels. The designs are similar at a physical scale because both natural and artificial forms are constrained by physical processes.

3. General Cognition

3.1. Algorithmic Description

Experts have investigated the question of whether an algorithm can explain brain computation [28]. They concluded that this is an unsolved problem, even though natural processes are inherently representable by a quantitative model. Information flow in the brain is the product of a non-linear dynamical system, a complex phenomenon analogous to the physics of fluid flow, and its complexity may exceed the limits of computational work; such systems are not easily mirrored by simple mathematical descriptions [28,29]. Experts recommend an empirical approach for disentangling these kinds of complex systems, since they are not considered very tractable at a theoretical level.
An artificial neural system, such as in the deep learning architectures, has strong potential for testing hypotheses on higher cognition. The reason is that engineered systems are built from parts and relationships that are known, whereas in nature, the origin and history of the system are obscured by time and a large number of events; in the latter case, acquiring scientific knowledge likely requires extensive experimentation that is often confounded by error from both known and unknown sources.

3.2. Encoding of Knowledge

It is possible to hypothesize about a model of object representation in the brain and its artificial analog in the deep learning systems. First, these cognitive systems are expected to encode objects by their parts, the basic elements of an object [5,6,7]. Second, it is expected that the process is stochastic, a probabilistic process, as in all other natural processes.
The neural network system is, in its essence, a programmable system [30], encoded with weight values along the connections in the network and activation values at the nodes. It is expected that the brain functions analogously at the level of information processing, since these systems are both based on non-linear dynamic principles of an interconnected network of nodes and a distribution of the representations of objects [7,28,31,32]. Furthermore, the encoding schemes in the network are likely to be abstract and generated by probabilistic processes.
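The sketch below illustrates this programmable-system view in its simplest form: weight values along the connections, activation values at the nodes, and a non-linear update. The layer sizes and the tanh non-linearity are assumptions of the sketch.

```python
import numpy as np

rng = np.random.default_rng(2)
W1 = rng.normal(scale=0.5, size=(8, 4))  # weights: input -> hidden connections
W2 = rng.normal(scale=0.5, size=(4, 2))  # weights: hidden -> output connections

def forward(x):
    hidden = np.tanh(x @ W1)     # activation values at the hidden nodes
    return np.tanh(hidden @ W2)  # activation values at the output nodes

x = rng.normal(size=8)  # an input pattern
print(forward(x))       # the network's response to the pattern
```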
Moreover, a physical interpretation of cognition requires the matching of patterns for the generalization of knowledge. This is consistent with a view of cognition as a statistical machine with a reliance on sampling for robust information processing [33]. With advancement in the deep learning methods, such as the invention of the transformer architecture [7,34,35], it is possible to sample and search for exceedingly complex patterns in a sequence of information, including in the case of object detection across a visual scene [36]. This sampling of the world occurs across the sensory modalities, such as those in vision and hearing, which are the sources of information for processing and constructing the internal representations [37].
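To make the pattern-searching role of the transformer concrete, here is a minimal single-head self-attention computation, the core operation of that architecture [34]; the dimensions and random weights are illustrative.

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Single-head self-attention over a sequence X.

    Every position forms a weighted combination of all positions,
    which is one way a model searches for patterns in a sequence.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[1])
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)  # softmax over positions
    return weights @ V

rng = np.random.default_rng(3)
X = rng.normal(size=(6, 8))  # a sequence of six 8-dimensional tokens
Wq, Wk, Wv = (rng.normal(scale=0.3, size=(8, 8)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)  # (6, 8)
```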

3.3. Representation of Common-Sense Concepts

Microsoft Research released a deep learning method based on the transformer architecture, along with the inclusion of curated and structured data, to achieve some degree of parity with people in common-sense reasoning [38]. Their example of this kind of reasoning is a question on what people do while playing a guitar. The common-sense answer is for people to sing. This association is not a naive one, since the concept of singing is not a property of a guitar. Their achievement of parity with people was made possible by the addition of the curated and structured data.
Their finding showed that an online corpus by itself is insufficient for a full knowledge of concepts. The conventional transformer architecture is dependent on and limited by the information inherent in a sequence of data for downstream representation of conceptual knowledge. In their case, the missing component was the curation and structure in the data, and the results showed a competitive capability for building concepts from representations as derived from input data.
The use of a large sample of representations that correspond to an abstract or non-abstract object or an event is expected to further increase robustness in a model of higher cognition [39]. Our knowledge of concepts is expected to form in the same manner. If there are incomplete or missing parts of a concept, then a person will have difficulty in forming the whole concept and applying it during problem solving.

3.4. Future Directions in Cognitive Science

3.4.1. Dynamics of Cognition

Is higher cognition as interpretable as a deep learning system? This question arises from the difficulty of disentangling the mechanisms of an animal neural system, whereas it is possible to record the changing states of an artificial system, since its underlying design is known. If the artificial system is analogous, then it is possible to gain insight into the natural forms of cognition [7,40]. However, the assumption for this analogy may not hold. For example, it is known that the mammalian brain is highly dynamic, such as in the rates of sensory input and the downstream activation of internal representations [28]. These dynamic properties are not easily modeled in deep learning systems, a constraint of hardware design and efficiency [28]. This has been an impediment to the design of an artificial system that is approximate to higher cognition, although there are concepts for modeling these dynamics, such as an architecture that includes “fast weights” and provides a form of true recursion across a neural network [7,28]. This allows for a self-referential system that can continue to adapt to new experience. Recently, there have been studies on this architecture to address the performance problem [41,42].
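A minimal sketch of the fast-weights idea follows; the decay rate, sizes, and Hebbian outer-product update are assumptions of this sketch, not the cited proposals [7,28,41,42]. A fast matrix is updated at every time step from the network's own activity, so the effective connectivity modifies itself as the system runs.

```python
import numpy as np

rng = np.random.default_rng(4)
W = rng.normal(scale=0.3, size=(8, 8))  # slow weights, fixed during this run
F = np.zeros((8, 8))                    # fast weights, updated every step

h = rng.normal(size=8)                  # the network's state
for t in range(10):
    h = np.tanh(h @ (W + F))              # state evolves under both weight sets
    F = 0.95 * F + 0.05 * np.outer(h, h)  # fast weights track recent activity
print(np.linalg.norm(F))  # the self-modified component of the connectivity
```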
The artificial neural networks continue to scale in size and efficiency. This work has been accompanied by empirical approaches for exploring the sources of error in these systems, an effort that depends on a thorough understanding of the construction of the models. One avenue for increasing the robustness of the output is to combine many sources of sensory data, such as from the visual domain and the senses associated with the natural language domains, where the communication of language is not restricted to the written form. Another approach is to establish unbiased measures of the reliability of model output [28,36]. Likewise, information processing in animals is not free of bias; in human cognition, for example, there are well-documented biases in speech perception [43].
These approaches are a foundation for emulating the modularity and breadth of function in higher cognition. Toward this aim, meta-learning methods can create a formal, modular [44], and structured framework for combining disparate sources of data. This scalable approach would support the building of complex information systems that reflect the higher cognitive processes [45,46].

3.4.2. Generalization of Knowledge

Another area of interest is the property of generalization in a model of higher cognition. This property may be better understood by a study of the processes that form the internal representations from sensory input [6,47,48]. Further, in an abstract context, generalizability is based on the premise that information on the outside world is compressible, such as in the repeatability of patterns in sensory information, so that it is possible for any system to classify objects and therefore obtain knowledge of the world.
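A concrete illustration of this compressibility premise, using a generic compressor as a stand-in for any pattern-detecting system: data with repeated patterns compress well, while patternless data do not.

```python
import zlib
import random

random.seed(0)
repeating = bytes(range(16)) * 64  # a "world" built from repeated patterns
noise = bytes(random.randrange(256) for _ in range(1024))  # a patternless one

print(len(zlib.compress(repeating)))  # small: the patterns are exploitable
print(len(zlib.compress(noise)))      # near 1024: nothing to generalize
```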
There is also the question of how to reuse knowledge outside the environment where it is learned, “being able to factorize knowledge into pieces which can easily be recombined in a sequence of computational steps, and being able to manipulate abstract variables, types, and instances” [7]. Therefore, it is relevant to have a model of cognition that includes the higher-level representations based on the parts of objects, whether derived from sensory input or internal to the neural network. However, the dynamic and various states of the internal representations are also contributors to the processes of higher reasoning.

3.4.3. Embodiment in Cognition

Lastly, there is uncertainty about the dependence of cognition on the outside world. This dependence has been characterized as the phenomenon of embodiment, i.e., that the occurrence of cognition depends on an animal or similar form, so the natural form of cognition is an embodied cognition, even where the world is a machine simulation [28,49,50]. In essence, this is a property of a robotic and mechanical system, where its functions are fully dependent on specific input and output from the world. Although a natural system receives input, produces output, and learns at a time scale constrained by the physical world, an artificial system is not as constrained, as in the case of reinforcement learning [50,51,52], a method that can also reconstruct sensorimotor function in animals. Moreover, an artificial system is not restricted to a single bodily form in its functions.
Deepmind [50] developed artificial agents in a three-dimensional space that learn in a continually changing world. The method uses deep reinforcement learning in conjunction with dynamic generation of environments, which leads to a unique arrangement of each world. Each of the worlds contains artificial agents that learn to handle tasks and receive rewards for completing specific objectives. An agent observes a pixel image of an environment along with receiving a "text description of their goal" [50]. Task experience is sufficiently generalizable that the agents are capable of adapting to tasks not yet known from prior experience. This reflects an animal that is embodied in a world and learns interactively by performing physical tasks. It is known that animals navigate and learn from the world around them, so this approach is a meaningful experiment within a virtual world. However, the approach remains fragile for tasks outside its distribution of prior learned experience.

4. Abstract Reasoning

4.1. Abstract Reasoning as a Cognitive Process

Abstract reasoning is often associated with a process of thought, but the elements of the process are ideally represented as physical processes. This restriction constrains explanations of the emergence of abstract reasoning, as in the formation of new concepts in an abstract world. Moreover, a process of abstract reasoning may be compared against the more intuitive forms of cognition as found in vision and speech perception. Without sensory input, the layers of the neural system are not expected to encode new information by a pathway, as is expected in the recognition of visual objects. Therefore, it is expected that any information system is dependent on an external input for learning, an essential process for the formation of experiential knowledge.
It follows that abstract reasoning is formed from an input source as received by the neural system. If there is no input relevant to a pathway of abstract reasoning, then the system is not expected to encode that pathway. This leads to the question of whether abstract reasoning is composed of one or more pathways, and of the contribution of other unrelated pathways in cognition. It is probable that there is no sharp division between abstract reasoning and the other types of reasoning, and it is likely that there is more than one pathway of abstract reasoning, as exemplified in the case of solving puzzles that require the manipulation of objects in the visual world.
Another hypothesis concerns whether the main source of abstract objects is the internal representations. If true, then a model of abstract reasoning would involve the true forms of abstract objects, in contrast to the recognition of an object by reconstruction from sensory input in the neural network system.
Since abstract reasoning is dependent on an input source, there is an expectation that deep learning methods modeling the non-linear dynamics are sufficient to model one or more pathways involved in abstract reasoning. This reasoning involves the recognition of objects that are not necessarily sensory objects with definable properties and relationships. As with the training process to learn sensory objects, it is expected that there is a training process to learn about the forms and properties of abstract objects. This class of problem is of interest, since the universe of abstract objects is boundless, and their properties and interrelationships are not constrained by the essential limits of the physical world.

4.2. Models of Abstract Reasoning

A model of higher cognition includes abstract reasoning [7]. This is a pathway or pathways that are expected to learn the higher-level representations of sensory objects, such as from vision or hearing, and for which the input is processed to generate a generalizable rule set. The set may include a single rule or a sequence of rules. One model is for the deep learning system to learn the rule set, such as in the case of puzzles solvable by a logical operation [53]. This is likely the basis for a person playing a chess game by memorizing prior patterns of information and events on the game board, which leads to general knowledge of the game system as a kind of world model.
Similarly, another kind of visual puzzle is the Rubik’s Cube. However, in this case, the final state of the puzzle is known, where each face of the cube will share a single and unique color. Likewise, if there is a detectable rule set, then there must be patterns of information that allow the construction of a generalized rule set.
The pathway to a solution can include the repeated testing of potential rule sets against an intermediate or final state of the puzzle. This iterative process may be approached by a heuristic search algorithm [7]. However, these puzzles are typically low-dimensional as compared with abstract verbal problems, as in inductive reasoning. The acquisition of rule sets for verbal reasoning requires a search for patterns in a higher-dimensional space. In either of these cases of pattern searching, whether complex or simple, they are dependent on the detection of patterns that represent a set of rules.
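A toy version of this iterative testing of rules, written as a best-first (heuristic) search; the integer states, the two rules, and the distance heuristic are stand-ins chosen for this sketch, not the method of [7].

```python
import heapq

def heuristic_search(start, goal, rules, max_steps=10_000):
    """Best-first search for a sequence of rules reaching a goal state."""
    frontier = [(abs(start - goal), start, [])]  # (heuristic, state, rule path)
    seen = {start}
    for _ in range(max_steps):
        if not frontier:
            return None
        _, state, path = heapq.heappop(frontier)
        if state == goal:
            return path
        for name, rule in rules:
            nxt = rule(state)
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(frontier, (abs(nxt - goal), nxt, path + [name]))
    return None

# Two toy rules; the search finds a rule sequence transforming 3 into 26.
rules = [("double", lambda s: 2 * s), ("increment", lambda s: s + 1)]
print(heuristic_search(3, 26, rules))
```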
It is simpler to imagine a logical operation as the pattern that offers a solution, but it is expected that inductive reasoning involves higher-dimensional representations than a simple operator that combines Boolean values. It is also probable that these representations are dynamic, so there is potential to sample from a set of many valid representations.

4.3. Future Directions in Abstract Reasoning

4.3.1. Embodiment in a Virtual and Abstract World

While the phenomenon of embodiment refers to an occupant of the three-dimensional world, this is not necessarily a complete model for reasoning on abstract concepts. However, it is plausible that at least some abstract concepts are solvable in a virtual three-dimensional world. Deepmind, for example, showed a solution to visual problems across a generated set of three-dimensional worlds [50].
A population and distribution of tasks are also elements in Deepmind’s approach. They show that learning a task distribution leads to knowledge for solving tasks outside the prior task distribution [50,51]. This leads to the potential for generalizability in solving tasks, along with the promise that increased complexity across the worlds would lead to further expansion in the knowledge of tasks.
However, the problem of abstract concepts extends beyond the conventional sensory representations as formed by higher cognition. Examples include visual puzzles with solutions that are abstract and require the association of patterns that extend beyond the visual realm, along with the symbolic representations from the areas of mathematics [54,55].
By combining these two approaches, it is possible to construct a world that is not a reflection of the three-dimensional space as inhabited by animals, but to construct a virtual world of abstract objects and sets of tasks instead [51]. The visual and symbolic puzzles, such as in the case of chess and related boardgames [52], are solvable by deep learning approaches, but the machine reasoning is not generalized across a space of abstract environments and objects.
The question is whether the abstract patterns used to solve chess are also useful in solving other kinds of puzzles. It seems a valid hypothesis that there is at least some overlap in the use of abstract reasoning between these visual puzzles and the synthesis of knowledge from other abstract objects and their interactions [50], such as in solving problems by the use of mathematical symbols and their operators [55,56]. Since humans are capable of abstract thought, it is plausible that the generation of a distribution of general abstract tasks would lead to a working system for solving a wider set of abstract problems [57].
If, instead of a dynamic generation of three-dimensional worlds and objects, there is a vast and dynamic generation of abstract puzzles, for example, then the deep reinforcement learning approach could be trained on solving these problems and acquiring knowledge of these tasks [50]. The question is whether the distribution of these applicable tasks is generalizable to an unknown set of problems (those unrelated to the original task distribution), and the compressibility of the space of tasks. This hypothesis is further supported by a recent study [57].

4.3.2. Reinforcement Learning and Generalizability

Google Research showed that an unmodified reinforcement learning approach is not necessarily robust for acquiring knowledge of tasks outside the trained task distribution [51]. Therefore, they introduced an approach that incorporates a measurement of similarity among worlds that are generated by a reinforcement learning procedure. This measure is estimated by behavioral similarity, corresponding to the salient features by which an agent finds success in any given world. Given that these salient features are shared among the worlds, the agents have a path for generalizing knowledge for success in worlds outside their experience. Procedurally, the salient features are acquired by a contrastive learning procedure, i.e., a method for unlabeled clustering of samples, which embeds these values of behavioral similarity in the neural network itself [58].
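As a sketch of the kind of contrastive objective involved (a generic InfoNCE-style loss; whether it matches the exact procedure of [51,58] is not claimed here), embeddings of behaviorally similar states are pulled together and dissimilar ones pushed apart.

```python
import numpy as np

def contrastive_loss(anchor, positive, negatives, temperature=0.1):
    """Generic contrastive loss over behavioral-similarity embeddings."""
    def sim(a, b):  # cosine similarity between two embeddings
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    pos = np.exp(sim(anchor, positive) / temperature)
    neg = sum(np.exp(sim(anchor, n) / temperature) for n in negatives)
    return -np.log(pos / (pos + neg))  # low when the similar pair stands out

rng = np.random.default_rng(5)
anchor = rng.normal(size=16)                    # embedding of a state
positive = anchor + 0.05 * rng.normal(size=16)  # a behaviorally similar state
negatives = [rng.normal(size=16) for _ in range(8)]
print(contrastive_loss(anchor, positive, negatives))
```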
This reinforcement learning approach is dependent on both a deep learning framework and an input data source. The source of input is typically in a two- or three-dimensional artificial environment where an artificial agent learns to accomplish tasks within the confines of the worlds and their rules [50,51]. One approach is to represent the salient features of tasks and the worlds in a neural network. As Google Research showed [51], the process requires an additional step in extracting the salient information for creating better models of the tasks and worlds. They found that this method was more robust in the generalization of tasks. Similarly, in higher cognition, it is expected that the salient features used to generalize tasks are stored in the neuronal network.
Therefore, a naive input of visual data from a two-dimensional environment is not an efficient means of coding tasks that consistently generalize across environments. To capture the high-dimensional information in a set of related tasks, Google Research extended the reinforcement learning approach to better capture the task distribution [51], and it may be possible to mimic this approach by similar methods. These task distributions provide structured data for representing the dynamics of tasks among worlds, and therefore generalize and encode the high-dimensional and dynamic features in a low-dimensional form.
It is difficult to imagine the relationship between two different environments. The game of checkers and that of chess appear as different game systems. Encoding the dynamics of each of these in a deep learning framework may show that they relate in an unintuitive and abstract way [50]. This concept is expressed in the article cited above [51], indicating that short paths of a larger pathway may provide the salient and generalizable features. In the case of boardgames, the salient features may not correspond to a naive perception of visual relatedness. Likewise, our natural form of abstract reasoning shows that patterns are captured in these boardgames, and these patterns are not entirely recognized by a single rule set at the level of our awareness, but, instead, are likely represented at a high-dimensional level in the neural network itself.
For emulation of a process of reasoning, extracting the salient features from a pixel image is a complex problem, and the pathway may involve many sources of error. Converting images to a low-dimensional form, particularly for the salient subtasks, allows for a greater expectation of the generalization and repeatability in the patterns of objects and events. Where it is difficult to extract the salient features of a system, it is possible to translate and reduce the objects and events in the system to text-based descriptors, a process that has been studied and lends itself to interpretation [57,58,59,60].
Lastly, since the higher cognitive processes involve the widespread use of dynamic representations, it is plausible that the tasks are not merely generalizable but may originate in the varied sensory and memory systems. Therefore, the tasks would be expressed by the different sensory forms, although the low-dimensional representations are more generalizable, providing a better substrate for the recognition of patterns, and are essential for a process of abstract reasoning.

5. Conceptual Knowledge

5.1. Knowledge by Pattern Combination and Recognition

In the 18th century, the philosopher Immanuel Kant suggested that the synthesis of prior knowledge leads to new knowledge [61]. This theory of knowledge extended the concept of objects from a set of perfect forms to a recombination of forms, leading to a boundless number of mental representations. This was the missing concept to explain the act of knowing. Therefore, the forces of knowledge were no longer dependent on description outside the realm of matter, or on hypotheses based on an unbounded complexity of material interactions.
It is possible to divide these objects and forms of knowledge into two categories: sensory and abstract. The sensory objects are ideally constructed from sensory input, even though this assumption is not universal. Instead, perception may refer to the construction of these sensory objects, along with any error occurring in their associated pathways. In comparison, the abstract object is ideally a true form. An ideal example is a mathematical symbol, such as an operator for the addition of numbers [55]. However, an abstract object may coincide with sensory objects, such as an animal and its taxonomic relationship to other forms of animals.
Therefore, one hypothesis is that the objects of knowledge are instead a single category, but that the input used to form the object is from at least two sources, including sensations from the outside world and the representation of objects as stored in the memory.
A hypothetical example is from chess. A person is not able to calculate every game piece and position given all events on the board. Instead, the decision-making is largely dependent on boardgame patterns with respect to the pieces and positions. However, the set of observable patterns, as compared with all possible patterns, is strongly bounded. One solution is in the hypothesis that the patterns also exist as internal representations that are synthesized and formed into new patterns not yet observed. Evidence for this hypothesis is in the predictive coding of sensory input, namely that this compensatory action allows a person to perceive elements of a visual scene or speech a short time prior to its occurrence [33]. This same predictive coding pathway may apply to internal representations, such as chess gameboard patterns, and to the ability to recombine prior objects of knowledge. The process of creating new forms and patterns would allow a person to greatly expand upon the number of observable patterns in a world.
To summarize, the process of predictive coding of sensory information should also apply to the reformation of internal representations. This is a force of recombination that is expected to lead to a very large number of forms in the memory, and is used for the detection of objects and forms that have not yet been observed. Knowledge by synthesis of priors has the potential to generate a multitude of forms that are consistent with the extent of human thought. In this case, the cognitive ether of immeasurability or incomputability is not necessary for explaining higher cognition and its processes.
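The sketch below illustrates the predictive coding loop in its simplest form: an internal state generates a prediction, and only the prediction error drives the update. The learning rate and the sinusoidal input are assumptions of this sketch, not claims about the brain's implementation [33].

```python
import numpy as np

internal = np.zeros(8)  # an internal representation (the prediction source)
for t in range(50):
    signal = np.sin(np.arange(8) + 0.1 * t)  # slowly changing sensory input
    error = signal - internal                # prediction error
    internal = internal + 0.2 * error        # only the error updates the state
print(np.abs(error).mean())  # the error shrinks as the input is anticipated
```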

5.2. Models of Generalized Knowledge

Evidence is mounting in support of a deep learning model, namely the transformer, for sampling data and constructing high-dimensional representations [35,46,55,57,58,60,62,63,64]. A study by Google Research employed a decision transformer architecture to show transfer learning in tasks that occurred in a fixed and controlled setting (Atari) [58]. This work supports the concept that generalized patterns occur in an environment with the potential for resampling those patterns in other environments. The experimental control of the environmental properties is somewhat analogous to the cognitive processes that originate in a single embodied source [49,50]. Altogether, the sampling of patterns is from the population of all possible patterns that occur in the system. A sufficiently large sample of tasks is expected to lead to knowledge of the system. The system may be thought of as a physical system; in this case, it is a visual space of two dimensions.
In another study, Deepmind questioned whether task-based learning can occur across multiple embodied sources, such as the patterns derived from torque-based tasks (a robot arm) and those from a set of captioned images [57]. Their results showed evidence of transfer learning across heterogeneous sources, and indicated that their model is expected to scale in power with an increase in data and model size.
These studies are complemented by Chan and others [62]. This insightful work showed convincing evidence of the superior performance of the transformer architecture in modeling sequential data. It further revealed the importance of the distributional qualities and dynamics of the training dataset, and their relationship to the properties of natural language data [62].
These computational studies provide evidence that model performance continues to scale with model size [57,58]. These models for generalized task learning operate in a particular setting. It is possible to consider the setting as a physical system, such as a particular simulation or our physical world [50,64,65,66]. With a robust sampling of tasks in a controlled physical system, it is possible to learn the system and transfer the knowledge of tasks from the known to the unknown [50,64,65,66]. This is a form of pattern sampling that is robust in its representation of the population of all patterns that occur in a system. Deepmind has searched for these patterns in a system by deep reinforcement learning while optimizing the approach by simultaneously searching for the shortest path toward learning the system [65]. This method is, in essence, learning a world model and forming a base set of cognitive processes for downstream use.
Since images with text descriptors lead to generalized task learning [64], video with text descriptors [63] is expected to enhance the model with a temporal dimension and to reflect tasks that are dynamic in time [66]. OpenAI developed a deep learning method that receives input as video data, with a minimal number of associated text labels, and is as capable as a person in learning tasks and modeling a world (Minecraft) [66]. There is also a question on the difference between simple and complex tasks. However, tasks may be decomposed into their parts and patterns, although OpenAI's reinforcement learning system achieves this aim without prior identification of these patterns [66].

5.3. Knowledge as a Physical Process

The informational processes in machine systems are analogous to those in the brain. They are both systems constrained by the physical world and its rules. While the machine systems are tractable for empirical studies of information flow and the acquisition of knowledge, the biological systems are not nearly as tractable. There is limited understanding of the specifics of how neurons work and how they organize to encode information, such as in memory formation in mammals [18].
For example, biology is dependent on the instruments of neurobiology for capturing the dynamics of a single neuronal cell’s activity, while a quantifiable behavior or action is observed over time [8,9,67]. The biological studies also require extensive experimentation for verification and insight; otherwise, a single experiment or a few experiments will result in unsupported interpretations and conclusions on the dynamic pathway or pathways of the neural system [68,69]. As in the history of studies in ecology, phenomenological approaches are better replaced by those built on theory and quantitative models, along with prudence in forming robust hypotheses, not mere questions, in the natural sciences. This problem is not restricted to natural science, but also applies to the science of engineering [35].

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Merriam-Webster Dictionary (an Encyclopedia Britannica Company: Chicago, IL, USA). Available online: https://www.merriam-webster.com/dictionary/cognition (accessed on 27 July 2022).
  2. Cambridge Dictionary (Cambridge University Press: Cambridge, UK). Available online: https://dictionary.cambridge.org/us/dictionary/english/cognition (accessed on 27 July 2022).
  3. Friedman, R. Cognition as a Mechanical Process. NeuroSci 2021, 2, 10. [Google Scholar] [CrossRef]
  4. Vlastos, G. Parmenides' Theory of Knowledge. In Transactions and Proceedings of the American Philological Association; The Johns Hopkins University Press: Baltimore, MD, USA, 1946; pp. 66–77. [Google Scholar]
  5. Chang, L.; Tsao, D.Y. The code for facial identity in the primate brain. Cell 2017, 169, 1013–1028. [Google Scholar] [CrossRef] [PubMed]
  6. Hinton, G. How to represent part-whole hierarchies in a neural network. arXiv 2021, arXiv:2102.12627. [Google Scholar]
  7. Bengio, Y.; LeCun, Y.; Hinton, G. Deep Learning for AI. Commun. ACM 2021, 64, 58–65. [Google Scholar] [CrossRef]
  8. Streng, M.L.; Popa, L.S.; Ebner, T.J. Modulation of sensory prediction error in Purkinje cells during visual feedback manipulations. Nat. Commun. 2018, 9, 1099. [Google Scholar] [CrossRef]
  9. Popa, L.S.; Ebner, T.J. Cerebellum, Predictions and Errors. Front. Cell. Neurosci. 2019, 12, 524. [Google Scholar] [CrossRef]
  10. Searle, J.R.; Willis, S. Intentionality: An Essay in the Philosophy of Mind; Cambridge University Press: Cambridge, UK, 1983. [Google Scholar]
  11. Huxley, T.H. Evidence as to Man’s Place in Nature; Williams and Norgate: London, UK, 1863. [Google Scholar]
  12. Haggard, P. Sense of agency in the human brain. Nat. Rev. Neurosci. 2017, 18, 196–207. [Google Scholar] [CrossRef]
  13. Ramón y Cajal, S. Textura del Sistema Nervioso del Hombre y de los Vertebrados; Nicolás Moya: Madrid, Spain, 1899. [Google Scholar]
  14. Kriegeskorte, N.; Kievit, R.A. Representational geometry: Integrating cognition, computation, and the brain. Trends Cogn. Sci. 2013, 17, 401–412. [Google Scholar] [CrossRef]
  15. Hinton, G.E. Connectionist learning procedures. Artif. Intell. 1989, 40, 185–234. [Google Scholar] [CrossRef]
  16. Schmidhuber, J. Deep learning in neural networks: An overview. Neural Netw. 2015, 61, 85–117. [Google Scholar] [CrossRef]
  17. Descartes, R. Meditations on First Philosophy; Moriarty, M., Translator; Oxford University Press: Oxford, UK, 2008. [Google Scholar]
  18. Friedman, R. Themes of advanced information processing in the primate brain. AIMS Neurosci. 2020, 7, 373. [Google Scholar] [CrossRef] [PubMed]
  19. Prasad, S.; Galetta, S.L. Anatomy and physiology of the afferent visual system. In Handbook of Clinical Neurology; Kennard, C., Leigh, R.J., Eds.; Elsevier: Amsterdam, The Netherlands, 2011; pp. 3–19. [Google Scholar]
  20. Paley, W. Natural Theology: Or, Evidences of the Existence and Attributes of the Deity, 12th ed.; R. Faulder: London, UK, 1809. [Google Scholar]
  21. Darwin, C. On the Origin of Species; John Murray: London, UK, 1859. [Google Scholar]
  22. De Sousa, A.A.; Proulx, M.J. What can volumes reveal about human brain evolution? A framework for bridging behavioral, histometric, and volumetric perspectives. Front. Neuroanat. 2014, 8, 51. [Google Scholar] [CrossRef] [PubMed]
  23. Slobodkin, L.B.; Rapoport, A. An optimal strategy of evolution. Q. Rev. Biol. 1974, 49, 181–200. [Google Scholar] [CrossRef] [PubMed]
  24. Goyal, A.; Didolkar, A.; Ke, N.R.; Blundell, C.; Beaudoin, P.; Heess, N.; Mozer, M.; Bengio, Y. Neural Production Systems. arXiv 2021, arXiv:2103.01937. [Google Scholar]
  25. Scholkopf, B.; Locatello, F.; Bauer, S.; Ke, N.R.; Kalchbrenner, N.; Goyal, A.; Bengio, Y. Toward Causal Representation Learning. Proc. IEEE 2021, 109, 612–634. [Google Scholar] [CrossRef]
  26. Wallis, G.; Rolls, E.T. Invariant face and object recognition in the visual system. Prog. Neurobiol. 1997, 51, 167–194. [Google Scholar] [CrossRef]
  27. Friedman, R. A Perspective on Information Optimality in a Neural Circuit and Other Biological Systems. Signals 2022, 3, 25. [Google Scholar] [CrossRef]
  28. Panigrahy, R. (Chair) Conceptual Understanding of Deep Learning Workshop. Conference and Panel Discussion at Google Research, 17 May 2021. Panelists: Blum, L.; Gallant, J.; Hinton, G.; Liang, P.; Yu, B. Available online: https://sites.google.com/view/conceptualdlworkshop/home (accessed on 17 May 2021).
  29. Gibbs, J.W. Elementary Principles in Statistical Mechanics; Charles Scribner’s Sons: New York, NY, USA, 1902. [Google Scholar]
  30. Schmidhuber, J. Making the World Differentiable: On Using Self-Supervised Fully Recurrent Neural Networks for Dynamic Reinforcement Learning and Planning in Non-Stationary Environments; Technical Report FKI-126-90; Technical University of Munich: Munich, Germany, 1990. [Google Scholar]
  31. Griffiths, T.L.; Chater, N.; Kemp, C.; Perfors, A.; Tenenbaum, J.B. Probabilistic models of cognition: Exploring representations and inductive biases. Trends Cogn. Sci. 2010, 14, 357–364. [Google Scholar] [CrossRef]
  32. Hinton, G.E.; McClelland, J.L.; Rumelhart, D.E. Distributed representations. In Parallel Distributed Processing: Explorations in the Microstructure of Cognition; Rumelhart, D.E., McClelland, J.L., PDP Research Group, Eds.; Bradford Books: Cambridge, MA, USA, 1986. [Google Scholar]
  33. Friston, K. The history of the future of the Bayesian brain. NeuroImage 2012, 62, 1230–1233. [Google Scholar] [CrossRef]
  34. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention Is All You Need. arXiv 2017, arXiv:1706.03762. [Google Scholar]
  35. Phuong, M.; Hutter, M. Formal Algorithms for Transformers. arXiv 2022, arXiv:2207.09238. [Google Scholar]
  36. Chen, T.; Saxena, S.; Li, L.; Fleet, D.J.; Hinton, G. Pix2seq: A language modeling framework for object detection. arXiv 2021, arXiv:2109.10852. [Google Scholar]
  37. Hu, R.; Singh, A. UniT: Multimodal Multitask Learning with a Unified Transformer. arXiv 2021, arXiv:2102.10772. [Google Scholar]
  38. Xu, Y.; Zhu, C.; Wang, S.; Sun, S.; Cheng, H.; Liu, X.; Gao, J.; He, P.; Zeng, M.; Huang, X. Human Parity on CommonsenseQA: Augmenting Self-Attention with External Attention. arXiv 2021, arXiv:2112.03254. [Google Scholar]
  39. Zeng, A.; Wong, A.; Welker, S.; Choromanski, K.; Tombari, F.; Purohit, A.; Ryoo, M.; Sindhwani, V.; Lee, J.; Vanhoucke, V.; et al. Socratic Models: Composing Zero-Shot Multimodal Reasoning with Language. arXiv 2022, arXiv:2204.00598. [Google Scholar]
  40. Chaabouni, R.; Kharitonov, E.; Dupoux, E.; Baroni, M. Communicating artificial neural networks develop efficient color-naming systems. Proc. Natl. Acad. Sci. USA 2021, 118, e2016569118. [Google Scholar] [CrossRef]
  41. Irie, K.; Schlag, I.; Csordás, R.; Schmidhuber, J. A Modern Self-Referential Weight Matrix That Learns to Modify Itself. arXiv 2022, arXiv:2202.05780. [Google Scholar]
  42. Schlag, I.; Irie, K.; Schmidhuber, J. Linear transformers are secretly fast weight programmers. In Proceedings of the International Conference on Machine Learning, PMLR 139, Virtual, 24 July 2021; pp. 9355–9366. [Google Scholar]
  43. Petty, R.E.; Cacioppo, J.T. The elaboration likelihood model of persuasion. In Communication and Persuasion; Springer: New York, NY, USA, 1986; pp. 1–24. [Google Scholar]
  44. Mittal, S.; Bengio, Y.; Lajoie, G. Is a Modular Architecture Enough? arXiv 2022, arXiv:2206.02713. [Google Scholar]
  45. Ha, D.; Tang, Y. Collective Intelligence for Deep Learning: A Survey of Recent Developments. arXiv 2021, arXiv:2111.14377. [Google Scholar]
  46. Mustafa, B.; Riquelme, C.; Puigcerver, J.; Jenatton, R.; Houlsby, N. Multimodal Contrastive Learning with LIMoE: The Language-Image Mixture of Experts. arXiv 2022, arXiv:2206.02770. [Google Scholar]
  47. Chase, W.G.; Simon, H.A. Perception in chess. Cogn. Psychol. 1973, 4, 55–81. [Google Scholar] [CrossRef]
  48. Pang, R.; Lansdell, B.J.; Fairhall, A.L. Dimensionality reduction in neuroscience. Curr. Biol. 2016, 26, R656–R660. [Google Scholar] [CrossRef] [PubMed]
  49. Deng, E.; Mutlu, B.; Mataric, M. Embodiment in socially interactive robots. arXiv 2019, arXiv:1912.00312. [Google Scholar]
  50. Open-Ended Learning Team; Stooke, A.; Mahajan, A.; Barros, C.; Deck, C.; Bauer, J.; Sygnowski, J.; Trebacz, M.; Jaderberg, M.; Mathieu, M.; et al. Open-ended learning leads to generally capable agents. arXiv 2021, arXiv:2107.12808. [Google Scholar]
  51. Agarwal, R.; Machado, M.C.; Castro, P.S.; Bellemare, M.G. Contrastive behavioral similarity embeddings for generalization in reinforcement learning. arXiv 2021, arXiv:2101.05265. [Google Scholar]
  52. Silver, D.; Hubert, T.; Schrittwieser, J.; Antonoglou, I.; Lai, M.; Guez, A.; Lanctot, M.; Sifre, L.; Kumaran, D.; Graepel, T.; et al. A general reinforcement learning algorithm that masters chess, shogi, and Go through self-play. Science 2018, 362, 1140–1144. [Google Scholar] [CrossRef]
  53. Barrett, D.; Hill, F.; Santoro, A.; Morcos, A.; Lillicrap, T. Measuring abstract reasoning in neural networks. In Proceedings of the International Conference on Machine Learning, PMLR 80, Stockholm, Sweden, 15 July 2018. [Google Scholar]
  54. Schuster, T.; Kalyan, A.; Polozov, O.; Kalai, A.T. Programming Puzzles. arXiv 2021, arXiv:2106.05784. [Google Scholar]
  55. Lewkowycz, A.; Andreassen, A.; Dohan, D.; Dyer, E.; Michalewski, H.; Ramasesh, V.; Slone, A.; Anil, C.; Schlag, I.; Gutman-Solo, T.; et al. Solving Quantitative Reasoning Problems with Language Models. arXiv 2022, arXiv:2206.14858. [Google Scholar]
  56. Drori, I.; Zhang, S.; Shuttleworth, R.; Tang, L.; Lu, A.; Ke, E.; Liu, K.; Chen, L.; Tran, S.; Cheng, N.; et al. A Neural Network Solves, Explains, and Generates University Math Problems by Program Synthesis and Few-Shot Learning at Human Level. arXiv 2021, arXiv:2112.15594. [Google Scholar] [CrossRef]
  57. Reed, S.; Zolna, K.; Parisotto, E.; Colmenarejo, S.G.; Novikov, A.; Barth-Maron, G.; Gimenez, M.; Sulsky, Y.; Kay, J.; Springenberg, J.T.; et al. A Generalist Agent. arXiv 2022, arXiv:2205.06175. [Google Scholar]
  58. Lee, K.H.; Nachum, O.; Yang, M.; Lee, L.; Freeman, D.; Xu, W.; Guadarrama, S.; Fischer, I.; Jang, E.; Michalewski, H.; et al. Multi-Game Decision Transformers. arXiv 2022, arXiv:2205.15241. [Google Scholar]
  59. Chen, L.; Lu, K.; Rajeswaran, A.; Lee, K.; Grover, A.; Laskin, M.; Abbeel, P.; Srinivas, A.; Mordatch, I. Decision Transformer: Reinforcement Learning via Sequence Modeling. Adv. Neural Inf. Process. Syst. 2021, 34, 15084–15097. [Google Scholar]
  60. Fei, N.; Lu, Z.; Gao, Y.; Yang, G.; Huo, Y.; Wen, J.; Lu, H.; Song, R.; Gao, X.; Xiang, T.; et al. Towards artificial general intelligence via a multimodal foundation model. Nat. Commun. 2022, 13, 1–13. [Google Scholar] [CrossRef] [PubMed]
  61. Kant, I.; Smith, N.K. Immanuel Kant’s Critique of Pure Reason; Translated by Norman Kemp Smith; Macmillan & Co: London, UK, 1929. [Google Scholar]
  62. Chan, S.C.; Santoro, A.; Lampinen, A.K.; Wang, J.X.; Singh, A.; Richemond, P.H.; McClelland, J.; Hill, F. Data Distributional Properties Drive Emergent In-Context Learning in Transformers. arXiv 2022, arXiv:2205.05055. [Google Scholar]
  63. Seo, P.H.; Nagrani, A.; Arnab, A.; Schmid, C. End-to-end Generative Pretraining for Multimodal Video Captioning. arXiv 2022, arXiv:2201.08264. [Google Scholar]
  64. Yan, C.; Carnevale, F.; Georgiev, P.; Santoro, A.; Guy, A.; Muldal, A.; Hung, C.; Abramson, J.; Lillicrap, T.; Wayne, G. Intra-agent speech permits zero-shot task acquisition. arXiv 2022, arXiv:2206.03139. [Google Scholar]
  65. Guo, Z.D.; Thakoor, S.; Pîslar, M.; Pires, B.A.; Altche, F.; Tallec, C.; Saade, A.; Calandriello, D.; Grill, J.; Tang, Y.; et al. BYOL-Explore: Exploration by Bootstrapped Prediction. arXiv 2022, arXiv:2206.08332. [Google Scholar]
  66. Baker, B.; Akkaya, I.; Zhokhov, P.; Huizinga, J.; Tang, J.; Ecoffet, A.; Houghton, B.; Sampedro, R.; Clune, J. Video PreTraining (VPT): Learning to Act by Watching Unlabeled Online Videos. arXiv 2022, arXiv:2206.11795. [Google Scholar]
  67. Traniello, I.M.; Chen, Z.; Bagchi, V.A.; Robinson, G.E. Valence of social information is encoded in different subpopulations of mushroom body Kenyon cells in the honeybee brain. Proc. R. Soc. B 2019, 286, 20190901. [Google Scholar] [CrossRef]
  68. Bickle, J. The first two decades of CREB-memory research: Data for philosophy of neuroscience. AIMS Neurosci. 2021, 8, 322. [Google Scholar] [CrossRef]
  69. Piller, C. Blots on a field? Science 2022, 377, 358–363. [Google Scholar] [CrossRef] [PubMed]
Figure 1. An abstract representation of information that is received by a sensory organ, such as the light rays absorbed by neuronal cells across the retinal surface of the camera eye [3].
Figure 2. The first panel is a drawing of the digit nine (9), while the next panel is the same digit transformed by rotation [3].