1. Introduction
To make computing machines mimic living organisms, we must first understand the unique features that make living organisms sentient, resilient, and intelligent. Physical and mental structures that transform information and knowledge are the essential ingredients of all living organisms. Our knowledge about these structures comes from genomics [1], neuroscience [2,3], cognitive science [4], and the studies of artificial intelligence [5,6]. The references cited provide a compelling picture of the information processing structures used by living organisms and their role in managing the "life" processes with varying degrees of sentience, resilience, and intelligence.
In most living organisms, genes encode the life processes and pass them on from survivor to successor. The genetic knowledge structures include executable workflows and control processes that describe stable patterns of the living organism. These processes are designed to make optimal use of available resources to assure the organism's creation and safekeeping when it interacts with its environment. Creation involves the processes that use knowledge to transform matter and energy; the system with "self" awareness is assembled from physical structures with several constituent components. Safekeeping involves the ability to dynamically monitor and control an organism's behavior, along with its interactions with its environment, using genetic descriptions. Intelligent systems augment the knowledge inherited through genes with cognitive processes embedded in nervous systems and neural networks. The system gains information through its sensory components and converts it to knowledge using its neural networks. The neurons that fire together are wired together to capture knowledge about the events that caused the firing, and the neurons that are wired together fire together to exhibit autopoietic and cognitive behaviors.
Both autopoiesis and cognition are capabilities exploited by living organisms. They are the essence of an organism's sentient, resilient, and intelligent behaviors and contribute towards managing its stability, safety, and sustenance. Autopoiesis refers to the behavior of a system that replicates itself and maintains its identity and stability while its components face fluctuations caused by external influences. It enables organisms to use the specification in their genomes to instantiate themselves through matter and energy transformations; they reproduce, replicate, and manage their stability using cognitive processes. Cognition allows an organism to process information into knowledge and use it to manage the interactions among its constituent parts and its interaction with the environment. Cognition uses various mechanisms to gather information from many sources, convert it into knowledge, develop a history by memorizing transactions, and identify new associations through their analysis. Organisms have developed various forms of cognition. According to Burgin [private communication],
A process is:
embedded if it goes in some physical or mental system; for example, the process of walking in the street is embedded in this street but not embodied in it.
embodied if it goes in the system that maintains it; for example, the process of computation going in a computer.
enacted if it is initiated by the system where it goes, by a system involved in the process, or by another system.
elevated if there is a hierarchy of processes and the process goes on higher levels of this hierarchy; for example, the hierarchy of inductive Turing machines and processes within them.
extended if it moves outside the system in which it started or if it goes beyond some boundary in time.
efficient if it produces high-quality results being provided with sensible resources.
endogenous if it has an internal cause or origin.
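The seven categories above ("7E cognition") can be captured as a simple enumeration. The following is an illustrative Python encoding only; the class name `Cognition7E` and the idea of combining categories as flags are our own assumptions, not part of GTI:

```python
from enum import Flag, auto

class Cognition7E(Flag):
    """Hypothetical encoding of Burgin's seven categories of processes."""
    EMBEDDED = auto()    # goes on in some physical or mental system
    EMBODIED = auto()    # goes on in the system that maintains it
    ENACTED = auto()     # initiated by the system where it goes, or by another
    ELEVATED = auto()    # goes on at higher levels of a process hierarchy
    EXTENDED = auto()    # moves outside the system in which it started
    EFFICIENT = auto()   # produces high-quality results with sensible resources
    ENDOGENOUS = auto()  # has an internal cause or origin

# A process may fall into several categories at once; for example,
# a computation running in a computer is embodied, and if the computer
# itself initiates it, also enacted:
computation = Cognition7E.EMBODIED | Cognition7E.ENACTED
print(Cognition7E.EMBODIED in computation)  # True
```

Using `Flag` rather than a plain `Enum` reflects the fact that the categories are not mutually exclusive: one process can be, say, embodied, enacted, and efficient simultaneously.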
In short, the computing models of living organisms, consisting of complex multi-layer networks of genes combined with neural network processing, enable the formulation of descriptions and the execution of workflow components that carry not only the content of how to accomplish a task but also the context, constraints, control, and communication needed for the systemic coordination that accomplishes the overall purpose embedded in the genome. Various constituent structures process information and convert it into knowledge, which is integrated and used by a higher level of cognition known as elevated cognition.
Intelligent systems have also developed internal and external communication structures that allow sentient behavior (the ability to sense and react). Computing (the ability to transform information obtained through the senses and to create and process knowledge structures capturing the dynamics), communication (the ability to pass information among its components and with external systems), and cognition (the ability to create and execute processes that sense and react to changing circumstances) are essential ingredients of intelligence that provide sentience and resilience (the ability to know and adapt appropriately to changing circumstances).
Biological structures are described as complex adaptive systems (CAS) composed of many interrelated and interacting components (made up of components that exploit the properties of atoms, molecules, compounds, etc., to create the composed structures). CAS [7] exhibit self-organization, non-linearity, transitions between states of order and chaos, and emergence. Such a system often exhibits behavior that is difficult to explain through an analysis of its constituent parts; this behavior is called emergent. CAS are complex systems that can adapt to their environment through an evolution-like process and are isomorphic to networks (nodes executing specific functions based on local knowledge and communicating information over the links that connect them). The system evolves into a complex multi-layer network, and the functions of the nodes and the composed structure define the global behavior of the system as a whole. Sentience, resilience, and intelligence are the result of these structural transformations and dynamics exhibiting autopoietic and cognitive behaviors.
If digital machines are to mimic living organisms, we must endow them with the ability to exhibit autopoietic and cognitive behaviors. Fortunately, the general theory of information (GTI) [8,9], our understanding of structural reality [10], and various tools derived from them [11,12,13,14,15] provide a new approach not only to model autopoietic and cognitive behaviors in living organisms but also to infuse them into digital automata.
The thesis of this paper is that if digital machines are to mimic living organisms' sentient, resilient, and intelligent behaviors, then they must be infused with autopoietic and cognitive behaviors. Current information-processing structures built on symbolic computing (based on John von Neumann's stored-program implementation of the Turing machine) and deep learning (based on algorithms that mimic neural networks) fall short [16,17,18,19] of mimicking the autopoietic and cognitive behaviors of living organisms. Software applications lack self-management and depend on external entities to find resources, deploy, configure, monitor, and manage them. Current deep learning algorithms such as CNNs (convolutional neural networks), RNNs (recurrent neural networks), etc., while very successful in deriving knowledge insights from information gathered and represented in the form of symbolic data structures, do not provide the basic ingredients required to integrate knowledge insights from multiple sets of inputs from different sources. In short, they are unable to provide a common knowledge representation across different neural networks processing different data sets, as the mammalian neocortex and the reptilian cortical columns do, as we shall see later. The required ingredients are:
A systemic view of the myriad relationships among the knowledge components inferred from different sources (equivalent to the sense of "self" and its structural relationships with the entities with which it interacts using 7E cognition),
A common knowledge representation to encapsulate the dynamic interactions and component behavioral evolution as events influence changes in the system, and
A sense of history and the best practices to reason at a higher level with shared knowledge from multiple inputs from multiple components to optimize global behavior to address fluctuations that impact the stability, safety, and sustenance of the system (elevated cognition).
In essence, current symbolic and sub-symbolic components constitute a CAS and, left to themselves, they are subject to non-deterministic emergence in the face of large fluctuations in the interactions among system components. Autopoietic and elevated cognition (known as super-symbolic computation [15]) provides the mechanisms to maintain stability and safety and to optimize the system's global behavior.
In this paper, we use GTI to discuss the evolution of sentience, resilience, and intelligence in living organisms. We examine information processing structures and discuss a theoretical model providing their essential characteristics such as autopoiesis and cognition. The model is derived from GTI and described in [8,9,10,11,12,13,14,15]. This model allows us to design ways to infuse autopoietic and cognitive behaviors into digital information processing structures built using digital automata.
Section 2 describes the lessons from studying the evolution of autopoietic and cognitive behaviors in living organisms. In Section 3, we present a theoretical model based on the general theory of information and the theory of structures to replicate the structures exhibiting the autopoietic and cognitive behaviors. In Section 4, we describe a new approach to integrating knowledge from multiple sources, such as symbolic and sub-symbolic computations, with a common knowledge representation, and provide model-based reasoning to mitigate risk. In Section 5, we conclude with some observations on this approach and its impact on information technology's current and future state.
3. General Theory of Information and the Burgin-Mikkilineni Thesis (BMT)
GTI [8,9,10] provides a "unified context for existing directions in information studies, making it possible to elaborate on a comprehensive definition of information; explain relations between information, data, and knowledge; and demonstrate how different mathematical models of information and information processes are related [32] p. 1." We briefly summarize the tools that GTI provides to model information processing structures and their behaviors both in humans and digital machines.
All material structures contain information, and living organisms have developed physical structures (networks of genes and neurons) which gather information through their senses and convert it into knowledge. Knowledge, according to GTI, consists of a fundamental triad or a named set [33]. "Named sets as the most encompassing and fundamental mathematical construction encompass all generalizations of ordinary sets and provide unified foundations for the whole mathematics [8] p. 566." According to GTI, "Any natural phenomenon has the structure of some fundamental triad (FUTRAD) or some system consisting of fundamental triads (FUTRADS). As a consequence, fundamental triads and their systems appear to be the basic objects of cognition, and the theory of fundamental triads helps to attain a new and profound understanding of nature's structure and behavior—with a refreshing and simpler (than before) way to describe it. [34] p. 7."
The fundamental triad represents knowledge about structures, as shown in Figure 2.
It represents knowledge in the form of two entities, their relationships, and their interactions represented as behaviors. An information unit is described by the existence or non-existence (1 or 0) of an entity or an object that is physically observed or mentally conceived. The difference between an entity and an object is that an entity is an abstract concept with attributes (for example, a computer with memory and a CPU), while an object is an instance of an entity with an identity, defined by two components: the object-state and the object-behavior. An attribute is a key-value pair with an identity (name) and an associated value; the attribute's state is defined by its value. Information is related to knowledge and is defined by the relationships between various entities and their interactions (behaviors) when the values of the attributes change. A named set as a fundamental triad defines the knowledge about two different entities (Figure 2). Each entity, called a knowledge node, receives information through various sensors and transforms it into knowledge based on its internal state, which defines various attributes, relationships, and behaviors.
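These definitions can be made concrete with a minimal Python sketch. The class names `FundamentalTriad` and `KnowledgeNode`, and the CPU-load example, are our own illustrative assumptions, not definitions taken from GTI:

```python
from dataclasses import dataclass, field
from typing import Any, Callable, Dict

@dataclass(frozen=True)
class FundamentalTriad:
    """A named set: two entities connected by a named relationship."""
    source: str
    relation: str
    target: str

@dataclass
class KnowledgeNode:
    """An entity with attribute state; behaviors fire when attributes change."""
    name: str
    attributes: Dict[str, Any] = field(default_factory=dict)
    behaviors: Dict[str, Callable[["KnowledgeNode", Any], None]] = field(default_factory=dict)

    def observe(self, attribute: str, value: Any) -> None:
        """Receive information: a change in an attribute's value may
        trigger a behavior, transforming information into knowledge."""
        old = self.attributes.get(attribute)
        self.attributes[attribute] = value
        if old != value and attribute in self.behaviors:
            self.behaviors[attribute](self, value)

# Example: a 'computer' entity whose CPU-load attribute drives a behavior.
node = KnowledgeNode("computer", {"cpu_load": 0.1})
node.behaviors["cpu_load"] = lambda n, v: n.attributes.update(
    state="overloaded" if v > 0.9 else "normal")
node.observe("cpu_load", 0.95)
print(node.attributes["state"])  # overloaded

triad = FundamentalTriad("computer", "monitored_by", "operator")
```

The triad captures the static relationship between two entities, while the node's `observe` method captures the dynamic part: information (an attribute change) becoming knowledge (an updated state).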
Figure 3 shows the structure of a knowledge node.
A knowledge structure [14] defines various triadic relationships between all the entities that are contained in a system. A knowledge structure is composed of knowledge nodes representing the domain knowledge as a multi-layer complex network depicting various entities, their relationships, and their behaviors.
In essence, a knowledge structure schema and operations provide a process model and its evolution [14].
Figure 4 depicts a knowledge structure as a multilayer network.
As discussed in detail in Burgin and Mikkilineni [14], the knowledge structure provides a common representation of the knowledge received from various sources using composed fundamental triads. All the knowledge nodes wired together fire together to exhibit collective behavior.
A structural machine [12,13,14] is an information processing structure that uses knowledge structures as its schema and performs operations on them to evolve the information in the system from one instant to another whenever any attribute of any object changes. Structural machines supersede Turing machines in their representation of knowledge and in the operations that process information [14,15,16,17]. Triadic structural machines with multiple general and mission-oriented processors enable autopoietic and cognitive behaviors. The details of using structural machines, knowledge structure schemas, and the operations on them are discussed in the Burgin and Mikkilineni paper [14]. The knowledge nodes (shown in Figure 4) are executed by the structural machine using various processors with conventional resources. Structural machines operate on knowledge structures, in contrast to Turing machines, which operate on symbolic data structures.
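The contrast with Turing machines can be illustrated with a toy sketch in which the machine's "tape" is a graph of nodes and triadic links, and the unit of processing is an attribute change that propagates along the links. The class name, methods, and the sensor/monitor example are illustrative assumptions, not the published implementation:

```python
from typing import Any, Callable, Dict, List, Tuple

class StructuralMachine:
    """Toy structural machine: it operates on a knowledge structure
    (nodes plus triadic links) rather than on a linear string of symbols."""
    def __init__(self) -> None:
        self.nodes: Dict[str, Dict[str, Any]] = {}
        self.links: List[Tuple[str, str, str]] = []          # (source, relation, target)
        self.handlers: Dict[str, Callable[[str, Any], None]] = {}

    def add_node(self, name: str, **attrs: Any) -> None:
        self.nodes[name] = dict(attrs)

    def link(self, source: str, relation: str, target: str) -> None:
        self.links.append((source, relation, target))

    def set_attribute(self, name: str, attr: str, value: Any) -> None:
        """An attribute change evolves the structure and propagates along
        links: nodes 'wired together' react ('fire') together."""
        self.nodes[name][attr] = value
        for src, rel, tgt in self.links:
            if src == name and tgt in self.handlers:
                self.handlers[tgt](name, value)

sm = StructuralMachine()
sm.add_node("sensor", reading=0)
sm.add_node("monitor", alarms=0)
sm.link("sensor", "feeds", "monitor")
sm.handlers["monitor"] = lambda src, v: sm.nodes["monitor"].update(
    alarms=sm.nodes["monitor"]["alarms"] + (1 if v > 100 else 0))
sm.set_attribute("sensor", "reading", 120)
print(sm.nodes["monitor"]["alarms"])  # 1
```

The point of the sketch is the shape of the computation: the state is a structure of entities and relationships, and operations transform that structure, rather than rewriting symbols on a tape.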
The ontological Burgin-Mikkilineni thesis states that "autopoietic and cognitive behavior of artificial systems must function on three levels of information processing systems and be based on triadic automata. The axiological BM thesis states that efficient autopoietic and cognitive behavior has to employ structural machines. [17] p. 1."
A genome, in the language of GTI [17], encapsulates "knowledge structures" coded in the form of DNA and executed using "structural machines" in the form of genes and neurons, which use physical and chemical processes (dealing with the conversion of matter and energy). The information accumulated through biological evolution is encoded into knowledge to create the genome, which contains the knowledge network defining the function, structure, and autopoietic and cognitive processes needed to build and evolve the system while managing both deterministic and non-deterministic fluctuations in the interactions among internal components and with the environment.
A digital genome [17] is defined as a collection of "knowledge structures" coded in an executable form to be processed by "structural machines" implemented using digital genes (in the form of symbolic computing algorithms) and digital neurons (in the form of sub-symbolic neural-network algorithms), both of which use the stored-program implementation of Turing machines. The digital genome enables digital process execution to discover the computing resources in the environment, use them to assemble the hardware and the cognitive apparatuses in the form of digital genes and digital neurons, and evolve the sentient, resilient, intelligent, and efficient management of both the self and the environment with 7E cognitive processes.
The digital genome incorporates knowledge in the form of multi-layer intelligence, with a definition of the sentient digital computing structures that discover, monitor, and evolve both themselves and their interactions with each other and the environment, based on the best practices infused in them.
The digital genome specifies the execution of knowledge networks using both symbolic and sub-symbolic computing structures. The knowledge network consists of a super-symbolic network of symbolic and sub-symbolic networks executing the functions defined in their components [15]. The structure provides the system's behavior and evolution, maintaining the system's stability in the face of fluctuations in both internal and external interactions. The digital genome encapsulates both the autopoietic and cognitive behaviors of digital information processing structures capable of sentience, resilience, and intelligence. The digital genome typifies infused cognition, as opposed to the evolved cognition of biological systems; the infusion is performed by the human operators who teach the machines how to evolve.
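To make the notion more concrete, a digital genome might be sketched as a declarative, executable specification of an application's "life processes". This is a hypothetical example: the component names, resource figures, and policy fields are invented for illustration and are not taken from the cited papers:

```python
# Hypothetical digital genome: what components exist (symbolic and
# sub-symbolic), what resources they need, how they are linked, and
# what policies keep the system stable.
digital_genome = {
    "application": "order-processing",
    "components": {
        "web": {"kind": "symbolic", "image": "web-server", "cpu": 2, "memory_gb": 4},
        "app": {"kind": "symbolic", "image": "app-server", "cpu": 4, "memory_gb": 8},
        "db":  {"kind": "symbolic", "image": "database",   "cpu": 4, "memory_gb": 16},
        "anomaly-detector": {"kind": "sub-symbolic", "model": "autoencoder",
                             "cpu": 2, "memory_gb": 4},
    },
    "links": [("web", "calls", "app"), ("app", "queries", "db"),
              ("app", "feeds", "anomaly-detector")],
    "policies": {  # autopoietic "life process" definitions
        "web": {"replicas_min": 2, "restart_on_failure": True},
        "db":  {"backup_interval_min": 30, "restart_on_failure": True},
    },
}

def total_resources(genome):
    """Knowledge the genome carries about what it needs to instantiate itself."""
    comps = genome["components"].values()
    return sum(c["cpu"] for c in comps), sum(c["memory_gb"] for c in comps)

print(total_resources(digital_genome))  # (12, 32)
```

The specification is data, but it is meant to be executed: a runtime reads it, discovers matching resources, instantiates the components, and then enforces the policies, which is the sense in which the genome "manages" its downstream networks.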
In the next section, we will describe how to design a new class of autopoietic and cognitive machines using existing information technologies such as cloud computing, containers, and their management tools just as the neocortex overlay utilized existing reptilian cognitive behaviors.
4. Infusing Autopoietic and Cognitive Behaviors into Digital Automata
GTI and structural theories of reality tell us that the material world is composed of structures that deal with transformations of matter and energy. The mental world exists in living beings and is composed of structures that deal with information and knowledge. The mental structures are formed using the physical structures to receive information from various senses, process it to create knowledge structures, and use them to manage the stability, safety, and sustenance of the system. The physical structures used to process information consist of symbolic (networks of genes) and sub-symbolic computing structures (neural networks). The digital world is composed of symbolic and sub-symbolic structures that process information received in the form of symbols and convert it into the knowledge of the state of the system and its evolution.
Symbolic and sub-symbolic computing structures, with various algorithms that operate on symbolic data structures, have provided significant benefits, including business process automation and real-time communication, collaboration, and commerce. Deep learning has delivered a variety of practical uses by revolutionizing customer experience, machine translation, language recognition, autonomous vehicles, computer vision, text generation, speech understanding, and a multitude of other AI applications. However, current symbolic and sub-symbolic computing structures operate as silos, and there is no common knowledge representation that brings the knowledge from these silos together. We propose to use knowledge structures to integrate the knowledge from the silos and provide super-symbolic computing that operates on knowledge structures, in contrast to data structures.
Figure 5 shows a new class of machines that integrate symbolic and sub-symbolic computing structures. Symbolic computing (using algorithms and operations on symbolic data structures) provides the equivalent of the networks of genes that interact with physical resources using the transformation laws of matter and energy. The neural-network algorithms provide the equivalent of extracting information and converting it into knowledge with 4E cognition. Super-symbolic computing provides the mechanism to represent knowledge from multiple sources as a knowledge network. The nodes contain the knowledge structures representing the state of various entities, relationships, and behaviors as always-on executable service modules. The inputs to a knowledge node provide the information that triggers behavioral changes in the node, which impact other nodes' behaviors through the communication of information as outputs. All the knowledge nodes wired together fire together to execute a collective behavior. The behavior is defined by pre-condition and post-condition constraints. The knowledge network is implemented as a structural machine that provides operations on the knowledge structure schema [12,13,14].
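The role of the pre-condition and post-condition constraints can be sketched as follows. This is a minimal illustration; the `run_behavior` helper and the scaling example are our own assumptions, not the authors' implementation:

```python
from typing import Any, Callable, Dict

def run_behavior(state: Dict[str, Any],
                 behavior: Callable[[Dict[str, Any]], None],
                 pre: Callable[[Dict[str, Any]], bool],
                 post: Callable[[Dict[str, Any]], bool]) -> bool:
    """Execute a knowledge-node behavior only when its pre-condition
    holds, and report whether the post-condition was achieved."""
    if not pre(state):
        return False                      # behavior is not enabled
    behavior(state)
    return post(state)                    # did it achieve its intent?

# Example: scaling a service is allowed only below a replica ceiling,
# and succeeds only if the load per replica drops to an acceptable level.
state = {"replicas": 2, "load": 180}
ok = run_behavior(
    state,
    behavior=lambda s: s.update(replicas=s["replicas"] + 1),
    pre=lambda s: s["replicas"] < 5,
    post=lambda s: s["load"] / s["replicas"] < 100,
)
print(ok, state["replicas"])  # True 3
```

The pre-condition gates when a behavior may fire; the post-condition is the check that lets a higher layer detect a deviation and trigger a correction.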
A design of the implementation of a structural machine using knowledge structures that represent the "life processes" of a computational workflow of a business software application is depicted in Figure 5. The digital genome shown on the right-hand side is like a cell that contains all the executables, their control structures, and the operational details of an application designed as a distributed computing structure executed on infrastructures offered by different providers. It contains knowledge about what resources (CPU, memory, storage, network, etc.) are required for each component, where they are available, and how to use them. It is, in essence, a specification of all the "life processes" defined in the genome as knowledge structures (entities, relationships, and behaviors). When the digital genome is executed in a computer as a program to create a specific instance of the business application, it functions as a manager of the downstream networks that will be created, monitored, and managed based on the "life process" definitions for those networks. This is done by creating downstream network managers that know what their downstream functions are (like specialized functional cells such as a web server, an application server, and a database), execute specific functions, and communicate with other cells, influencing their behaviors. All specialized functions are defined using knowledge structures (again, entities, relationships, and behaviors executed using the local CPU, memory, OS, database, file system, program executables, etc.).
The figure shows a digital genome node using cloud resources. It deploys the autopoietic and cognitive knowledge networks. Each leaf knowledge node, in turn, configures, executes, monitors, and manages the various functional tasks that contribute to the knowledge network's behavior. The functional knowledge nodes wired together fire together to execute the collective behaviors. The autopoietic and cognitive behaviors that model the life processes are executed at each layer to maintain global stability and intended outcomes. Deviations are monitored at each level, and corrections are made based on the life process definitions at each level.
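The monitor-and-correct loop described above can be reduced to a simple reconciliation step. The sketch below is an assumed design for illustration, not the authors' implementation; the component names are invented:

```python
# Each manager compares the observed state of its downstream components
# with the "life process" definition (the desired state) and emits the
# corrective actions needed to restore it.
desired = {"web": "running", "app": "running", "db": "running"}

def reconcile(observed: dict, desired: dict) -> list:
    """Return the corrective actions needed to restore the desired state."""
    actions = []
    for component, want in desired.items():
        if observed.get(component) != want:
            actions.append(("restart", component))
    return actions

observed = {"web": "running", "app": "crashed", "db": "running"}
print(reconcile(observed, desired))  # [('restart', 'app')]
```

Run repeatedly at every layer of the hierarchy, this is the autopoietic part of the design: each level restores its own piece of the life process definition, so deviations are corrected close to where they occur.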
The figure also shows how neural networks and sub-symbolic computing are used to create common knowledge representation from multiple modules executing different algorithms. Knowledge structures and knowledge networks provide a method to create a common knowledge representation from information obtained from multiple means.
The introduction of a common knowledge representation in the form of knowledge structures that combine the knowledge obtained from symbolic and sub-symbolic computations in the functional nodes is new. The structural machine implementation with triadic automata [12,13,14] provides global stability and successful global outcomes from various components to meet the system's goals while maintaining the local autonomy of individual component management. The super-symbolic overlay manages global optimization while dealing with fluctuations in the interactions of components impacted by local constraints. Real-time global monitoring and management provide the capability to optimize global behavior by reconfiguring the downstream knowledge network. To the author's knowledge, this approach is the first of its kind in introducing autopoietic and cognitive behaviors into the discussion of digital automata and the path towards strong AI. GTI, the structural theory of reality, and the derived tools provide a powerful framework not only to understand the material world but also to design and build a new class of digital machines with improved sentience, resilience, and intelligence. The beauty of this approach is that it utilizes current-generation symbolic and sub-symbolic computing structures, without disrupting the status quo, to create a new class of machines. This is precisely how the mammalian neocortex utilized the classical reptilian cortical columns to integrate knowledge obtained from multiple senses.