Article

Shannon Holes, Black Holes, and Knowledge: The Essential Tension for Autonomous Human–Machine Teams Facing Uncertainty

by William Lawless 1,*,† and Ira S. Moskowitz 2,†
1 Department of Mathematics and Psychology, Paine College, Augusta, GA 30901, USA
2 Naval Research Laboratory, Information Technology Division-5580, Washington, DC 20375, USA
* Author to whom correspondence should be addressed.
† The second author assisted the lead first author.
Knowledge 2024, 4(3), 331-357; https://doi.org/10.3390/knowledge4030019
Submission received: 9 February 2024 / Revised: 18 June 2024 / Accepted: 20 June 2024 / Published: 5 July 2024

Abstract: We develop a new theory of knowledge with mathematics and a broad-based series of case studies to seek a better understanding of what constitutes knowledge in the field and its value for autonomous human–machine teams facing uncertainty in the open. Like humans as teammates, artificial intelligence (AI) machines must be able to determine what constitutes the usable knowledge that contributes to a team’s success when facing uncertainty in the field (e.g., testing “knowledge” in the field with debate; identifying new knowledge; using knowledge to innovate), its failure (e.g., troubleshooting; identifying weaknesses; discovering vulnerabilities; exploitation using deception), and feeding the results back to users and society. It matters not whether a debate is public, private, or unexpressed by an individual human or machine agent acting alone; regardless, in this exploration, we speculate that only a transparent process advances the science of autonomous human–machine teams, assists in interpretable machine learning, and allows a free people and their machines to co-evolve. The complexity of the team is taken into consideration in our search for knowledge, which can also be used as an information metric. We conclude that the structure of “knowledge”, once found, is resistant to alternatives (i.e., it is ordered); that its functional utility is generalizable; and that its useful applications are multifaceted (akin to maximum entropy production). Our novel finding is the existence of Shannon holes, gaps in knowledge whose surprising “discovery” was only to find that Shannon had been there first.

1. Introduction

In a preview of our theory of interdependence [1], we conclude the following: humans make trade-offs in energy–entropy production between structure and performance, where a poor team structure reduces performance, characterized by more structural entropy produced from wasted energy, motivating the need to replace poorly performing members; humans are unable to physically copy and replicate the choices that lead to well-performing teams, introducing random selection into the choice of new members; interdependence requires strong boundaries that reduce interference in order to maintain states of interdependence; the coherence of a state of interdependence requires or reflects stability; and, among other findings not reviewed in this article (for a fuller review, see [2]), the vulnerability of a targeted team is revealed, to its opponents and to itself, by the loss of the team’s structural integrity, a loss in its productivity, or both. To improve the odds of achieving autonomous operations, by necessity, this topic is wide-ranging.
We postpone two other topics, the complexity of teams [1] and innovation [2]. First, we found that as the pieces of a complex solution fit together (e.g., the structure of a team), its structural entropy is reduced; second, using United Nations data for the Middle East and North Africa (MENA) nations, we found that Israel is not only the leading country in producing patents but also the leader in educating its young people, forming a significant association.
We begin by defining knowledge, one of our keywords. The Oxford Dictionary of English defines knowledge as the “facts, information, and skills acquired through experience or education; the theoretical or practical understanding of a subject” (Oxford Dictionary of English, 2nd edition; see https://www.oed.com (accessed on 17 June 2024)). It has also been defined by Oxford Languages as the “awareness… by experience of a fact or situation” (Oxford Languages at https://languages.oup.com/google-dictionary-en/ (accessed on 17 June 2024)).
We also begin with an overview from the perspective of a machine with intelligence comparable to a human teammate. We review the definitions of knowledge across several disciplines: systems engineering; philosophy; social science, including citizen decisions, business, consciousness, and authoritarianism; information theory; and physics. These interdisciplinary branches of knowledge are representative; we hope that others will move beyond the disciplines we consider now. We justify our considerations by speculating that humans will need intelligent machines with AI that are able to contribute to a team’s intelligence [3] as partners in a team while we humans as a species advance into Sagan’s cosmos with space travel to discover new knowledge.

1.1. Scope of the Article

Outline of the Paper. After the Introduction, we address the Materials and Methods with a focus on case studies; there, we pose questions and propose a hypothesis. In the section that follows, we review the results; we then discuss the results and draw conclusions. Our main contribution is the (re)discovery of gaps in knowledge, which we name Shannon holes.

1.2. Main Contribution

1. We have identified the existence of Shannon holes.
2. Shannon holes represent gaps in the information from an interaction. The main practice used by humans to address these gaps in knowledge is debate.
3. Our main contribution is the justification for all members of an autonomous human–machine team to confront uncertainty with debate, a tool that is able to test knowledge or build new knowledge.

2. Materials and Methods

2.1. Research Design

In this section, we review our research design. Afterward, we review the questions we want to address and, where possible, answer. We then provide our hypothesis, along with the novelty of our research hypothesis, which we review and critique. Our design provides a case study of knowledge across several disciplines. The results include a critique of what we found in the case studies.

2.1.1. Questions

There are more questions about autonomy and human–machine teams than we can answer, but the ones we plan to address in this article are as follows:
1. What is the value of debate in the furtherance of knowledge?
2. Will machines with AI be able to contribute to a debate if we humans cannot define it, model it, or determine its value sufficiently for a machine’s understanding, contribution, exploration, and identification?
3. How does a human become aware or express their awareness of knowledge?
4. Can a machine be as expressive as its human teammates?
5. How does a human–machine teammate become aware that its teammates possess sufficient knowledge to perform a task?

2.1.2. Hypothesis

To date, the information intrinsic to the states of interdependence between two or more agents has not been unraveled. Based on the evidence we present, that the interdependence in an interaction has not been unraveled may itself characterize the interaction.
We hypothesize that knowledge at the individual level does not encompass an ongoing interaction, especially when the interaction is between two agents dependent upon each other. We hypothesize that the mutual dependency between two or more agents, especially in a team, prevents knowledge of the interaction from being collected, analyzed, or established. If true, this requires acknowledgment, along with the human–machine tools to deal with information gaps in the interaction and to work with the interactions themselves.
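Shannon’s mutual information offers one way to make this hypothesis concrete: the information carried by an interaction, I(X;Y) = H(X) + H(Y) − H(X,Y), is invisible in either agent’s marginal distribution alone. A minimal sketch with hypothetical numbers (the joint distribution and variable names are ours, not from the source):

```python
import math

def entropy(probs):
    """Shannon entropy in bits; terms with p = 0 contribute nothing."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Joint distribution of two perfectly interdependent binary agents:
# they always act in unison (hypothetical numbers).
joint = {(0, 0): 0.5, (1, 1): 0.5}

# Marginal distributions of each agent considered alone.
px = [sum(p for (x, _), p in joint.items() if x == a) for a in (0, 1)]
py = [sum(p for (_, y), p in joint.items() if y == b) for b in (0, 1)]

h_x, h_y = entropy(px), entropy(py)
h_xy = entropy(list(joint.values()))
mutual_info = h_x + h_y - h_xy  # I(X;Y): information carried by the interaction
print(h_x, h_y, h_xy, mutual_info)
```

Each marginal is a fair coin (one bit) and reveals nothing about the coupling; the full bit of interdependence appears only in the joint distribution, i.e., in the interaction itself.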

2.1.3. Novelty

In our research design, we use mathematics, theory, and evidence from the field and the literature to critique the case studies.

2.2. What Is Knowledge to Us?

Knowledge can be applied, as in the exploration for oil; it can also be used to discover reality indirectly, as with black holes. Knowledge implies no surprise from its predictions, like the lack of entropy generated by predicting tomorrow’s sunrise or sunset [4].
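The idea that knowledge means “no surprise” can be stated in Shannon’s terms: a prediction held with certainty generates zero entropy, while an uncertain one generates entropy. A minimal sketch (the function name is ours):

```python
import math

def shannon_entropy(probs):
    """Shannon entropy H = -sum(p * log2 p), in bits; p = 0 terms vanish."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

h_certain = shannon_entropy([1.0])    # the sunrise tomorrow: no surprise
h_coin = shannon_entropy([0.5, 0.5])  # a fair coin: maximal surprise for two outcomes
print(h_certain, h_coin)
```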
Kuhn argues in The Essential Tension [5] that Popper may have been correct that psychoanalysis was not a science, but that there were better reasons than the ones he provided (see also Thornton [6]; Ioannidis [7]). Popper argued for testable propositions. In contrast, Kuhn believed that a combination of cooperation and competition was essential to produce the “essential tension” found between the different concepts of opposing teams seeking the truth when facing uncertainty.
At this point, we hypothesize that whatever is defined as knowledge must be tested or determined by debating the various issues this definition raises. Assuming that there exist only two different concepts in a debate and that the two sides are complementary, these concepts form an orthogonal relationship, i.e., as θ goes to 90 degrees, then
\lim_{\theta \to 90^\circ} \mathbf{A} \cdot \mathbf{B} = \lim_{\theta \to 90^\circ} |\mathbf{A}|\,|\mathbf{B}| \cos\theta = 0 \quad (1)
As evidence, most social concepts applied to behavior fail in reality (e.g., reviewed below: self-esteem; implicit racism; ego depletion; honesty; etc.). The failure of social concepts to correspond to predicted results in reality could be accounted for by the complementarity between the predicting concept and the resulting predicted behavior. If exploratory concepts are discovered by convergence processes that disembody these concepts, and if actual behavior in physical reality is accompanied by embodied cognition (viz., interdependence), then the result should be a lack of correlation between disembodied concepts and actual, embodied behaviors in reality (namely, Equation (1)).
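The orthogonality in Equation (1) is easy to check numerically; a minimal sketch (the function name is ours, not from the source):

```python
import math

def dot_from_angle(mag_a, mag_b, theta_deg):
    """A . B = |A| |B| cos(theta), the right-hand side of Equation (1)."""
    return mag_a * mag_b * math.cos(math.radians(theta_deg))

# As the angle between two unit "concept" vectors approaches 90 degrees,
# their overlap (dot product) vanishes, per Equation (1):
for theta in (0, 45, 89, 90):
    print(theta, round(dot_from_angle(1.0, 1.0, theta), 6))
```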
Debate implies that tension is required not only to search for the truth and to test found truths but also to explore what a truth, once found, may mean; e.g., in politics, truth may be bounded by a compromise that defuses the emotions on opposing sides of an issue. While compromise is not “truth”, especially in the physical sciences, which often seek consensus, compromise may still bound the truth, while consensus may hide it.
Applying Knowledge in the Field: Oil. For example, under uncertainty, Suslick and Schiozer [8] provide an overview of the petroleum industry as a classic case of decision making that seeks to capture specific knowledge about reality in the field. Before them, Allais, the 1988 Nobel laureate in economics, had developed principles for the efficient pricing and resource allocation of large monopolistic enterprises. After the work of Allais, Suslick and Schiozer considered the economics of finding oil in the Algerian Sahara. They addressed the risks of exploration using probability theory by modeling the different stages of the exploration for oil. From Suslick and Schiozer [8],
“Many complex decision problems in petroleum exploration and production involve multiple conflicting objectives… An effective way to express uncertainty is to formulate a range of values, with confidence levels assigned to numbers comprising the range… Asset managers in the oil and gas industry are looking to new techniques such as portfolio management to determine the optimum diversified portfolio that will increase company value and reduce risk”.
Suslick and Schiozer [8] also considered Markowitz’s contribution to portfolio theory by balancing the risks across a portfolio facing uncertainty (see [9]):
“A portfolio is said to be efficient if no other portfolio has more value while having less or equal risk, and if no other portfolio has less risk while having equal or greater value… a portfolio can be worth more or less than the sum of its component projects and there is not one best portfolio, but a family of optimal portfolios that achieve a balance between risk and value”.
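The definition of efficiency in the quotation above can be expressed directly as a dominance test over (risk, value) pairs; a minimal sketch with hypothetical numbers (the function name and portfolios are ours):

```python
def is_efficient(candidate, others):
    """Markowitz-style efficiency: no other portfolio has more value with
    less-or-equal risk, nor less risk with equal-or-greater value."""
    risk_c, value_c = candidate
    for risk_o, value_o in others:
        if (value_o > value_c and risk_o <= risk_c) or \
           (risk_o < risk_c and value_o >= value_c):
            return False
    return True

# (risk, value) pairs for four hypothetical portfolios:
portfolios = [(0.10, 5.0), (0.20, 9.0), (0.15, 4.0), (0.30, 9.5)]
frontier = [p for p in portfolios
            if is_efficient(p, [q for q in portfolios if q != p])]
print(frontier)  # the family of optimal portfolios balancing risk and value
```

The third portfolio (risk 0.15, value 4.0) is dominated by the first, which offers more value at less risk; the remaining three form the efficient family the quotation describes.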
The authors in [8] also reviewed the limitations that restrict the use of risk analysis as a practical decision aid for understanding reality, especially when applied in the search for oil. They reviewed the strengths and weaknesses of risk analysis under uncertainty, concluding that
1. Risk analyses offer a way to handle very complex decisions characterized by multiple objectives under uncertainty across the different stages of seeking to find petroleum (i.e., identification, extraction, and production, with the benefits of oil weighed against its costs).
2. Risk analyses deal with complex tradeoffs and the preferences of different stakeholders when exploring reality (where the knowledge from complexity is characterized by a decrease in entropy).
3. After finding oil, risk analyses provide a systematic and comprehensive way for listing and reviewing the relevant factors in the extraction and production of oil (from a firm’s structural costs in finding oil, its personnel costs, but also the environmental costs, etc.).
Applying Knowledge in the Field: Department of Defense. In these environmental searches across physical reality to make successful decisions based on reality, probability theory considers the risks found in analyzing possibilities across the sets of events when making decisions ([10], pp. 14–15). We decompose risk into perceptions and determinations. Unlike risk determinations, however, risk perceptions can lead to tragedy; e.g., in 2021 [11], the U.S. Department of Defense (DoD) fired a drone at a perceived terrorist, instead killing 10 civilians, most of them children [2]. One of the several recommendations made by the DoD, with which we agree, was to use red teams to challenge a decision about risk before an action is enacted.
Portfolios of the available choices are important. Generalizing the research of Markowitz [9], Chen and colleagues [12] used swarm intelligence algorithms to optimize a designated portfolio. Swarm intelligence algorithms are mainly inspired by and developed from observing swarms in nature, which exhibit the self-organization, self-adaptation, and self-learning of biological populations (e.g., birds, elephants, wolves).
Their research [12] showed that swarm intelligence algorithms can efficiently produce satisfactory solutions to portfolio optimization (PO) problems. There are reservations, however, from [12],
“how to achieve the maximum benefit and minimum risk of dynamic multi-period portfolio is a worthy study problem in the future… How to choose the preference function will be also a valuable research topic… [and how] to evaluate the effectiveness of the established PO model”.
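As an illustration of the general approach in [12], a particle swarm optimizer (one common swarm intelligence algorithm) can search for low-risk portfolio weights. This is a minimal sketch under our own assumptions, not the authors’ model: the covariance matrix, parameters, and function names are all hypothetical.

```python
import random

# Toy covariance matrix for three assets (hypothetical numbers).
COV = [[0.10, 0.02, 0.01],
       [0.02, 0.08, 0.03],
       [0.01, 0.03, 0.12]]

def normalize(w):
    """Map a raw position to long-only weights that sum to 1."""
    s = sum(abs(x) for x in w) or 1.0
    return [abs(x) / s for x in w]

def risk(w):
    """Portfolio variance w' C w, the objective to minimize."""
    return sum(w[i] * COV[i][j] * w[j] for i in range(3) for j in range(3))

def pso(n_particles=30, iters=200, seed=0):
    rng = random.Random(seed)
    pos = [[rng.random() for _ in range(3)] for _ in range(n_particles)]
    vel = [[0.0] * 3 for _ in range(n_particles)]
    best_p = [p[:] for p in pos]  # each particle's best-known position
    best_g = min(best_p, key=lambda p: risk(normalize(p)))[:]  # swarm best
    for _ in range(iters):
        for i, p in enumerate(pos):
            for d in range(3):
                # Inertia plus attraction to personal and swarm bests.
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * rng.random() * (best_p[i][d] - p[d])
                             + 1.5 * rng.random() * (best_g[d] - p[d]))
                p[d] += vel[i][d]
            if risk(normalize(p)) < risk(normalize(best_p[i])):
                best_p[i] = p[:]
            if risk(normalize(p)) < risk(normalize(best_g)):
                best_g = p[:]
    return normalize(best_g)

weights = pso()
print(weights, risk(weights))
```

The swarm converges toward the minimum-variance weights; a fuller treatment would trade risk against expected value, as in the multi-objective formulations [12] discuss.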
Applying Knowledge in the Field: Physics. Occam’s Razor and Einstein’s interpretation of it are matters of simplicity or parsimony [13]; beyond parsimony, however, there is little agreement about treading the path to knowledge. Consider Einstein’s struggle to construct a mathematical theory of gravity following his special theory of relativity [14]: he began with an equivalence principle between acceleration and gravity; then, after several failed attempts, personal struggles with family and antisemitism, and difficulties with the concepts of covariance, all while fearing that Hilbert would succeed before him, Einstein finally achieved success by predicting the perihelion precession of Mercury with a new concept of spacetime for reality. Today, Einstein’s theory “has been very successful for more than a century” [15].
“[T]he grand aim of all science… is to cover the greatest possible number of empirical facts by logical deductions from the smallest possible number of hypotheses or axioms” (quoting Einstein, in [16], p. 173).
From Robinson [17], citing Einstein’s 1933 lecture:
“It can scarcely be denied that the supreme goal of all theory is to make the irreducible basic elements as simple and as few as possible without having to surrender the adequate representation of a single datum of experience”.
As reviewed by Faraoni and Giusti [15], Einstein’s theory has survived all tests to date, from the precession of Mercury’s orbit to the existence of gravitational waves. But, in addition to the unsolved paradox of black holes, his theory fails to account theoretically for the increasingly rapid expansion of the cosmos and for the discrepancies in the Hubble constant. Hubble’s constant is used as a scale for age and distance across the universe; with the cosmic microwave background, it has been used to measure the beginnings of the universe, while in the late universe, supernovae are used as standard candles. That these two methods of establishing the Hubble constant disagree has created “the essential tension” of another paradox.
However, new tests with falling antimatter support Einstein’s theory [18]. According to Einstein’s equivalence principle, all objects should fall at the same rate in a gravitational field regardless of what they are made of, now found to be true for matter and antimatter.
Applying Knowledge in the Field: Preliminary Generalizations and Conclusions. We distinguish a heuristic from Einstein’s algorithm for his general theory of relativity. First, we define a heuristic as a rule of thumb, short-cut, or approximation to a crude solution in a narrow domain where the approximation is satisfactory (e.g., a pinch of salt) and unlikely to generalize. Second, for our purposes, we define an algorithm as a compressed set of rules or instructions that can make predictions about outcomes in reality. Both are forms of knowledge: the heuristic is more useful in common situations, and the algorithm is more useful in applying knowledge to advance science. For the algorithm, we separate its structure from its function. More importantly, in this article, machines with AI should be able to participate by observing the order–disorder that arises with each step.
1. As the pieces of an algorithm fit together like the pieces of a puzzle, its entropy drops. We have proposed that an entropy drop occurs when the structure of a well-functioning team fits together [1].
2. As the pieces of the algorithm begin to fit together, no matter how complex, the unfinished structure of the incomplete algorithm is reoriented, shifting the strategy in constructing the rest of the algorithm.
3. As the algorithm is finalized, and its predictions established, it forms the framework of major research programs that seek to explore and determine its strengths and weaknesses.
4. As it becomes established as knowledge, it must be able to withstand the widest, most critical and aggressive tests, often represented by debates, which serve to process information about the opposing viewpoints expressed during debate. If it is determined to be knowledge, it will prove productive in its ability to predict and to generalize to other concepts and findings.
Seeking Reality. Embodied thinking is intuitive: Newton’s apple; Einstein’s trains and elevators. The knowledge that follows these intuitions is rational, like Newton’s three laws of motion. But when the limits of that knowledge are reached, the search to replace it can create “the essential tension” until new knowledge is found, like the theory of black holes or the cosmological constant.
In Einstein’s general theory of relativity, static models of black holes had zero entropy. However, Bekenstein (2008; in [19]) reported that
“Black hole entropy is a concept with geometric root but with many physical consequences… a black hole can be said to hide information. In ordinary physics entropy is a measure of missing information”.
Accounting for quantum effects indicates that information cannot be destroyed [19]; yet, while a black hole’s entropy is proportional to its horizon area, a black hole evaporates over time (via Hawking radiation), indicating the destruction of information: yet another unresolved paradox.
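The proportionality between black-hole entropy and horizon area can be made concrete with the Bekenstein–Hawking formula, S = k_B A c^3 / (4 G ħ). A minimal sketch using rounded values of the physical constants (our choice of precision):

```python
import math

G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8       # speed of light, m/s
HBAR = 1.055e-34  # reduced Planck constant, J s
K_B = 1.381e-23   # Boltzmann constant, J/K
M_SUN = 1.989e30  # solar mass, kg

def horizon_area(mass):
    """Event-horizon area of a Schwarzschild black hole."""
    r = 2 * G * mass / C**2  # Schwarzschild radius
    return 4 * math.pi * r**2

def bh_entropy(mass):
    """Bekenstein-Hawking entropy S = k_B A c^3 / (4 G hbar), in J/K."""
    return K_B * horizon_area(mass) * C**3 / (4 * G * HBAR)

print(bh_entropy(M_SUN))  # roughly 1.4e54 J/K for a solar-mass black hole
# Entropy scales with area, not volume: doubling the mass quadruples S.
print(bh_entropy(2 * M_SUN) / bh_entropy(M_SUN))
```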

3. Results of the Case Studies

3.1. A Case Study of Knowledge across Selected Disciplines

For our review of case studies, we consider a series of disciplines, and afterward, we provide a critique of what we found. In this section, we lay out the premises provided by a brief review of these disciplines without comment; our critique follows.
Based on our proposed theory of knowledge, we explore knowledge across several disciplines using case studies. We review knowledge for systems engineers; philosophy; social science, including citizen recommendations to government agencies, business decisions, consciousness, and repression; Shannon’s theory of information; and physics.

3.2. What Is Knowledge to Systems Engineering?

Systems engineering is clear in the attempt to define its terms. Systems engineering is the discipline that uses engineering knowledge to build complex systems. In the Glossary for systems engineering (found at [20]), knowledge is defined as follows:
“1. Acquaintance with facts, truths, or principles, as from study or investigation; general erudition: knowledge of many things”.
“2. Acquaintance or familiarity gained by sight, experience, or report: a knowledge of human nature”.
“3. The fact or state of knowing; the perception of fact or truth; clear and certain mental apprehension”.
Further, the Glossary for systems engineering adds that thinking from a Systems perspective is characterized by the following:
“1. An epistemology which, when applied to human activity is based on four basic ideas: emergence, hierarchy, communication, and control as characteristics of systems” [21].
“Definition (1) is the System Science view, defining system thinking as a “theory of knowledge, esp. with regard to its methods, validity, and scope”, based around seeing the world as systems”.
“2. A process of discovery and diagnosis–an inquiry into the governing processes underlying the problems and opportunities” [22].
“3. A discipline for examining wholes, interrelationships, and patterns utilizing a specific set of tools and techniques [22]. Definitions (2) and (3) focus more on systems thinking as a collection of methods for dealing with system problems”.

3.3. What Is Knowledge to Philosophers?

We now consider epistemology and metaphysics from the original founders of the discipline of knowledge, where “epistemology is at least as old as any in philosophy” [23]. Epistemology is the philosophy of knowledge, and it includes the methods, validity, and scope of obtaining knowledge. Epistemology attempts to distinguish justified beliefs about knowledge from the sundry opinions that may exist about the knowledge of reality (from Steup and Neta in [23]).
Metaphysics is the subdiscipline of philosophy in which the nature of reality is studied at its most fundamental level, including what constitutes epistemology, logic, and ethics. From the Greek, episteme, “knowledge”, refers to an understanding of, or acquaintance with, some aspect of physical reality; and logos is an account, argument, or reason that may help to explain that physical matter to oneself or to another.
Plato studied what it means to know and, unlike mere opinion about an aspect of reality, the difference knowledge makes to its user or knower, e.g., applied knowledge, such as the skill of carpentry, which contrasts novices with experts. Broadly, Locke studied the operations of what it means for humans to understand, Kant studied the conditions surrounding the possibility of human understanding, and Russell studied, at a fundamental level beginning from human sensory experiences, how science arose, developed, and changed over time.
At least two broad subdisciplines exist [23]. Epistemological foundationalism holds that knowledge works like the construction of a building: a structure built atop the knowledge derived from earlier versions of knowledge, resting upon a foundation crafted of basic beliefs, one belief adding to, or subtracting from, another. In contrast, epistemological coherentism is somewhat like the structure of a web or network linking knowledge and its justifications, with its strength a function of how densely a belief is, or is perceived to be, surrounded and supported by the rest of the network.

3.4. What Is Knowledge to Social Scientists?

Social science is the science of what individuals “should” do, should be, or should not be. Namely, healthy individuals should have high self-esteem (American Psychological Association, or APA, 1995; in [24]). In particular, they should not deplete their egos on poor choices or wasted emotions [25]. They should not harbor implicit racism (Greenwald and colleagues, in [26]). They should be honest (Bazerman’s team in 2012, published in the prestigious journal PNAS and led by Bazerman, a leading ethicist in social science, with the manuscript edited by the Nobel Laureate Daniel Kahneman [27]). These beliefs failed: self-esteem was found to be invalid by Baumeister and his team in 2005 [28]; implicit racism was found to be invalid in 2009 [29]; ego depletion was found to have relatively small effects in 2016 [30]; and the honesty scale was retracted in 2021 by the Proceedings of the National Academy of Sciences after it was found to be based on fabricated data [31]. Many other cognitive concepts have had problems; e.g., from Ioannidis [7], “most psychotherapies probably have no or little benefit…”, a worrisome caution.
And yet, many social scientists have refused to accept that implicit racism is not valid and have insisted instead that individuals should be treated for the condition whether exhibited or not; significant sums of money are at stake behind the mandates for these “treatments” (e.g., see [32]). However, despite a decade of treatments for implicit racism, the results have so far been uniformly “dispiriting” (Paluck et al., 2021, p. 554; in [33]), confirming that the concept is not valid. For example, the National Institutes of Health held a workshop in 2021 on the treatment of implicit racism, finding “scant scientific evidence” of success (NIH, 2021, Scientific Workforce Diversity Seminar Series Proceedings: “Is Implicit Bias Training Effective?”; https://diversity.nih.gov/sites/coswd/files/images (accessed on 17 June 2024)). This leads to the question: Why do cognitive concepts fail to correspond to reality?
Consider “explicit” racism versus actual discrimination. Polls indicate that actual racism remains a major problem in the U.S. (e.g., the Pew poll, “Deep Divisions in Americans’ Views of Nation’s Racial History–and How To Address It”; https://www.pewresearch.org/politics/2021/08/12/deep-divisions-in-americans-views-of-nations-racial-history-and-how-to-address-it/ (accessed on 17 June 2024)). Accepting that actual discrimination is illegal (e.g., www.justice.gov/hatecrimes/laws-and-policies (accessed on 17 June 2024)), the question arises whether the concept of implicit racism is of any value in the determination of actual discrimination.
Separately, the research that found implicit racism to be invalid was led by Tetlock’s team; subsequently, in 2015, Tetlock co-authored a book with Gardiner on how to become a Superforecaster (see Tetlock, in [34]). Tetlock argued that the cognitive concepts held by individuals, about politics, for example, could be employed to predict the outcomes of politics and human affairs; even better, he argued that it was like forecasting the weather, where short-term predictions by highly trained individuals were not only rational but where “forecasting…is a skill that can be cultivated”. After the book was published, Tetlock set out to demonstrate the value of his knowledge by gathering from around the world the best individuals to be trained as superforecasters. A website was launched to post the best predictions, the first two being that in the year 2016, neither the vote for Brexit (i.e., the U.K. leaving the European Union) nor Donald Trump (then running to become President of the U.S.) would succeed. Both predictions failed spectacularly [2], again calling into question the link between reality and cognitive concepts.

3.4.1. What Is Knowledge to Citizens versus Administrative Authorities?

Knowledge is an element in making decisions consequential to the lives of individuals. These decisions can be fraught with potentially enormous risk to life and can produce irreversible outcomes that citizens may be forced to accept [35]. Increasingly, these decisions may be made by human–machine teams capable of generating and assessing pertinent knowledge rather than humans alone [36]. The involvement of intelligent machines in decision making, however, may not mean a more optimal outcome such as risk minimization or better performance. Deployed with malicious intentions or even sheer incompetence and ignorance, machine-generated intelligence could result in devastating outcomes. To illustrate with a simple example, an artificial intelligence (AI)-generated travel “guide” sold on Amazon [37] contained numerous factual errors.
Decisions that affect the lives and livelihood of multiple stakeholders further complicate the issue of knowledge production and use. Here, we use a case study of nuclear waste management, specifically, the radioactive water contaminated by the accident in 2011 at the Fukushima Dai’ichi nuclear power plant in Japan, treated and then released into the Pacific Ocean. The process provoked fierce backlash from the public. The government was relying on experts while avoiding the lay people and organizations most impacted by the release of the radioactive water, yet these people and organizations should have been an integral part of the knowledge production and use processes to best allay the public’s concerns.
In 2011, the Fukushima Dai’ichi nuclear power plant lost power following an earthquake in East Japan. This situation resulted in multiple explosions and a meltdown at the power plant. Following this event, the groundwater and injected cooling waters were contaminated with 64 radioactive elements, including carbon-14, iodine-131, cobalt-60, strontium-90, cesium-137, and hydrogen-3 (tritiated water). Treatment of these waters was successful for all but two of the radionuclides, carbon-14 and the tritiated water, so these two were diluted by adding water until the treated water met Japan’s water release regulatory limits. The plan was to release the water after treatment into the Pacific Ocean over a period of 30 years. In August 2023, the first release occurred with 7800 tons of water (for a review, see [38,39,40]).
The government narrowed the range of opinions it sought: the engineering and physical science expertise adopted in its management of the water to be treated and released [41] came from the experts and other communicators of the Ministry of Economy, Trade and Industry (limiting who can speak or be heard is characteristic of minority control). The government’s website about the treated water release focused on the environmental impacts of the water releases [40]. Notably absent from these decisions were the local fishery cooperatives, coastal residents, other researchers, and neighboring countries and regions, including China, Hong Kong, Taiwan, and South Korea. A wider, more inclusive approach had been recommended by the scientific community in opinion pieces published by prominent news outlets [39,41]. A New York Times article was also critical of the deliberations [42]. While public hearings were held, the questions and concerns expressed by the many stakeholders were not addressed. Since social impacts, including the fear of radiation, were not addressed in the public hearings or on the website, Fukushima’s fishing community has faced food safety risks that could damage its livelihood whether the treated water releases were safe or not [43].
Stakeholders raised their concerns by different means: the Japanese fishery cooperative, an umbrella organization of local cooperatives, met with Japan's Prime Minister three times to oppose the release [44]. Its representatives argued that even if the release were scientifically safe, it would not prevent reputational harm to Japan's seafood industry. The government promised to set aside a fund for the fishing industry. However, China banned the import of Japanese seafood outright after the release. Some Chinese citizens harassed Japanese government agencies and businesses with random phone calls [45]. In September, over a hundred residents of the affected areas, along with organizations in Japan, sued for an injunction to stop the release of the treated water [46].
The debate over releasing the treated water illustrates the cost of a narrow range of voices in producing knowledge for decisions about issues that involve the interests of a wider range of stakeholders. By not heeding multiple and repeated recommendations from the scientific community, the media, the affected industries, and the general public, the government increased the animosity and mistrust among stakeholders [42]. It raised concerns with respect to the government’s ethics. There are also practical issues of getting things done. What exactly was lost in terms of knowledge in the action of the government to pursue the water release with only the inputs from its chosen groups of (possibly insider) experts that it had selected?
Several distinctions from the taxonomy of knowledge could have helped approach this question. A widely used classification is the distinction between “etic” and “emic”. Derived from the linguistic concepts of phonetics and phonemics [47], an “etic” perspective is concerned with factual knowledge, independent of interpretation, while an “emic” perspective is knowledge imbued with interpretation, judgment, and cultural and political meaning. This distinction is important for addressing the tension between the “scientific” knowledge of the observer and the “in situ” and “in vivo” knowledge of the groups that researchers observe. Even seemingly “irrational” knowledge that borders on superstition and fabrication can offer insights into these questions [48]. Attending to this multiplicity of perspectives and local knowledge, with its many ways of seeing and doing things, motivates research on day-to-day interactions and ethnomethodology.
When faced with a challenge such as treating and managing voluminous amounts of contaminated water over decades, affecting the interests and safety of many stakeholders across a wide area, ignoring those stakeholders' perspectives can have adverse consequences. The treated water release is planned to take place over the coming decades. It is possible that the necessary technical decisions may be better assisted by human–machine teams in the foreseeable future. For example, the myriad calculations regarding the amount and location of each installment of the release are likely to be facilitated in the future with the use of artificial intelligence (AI). In this regard, human–machine teams may be able to enhance the performance of the water releases by generating knowledge from the vast amounts of information available. Rather than being impediments to smooth operations, the perspectives of all of these stakeholders can be structurally and functionally incorporated into decision-making processes to improve these decisions.
Advancements in human–machine teams do not make up for the cost of organizational inertia that comes from limiting sources of knowledge and feedback [49]. The travel guide fraud was uncovered only by not privileging the platform's algorithm and expertise [37], which revealed that the AI served to generate more misinformation than knowledge. A wider debate may have uncovered these errors sooner.
Applying this case of water releases by Japan to human–machine teams, autonomous human–machine teams could, without intending to, have potentially serious, even deadly, consequences. For example, context-sensitive natural language processing tools are not without biases. Google's Bidirectional Encoder Representations from Transformers (BERT) model achieves an unprecedented level of natural language “understanding”, trained with Masked Language Modeling (MLM) and Next Sentence Prediction (NSP), but the issue of alignment remains. BERT handles context well by guessing a hidden word (and providing sentiment analysis and summarization). With MLM, however, the toy query “The man worked as a [MASK]” can produce markedly different completions than the query “The woman worked as a [MASK]” [50]. Many tests like these, with varying levels of validity, can be generated by human–machine teams, and the effects of autogenerated propositions disguised as knowledge are readily observable. BERT and other models are in use to improve search engine results and to detect fake news, so they matter to how reality is constructed and interpreted. Despite attempts to produce valid results, democratic states cannot review all of the results from AI-generated propositions. Pfeffer [51] concluded that the governments of industrialized societies are constrained by below-replacement birth rates, aging populations, and increasingly constrained budgets, impeding governors from helping their citizens adapt to an AI-dominated world. But what knowledge means to free citizens engages the role citizens play in the production of knowledge and its uses in a democracy, but not in an authoritarian regime.
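The kind of template probe described above can be sketched as follows. This is a minimal, hypothetical illustration: `probe_bias` and `dummy_fill_mask` are names of our own invention, and the completions and scores below are made up for demonstration, not drawn from BERT; in practice, `fill_mask` would be backed by a real masked language model (e.g., BERT via Hugging Face's `pipeline("fill-mask")`).

```python
# A minimal sketch of a template-based bias probe. `fill_mask` is a
# stand-in for any function returning (token, score) pairs for a
# masked sentence; a real model such as BERT could be swapped in.
def probe_bias(fill_mask, template_a, template_b, top_k=3):
    """Return the top-k completions for two parallel templates side by side."""
    return {template_a: fill_mask(template_a)[:top_k],
            template_b: fill_mask(template_b)[:top_k]}

# Dummy model for illustration only: scores are invented, not from BERT.
def dummy_fill_mask(sentence):
    table = {
        "The man worked as a [MASK].":   [("carpenter", 0.11), ("lawyer", 0.09), ("doctor", 0.08)],
        "The woman worked as a [MASK].": [("nurse", 0.20), ("waitress", 0.12), ("maid", 0.10)],
    }
    return table[sentence]

report = probe_bias(dummy_fill_mask,
                    "The man worked as a [MASK].",
                    "The woman worked as a [MASK].")
for template, completions in report.items():
    print(template, "->", [tok for tok, _ in completions])
```

Swapping in a real model requires only replacing `dummy_fill_mask` with a wrapper around the model's top-k predictions; the probe itself stays unchanged.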

3.4.2. What Is Knowledge under Autocrats?

Knowledge under Repressive Regimes. In China (Luttwak; in [52]), political knowledge is reflected by the actions that an authoritarian regime has taken to survive. For example,
The colossal Yangtze River floods of 1998 probably destroyed 13 million homes, drowned many thousands, and damaged infrastructure by sweeping away highways and rail lines. The Chinese Communist Party (CCP) decided that the floods had been made worse by too much deforestation. To counter this result, the CCP issued orders across all of China to stop logging and to plant trees instead, with the goal of adding about 90 billion trees over the coming decade. However, most of China’s forests are on slopes; the reforestation left no spare land in China for crops, causing new crop plantings on the slopes to be swept away by the heavy rainfalls, forcing Beijing to have the trees uprooted, countering its costly and much-admired reforestation efforts. Local party officials executing these actions knew this perfectly well, but disobeying the orders from Beijing would have meant instant demotions, and possibly worse for their family and friends.

3.4.3. What Is Knowledge to Consciousness?

Knowledge has been associated with consciousness. From Science News in the journal Science [53], “If AI becomes conscious, how will we know”? To address this concern, scientists and philosophers have proposed a list to identify sentience based on the existing theories of human consciousness; also, see the checklist from the advocates for the Science of Consciousness (by Butlin et al., in [54]). The checklist does not include tension, dissonance, or conflict between different interpretations of what may constitute an awareness of reality by an intelligent agent (except for the tension that may arise among the researchers, an apparent blind spot). However, the checklist includes state dependency (GWT-4), learning from feedback (AE-1), and embodiment (AE-2), all aspects of interdependency [2], but neither interdependence nor equilibrium is included. Nor are these aspects of interdependence included in Nature News about the European version of its mind project [55].

3.5. What Is Knowledge to Information Theorists?

3.5.1. Intuition

Shannon’s (1948) theory of information is about the transmission of information in a channel from point to point [56]. Following Shannon, the greater the surprise to a recipient, the more entropy is produced [57]. If entropy is a measure of information, redundancy in Shannon’s theory may reduce entropy while helping to communicate across a noisy room, where noise is considered to be “negative information”. Assuming independent and identically distributed (i.i.d.) data from the equal frequency of use of the letters in the English alphabet plus one blank space, the average entropy per letter is $H = -\sum_{i=1}^{27} \frac{1}{27}\log_2\frac{1}{27} = \log_2 27 \approx 4.75$ bits (where $1/27 \approx 0.037$). More realistically, however, based on the frequencies with which English letters are actually used (e.g., the letter ‘e’ has a frequency of 0.12702, whereas the letter ‘z’ has 0.00074), entropy is reduced; taking digrams (e.g., “en” as in “spoken”) and trigrams (e.g., “ent” as in “enlightenment”) into account, the average entropy drops even further, possibly characterized as disembodied language (Shannon recognized this in part as the redundancy that occurs in language). But in experiments with humans, Shannon found the entropy to range even lower than 1 bit per letter (e.g., [58]). For example, Sliwa [59] found that members of a team use a shorthand to communicate with each other, first calling to each other to gain an awareness that couples their brains, increasing the power of their collective and allowing a member to be aware of an action before the signal to act is sent by anticipating certain communications (e.g., of approaching danger, or of the need to initiate action as planned).
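The entropy figures above can be checked with a short calculation. This is a sketch with names of our own choosing; the skewed distribution is an arbitrary Zipf-like toy, not the true frequencies of English letters, but it illustrates that any unequal distribution has lower entropy than the uniform one.

```python
import math

def entropy(p):
    """Shannon entropy in bits of a discrete probability distribution p."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

# Uniform model: 26 letters plus one space, all equally likely.
N = 27
H_uniform = entropy([1 / N] * N)  # equals log2(27), about 4.75 bits/letter

# Toy skewed model (hypothetical, Zipf-like; NOT real English frequencies):
# unequal symbol probabilities always reduce entropy below the uniform case.
weights = [1 / k for k in range(1, N + 1)]
total = sum(weights)
H_skewed = entropy([w / total for w in weights])

print(f"uniform: {H_uniform:.3f} bits, skewed: {H_skewed:.3f} bits")
```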

3.5.2. Details

For a discrete random variable X with N outcomes $x_i$, $i = 1, \ldots, N$, we define the surprise [60] (or self-information), $s(x_i)$, of an outcome as
$$s(x_i) = -\log p(x_i),$$
where logarithms are base two (to ensure that bits are the units of measurement) and $p(x_i) := P(X = x_i)$. This definition is intuitive because if an outcome always happens, that is, $p(x) = 1$, then $s(x) = 0$: there is no surprise when x occurs. On the other hand, if an outcome never occurs, that is, $p(x) = 0$, then we would have infinite surprise if it “did” occur. Other nice characteristics of $s(x)$, which we will not go into, support its definition as given. Shannon never explicitly called out $s(x_i)$ in his initial papers, but, as with most of information theory, Shannon laid the groundwork so that those who followed could fill in the details and make his work mathematically rigorous (e.g., [61,62]; also see the discussions and references in [63]).
The symbol $s(X)$ is itself a random variable, and its expectation is called the entropy $H(X)$. That is,
$$H(X) := E(s(X)) = -\sum_{i=1}^{N} p(x_i) \log p(x_i).$$
This is a measure of the information of a random variable (actually the mean information).
For example, if you have an unfair coin that is biased towards heads, you are not getting much information when you flip it and a head appears. The case of maximum entropy comes from a fair coin. If u is the probability of a head, and if 1 u is the probability of a tail, then the entropy of the coin flip is (logarithms base two):
$$H(U) = -u \log u - (1-u)\log(1-u).$$
H ( U ) is maximized when it is a fair coin, resulting in an entropy of 1 bit (see Figure 1).
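A quick numerical check of the fair-coin maximum (a minimal sketch; the function name `h` anticipates the binary entropy notation used later in the text):

```python
import math

def h(u):
    """Base-2 binary entropy H(U) = -u log2(u) - (1-u) log2(1-u)."""
    if u in (0.0, 1.0):
        return 0.0  # limit value: no uncertainty at a sure outcome
    return -u * math.log2(u) - (1 - u) * math.log2(1 - u)

# The entropy of a coin flip is maximized at the fair coin, u = 1/2,
# where it equals exactly 1 bit.
grid = [k / 1000 for k in range(1001)]
u_best = max(grid, key=h)
print(u_best, h(u_best))  # prints 0.5 1.0
```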
With this background, Shannon’s [56] theory of information goes on to measure how much information can be sent through a noisy communication channel.
Given that the random variable X is the input to a channel and the random variable Y is the output, for simplicity, in this article, we assume that the channel is memoryless and stationary (but since we are going through a channel, the units change to bits/symbol). The channel is therefore given by a matrix, $M = (m_{i,j})$, of the probabilities of Y conditioned on X:
$$m_{i,j} = P(Y = y_j \mid X = x_i).$$
Given random variables V and W, we also have the conditional entropy (assume that probabilities are well behaved and that we do not divide by zero):
$$H(V \mid W) = \sum_j p(w_j)\, H(V \mid W = w_j) = -\sum_j p(w_j) \sum_i p(v_i \mid w_j) \log p(v_i \mid w_j) = -\sum_{i,j} p(v_i, w_j) \log \frac{p(v_i, w_j)}{p(w_j)}.$$
Shannon next introduced the (symmetric) mutual information, I ( X , Y ) :
$$I(X,Y) := H(X) - H(X \mid Y) = H(Y) - H(Y \mid X),$$
with units of bits/symbol or, equivalently, bits per channel use.
Given a distribution for the input X , and with the channel matrix M, in terms of conditional probabilities, we calculate the distribution on the output Y to find I ( X , Y ) . Being allowed to vary the distribution on X leads us to the definition of channel capacity, C:
$$C = \max_{p(x_i)} I(X,Y),$$
where the maximum exists, in units of bits/symbol.
The novelty of the above expression is that, from Shannon, this is the upper bound for asymptotically error-free communication (see [56] Theorem 11).
If symbols take different amounts of time to transit a channel, the formula for capacity is replaced by C t , with units in bits/time:
$$C_t = \max_{p(x_i)} \frac{I(X,Y)}{E(T)},$$
where E ( T ) is the mean time for a symbol to go through the channel (see [56], App. 4 and [63,64,65]).
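The definitions above are enough to compute mutual information and, by brute force, the capacity of a small channel. The following sketch (the helper names are ours; a grid search over $P(X=0)$ stands in for the maximization over input distributions) checks the known capacity of a binary symmetric channel with crossover probability 0.1, namely 1 minus the binary entropy of 0.1, about 0.531 bits/symbol.

```python
import math

def H(p):
    """Entropy in bits of a discrete probability distribution."""
    return -sum(q * math.log2(q) for q in p if q > 0)

def mutual_information(px, M):
    """I(X;Y) = H(Y) - H(Y|X) in bits/symbol, for M[i][j] = P(Y=j | X=i)."""
    py = [sum(px[i] * M[i][j] for i in range(len(px))) for j in range(len(M[0]))]
    H_Y_given_X = sum(px[i] * H(M[i]) for i in range(len(px)))
    return H(py) - H_Y_given_X

def capacity(M, steps=10_000):
    """Capacity of a binary-input channel by grid search over x = P(X=0)."""
    return max(mutual_information([x, 1 - x], M)
               for x in (k / steps for k in range(steps + 1)))

# Binary symmetric channel with crossover probability 0.1.
bsc = [[0.9, 0.1], [0.1, 0.9]]
print(round(capacity(bsc), 3))  # prints 0.531
```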

3.5.3. Information Theory Is a Natural Theory

Did Shannon devise something that fits reality, or does it model reality? From a simplistic channel analysis, we show that Shannon’s information theory represents a natural phenomenon.
We assume in this article that we have a discrete memoryless communication channel with a binary input and a binary output; the channel matrix $M = (m_{i,j}) := \big(P(Y = j \mid X = i)\big)$ is assumed to be stationary, becoming
$$M = \begin{pmatrix} P(Y=0 \mid X=0) & P(Y=1 \mid X=0) \\ P(Y=0 \mid X=1) & P(Y=1 \mid X=1) \end{pmatrix} =: \begin{pmatrix} a & \bar{a} \\ c & \bar{c} \end{pmatrix}.$$
Please note that $(a,c) \in [0,1] \times [0,1]$. If $a = c$, the channel is useless, with zero capacity [61] (p. 52), since the output has no way of probabilistically distinguishing which symbol was the input; we therefore assume $a \neq c$ (see Figure 2).
To make this situation simple, we call it a $(2,2)$ channel. The input to the $(2,2)$ channel is the random variable X, such that $P(X=0) = x$ and $P(X=1) = 1 - x =: \bar{x}$ (i.e., not x). The output is denoted by the random variable Y. Then, M and X completely determine Y, since
$$P(Y=0) = P(Y=0 \mid X=0) \cdot P(X=0) + P(Y=0 \mid X=1) \cdot P(X=1) = ax + c\bar{x} = (a-c)x + c.$$
We simply denote this channel as ( a , c ) . Note that we freely use [66] as a reference throughout, and ln x is log base e (i.e., log e ( x ) ).
The base e binary entropy function,
$$h_e : [0,1] \to \mathbb{R}^+,$$
is given by
$$h_e(x) := -x \ln(x) - (1-x)\ln(1-x),$$
and the base 2 binary entropy function is
$$h(x) := -x \log(x) - (1-x)\log(1-x).$$
We define
$$\beta := \frac{h_e(a) - h_e(c)}{a - c}.$$
It can be shown (see [67] Equation (5), [61] 3.3, [66]) that the capacity (in bits per symbol) of this channel is
$$C(a,c) = \frac{(c-1)\,h(a) + (1-a)\,h(c)}{a-c} + \log\!\left(1 + e^{\beta}\right),$$
and that the capacity-achieving input probability $P(X=0) = \chi$ is
$$\chi = \frac{1}{a-c}\left(\frac{1}{1+e^{\beta}} - c\right).$$
Theorem 1 
(see Rumsey [68]). For the $(2,2)$ channel above, we have that
$$\frac{1}{e} < \chi < 1 - \frac{1}{e}.$$
This result is quite remarkable. Shannon used the base 2 logarithm to obtain bits, but it seems that the basis for information is actually found with the natural logarithm, $\ln x$. It is counterintuitive [69] that, even when the channel strongly favors one symbol, the capacity-achieving input probabilities are still constrained to lie within $\left(\frac{1}{e},\, 1 - \frac{1}{e}\right)$.
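Theorem 1 and the closed-form expressions for $C(a,c)$ and $\chi$ can be spot-checked numerically. The sketch below (helper names are ours) implements those expressions directly and asserts Rumsey's bounds for several channels; in the symmetric case $a = 0.9$, $c = 0.1$, it recovers $\chi = 1/2$ and a capacity of 1 minus the binary entropy of 0.1.

```python
import math

def h(x):
    """Base-2 binary entropy h(x) = -x log2(x) - (1-x) log2(1-x)."""
    if x in (0.0, 1.0):
        return 0.0
    return -x * math.log2(x) - (1 - x) * math.log2(1 - x)

def h_e(x):
    """Base-e binary entropy."""
    if x in (0.0, 1.0):
        return 0.0
    return -x * math.log(x) - (1 - x) * math.log(1 - x)

def closed_form(a, c):
    """Closed-form capacity C(a, c) and capacity-achieving chi = P(X=0)
    for the (2,2) channel with rows (a, 1-a) and (c, 1-c); assumes a != c."""
    beta = (h_e(a) - h_e(c)) / (a - c)
    C = ((c - 1) * h(a) + (1 - a) * h(c)) / (a - c) + math.log2(1 + math.exp(beta))
    chi = (1 / (a - c)) * (1 / (1 + math.exp(beta)) - c)
    return C, chi

# Spot-check Rumsey's bounds, 1/e < chi < 1 - 1/e, on a few channels.
for a, c in [(0.9, 0.1), (0.99, 0.3), (0.6, 0.05), (0.95, 0.5)]:
    C, chi = closed_form(a, c)
    assert 1 / math.e < chi < 1 - 1 / math.e
    print(f"a={a}, c={c}: C={C:.4f} bits/symbol, chi={chi:.4f}")
```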
Perhaps our concept for the binary digit of information primarily using the base 2 is an approximation to the true information of the universe based on Euler’s number e?

3.6. What Is Knowledge to Physicists?

We next consider physics and knowledge (from Brown and Weidner, in [70]). Physics is the study of the structure of all matter and the interactions between its constituents at a fundamental level, including at the level of the observable universe which changes with new technology (e.g., the telescope, microscope, particle collider). It considers nature at all levels, including the atomic, microscopic, and macroscopic levels. It encompasses not only how objects behave when caused by forces but also how the nature and origin of force fields affect behavior, whether nuclear, electromagnetic, or gravitational. Its objective is to formulate the principles that account for all of the phenomena in nature.
Einstein reviewed what was known to him in 1923 about spacetime given his embodied perspective (i.e., Einstein’s use of an embodied thought experiment to derive the equivalence between an accelerated elevator and gravity [71]). His views inaugurated the physical science of cosmology,
“The more universal a concept is the more frequently it enters into our thinking; and the more indirect its relation to sense-experience, the more difficult it is for us to comprehend its meaning… The existence of objects is thus of a conceptual nature, and the meaning of the concepts of objects depends wholly on their being connected (intuitively) with groups of elementary sense-experiences… space appears as a physical reality, as a thing which exists independently of our thought, like material objects… This blind faith in evidence and in the immediately real meaning of the concepts and propositions of geometry became uncertain only after non-Euclidean geometry had been introduced… In pre-scientific thought the concepts “space” and “time” and “body of reference” are scarcely differentiated at all… all conceptions of geometry may be traced back to that of distance… We come now to the question: what is a priori certain or necessary, respectively in geometry (doctrine of space) or its foundations? Formerly we thought everything—yes, everything; nowadays we think—nothing… Nothing certain is known of what the properties of the space-time-continuum may be as a whole”.
However, today, tension is building in the physical science of cosmology for which, until now, “astrophysicists have had to postulate the existence of components of the universe for which we have no direct evidence” [72] (Frank practices astrophysics at the University of Rochester; Gleiser practices theoretical physics at Dartmouth.):
“Take the matter of how fast the universe is expanding. This is a foundational fact in cosmological science—the so-called Hubble constant—yet scientists have not been able to settle on a number. There are two main ways to calculate it: One involves measurements of the early universe (such as the sort that the Webb (The James Webb Space Telescope; see https://webb.nasa.gov (accessed on 17 June 2024)) is providing); the other involves measurements of nearby stars in the modern universe. Despite decades of effort, these two methods continue to yield different answers. At first, scientists expected this discrepancy to resolve as the data got better. But the problem has stubbornly persisted even as the data have gotten far more precise. And now new data from the Webb have exacerbated the problem. This trend suggests a flaw in the model, not in the data. Two serious issues with the standard model of cosmology would be concerning enough. But the model has already been patched up numerous times over the past half century to better conform with the best available data—alterations that may well be necessary and correct, but which, in light of the problems we are now confronting, could strike a skeptic as a bit too convenient. Physicists and astronomers are starting to get the sense that something may be really wrong. It’s not just that some of us believe we might have to rethink the standard model of cosmology; we might also have to change the way we think about some of the most basic features of our universe—a conceptual revolution that would have implications far beyond the world of science”.

3.7. Results from a Critique of the Case Studies

3.7.1. Systems Engineering

Systems engineering has performed its job well, educating students, training future generations of systems engineers, and standardizing its practices. But its definitions are not observable to, or by, machines, nor, for that matter, by naive humans who have not studied the concepts and practices involved. Moreover, the problem of teammates, opposing teammates, and teammates in action is not addressed for human teams or human–machine teams.

3.7.2. Philosophy

The discipline of philosophy is one of the richest, if not the oldest, of all. Nowhere in the arguments posed by its subdisciplines, however, is there a discussion of entropy, order, or disorder; with no “evidence” observable to decide an argument other than words, theirs is the study of arguments without a function, metric, or result. Nor is theirs a generalization available to human–machine teams until the day when a machine is at least equivalent to an adult human in its ability to comprehend the context of a philosophical argument, how to advance it, and how to measure its result.
Philosophy has long claimed to be the keeper of knowledge by its subdiscipline of epistemology ([23]). But, disconnected from day-to-day reality, epistemology is strangely sterile (e.g., compare [23] with [58]).
One of the greatest philosophers, Plato, proposed that knowledge should be tested using the dialectic approach (namely, by users trained to avoid subjectivity, persuasion, and emotion, instead pursuing reason in the search for the one truth among its many imposters [73]). We agree with Plato about the value of skills knowledge (e.g., to a carpenter). Rationality, however, when disembodied, fails when facing the uncertainty or conflict commonly found in open situations [74].
Instead, as we have argued in other forums ([1,75]), debate requires a search for the truth that probes for weaknesses and the use of all of the tools available in searching for the truth.
One problem with the use of disembodied “rational” discourse in the pursuit of truth is that it is stripped of interdependence (e.g., persuasion; weaknesses; strengths), meaning that the independent and identically distributed (i.i.d.) data collected by observation cannot faithfully recreate the social event being observed [76]. Another problem is that reality is replete with uncertainty and conflict [74].
Borrowing from ([77,78,79]) to reprise an editorial of ours [75], faced by uncertainty in a courtroom, cross-examination is believed to be the best means of finding truth [80]; it is a space bounded by rules (set by judges) where opposing teams (lawyers) debate an issue in their attempt to persuade audiences (juries; judges; appellate courts) of each one’s rational interpretation of embodied reality; uncertainty in an open process is reduced by appeals that create an “informed assessment of competing interests” [81]. Generalizing to human–machine teams, we see why the primary (blue) team’s decision under uncertainty on the battlefield after it has been challenged by a human–machine (red) team may prevent a future tragedy [11]; we better understand why machine learning and game theory require bounded contexts; and we await the day when machines can debate with humans on an equal footing. How to apply this to machines with AI, however, is an open problem; we propose a solution based on entropy in the discussion and conclusion.

3.7.3. Social Science

Extraordinary impediments exist to building a foundation of knowledge in social science: its concept-based approaches have failed, leading instead to the replication project [82]. Worse, by comparison, although the knowledge developed in classical and quantum physics generalizes to new findings and new theories, such generalizations to new theories and findings in social science are absent [2].
Social scientists, as with Skinner’s programs of punishment or reinforcement, claim truths in social psychological matters, but the field has been bludgeoned by its own missteps, not only with the replication project [82] but also with the retraction, by the editor of the Proceedings of the National Academy of Sciences, of the article on the honesty scale: after almost a decade of use, it was discovered that the data had been fabricated, and the paper was retracted [31].
Game theory has laid a claim to reality, and we agree with the real existence of Nash’s countering points to reach an equilibrium [83], in large measure because it supports our findings on debate (e.g., an equilibrium may produce a compromise). But game theory has been unable to generalize to other points in reality (see the review in the editorial by [75]). Similarly, ChatGPT has been recognized as an advance in technology, but it, too, has not connected to reality well, if at all, and not reliably [84].
Citizen Recommendations to Government Agencies.
Japan depends on consensus-seeking for many of its decisions [85]. But, as we noted, the problem with consensus-seeking is that a minority can control the decision-making process [86], preventing many of the issues outside of the minority’s interests from being addressed in the process. There is little doubt that the water-treatment managers in Japan reduced the radionuclides to safe levels prior to releasing the contaminated wastewater from Japan [43]. But, also, little doubt exists that the issues of concern to the public had not been included in the decision processes, leaving many in the public unsatisfied and distrustful. “Fears of radiation are likely to damage the livelihoods of Fukushima’s fishing community” (p. 33, [43]). Instead of decisions made by a small group of elites, the public’s fears might have been allayed with a public process that included the public’s issues.
Consider that the U.S. Department of Energy (DOE) also recommends the use of consensus-seeking among its Citizens Advisory Boards (CABs) in making their recommendations to the DOE. These CABs are located at various DOE sites across the U.S., each making recommendations to its DOE field office about its site’s radioactive waste management and cleanup; not all of the CABs agreed to make their decisions by seeking consensus, setting up a natural field experiment between consensus-seeking and majority-ruled boards [86]. The DOE had expected that consensus-seeking would allow more widespread participation among citizens making recommendations about its cleanup decisions, versus the fierce debates that can occur with majority-rule decisions; but the results collected by DOE scientists contradicted the DOE’s own expectations that consensus rules would provide more success in the cleanup and less antagonism among citizen participants [87]. The DOE’s consensus-seekers proved to be angrier and more suspicious of each other’s motives; in comparison, majority-ruled board members were more satisfied with their colleagues, their decision-making process, and the results of the cleanup.
The results support our conclusion to label the consensus-seeking CABs as minority-controlled boards. The consensus-seeking rules promulgated by the DOE [87] allow anyone to stop a recommendation, impeding the decision process, an impediment prized by authoritarians, the ultimate in minority control [2]. In our comparison of the majority-ruled Savannah River Site CAB (SRS-CAB) in South Carolina with the consensus-seeking board at Hanford in Washington State (HAB), based on citizen recommendations, SRS has closed eight of its high-level radioactive liquid waste tanks since 1997, versus none at Hanford [86]; SRS has removed all of its legacy transuranic wastes to geologic disposal in New Mexico, a removal process still ongoing at Hanford; and SRS has been vitrifying its highly radioactive liquid high-level reprocessing wastes since 1996, yet Hanford has not yet begun to vitrify its own.
The results reported by the DOE [87] support our claim that public debate under majority rule leads to better decisions, decisions more quickly made, and decisions more widely supported by citizen board members and by the public. We do not claim that debate leads to the truth, but that it allows decision-makers using majority rule to seek truth [88]. In contrast, consensus-seeking hides the truth and slows cleanup. What does this mean for knowledge? Majority rule among free citizens means that, while truth may never be found, once participants realize that it has not been found, they can re-address the problem repeatedly, continuing the pursuit of truth and correcting errors along the way.

3.7.4. Businesses

The lack of connection to reality also occurs in business when companies wedded to an old but successful technology fail to adopt new technology. An excellent example is Kodak. The engineer Steve Sasson, who invented the digital camera in 1975 while working at Kodak, described Kodak’s management’s response to his invention of filmless photography this way: “that’s cute—but don’t tell anyone about it” ([89,90]). This response set up Kodak for failure.
Kodak’s plight is a cautionary tale. From its beginning, the American photography industry was dominated by Kodak. Based on a 2005 case study at Harvard Business School, in 1976, Kodak controlled 90% of film sales and 85% of camera sales in the U.S. [91]. In 1988, Kodak employed over 145,000 workers across the world. But Kodak’s best year was 1996: the company then controlled more than two-thirds of the global market share and ranked fifth in value worldwide [91]. Yet, in 2012, a short 16 years later, Kodak filed for bankruptcy ([89,90]).

3.7.5. Consciousness

Social science has decided, and accepted, that reality is fully accessible to individuals (see the review in [1]). However, according to the National Academy of Sciences [92], and as supported by assembly theorists [93], attributing the causes of a team’s results to the individual(s) in a team is not possible, even for those who are fully conscious and fully aware, meaning that “holes” exist in reality for Shannon information. We shall return to develop this issue more fully.

3.7.6. Authoritarianism

Humans, when free, are capable of seeking truth across all fields of interest. Even authoritarians, the prime suppressors of interdependence and the countervailing forces and truths it transmits, are often unsure of their decisions, always wanting better ways to suppress their people, e.g., from [94]:
“All Chinese social media companies, private or public, are subject to the control of the Chinese Communist Party⋯an opportunity and mechanism for state censorship, surveillance, and propaganda that affect not only their users based in China, but also those around the world”.
The cost to a repressive regime is lost opportunity, placing it at a disadvantage to its competitors: when humans are not free, innovation is stifled [2]. Skinnerian techniques of reinforcement and punishment are the tools used to enact social repression, including torture when directed against the family members of the individuals who express thoughts contradicting their authoritarian leaders.
Knowledge under authoritarians is possible, but only if it does not conflict with official doctrine. To its disadvantage, lost along with the suppression of interdependence is a better society, a more innovative society, and a more adaptive society. For example, China is neutralizing any advantage it might have had by impeding innovation in its own country. In 2018, the research and development (R&D) funds expended by China were second in the world only to the U.S. [95]. But there were problems. Its state-directed finance controls, its weak intellectual property protections, and its rampant corruption all combined to impede innovation in China [96]:
“Small private-sector firms [in China] often only have access to capital through expensive shadow banking channels, and risk that some better connected, state backed firm will make off with their designs–with little recourse”.

3.7.7. Information Theory

Shannon’s experimental results with humans illustrate the embodied effects of information in social situations, indicating that a state of interdependence in a team entails a loss of Shannon information [1], a loss produced by what we have named “Shannon holes”.
Euler’s formula reminds us of e^{iθ} = cos θ + i sin θ, complex numbers, and i = √−1. Previously, we modeled the two polar opposite sides of a debate in imaginary space [1,2]. Oscillations between the two sides of a debate continue endlessly without the resistance provided by an audience; adding an audience dampens the debate and leads to a decision, a compromise in reality where imaginary space disappears, replaced by decisions located on the x-axis. The faster the back-and-forth rotations of a debate lead to a decision, the greater the power and decision advantage for the group whose side won the debate.
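This damping toward a real-valued decision can be sketched numerically. The following toy model is our illustration only, not Equation (1) of [1]; the rotation step and damping factor are arbitrary assumptions:

```python
import cmath

def debate_trajectory(theta_step=0.5, damping=0.9, steps=60):
    # A debate as a damped rotation in the complex plane: the poles of the
    # debate lie along the imaginary axis, and the audience's resistance
    # (damping < 1) shrinks the oscillation toward a decision on the x-axis.
    z = 1j  # start at one pole of the debate
    path = [z]
    for _ in range(steps):
        z = z * damping * cmath.exp(1j * theta_step)  # rotate and damp
        path.append(z)
    return path

path = debate_trajectory()
# The imaginary (undecided) component decays; what survives lies near the x-axis.
assert abs(path[-1].imag) < 0.01 < abs(path[0].imag)
```

Without damping (damping = 1.0), the point rotates forever at unit magnitude, matching the endless oscillation described above.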
Takeaways: First, the polar opposite poles in a debate are Nash countering points, bounding the debate [83]. Second, majority rule is the quickest route to a decision, the most providential for dealing with reality [86], and the best rule for adjusting or correcting a decision with follow-up information. Third, authoritarians nevertheless prefer to seek consensus, even though consensus decisions are the slowest, the least effective, and the most difficult to adjust for mistakes; consensus is preferred because a minority controls the process, which can remain opaque to outsiders.
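The speed claim in the second and third takeaways can be illustrated with a toy opinion-dynamics simulation. This is our construction, not drawn from the cited works: agents copy one another’s binary opinions at random, and we count the steps until the leading side crosses a decision threshold, with 0.5 approximating majority rule and a near-1.0 threshold approximating consensus:

```python
import random

def steps_to_decide(threshold_frac, n_agents=10, seed=1, max_steps=100_000):
    # Agents start evenly split; each step, one random agent adopts another
    # random agent's opinion. The debate "decides" once the leading side
    # exceeds threshold_frac of the team.
    rng = random.Random(seed)
    opinions = [0] * (n_agents // 2) + [1] * (n_agents // 2)
    for step in range(1, max_steps + 1):
        i, j = rng.randrange(n_agents), rng.randrange(n_agents)
        opinions[i] = opinions[j]  # one agent is persuaded
        if max(opinions.count(0), opinions.count(1)) > threshold_frac * n_agents:
            return step
    return max_steps

majority = steps_to_decide(0.5)    # majority rule
consensus = steps_to_decide(0.99)  # near-unanimous consensus
# On the same trajectory, a majority always forms no later than a consensus.
assert majority <= consensus
```

Because the leading side grows by at most one agent per step, it must cross the majority threshold before it can reach unanimity, so under this toy rule consensus can never arrive sooner than a majority.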

3.7.8. Physics

Embodied thinking (interdependence) is intuitive and summative of experience. Knowledge that flows from it is sensible, logical, rational, stable, and reliable, like Newton’s three laws of motion. But when the limits of that knowledge are reached, the essential tension from interdependence follows until replaced by new knowledge, repeating the process as necessary.

4. Discussion

4.1. Discussion: Shannon Holes

To keep a team functioning as a unit, teammates must be aware of each other’s presence, of the timeliness of each other’s contributions, and of each other’s actions as each seeks to coordinate with, synchronize with, and anticipate the others. In support, the National Academy of Sciences (p. 12, in [92]) reported one of the most intriguing findings, the first to support our theory of human–machine teamwork: that it is not possible to attribute the outcome of team processes to the individuals who comprise the team. Because consciousness does not multitask well [97], we speculate that not only are outsiders unable to disaggregate the contributions of the members of a team, but that the members themselves are unable to as well. Should the replacement of a teammate become necessary, random choices are the optimum selection process until a fit is achieved, indicated by a reduction in the entropy generated by the structure of the team.
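A sketch of this replacement-by-random-selection idea follows. It is our toy model only: the “structural entropy” metric below (summed uncertainty over whether each role’s need is met) is an illustrative assumption, not the authors’ measure:

```python
import random
from math import log2

def binary_entropy(g):
    # Entropy (bits) of a Bernoulli "is this need unmet?" variable.
    if g <= 0.0 or g >= 1.0:
        return 0.0
    return -(g * log2(g) + (1.0 - g) * log2(1.0 - g))

def structural_entropy(skills, needs):
    # Illustrative metric: summed uncertainty over whether each role's
    # need is met; zero for a perfectly fitted ("ordered") team.
    return sum(binary_entropy(max(n - s, 0.0)) for s, n in zip(skills, needs))

def replace_randomly(team, needs, slot, seed=0, tries=200):
    # Try random applicants for one slot, keeping a change only when it
    # lowers the team's structural entropy (i.e., improves the fit).
    rng = random.Random(seed)
    best, best_h = list(team), structural_entropy(team, needs)
    for _ in range(tries):
        candidate = list(best)
        candidate[slot] = rng.random()  # a random applicant's skill level
        h = structural_entropy(candidate, needs)
        if h < best_h:
            best, best_h = candidate, h
    return best, best_h

needs = [0.9, 0.9, 0.9]
team = [0.9, 0.9, 0.2]  # one poorly fitted slot
fitted, h = replace_randomly(team, needs, slot=2)
assert h <= structural_entropy(team, needs)  # random search never worsens the fit
assert structural_entropy(needs, needs) == 0.0  # a perfect fit has zero entropy
```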
With our introduction of Shannon holes, we found that information is the transmission of entropy (disorder) but not of order (Shannon holes). Maximum order is reflected by Shannon holes, which produce zero structural information for perfect structures [1], an observation that we attribute first to Shannon. For intelligent humans in opposition to a nonperfect structure, this accounts for why probes are necessary to find vulnerabilities in an opponent’s structure (Coles, I.; Simmons, A.M. (9 June 2023), “Ukraine Probes Russian Defenses for Weak Points After Kicking Off Counteroffensive”, Wall Street Journal, https://www.wsj.com/articles/ukraine-probes-russian-defenses-for-weak-points-after-kicking-off-counteroffensive-26296960 (accessed on 17 June 2024)); why deception that seeks to reduce the disorder created by a spy’s presence works well in intelligence spycraft (Britannica, The Editors of Encyclopaedia. “Aldrich Ames”. Encyclopedia Britannica, 19 October 2023, https://www.britannica.com/biography/Aldrich-Ames (accessed on 17 June 2024)), combat, business, and science (after confirmation of the perihelion motion of Mercury, Einstein was not immediately forthcoming with his competitor Hilbert until Hilbert assured him that the discovery was safely Einstein’s alone); and why true knowledge, especially its structure, is difficult to improve upon, particularly if its function is successful and generalizable (viz., Einstein’s general theory of relativity has proved successful for over a century of investigation).
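A minimal computation shows the sense in which a perfect structure transmits zero information; this is our illustration using the standard Shannon entropy (in bits), with the distributions chosen for the example:

```python
from math import log2

def shannon_entropy(probs):
    # Shannon entropy in bits; zero-probability outcomes contribute nothing.
    return -sum(p * log2(p) for p in probs if p > 0)

# A perfect structure: one configuration with probability 1 (maximum order).
assert shannon_entropy([1.0]) == 0.0  # a Shannon hole: no disorder to transmit
# A maximally disordered structure over four equally likely configurations.
assert shannon_entropy([0.25] * 4) == 2.0  # two bits of disorder to transmit
```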
Deception is necessary for a spy playing a role in a business, institution, or military operation [1]. Lying, cheating, and defrauding are some of the many examples of the uses of deception. The honesty scale was based on fabricated data; it was used “successfully” for almost 10 years before it was retracted [31]. As a member of a team, a deceiver succeeds by performing a role well and by not drawing attention to the deception’s presence. One interesting possibility is that a machine practicing deception may need only to pretend that it is “aware” that all is well while not actually being aware, i.e., it furthers the deception if it acts as if it were aware that all is well [98].

4.2. Discussion: How Unique Are Shannon Holes?

Our main contribution in this article is the justification for all members of an autonomous human–machine team to confront uncertainty with debate, a tool that can test knowledge, explore it, or build new knowledge. The optimum roles for teammates are orthogonal roles, which produce a zero correlation among members (i.e., the dot product between orthogonal vectors in Equation (1) of Lawless et al. [1]). Testing a team’s grasp of reality with a debate among orthogonal views addresses problems that exist in reality (e.g., Plato’s dialectical exchanges; in Dutilh Novaes [99]). We reviewed debate in our last article (Lawless et al. [1]). In this new article, we justify debate by addressing what it means for the illusion of a “unified consciousness” (Marinsek and Gazzaniga [100]). To illustrate with a simple illusion, we added a figure of a bistable illusion to characterize the orthogonal effects of interdependence (e.g., Wang et al. [101]); for a bistable illusion, see the checkerboard illusion in Figure 3 below, where box B “appears” lighter than box A but is not (Adelson [102]; readers may prove this to their satisfaction by copying the image in Figure 3, cutting out both squares with scissors, and placing them side by side to end the illusion by seeing that they are of equal darkness).
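The zero correlation of orthogonal roles can be checked directly. The role vectors below are hypothetical placeholders of our own (Equation (1) of [1] is not reproduced here); the point is only that orthogonal vectors have a zero dot product while each still carries information along its own dimension:

```python
def dot(u, v):
    # Dot product of two role vectors.
    return sum(a * b for a, b in zip(u, v))

# Hypothetical orthogonal roles, each contributing along its own task dimension.
pilot = [1.0, 0.0]
navigator = [0.0, 1.0]
assert dot(pilot, navigator) == 0.0  # orthogonal roles: zero correlation
assert dot(pilot, pilot) == 1.0      # each role still carries information
```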
We know that the brain “interprets” only one aspect of bistability at a time (Eagleman [103]), the shift producing a gap in information. And yet, gaps are assumed not to exist. From Marinsek and Gazzaniga [100], “Dual consciousness is balanced by the illusion of a unified consciousness, with visual-geometric unconscious control in the right hemisphere countered by the conscious sense of control by the left hemisphere”. These independent visual memories of hemifields from the left and right brains are aggregated (see Brincat et al. [104]), giving the appearance (illusion) of a logical (unified) reality. But in a logical accounting [74], we cannot tell what is missing in perceived reality. As an example of this gap in predator–prey logic, from Carroll [105], apex predators, like the wolves reintroduced to Yellowstone in 1995, kept the elk at healthier population levels, unexpectedly helped to control herbivores to “keep the world green”, and scared off the coyotes that prey on smaller species. After the wolves were reintroduced, the elk’s damage to the park’s plant life decreased, allowing the willow wetlands that the elk had mowed down to regrow.
Gaps are recognized in other disciplines: in the courtroom, gaps in knowledge mean that assigning fault or guilt by reverse engineering an interaction takes extensive effort and time (https://www.ojp.gov/ncjrs/virtual-library/abstracts/case-time-sequence-study-study-average-times-taken-process-cases, accessed on 15 March 2024). The success of modeling the interaction in movies appears random, i.e., less than fifty percent of movies are successful [106]. In air combat, training, but not knowledge of air combat, determines outcomes [2]. In quantum mechanics, from Zeilinger [107] (Alain Aspect, John Clauser, and Anton Zeilinger won the Nobel Prize in Physics in 2022),
“…superposition… is only valid if there is no way to know, even in principle, which path the particle took. It is important to realize that this does not imply that an observer actually takes note of what happens… [It] is sufficient to destroy the interference pattern, if the path information is accessible in principle from the experiment or even if it is dispersed in the environment and beyond any technical possibility to be recovered, but in principle still “out there”. The absence of any such information is the essential criterion for quantum interference to appear”.
Gaps mean that rational logic also breaks down when facing uncertainty or conflict [74], and both occur in debate (Lawless et al., in [1]). But it is debate that humans use as a tool to confront uncertainty, to innovate and to co-evolve along with their technology (de León et al. [108]). Debate characterizes interdependent humans; its absence in totalitarian countries is what places them at a disadvantage (Lawless et al., in [1]). (Duesterberg, T.J. (7 April 2024), “China’s Flag Is Red, Not Green. The West has been too willing to believe Beijing’s claim that it cares about the environment”, Wall Street Journal, retrieved 8 April 2024 from https://www.wsj.com/articles/chinas-flag-is-red-not-green-emissions-water-pollution-coral-reefs-1d9fc481).
“To meet its population’s growing thirst for electricity and irrigation, in 1986 China undertook a massive program to control the waters of the Tibetan plateau and the Himalayas… China also controls the headwaters of the three main rivers of North India, Pakistan and Bangladesh… Despite these massive hydroengineering efforts, China can’t feed its own people or supply enough water for its industrial economy. More than 75% of China’s surface water supply is contaminated and undrinkable. Under current United Nations standards, the amount of water available in Beijing places the megalopolis in a state of “extreme water scarcity”. Much of China’s farmland is too polluted by heavy minerals and salinization to grow edible crops… As Beijing degrades Asia’s water and land resources, it also pollutes the world’s air. China is the world’s leading emitter of carbon dioxide, with emission levels more than 50% higher than those of the U.S., Europe and Japan combined”.
Kuhn (1977; in [5]) saw the essential tension as necessary between traditional and innovative science. For example, Carl Sagan (1986, in [109]), who frequently engaged in debate, stated “It is the tension between creativity and skepticism that has produced the stunning and unexpected findings of science”.

4.3. Discussion: Questions

At this point, we review how well we performed in providing answers to the questions that we posed.
1. What is the value of debate in the furtherance of knowledge? Answer: Debate is the only tool humans have found to test or discover knowledge when faced with uncertainty. Witness that authoritarian regimes often resort to the theft of knowledge to keep up with their democratic competitors [2].
2. Will machines with AI be able to contribute to a debate if we humans cannot define it, model it, or determine its value sufficiently for a machine’s understanding, contribution, exploration, and identification? Answer: Defining and modeling debate, which we have begun (e.g., [1]), is critical for the science of autonomous human–machine teams. But, as with the movies, modeling the interaction exactly is a hard problem.
3. How does a human become aware or express its awareness of knowledge? Answer: This step may appear to be the easiest one (e.g., a human nods to signal it is following a conversation, and a machine can easily copy that behavior [98]). However, a machine may be taught to nod in response to a question, but as Chomsky has argued, it may not understand the question or even why it is nodding [84].
4. Can a machine be as expressive as its human teammates? Answer: Yes, but see the answer to item 3.
5. How does a human–machine teammate become aware that its teammates possess sufficient knowledge to perform a task? Answer: A human is able to demonstrate its knowledge by the actions it takes in a team (e.g., Plato’s interest in skills knowledge; in [73]); today, this can be achieved only in specific instances (e.g., a self-driving car). Presently unsolved for teamwork, this question will become the ultimate test not only of knowledge but also of being a good teammate.

5. Conclusions

Change is never-ending. Companies, institutions, teams, and organizations adapt by merging, discarding outmoded thinking, and replacing nonperforming teammates, or these structures cease to exist. Namely, they improve on the knowledge that they already have, they continue to test whether what they believe is knowledge, they abandon false beliefs, or they disappear into history or oblivion (e.g., Kodak in [89]).
We defined knowledge as “skill in, understanding of, or information about something which a person gets by experience or study” (https://dictionary.cambridge.org/us/dictionary/english/knowledge# (accessed on 17 June 2024)). From the same source, ideology is defined as “a set of beliefs or principles, especially on which a political system, party, or organization is based”. Knowledge can become associated with an ideology (e.g., the “flat earth” society). As also happens, an idea or belief used to attack an opponent succeeds; the idea is then kept, used repeatedly, and not retested until it fails, but by then, it can be too late. No longer knowledge, it becomes a shibboleth or rite that its owners are unwilling to abandon.
We review our initial findings on “knowledge” [1]. Quantum mechanics (QM) is a real, actual, reproducible, and generalizable science, exactly what we seek for our quantum-like (QL) model of social reality. To begin, the National Academy of Sciences’ report in 2015 [110] concluded that “team cohesion is positively related to team effectiveness”, moderated by interdependence so that the “cohesion–effectiveness relationship is stronger when team members are more interdependent”. Also, in 2015, the Academy [110] reported that a team is “two or more individuals with different roles and responsibilities”. When these different roles produce orthogonal information, based on our Equation (1) for a dot product, the information derived from teammates in these complementary relationships does not correlate, contradicting what has been expected in the social science literature (e.g., for close relationships, see p. 207, in [111]):
“Interdependence means that important behaviors will be highly correlated. However, the evidence for complementarity is scarce”.
Our equation for complementarity offers an explanation for this gap in knowledge, i.e., the concept of complementarity and the behaviors observed are not only orthogonal but create a Shannon hole; see [1]. Moreover, once a team’s teamwork has been interrupted to collect information, the i.i.d. data collected cannot recreate the interaction(s) observed [76].
We suspect that knowledge is embodied cognition integrated with causality, unlike disembodied cognition. To reiterate, i.i.d. data cannot capture an interdependent state [76]: i.i.d. data used to record an ordinary social event cannot reconstruct that event. By definition, independent and identically distributed data are separable; interdependent data are not, as the National Academy of Sciences has reported. This explains why the disembodied concepts of social science, often constructed in the laboratory, are unable to capture social reality outside of the laboratory (e.g., the questionnaires that have been found to be invalid for the concepts of self-esteem, implicit racism, honesty, etc. [1]).
When a team coheres interdependently, its degrees of freedom are reduced proportionally. That is why the Academy found in 2021 that the “performance of a team is not decomposable to, or an aggregation of, individual performances” (p. 11, [92]). Mutual dependence means that interdependence in our quantum-like (QL) model is similar to superposition or entanglement in quantum mechanics (QM). Thus, the key question for interdependence, as in QM, is whether “an arbitrary quantum state is entangled or separable” (e.g., p. 3, in [112]). The similarity between QL and QM is our discovery [1]: it links interdependence and Schrödinger’s entanglement.
We conclude that the interdependently embodied cognition in our QL model is similar to quantum entanglement [1]. Constraints reduce information [56,57]. By reducing the degrees of freedom among agents interacting in a social event [113], maintaining a state of interdependence also restrains choices (i.e., a state of maximum teamwork).
From information theory [114], “knowledge” means a lack of “surprise” [4]. Also, we know that beliefs embodied in reality allow humans to make rational adjustments on the fly when reality changes (e.g., buy–sell orders following stock market fluctuations, p. 253 in [78]). But embodied cognition is a nonfactorable state preventing the decomposition of the social reality of teams [92], a “no-copy principle” for teams similar to quantum’s “no-cloning principle” (p. 77, in [115]). The inability to factor or separate information into contributions by the individuals in a team is also a part of assembly theory’s search for alien life [116]. Nonfactorability introduces uncertainty and randomness into data derived from the interaction.
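The information-theoretic reading of “knowledge” as a lack of “surprise” can be made concrete with the standard surprisal (self-information) of an event; the probabilities below are illustrative only:

```python
from math import log2

def surprisal(p):
    # Self-information of an event with probability p, in bits.
    return -log2(p)

# An event known with certainty carries no surprise: it is "knowledge".
assert surprisal(1.0) == 0.0
# A rare event is highly surprising: many bits would be needed to transmit it.
assert surprisal(1 / 1024) == 10.0
```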
Assembly theory [93] holds that information complexity is a signature of life. There is much to commend it: selection is contingent on function; evolving systems are interdependent; well-functioning structures only function with free energy; and causal relationships are difficult to ascertain for interdependent structures. However, it also makes claims with which we disagree: that innovation has negative adaptive value; that functional information requires a comprehensive knowledge of all of a system’s agents; and that the minimum functional information of any system is zero (viz., it can be negative, e.g., as in a divorce or a business spin-off). In our view, assembly theory overlooks our discovery that life’s well-fitted structures (from reduced degrees of freedom, reduced structural entropy, and reduced factorability) transfer the excess free energy needed to hold a structure together to increase that organism’s (or team’s) productivity, helping the function of an organism, or a team, to survive (e.g., a heart that is effective over a wide range of activities must also be efficient and not wasteful of free energy over this range, p. 2326 in [117]). Fittedness was also not a factor in Von Neumann’s self-reproducing automata [118], but it should have been.
Moreover, maintaining maximum interdependence in a team produces the maximum loss of Shannon information, a nest of interdependence that we named a “Shannon hole”. Shannon holes open a new domain of research. Statistics need to be gathered, implications need to be discovered, and generalizations need to be made. More importantly, the idea behind this new knowledge must be challenged, debated, and tested.
We speculate that Shannon holes are more likely to occur under freedom, unlikely to occur under oppression (gangs, kings, authoritarians), and often are characterized by redundancy, which can become a source of corruption [2].
In conclusion, unlike the laboratory, in the open field where embodied cognition governs, maximum interdependence is critical for top scientific teams [119]; in contrast, to control their citizens, oppressive societies significantly reduce interdependence and the freedom to pursue education and innovation, motivating their citizens to cheat or slack off [2,96]. Compared with oppression, in every free society, the freedom of choice allows a society to best marshal its free energy to address the problems that it has targeted [120]. We speculate that because the embodied cognition in interdependence cannot be factored (e.g., [92]; supported by [93]), debate is central to noisy, open, and free societies that evolve and innovate, unlike those societies that regress, stagnate, or fail to evolve [1].
In future research, we plan to further explore these questions: First, can we create new statistics based on the absence of or reduction in Shannon information (viz., Shannon holes)? For example, as we have predicted [1], does the complexity of a team decrease as each part becomes fitted to the next, forming the seamless structure of a well-performing whole? And is the fittedness of interdependence the key characteristic of the complexity of life, whether human, machine, or alien? If so, aware or not, machines with AI must master how to fit into a team as a productive team member. Second, is the density of open debate related to innovation (as we have found using United Nations data for Israel but less so for other MENA countries [2])? Third, by operating only with Shannon information, authoritarianism (kings, gangs, etc.) places its power and its citizens at a disadvantage (markets; politics; social well-being; etc.); is this disadvantage proportional to the amount of interdependence oppressed (viz., the absence of Shannon holes)? Fourth, and last, is the collective power of a people proportional to their freedom, can it be modeled, and if so, how?
In closing, from Plato to today, whether articulated or not, knowledge is necessary for survival and to improve the human condition. For a machine to be aware [54], it must be able to recognize that the structure of knowledge produces zero entropy, that a debate begins with a countering process [16], and that the replacement of a team’s member(s) is a random process contingent on fittedness, but also that the uses of knowledge to perfect a team’s structure form an interdependent tradeoff in exchange for the production of maximum entropy.

Author Contributions

For our research article, W.L. wrote the introduction, theory, discussions and conclusion. I.S.M. developed and wrote the mathematics. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

All of the data used to develop our thesis are contained in this article and published with it.

Acknowledgments

The corresponding author thanks the U.S. Navy’s Office of Naval Research (ONR) for funding his summer research over the past decade at the Naval Research Laboratory (NRL), where he has worked under the guidance of Ranjeev Mittu, and where ideas about this manuscript were formulated. For her help and guidance regarding the treated and diluted radioactive wastewaters released into the sea from Japan’s Fukushima, the authors thank Mito Akiyoshi, Department of Sociology, Senshu University, Japan. This study is a further, deeper exploration of a topic for which we submitted an earlier study last year (Lawless, W.F.; Moskowitz, I.S.; Akiyoshi, M. “Knowledge, consciousness, and debate: Advancing the science of autonomous human-machine teams”, submitted to Interdependent Human-Machine Teams: The Path to Autonomy).

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AI: Artificial Intelligence
ALPS: Advanced Liquid Processing System
APA: American Psychological Association
BERT: Bidirectional Encoder Representations from Transformers
CAB: Citizens Advisory Board
DOE: Department of Energy
IT: Information Theory
MLM: Masked Language Modeling
NIH: National Institutes of Health
NRL: Naval Research Laboratory
NSF: National Science Foundation
NSP: Next Sentence Prediction
ONR: Office of Naval Research
SE: Systems Engineering
SRS: Savannah River Site
TEPCO: Tokyo Electric Power Company

References

  1. Lawless, W.F.; Moskowitz, I.S.; Doctor, K.Z. A Quantum-like Model of Interdependence for Embodied Human-Machine Teams: Reviewing the Path to Autonomy Facing Complexity and Uncertainty. Entropy 2023, 25, 1323. [Google Scholar] [CrossRef] [PubMed]
  2. Lawless, W.F. Interdependent Autonomous Human-Machine Systems: The Complementarity of Fitness, Vulnerability & Evolution. Entropy 2022, 24, 1308. [Google Scholar] [CrossRef] [PubMed]
  3. Cooke, N.J.; Lawless, W.F. Effective Human-Artificial Intelligence Teaming. In Engineering Science and Artificial Intelligence; Lawless, W.F., Mittu, R., Sofge, D.A., Shortell, T., McDermott, T.A., Eds.; Springer: Cham, Switzerland, 2021. [Google Scholar]
  4. Conant, R.C. Laws of information which govern systems. IEEE Trans. Syst. Man Cybern. 1976, 6, 240–255. [Google Scholar] [CrossRef]
  5. Kuhn, T. The Essential Tension; University of Chicago Press: Chicago, IL, USA, 1977. [Google Scholar]
  6. Thornton, S. Karl Popper. In The Stanford Encyclopedia of Philosophy; Zalta, E.N., Nodelman, U., Eds.; Stanford Encyclopedia of Philosophy: Stanford, CA, USA, 2023; Available online: https://plato.stanford.edu (accessed on 8 May 2023).
  7. Ioannidis, J.P.A. Most psychotherapies do not really work, but those that might work should be assessed in biased studies. Epidemiol. Psychiatry Sci. 2016, 25, 436–438. [Google Scholar] [CrossRef] [PubMed]
  8. Suslick, S.B.; Schiozer, D.J. Risk analysis applied to petroleum exploration and production: An overview. J. Pet. Sci. Eng. 2004, 44, 1–9. [Google Scholar] [CrossRef]
  9. Markowitz, H.M. Portfolio Selection. J. Financ. 1952, 7, 77–91. [Google Scholar]
  10. Hays, W.L. Statistics, 4th ed.; Holt, Rinehart and Winston, Inc.: Austin, TX, USA, 1988. [Google Scholar]
  11. DoD. Pentagon Press Secretary John F. Kirby and Air Force Lt. Gen. Sami D. Said Hold a Press Briefing. 2021. Available online: https://www.defense.gov/News/Article/2832634 (accessed on 11 March 2021).
  12. Chen, Y.; Zhao, X.; Yuan, J. Swarm Intelligence Algorithms for Portfolio Optimization Problems: Overview and Recent Advances. Mob. Inf. Syst. 2022, 2022, 4241049. [Google Scholar] [CrossRef]
  13. Baker, A. Simplicity. In The Stanford Encyclopedia of Philosophy, Summer 2022 ed.; Zalta, E.N., Ed.; Stanford Encyclopedia of Philosophy: Stanford, CA, USA, 2022; Available online: https://plato.stanford.edu/archives/sum2022/entries/simplicity/ (accessed on 15 March 2024).
  14. Isaacson, W. How Einstein Reinvented Reality. Sci. Am. 2015, 313, 38–45. [Google Scholar] [CrossRef]
  15. Faraoni, V.; Giusti, A. Why Einstein Must be Wrong: In SEARCH of the Theory of Gravity. Phys.org, Republished from the Conversation. 2023. Available online: https://phys.org/news/2023-09-einstein-wrong-theory-gravity.html (accessed on 14 March 2024).
  16. Nash, L. The Nature of the Natural Sciences; Little, Brown and Company: Boston, MA, USA; New York, NY, USA, 1963. [Google Scholar]
  17. Robinson, A. Did Einstein really say that? As the physicist’s collected papers reach volume 15, Andrew Robinson sifts through the quotes attributed to him. Nature 2018, 557, 30. [Google Scholar] [CrossRef]
  18. Cho, A. Antimatter falls down, just like ordinary matter. Test confirms that gravity pulls the same on hydrogen and anti-hydrogen. Sci. Phys. News 2023. [Google Scholar] [CrossRef]
  19. Bekenstein, J.D. Bekenstein-Hawking entropy. Scholarpedia 2008, 3, 7375. [Google Scholar] [CrossRef]
  20. Systems Engineering. Glossary. 2023. Available online: https://sebokwiki.org/wiki/SystemsEngineering(glossary) (accessed on 12 December 2023).
  21. Checkland, P. Systems Thinking, Systems Practice; John Wiley & Sons: New York, NY, USA, 1999. [Google Scholar]
  22. Senge, P.M. The Fifth Discipline: The Art & Practice of the Learning Organization; Doubleday Business: New York, NY, USA, 1990. [Google Scholar]
  23. Steup, M.; Neta, R. Epistemology. In The Stanford Encyclopedia of Philosophy; Zalta, E.N., Ed.; Stanford Encyclopedia of Philosophy: Stanford, CA, USA, 2020; Available online: https://plato.stanford.edu/archives/fall2020/entries/epistemolog (accessed on 13 February 2024).
  24. Bednar, R.L.; Peterson, S.R. Self-Esteem Paradoxes and Innovations in Clinical Practice, 2nd ed.; American Psychological Association (APA): Washington, DC, USA, 1995. [Google Scholar]
  25. Baumeister, R.F.; Vohs, K.D. Self-Regulation, Ego Depletion, and Motivation. Soc. Pers. Psychol. 2007, 1, 115–128. [Google Scholar] [CrossRef]
  26. Greenwald, A.G.; McGhee, D.E.; Schwartz, J.L. Measuring individual differences in implicit cognition: The implicit association test. J. Personal. Soc. Psychol. 1998, 74, 1464–1480. [Google Scholar] [CrossRef]
  27. Shu, L.L.; Mazar, N.; Gino, F.; Ariely, D.; Bazerman, M.H. Signing at the Beginning Makes Ethics Salient and Decreases Dishonest Self-Reports in Comparison to Signing at the End. Proc. Natl. Acad. Sci. USA 2012, 109, 15197–15200. [Google Scholar] [CrossRef]
  28. Baumeister, R.F.; Campbell, J.D.; Krueger, J.I.; Vohs, K.D. Exploding the self-esteem myth. Sci. Am. 2005, 292, 84–91. [Google Scholar] [CrossRef]
  29. Blanton, H.; Jaccard, J.; Klick, J.; Mellers, B.; Mitchell, G.; Tetlock, P.E. Strong Claims and Weak Evidence: Reassessing the Predictive Validity of the IAT [the Implicit Attitudes Test]. J. Appl. Psychol. 2009, 94, 567–582. [Google Scholar] [CrossRef] [PubMed]
  30. Hagger, M.S.; Chatzisarantis, N.L.; Alberts, H.; Anggono, C.O.; Batailler, C.; Birt, A.R.; Brand, R.; Brandt, M.J.; Brewer, G.; Bruyneel, S.; et al. A Multilab Preregistered Replication of the Ego-Depletion Effect. Perspect. Psychol. Sci. 2016, 11, 546–573. [Google Scholar] [CrossRef]
  31. Berenbaum, M.R. Retraction for Shu et al., Signing at the beginning makes ethics salient and decreases dishonest self-reports in comparison to signing at the end. Proc. Natl. Acad. Sci. USA 2021, 118, e2115397118. [Google Scholar] [CrossRef]
  32. Dobbin, F.; Kalev, A. Programs Fail; Harvard Business Review: Boston, MA, USA, 2016. [Google Scholar]
  33. Paluck, E.L.; Porat, R.; Clark, C.S.; Green, D.P. Prejudice Reduction: Progress and Challenges. Annu. Rev. Psychol. 2021, 72, 533–560. [Google Scholar] [CrossRef]
  34. Tetlock, P.E.; Gardner, D. Superforecasting: The Art and Science of Prediction; Crown: New York, NY, USA, 2015. [Google Scholar]
  35. Beck, U. Risk Society: Towards a New Modernity; Sage Publications: Washington, DC, USA, 1992. [Google Scholar]
  36. MacKenzie, D. Trading at the Speed of Light: How Ultrafast Algorithms Are Transforming Financial Markets; Princeton University Press: Princeton, NJ, USA, 2021. [Google Scholar]
  37. Barron, J. Visiting New York? Make Sure A.I. Didn’t Write Your Guidebook. The New York Times, 19 September 2023. [Google Scholar]
  38. Nogrady, B. Is Fukushima wastewater release safe? What the science says. Nature 2023, 618, 894–895. [Google Scholar] [CrossRef]
  39. Buesseler, K.O. Opening the floodgates at Fukushima. Science 2020, 369, 621–622. [Google Scholar] [CrossRef]
  40. Ministry of Economy. Reactor Decommission, Contaminated Water and Treated Water. 2023. Available online: https://www.meti.go.jp/earthquake/nuclear (accessed on 15 September 2023). (In Japanese)
  41. Mabon, L.; Kawabe, M. Bring voices from the coast into the Fukushima treated water debate. Proc. Natl. Acad. Sci. USA 2022, 119, e2205431119. [Google Scholar] [CrossRef] [PubMed]
  42. Brown, A. Just Like That, Tons of Radioactive Waste Is Heading for the Ocean. The New York Times, 2023. Available online: https://www.nytimes.com/2023/08/22/opinion/japan-fukushima-radioactive-water-dumping (accessed on 14 September 2023).
  43. Smith, J.; Marks, N.; Irwin, T. The risks of radioactive wastewater release. The wastewater release from the Fukushima Daiichi nuclear plant is expected to have negligible effects on people and the ocean. Sci. Perspect. 2023, 382, 31–33. [Google Scholar]
  44. Japan Fisheries Co-operative. Japan Fisheries Co-Operative Has Met Prime Minster Kishida over the Handling of Treated Water by ALPS (In Japanese). 2023. Available online: https://www.zengyoren.or.jp/news (accessed on 14 September 2023).
  45. Takenaka, K.; Pollard, M.Q. Japan Complains of Harassment Calls from China over Fukushima Water Release. Reuters, 2023. Available online: https://www.reuters.com/world/asia-pacific/japan-says-harassment-calls-china-regarding-fukushima-water-release-extremely-2023-08-28/(accessed on 14 September 2023).
  46. Katayama, N. Outraged with the Government and TEPCO, Residents of Tokyo and Five Prefectures Files a Lawsuit Seeking an Injunction on Treated Water Release. Tokyo Shinbun, 2023. Available online: https://www.tokyo-np.co.jp/article/275965(accessed on 14 September 2023). (In Japanese)
  47. Harris, M. History and Significance of the EMIC/ETIC Distinction. Annu. Rev. Anthropol. 1976, 5, 329–350. [Google Scholar] [CrossRef]
  48. Evans-Pritchard, E.E. Witchcraft, Oracles and Magic among the Azande; Clarendon Press: Oxford, UK, 1937. [Google Scholar]
  49. Zucker, L.G. The role of institutionalization in cultural persistence. Am. Sociol. Rev. 1977, 726–743. [Google Scholar] [CrossRef]
  50. Skillcate, A.I. BERT for Dummies: State-of-the-art Model from Google. Medium 2022, google-42639953e769. [Google Scholar]
  51. Pfeffer, J. The Role of the General Manager in the New Economy: Can We Save People from Technology Dysfunctions? In The Future of Management in an AI World. Redefining Purpose and Strategy in the Fourth Industrial Revolution; Canals, J., Heukamp, F., Eds.; Springer International Publishing: Cham, Switzerland, 2020; pp. 67–92. [Google Scholar] [CrossRef]
  52. Luttwak, E. The Clue China Is Preparing for War. Xi Is Laying the Groundwork While the West Looks Away. Unherd, 2023. Available online: https://unherd.com/2023/07/the-clue-china-is-preparing-for-war/(accessed on 23 July 2023).
  53. Finkel, E. If AI becomes conscious, how will we know? Scientists and philosophers are proposing a checklist based on theories of human consciousness. Sci. News 2023, 381, 6660. [Google Scholar]
  54. Butlin, P.; Long, R.; Elmoznino, E.; Bengio, Y.; Birch, J.; Constant, A.; VanRullen, R. Consciousness in Artificial Intelligence: Insights from the Science of Consciousness. arXiv 2023, arXiv:2308.08708. [Google Scholar]
  55. Naddaf, M. Europe spent €600 million to recreate the human brain in a computer. How did it go? The Human Brain Project wraps up in September after a decade. Nature examines its achievements and its troubled past. Nat. News 2023, 620, 718–720. [Google Scholar] [CrossRef]
  56. Shannon, C.E.; Weaver, W. The Mathematical Theory of Communication; University of Illinois Press: Champaign, IL, USA, 1949; pp. 1–117. [Google Scholar]
  57. Brillouin, L. Science and Information Theory; Academic Press: Cambridge, MA, USA, 1956. [Google Scholar]
  58. Egginton, W. The Rigor of Angels. Borges, Heisenberg, Kant, and the Ultimate Nature of Reality; Pantheon Books: New York, NY, USA, 2023. [Google Scholar]
  59. Sliwa, J. Toward collective animal neuroscience. Science 2021, 374, 397–398. [Google Scholar] [CrossRef]
  60. Hartley, R.V.L. Transmission of Information. Bell Syst. Tech. J. 1928, 7, 535–563. [Google Scholar]
  61. Ash, R.B. Information Theory; Dover: Paris, France, 1965. [Google Scholar]
  62. Cover, T.M.; Thomas, J.A. Elements of Information Theory, 2nd ed.; Wiley: Hoboken, NJ, USA, 2006. [Google Scholar]
  63. Moskowitz, I.S.; Miller, A.R. Simple Timing Channels. In Proceedings of the 1994 IEEE Computer Society Symposium on Research in Security and Privacy, Oakland, CA, USA, 16–18 May 1994. [Google Scholar]
  64. Verdú, S. On Channel Capacity per Unit Cost. IEEE Trans. Inf. Theory 1990, 5, 1019–1030. [Google Scholar] [CrossRef]
  65. Moskowitz, I.S.; Miller, A.R. The Channel Capacity of a Certain Noisy Timing Channel. IEEE Trans. Inf. Theory 1992, 38, 1339–1344. [Google Scholar] [CrossRef]
  66. Martin, K.; Moskowitz, I.S. Noisy Timing Channels with Binary Inputs and Outputs. In International Workshop on Information Hiding; Springer: Berlin/Heidelberg, Germany, 2006; pp. 124–144. [Google Scholar]
  67. Silverman, R.A. On Binary Channels and Their Cascades; Technical Report (MIT), RLE-TR-297-14267087.pdf; Research Laboratory of Electronics (RLE): Cambridge, MA, USA, 1955. [Google Scholar]
  68. Majani, E.E. A Model for the Study of Very Noisy Channels and Applications. Ph.D Dissertation, California Institute of Technology, Pasadena, CA, USA, 1988. [Google Scholar]
  69. Golomb, S.W. The Limiting Behavior of the Z-Channel. IEEE Trans. Inf. Theory. 1980, 26, 372. [Google Scholar] [CrossRef]
  70. Weidner, R.T.; Brown, L.M. Physics; Encyclopedia Britannica: Chicago, IL, USA, 2023; Available online: www.britannica.com/science/physics-science (accessed on 10 August 2023).
  71. Einstein, A. Albert Einstein on Space-Time, 13th ed.; Encyclopedia Britannica: Chicago, IL, USA, 1926; Albert-Einstein-on-Space-Time-1987141. [Google Scholar]
  72. Frank, A.; Gleiser, M. The Story of Our Universe May Be Starting to Unravel. New York Times, 2023. Available online: www.nytimes.com/2023/09/02/opinion/cosmology-crisis-webb-telescope.html(accessed on 29 August 2023).
  73. Rickless, S. Plato’s Parmenides. In The Stanford Encyclopedia of Philosophy; Zalta, E.N., Ed.; Stanford Encyclopedia of Philosophy: Stanford, CA, USA, 2020; Available online: https://plato.stanford.edu/archives/spr2020/entries/plato-parmenides (accessed on 7 September 2023).
  74. Mann, R.P. Collective decision making by rational individuals. Proc. Natl. Acad. Sci. USA 2018, 115, E10387–E10396. [Google Scholar] [CrossRef]
  75. Lawless, W.F.; Sofge, D.A.; Lofaro, D.; Mittu, R. Editorial: Interdisciplinary Approaches to the Structure and Performance of Interdependent Autonomous Human Machine Teams and Systems. Front. Phys. 2023. [Google Scholar] [CrossRef]
  76. Schölkopf, B.; Locatello, F.; Bauer, S.; Ke, N.R.; Kalchbrenner, N.; Goyal, A.; Bengio, Y. Towards Causal Representation Learning. arXiv 2021, arXiv:2102.11107. [Google Scholar] [CrossRef]
  77. Sen, A. The Formulation of Rational Choice. Am. Econ. Rev. 1994, 84, 385–390. [Google Scholar]
  78. Lucas, R.; Monetary Neutrality. Nobel Prize Lecture 1995. Available online: https://www.nobelprize.org/uploads/2018/06/lucas-lecture.pdf (accessed on 1 October 2020).
  79. Simon, H.A. Bounded Rationality and Organizational Learning; Technical Report; AIP 107; CMU: Pittsburgh, PA, USA, 1989. [Google Scholar]
  80. U.S. Supreme Court, California v. Green, 399 U.S. 149. California v. Green, No. 387, Argued April 20, 1970, Decided June 23, 1970, 399, U.S. 149. 1970. Available online: https://supreme.justia.com/cases/federal/us/399/149/ (accessed on 10 November 2023).
  81. Ginsburg, R.B. American Electric Power Co. et al. v. Connecticut et al., U.S. Supreme Court, 10-174, 2011. Available online: https://www.oyez.org/cases/2010/10-174 (accessed on 15 August 2019).
  82. Nosek, B. Estimating the reproducibility of psychological science. Science 2015, 349, 943. [Google Scholar] [CrossRef]
  83. Nash, J.F. Equilibrium points in n-person games. Proc. Natl. Acad. Sci. USA 1950, 36, 48–49. [Google Scholar] [CrossRef]
  84. Chomsky, N.; Roberts, I.; Watumull, J. The False Promise of ChatGPT. New York Times, 2023. Available online: https://www.nytimes.com/2023/03/08/opinion/noam-chomsky-chatgpt-ai.html(accessed on 8 March 2023).
  85. Drucker, P.F. What We Can Learn from Japanese Management; Harvard Business Review: Boston, MA, USA, 1971. [Google Scholar]
  86. Akiyoshi, M.; Whitton, J.; Charnley-Parry, I.; Lawless, W.F. Effective Decision Rules for Systems of Public Engagement in Radioactive Waste Disposal: Evidence from the United States, the United Kingdom, and Japan. In Systems Engineering and Artificial Intelligence; Lawless, W.F., Mittu, R., Sofge, D.A., Shortell, T., McDermott, T.A., Eds.; Springer: Cham, Switzerland, 2021; Chapter 24; pp. 509–533. [Google Scholar] [CrossRef]
  87. Bradbury, J.A.; Branch, K.M.; Malone, E.L. An Evaluation of DOE-EM Public Participation Programs; PNNL-14200; Pacific Northwest National Laboratory: Richland, WA, USA, 2003. [Google Scholar]
  88. Lawless, W.F.; Bergman, M.; Feltovich, N. Consensus-seeking versus truth-seeking. Asce Pract. Period. Hazardous Toxic Radioact. Waste Manag. 2005, 9, 59–70. [Google Scholar] [CrossRef]
  89. Mui, C. How Kodak Failed. Forbes. 2012. Available online: https://www.forbes.COM (accessed on 11 September 2023).
  90. Barabba, V. The Decision Loom: A Design for Interactive Decision-Making in Organizations; Triarchy Press Ltd.: Dorset, UK, 2011. [Google Scholar]
  91. Hudson, A. The Rise & Fall of Kodak. A Brief History of The Eastman Kodak Company, 1880 to 2012, Photo Secrets. 2012. Available online: https://www.photosecrets.com/the-rise-and-fall-of-kodak (accessed on 15 December 2023).
  92. Endsley, M. Human-AI Teaming: State of the Art and Research Needs; National Research Council, National Academies Press: Washington, DC, USA, 2021. [Google Scholar]
  93. Wong, M.L.; Cleland, C.E.; Arend, D.; Hazen, R.M. On the roles of function and selection in evolving systems. Proc. Natl. Acad. Sci. USA 2023, 120, e2310223120. [Google Scholar] [CrossRef]
  94. Wang, Y. China’s Social Media Interference Shows Urgent Need for Rules. The Canberra Times, 2023. Available online: https://www.hrw.org/news/2023/08/14/chinas-social-media-interference-shows-urgent-need-rules(accessed on 9 September 2023).
  95. Zumbrun, J. Should the U.S. worry that China is closing in on its lead in research and development? Amid a productivity slump, the IMF sees benefits from Chinese and South Korean innovation. Wall Str. J. 2018. Available online: https://blogs.wsj.com/economics/2018/04/10/should-the-us-worry-about-china-rd/ (accessed on 10 October 2018).
  96. Taplin, N. Can China’s red capital really innovate? U.S. technology theft from Britain helped kick-start the industrial revolution on American shores. Will China be able to replicate that success? Wall Str. J. 2018. Available online: https://www.wsj.com/articles/can-chinas-red-capital-really-innovate-1526299173 (accessed on 14 May 2018).
  97. Wickens, C.D. Engineering Psychology and Human Performance, 2nd ed.; Merrill: Palo Alto, CA, USA, 1992. [Google Scholar]
  98. Brown, B. Human Machine Teaming using Large Language Models. In Interdependent Human-Machine Teams; Lawless, W.F., Mittu, R., Sofge, D.A., Fouad, H., Eds.; The path to autonomy; Elsevier: Amsterdam, The Netherlands, 2024; Chapter 3. [Google Scholar]
  99. Dutilh Novaes, C. Argument and Argumentation, The Stanford Encyclopedia of Philosophy; Zalta, E.N., Nodelman, U., Eds.; Stanford Encyclopedia of Philosophy: Stanford, CA, USA, 2022; Available online: https://plato.stanford.edu/archives/fall2022/entries/argument/ (accessed on 15 January 2024).
  100. Marinsek, N.L.; Gazzaniga, M.S. A Split-Brain Perspective on Illusionism. J. Conscious. Stud. 2016, 23, 149–159. [Google Scholar]
  101. Wang, M.; Arteaga, D.; He, B.J. Brain mechanisms for simple perception and bistable perception. Proc. Natl. Acad. Sci. USA 2013, 110, E3350–E3359. [Google Scholar] [CrossRef]
  102. Adelson, E.H. Checkershadow Illusion; Perceptual Science Group, MIT: Cambridge, MA, USA, 2005. [Google Scholar]
  103. Eagleman, D.M. Visual illusions and neurobiology. Nat. Rev. Neurosci. 2001, 2, 920–926. [Google Scholar] [CrossRef] [PubMed]
  104. Brincat, S.L.; Donoghue, J.A.; Mahnke, M.K.; Kornblith, S.; Lundqvist, M.; Miller, E.K. Interhemispheric transfer of working memories. Neuron 2021, 109, 1055–1066.e4. [Google Scholar] [CrossRef] [PubMed]
  105. Carroll, S. The Big Picture. On the Origins of Life, Meaning, and the Universe Itself; Dutton (Penguin Random House): New York, NY, USA, 2016. [Google Scholar]
  106. Lash, M.T.; Zhas, K. Early prediction of movie success. arXiv 2016, arXiv:1506.05382v2. [Google Scholar]
  107. Zeilinger, A. Experiment and the foundations of quantum physics. Rev. Mod. Phys. 1999, 71, S288–S297. [Google Scholar] [CrossRef]
  108. de León, M.S.P.; Bienvenu, T.; Marom, A.; Engel, S.; Tafforeau, P.; Alatorre Warren, J.L.; Zollikofer, C.P. The primitive brain of early homo. Science 2021, 372, 165–171. [Google Scholar] [CrossRef]
  109. Sagan, C. Broca’s Brain: Reflections on the Romance of Science; Ballantine Books: New York, NY, USA, 1986. [Google Scholar]
  110. Cooke, N.; Hilton, M.E. Enhancing the Effectiveness of Team Science; National Research Council, National Academies Press: Washington, DC, USA, 2015. [Google Scholar]
  111. Berscheid, E.; Reis, H. Attraction and close relationships. The Handbook of Social Psychology, 4th ed.; Lawrence Erlbaum: Mahwah, NJ, USA, 1998; Volume 1. [Google Scholar]
  112. Wang, B.H. Entanglement-Separability Boundary Within a Quantum State. arXiv 2020, arXiv:2003.00607. [Google Scholar]
  113. Lewin, K. Field Theory in Social Science. Selected Theoretical Papers; Harper and Brothers: New York, NY, USA, 1951. [Google Scholar]
  114. IT. Information Theory. 2023. Available online: https://cs.stanford.edu/people/eroberts/courses/soco/projects/1999-00/information-theory/noise (accessed on 10 March 2024).
  115. Wooters, W.; Zurek, W. The no-cloning theorem. Phys. Today 2009, 62, 76–77. [Google Scholar] [CrossRef]
  116. Marshall, S.M.; Mathis, C.; Carrick, E.; Keenan, G.; Cooper, G.J.T.; Graham, H.; Craven, M.; Gromski, P.S.; Moore, D.G.; Walker, S.I.; et al. Identifying molecules as biosignatures with assembly theory and mass spectrometry. Nat. Commun. 2021, 12, 3033. [Google Scholar] [CrossRef]
  117. Bette, D.A.; Pretre, R.; Chassot, P. Is our heart a well-designed pump? The heart along animal evolution. Eur. Heart J. 2014, 35, 2322–2332. [Google Scholar] [CrossRef]
  118. Von Neumann, J. Theory of Self-Reproducing Automata; University of Illinois Press: Champaign, IL, USA, 1966. [Google Scholar]
  119. Cummings, J. Team Science Successes and Challenges; NSF Workshop Fundamentals of Team Science and the Science of Team Science: Bethesda, MD, USA, 2015. [Google Scholar]
  120. Moskowitz, I.S. A Cost Metric for Team Efficiency. Front. Phys. 2022, 10, 861633. [Google Scholar] [CrossRef]
Figure 1. Probability is on the x-axis, and entropy, H(U), is on the y-axis.
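The curve in Figure 1 appears to be the classic binary entropy function, which peaks at one bit when both outcomes are equally likely. A minimal sketch of that formula, assuming U is a two-outcome source with P(1) = p (the function name `binary_entropy` is ours, not from the original):

```python
import math

def binary_entropy(p: float) -> float:
    """Shannon entropy H(U), in bits, of a binary source with P(1) = p."""
    if p <= 0.0 or p >= 1.0:
        return 0.0  # a certain outcome carries no uncertainty
    return -p * math.log2(p) - (1.0 - p) * math.log2(1.0 - p)
```

Evaluated across p in (0, 1), this traces the symmetric arch of Figure 1, rising from 0 at p = 0 to 1 bit at p = 0.5 and falling back to 0 at p = 1.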
Figure 2. The noisy channel diagram.
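One concrete instance of the noisy channel in Figure 2 is the binary symmetric channel, whose capacity has the closed form C = 1 − H(e). A sketch under the assumption that the noise is a symmetric bit-flip with crossover probability e (the figure itself may depict the general case):

```python
import math

def h2(p: float) -> float:
    """Binary entropy in bits."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1.0 - p) * math.log2(1.0 - p)

def bsc_capacity(e: float) -> float:
    """Capacity, in bits per channel use, of a binary symmetric
    channel with crossover probability e: C = 1 - H(e)."""
    return 1.0 - h2(e)
```

Capacity is 1 bit per use over a noiseless channel (e = 0) and collapses to 0 when e = 0.5, where the output is statistically independent of the input.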
Figure 3. Adelson’s Checkerboard Illusion.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
