An Entropy Approach to the Structure and Performance of Interdependent Autonomous Human Machine Teams and Systems (A-HMT-S)

A special issue of Entropy (ISSN 1099-4300). This special issue belongs to the section "Multidisciplinary Applications".

Deadline for manuscript submissions: closed (25 August 2024) | Viewed by 24274

Special Issue Editors

Dr. William Lawless
Guest Editor
Department of Mathematics and Psychology, Paine College, Augusta, GA 30901, USA
Interests: autonomous human-machine teams and systems (A-HMT-S); artificial intelligence

Dr. Donald Sofge
Guest Editor
U.S. Naval Research Laboratory, Navy Center for Applied Research in Artificial Intelligence (NCARAI), Washington, DC 20375, USA
Interests: artificial intelligence; natural computation; machine learning; teams and swarms; autonomous robotic systems

Dr. Daniel Lofaro
Guest Editor
U.S. Naval Research Laboratory, Navy Center for Applied Research in Artificial Intelligence (NCARAI), Washington, DC 20375, USA
Interests: electrical engineering; robotics

Special Issue Information

Dear Colleagues, 

The development of autonomous teams and systems has become increasingly important. For this Special Issue, we are interested in developing the science of autonomy by advancing theory for human-machine teams and systems [1]: their models, structure, performance, and the management of interdependent teams (e.g., with entropy or disorder); their central problems (e.g., control, machine learning, AI, the selection and mix of humans and machines); and their interdisciplinary issues (e.g., philosophy, ethics, trust, confidence, law, social science). Possibly, computational swarms of independent agents increase the need for common goals or objectives, while interdependence increases the likelihood of system autonomy. This speculation leads us to suspect an overlap between the control of swarms and the governance of interdependent teams. If correct, autonomy presents a series of challenges that must be addressed, but we are left with more questions than answers (see our list at the end).

Theories based on traditional models using independent data (i.e., i.i.d. models [2]) associated with Shannon's [3] information theory have not been successful at replicating known interdependent effects (e.g., [4]), nor at predicting new ones (e.g., the inherent bistability of philosophical or political interpretations; of the shared tasks necessary to operate a restaurant business; of team competition in sports). Instead, social scientists in particular have struggled to replicate their own findings [5], leaving them insufficiently confident to generalize to human-machine autonomy. Interdependence itself has been described as bewildering in the laboratory [6], but it is a state-dependent phenomenon linked with similar phenomena [7], e.g., quantum effects [8]. If we re-construe intelligence not as an individual quality to be prized but as a more valuable phenomenon that occurs in the interactions of a whole team, independent of the intelligence of its members [9], this re-construal opens a path to theory and models that may advance the science of interaction among agents striving to form more productive social units composed of orthogonal parts (the complementary parts of whole teams, businesses, or systems). Moreover, state-dependent phenomena depend on the multiple effects that contribute to a context in open systems, especially when decisions must be made in the face of uncertainty ([10]; e.g., [11]), requiring a profound shift in what is considered observational. Although similar conclusions were drawn in social psychology [12] and systems engineering [13], Schrödinger ([14], p. 555) was the first to argue that knowledge of the whole precludes full knowledge of its parts, attributing this loss of information at the individual level to the shift from independence to dependence.
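
To make Schrödinger's claim concrete, here is a minimal NumPy sketch (our own toy illustration, not drawn from [14]): for a maximally entangled two-qubit state, the von Neumann entropy of the whole is zero (the whole is fully known), while each part viewed alone is maximally uncertain, so complete knowledge of the whole coexists with complete ignorance of its parts.

    import numpy as np

    def von_neumann_entropy(rho):
        """S(rho) = -Tr(rho log2 rho), in bits."""
        eigvals = np.linalg.eigvalsh(rho)
        eigvals = eigvals[eigvals > 1e-12]  # drop numerical zeros
        return float(-np.sum(eigvals * np.log2(eigvals)))

    # Bell state |phi+> = (|00> + |11>)/sqrt(2): the "whole" is a pure state.
    phi = np.array([1, 0, 0, 1]) / np.sqrt(2)
    rho_whole = np.outer(phi, phi.conj())

    # Partial trace over the second qubit yields the state of one "part".
    rho_part = np.trace(rho_whole.reshape(2, 2, 2, 2), axis1=1, axis2=3)

    print(von_neumann_entropy(rho_whole))  # ~0.0 bits: full knowledge of the whole
    print(von_neumann_entropy(rho_part))   # ~1.0 bit: maximal ignorance of a part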

Independent models work well for closed systems, but closed models are often unrealistic. In a critique of closed systems, retired U.S. General Zinni [15] complained that the use of war games results in “preordained proofs”; that is, by choosing a game's context, a user can obtain a desired outcome. Moving from closed-system models to open systems, characterized by uncertainty, overwhelms traditional models; e.g., in economics, Rudd [16] began:

Mainstream economics is replete with ideas that “everyone knows” to be true, but that are actually arrant nonsense.

If the search for fitness is a search for dependency (viz., an adaptiveness that produces robustness), we seek models to advance autonomous human-machine systems, whether drawn from economics, from markets under threat (e.g., AT&T shedding its media empire [17]), or from disrupted relationships (divorces in marriages, businesses, or science teams). In open systems, managing entropy production is critical.

Questions and Some Suggestions for Topics:

  • We know that a team, composed of interdependent teammates, is more productive than the same members working independently [18]; we do not know why, but we suspect offsetting entropy production from the complementary parts of a team once a highly interdependent team has been formed into a cohesive unit.
  • Interdependence is state dependency. State-dependent models have achieved great predictive success in quantum mechanics while failing to be intuitive or to be open to a philosophical understanding [8,19]. That highly predictive, state-dependent quantum models leave meaning open to interpretation makes models of interdependence non-traditional and non-rational, requiring a trial-and-error randomness in their structure [1]; how else are they identifiable other than by a system’s entropy production (see the sketch after this list)?
  • It is common among philosophers to be pessimistic about a theory of meaning [20]. It may be, however, that a philosophical approach to the meaning of autonomy yields the very insight that autonomy researchers can use to build new theory.
  • What does autonomy mean for humans or machines; swarms of machines; or society?
  • How might autonomy differ for models of independent human-machine agents versus interdependent ones; for the coevolution of humans and technology ([21], p. 170); or for organizational models (e.g., restaurant workers in dependent and orthogonal roles, such as cook, waiter, and clerk) and biological models of interdependent agents that perform in complementary and dependent team roles (e.g., biological collectives like ants [22], or plants and “mother trees” [23])?
  • A focus on independence is highlighted by the belief that “many hands make light work,” but this focus leaves the size of a team unsolved ([18], p. 33). Is the search for the fitness of a team’s or firm’s size the motivating cause of mergers or spin-offs? Does team size reflect the complexity of the problem being addressed?
  • Can interdependence resolve the open-system questions posed by Rudd in economics and Gen. Zinni in games?
  • From an interdisciplinary perspective, what might a non-traditional model of autonomous human-machine teams and systems look like?
  • What metrics can be proposed for autonomous team structures and performance?
  • How might the context of an autonomous team or system be determined when faced with situations of uncertainty, conflict, or competition that limit traditional models [24]?
  • Can the governance of an A-HMT-S be conducted and explained with AI [25]?
  • Should authority be given to a machine to take operational control from a human operator by overriding the human [26]? How much authority should be given to a machine in an interdependent team in the event its human operator becomes dysfunctional?
  • If the governance of an A-HMT-S is designed to promote democracy, does that increase confidence in AI for autonomous human-machine teams and systems (e.g., [27], p.11)?
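
The sketch referenced in the second bullet above is a toy classical illustration under our own assumptions, not a model from the cited references: two teammates who split complementary roles have a joint entropy below the sum of their individual entropies, and that gap, their mutual information, offers one measurable signature of interdependence.

    import numpy as np

    def entropy(p):
        """Shannon entropy in bits of a probability array."""
        p = p[p > 0]
        return float(-np.sum(p * np.log2(p)))

    # Joint distributions over (cook, waiter) role choices for two teammates.
    independent = np.array([[0.25, 0.25],     # each picks a role by coin flip
                            [0.25, 0.25]])
    interdependent = np.array([[0.05, 0.45],  # they almost always split roles
                               [0.45, 0.05]])

    for name, joint in [("independent", independent),
                        ("interdependent", interdependent)]:
        h_whole = entropy(joint.ravel())
        h_parts = entropy(joint.sum(axis=1)) + entropy(joint.sum(axis=0))
        # Mutual information I = H(X) + H(Y) - H(X, Y); zero iff independent.
        print(f"{name}: H(whole)={h_whole:.3f}, H(parts)={h_parts:.3f}, "
              f"I={h_parts - h_whole:.3f} bits")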

References:

  1. Lawless, W.F. (2019), The interdependence of autonomous human-machine teams: The entropy of teams, but not individuals, advances science. Entropy, 21, 1195. doi: 10.3390/e21121195.
  2. Schölkopf, B., et al. (2021), Towards Causal Representation Learning. arXiv:2102.11107, retrieved 7/6/2021 from https://arxiv.org/pdf/2102.11107.pdf.
  3. Shannon, C.E. (1948), A Mathematical Theory of Communication. The Bell System Technical Journal, 27: 379–423, 623–656.
  4. Wall Street Journal (2021), Google unit DeepMind tried—and failed—to win AI autonomy from parent. https://www.wsj.com/articles/google-unit-deepmind-triedand-failedto-win-ai-autonomy-from-parent-11621592951.
  5. Open Science Collaboration (Nosek, B., corresponding author) (2015), Estimating the reproducibility of psychological science. Science, 349(6251): 943; see also https://doi.org/10.17226/25303.
  6. Jones, E. (1998), Major developments in five decades of social psychology. In Gilbert, D.T., Fiske, S.T., & Lindzey, G. (Eds.), The Handbook of Social Psychology, Vol. I, pp. 3–57. Boston: McGraw-Hill.
  7. Davies, P. (2020), Does new physics lurk inside living matter? Physics Today, 73, 34–41. doi: 10.1063/PT.3.4546.
  8. Lawless, W.F. (2020), Quantum-like interdependence theory advances autonomous human–machine teams (A-HMTS). Entropy, 22, 1227. doi: 10.3390/e22111227.
  9. Cooke, N.J. & Lawless, W.F. (2021, forthcoming), Effective Human-Artificial Intelligence Teaming. In Lawless, W.F., et al. (Eds.), Engineering Science and Artificial Intelligence. Springer.
  10. Lawless, W.F., Mittu, R., Sofge, D., & Hiatt, L. (2019), Artificial intelligence, autonomy, and human-machine teams: Interdependence, context, and explainable AI. AI Magazine, 40(3): 5–13. doi: 10.1609/aimag.v40i3.2866.
  11. Wall Street Journal (2021), Colonial Pipeline CEO tells why he paid hackers a $4.4 million ransom. https://www.wsj.com/articles/colonial-pipeline-ceo-tells-why-he-paid-hackers-a-4-4-million-ransom-11621435636.
  12. Lewin, K. (1951), Field Theory in Social Science: Selected Theoretical Papers (D. Cartwright, Ed.). New York: Harper & Brothers.
  13. Walden, D.D., Roedler, G.J., Forsberg, K.J., Hamelin, R.D., & Shortell, T.M. (Eds.) (2015), Systems Engineering Handbook: A Guide for System Life Cycle Processes and Activities (4th ed.), INCOSE-TP-2003-002-04. Prepared by the International Council on Systems Engineering. John Wiley.
  14. Schrödinger, E. (1935), Discussion of probability relations between separated systems. Proceedings of the Cambridge Philosophical Society, 31: 555–563; 32 (1936): 446–451.
  15. Augier, M. & Barrett, S.F.X. (2021), General Anthony Zinni (Ret.) on wargaming Iraq, Millennium Challenge, and competition. CIMSEC. https://cimsec.org/general-anthony-zinni-ret-on-wargaming-iraq-millennium-challenge-and-competition/.
  16. Rudd, J.B. (2021), Why Do We Think That Inflation Expectations Matter for Inflation? (And Should We?). Federal Reserve Board, Washington, DC. https://doi.org/10.17016/FEDS.2021.062.
  17. Wall Street Journal (2021), AT&T’s Hollywood Ending Erased Billions in Value. https://www.wsj.com/articles/att-hollywood-ending-erased-billions-value-hbo-discovery-warner-11621297279.
  18. National Research Council (2015), Enhancing the Effectiveness of Team Science. doi: 10.17226/19007.
  19. Weinberg, S. (2017, January 19), The Trouble with Quantum Mechanics. The New York Review of Books. http://www.nybooks.com/articles/2017/01/19/trouble-with-quantum-mechanics; replies: http://www.nybooks.com/articles/2017/04/06/steven-weinberg-puzzle-quantum-mechanics/.
  20. Speaks, J. (2021), Theories of Meaning. In Zalta, E.N. (Ed.), The Stanford Encyclopedia of Philosophy (Spring 2021 Edition). https://plato.stanford.edu/archives/spr2021/entries/meaning/.
  21. de León, M.S.P., et al. (2021), The primitive brain of early Homo. Science, 372: 165–171. doi: 10.1126/science.aaz0032.
  22. Self-Organizing Systems Research Group, Harvard University (2021), Insect Collectives. https://ssr.seas.harvard.edu/insect-collectives.
  23. Mother Tree Project (2021). https://mothertreeproject.org.
  24. Mann, R.P. (2018), Collective decision making by rational individuals. PNAS, 115(44): E10387–E10396. https://doi.org/10.1073/pnas.1811964115.
  25. Gunning, D., et al. (2019), XAI—Explainable artificial intelligence. Science Robotics, 4. doi: 10.1126/scirobotics.aay7120.
  26. Sofge, D.A., et al. (2019), Will a Self-Authorizing AI-Based System Take Control from a Human Operator? AI Magazine. https://ojs.aaai.org/index.php/aimagazine/article/view/5196.
  27. National Security Commission on Artificial Intelligence (2021), Final Report. https://www.nscai.gov/2021-final-report/.

Dr. William Lawless
Dr. Donald Sofge
Dr. Daniel Lofaro
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Entropy is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (9 papers)

Research

21 pages, 976 KiB  
Article
CCTFv2: Modeling Cyber Competitions
by Basheer Qolomany, Tristan J. Calay, Liaquat Hossain, Aos Mulahuwaish and Jacques Bou Abdo
Entropy 2024, 26(5), 384; https://doi.org/10.3390/e26050384 - 30 Apr 2024
Viewed by 1350
Abstract
Cyber competitions are usually team activities, where team performance not only depends on the members’ abilities but also on team collaboration. This seems intuitive, especially given that team formation is a well-studied discipline in competitive sports and project management, but unfortunately, team performance and team formation strategies are rarely studied in the context of cybersecurity and cyber competitions. Since cyber competitions are becoming more prevalent and organized, this gap becomes an opportunity to formalize the study of team performance in the context of cyber competitions. This work follows a cross-validating two-approach methodology. The first is the computational modeling of cyber competitions using Agent-Based Modeling. Team members are modeled, in NetLogo, as collaborating agents competing over a network in a red team/blue team match. Members’ abilities, team interaction, and network properties are parametrized (inputs), and the match score is reported as output. The second approach is grounded in the literature on team performance (not in the context of cyber competitions), where a theoretical framework is built in accordance with the literature. The results of the first approach are used to build a causal inference model using Structural Equation Modeling. The causal inference model closely resembled the theoretical model, cross-validating both approaches. Two main findings are deduced: first, the body of literature studying teams remains valid and applicable in the context of cyber competitions; second, coaches and researchers can test new team strategies computationally and achieve precise performance predictions. The methodology and findings are novel to the study of cyber competitions.
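
The paper's model is implemented in NetLogo; the following Python sketch is only a loose analogue of the setup the abstract describes, with every parameter invented for illustration: member abilities and a team-collaboration factor go in, a match score comes out.

    import random

    def simulate_match(red_ability, blue_ability, blue_collaboration,
                       n_rounds=100, seed=0):
        """Toy red-vs-blue match: each round, blue defends an asset, and
        collaboration boosts the defenders' effective ability (toy assumption)."""
        rng = random.Random(seed)
        blue_wins = 0
        for _ in range(n_rounds):
            attack = rng.random() * red_ability
            defense = rng.random() * blue_ability * (1 + blue_collaboration)
            if defense >= attack:
                blue_wins += 1
        return blue_wins / n_rounds

    # Identical member abilities; only team collaboration differs.
    print(simulate_match(0.8, 0.6, blue_collaboration=0.0))
    print(simulate_match(0.8, 0.6, blue_collaboration=0.5))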

22 pages, 2825 KiB  
Article
Applications of Shaped-Charge Learning
by Boris Galitsky
Entropy 2023, 25(11), 1496; https://doi.org/10.3390/e25111496 - 30 Oct 2023
Viewed by 1279
Abstract
It is well known that deep learning (DNN) has strong limitations due to a lack of explainability and a weak defense against possible adversarial attacks. These attacks would be a concern for autonomous teams, producing a state of high entropy for the team’s structure. In our first article for this Special Issue, we proposed a meta-learning/DNN → kNN architecture that overcomes these limitations by integrating deep learning with explainable nearest neighbor learning (kNN). This architecture is named “shaped charge”. The focus of the current article is the empirical validation of “shaped charge”. We evaluate the proposed architecture for summarization, question answering, and content creation tasks and observe a significant improvement in performance along with enhanced usability by team members. We observe a substantial improvement in question answering accuracy, and in the truthfulness of the generated content, due to the application of the shaped-charge learning approach.

23 pages, 636 KiB  
Article
Closed-Loop Uncertainty: The Evaluation and Calibration of Uncertainty for Human–Machine Teams under Data Drift
by Zachary Bishof, Jaelle Scheuerman and Chris J. Michael
Entropy 2023, 25(10), 1443; https://doi.org/10.3390/e25101443 - 12 Oct 2023
Viewed by 1461
Abstract
Though an accurate measurement of entropy, or more generally uncertainty, is critical to the success of human–machine teams, the evaluation of the accuracy of such metrics as a probability of machine correctness is often aggregated and not assessed as an iterative control process. The entropy of the decisions made by human–machine teams may not be accurately measured under cold start or at times of data drift unless disagreements between the human and machine are immediately fed back to the classifier iteratively. In this study, we present a stochastic framework by which an uncertainty model may be evaluated iteratively as a probability of machine correctness. We target a novel problem, referred to as the threshold selection problem, which involves a user subjectively selecting the point at which a signal transitions to a low state. This problem is designed to be simple and replicable for human–machine experimentation while exhibiting properties of more complex applications. Finally, we explore the potential of incorporating feedback of machine correctness into a baseline naïve Bayes uncertainty model with a novel reinforcement learning approach. The approach refines a baseline uncertainty model by incorporating machine correctness at every iteration. Experiments are conducted over a large number of realizations to properly evaluate uncertainty at each iteration of the human–machine team. Results show that our novel approach, called closed-loop uncertainty, outperforms the baseline in every case, yielding about 45% improvement on average.
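
A minimal sketch of the feedback loop described above (our simplification, not the authors' algorithm): a running Beta-Bernoulli estimate of the probability of machine correctness, updated at every iteration from human agree/disagree feedback.

    import random

    class ClosedLoopCorrectness:
        """Running estimate of P(machine correct) from iterative feedback
        (posterior mean of a Beta-Bernoulli model)."""
        def __init__(self, prior_correct=1.0, prior_incorrect=1.0):
            self.a = prior_correct
            self.b = prior_incorrect

        def probability_correct(self):
            return self.a / (self.a + self.b)

        def update(self, human_agreed):
            if human_agreed:
                self.a += 1
            else:
                self.b += 1

    model = ClosedLoopCorrectness()
    rng = random.Random(42)
    for step in range(200):
        true_rate = 0.9 if step < 100 else 0.6  # data drift halfway through
        model.update(rng.random() < true_rate)
    # Without drift handling (e.g., discounting old feedback), the estimate
    # lags the post-drift rate -- the problem the paper's framework targets.
    print(f"estimated P(correct): {model.probability_correct():.2f}")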

29 pages, 6606 KiB  
Article
Shaped-Charge Learning Architecture for the Human–Machine Teams
by Boris Galitsky, Dmitry Ilvovsky and Saveli Goldberg
Entropy 2023, 25(6), 924; https://doi.org/10.3390/e25060924 - 12 Jun 2023
Cited by 2 | Viewed by 1702
Abstract
In spite of great progress in recent years, deep learning (DNN) and transformers have strong limitations for supporting human–machine teams: a lack of explainability, a lack of information on what exactly was generalized, a lack of machinery for integration with various reasoning techniques, and a weak defense against possible adversarial attacks by opponent team members. Due to these shortcomings, stand-alone DNNs offer limited support for human–machine teams. We propose a meta-learning/DNN → kNN architecture that overcomes these limitations by integrating deep learning with explainable nearest neighbor learning (kNN) to form the object level, adding a deductive reasoning-based meta-level that controls the learning process, and performing validation and correction of predictions in a way that is more interpretable by peer team members. We address our proposal from structural and maximum entropy production perspectives.
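
A minimal scikit-learn sketch of the object level as we read it (hypothetical stand-ins for the paper's components; the deductive meta-level control is omitted): a small network learns a representation, and a kNN over its hidden activations produces predictions whose nearest training neighbors serve as inspectable evidence.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier
    from sklearn.neighbors import KNeighborsClassifier

    X, y = make_classification(n_samples=500, n_features=20, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    # Stage 1: a small neural network stands in for the DNN.
    dnn = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                        random_state=0).fit(X_tr, y_tr)

    def hidden(model, X):
        """First-layer ReLU activations: the learned representation."""
        return np.maximum(X @ model.coefs_[0] + model.intercepts_[0], 0)

    # Stage 2: kNN over the representation; the retrieved neighbors are the
    # explainable evidence behind each prediction.
    knn = KNeighborsClassifier(n_neighbors=5).fit(hidden(dnn, X_tr), y_tr)
    pred = knn.predict(hidden(dnn, X_te))
    evidence = knn.kneighbors(hidden(dnn, X_te[:1]), return_distance=False)
    print("accuracy:", (pred == y_te).mean())
    print("evidence for first prediction: training items", evidence[0])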

26 pages, 1653 KiB  
Article
Mutual Information and Multi-Agent Systems
by Ira S. Moskowitz, Pi Rogers and Stephen Russell
Entropy 2022, 24(12), 1719; https://doi.org/10.3390/e24121719 - 24 Nov 2022
Viewed by 1836
Abstract
We consider the use of Shannon information theory and its various entropic terms to aid in reaching optimal decisions that should be made in a multi-agent/team scenario. Our methods model how various agents interact, including power allocation. Our metric for agents passing information is the classical Shannon channel capacity. Our results are mathematical theorems showing how combining agents influences the channel capacity.
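
For readers unfamiliar with the underlying quantity, a standard computation (textbook material, not the paper's theorems): the Shannon capacity of a binary symmetric channel, and the additive capacity of two such channels used in parallel by independent agents.

    import math

    def h2(p):
        """Binary entropy in bits."""
        if p in (0.0, 1.0):
            return 0.0
        return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

    def bsc_capacity(flip_prob):
        """Binary symmetric channel capacity: C = 1 - H2(p)."""
        return 1 - h2(flip_prob)

    c = bsc_capacity(0.1)
    print(f"one agent's channel:    {c:.3f} bits/use")
    # Independent parallel channels add: two agents relaying independently
    # carry at most the sum of their individual capacities.
    print(f"two agents in parallel: {2 * c:.3f} bits/use")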

15 pages, 2182 KiB  
Article
Providing Care: Intrinsic Human–Machine Teams and Data
by Stephen Russell and Ashwin Kumar
Entropy 2022, 24(10), 1369; https://doi.org/10.3390/e24101369 - 27 Sep 2022
Viewed by 1973
Abstract
Despite the many successes of artificial intelligence in healthcare applications, where human–machine teaming is an intrinsic characteristic of the environment, there is little work that proposes methods for adapting quantitative health data-features with human expertise insights. A method for incorporating qualitative expert perspectives in machine learning training data is proposed. The method implements an entropy-based consensus construct that minimizes the challenges of qualitative-scale data such that they can be combined with quantitative measures in a critical clinical event (CCE) vector. Specifically, the CCE vector minimizes the effects where (a) the sample size is too small, (b) the data may not be normally distributed, or (c) the data are from Likert scales, which are ordinal, so parametric statistics cannot be used. The incorporation of human perspectives in machine learning training data encodes human considerations in the subsequent machine learning model. This encoding provides a basis for increasing explainability, understandability, and ultimately trust in AI-based clinical decision support systems (CDSSs), thereby addressing human–machine teaming concerns. A discussion of applying the CCE vector in a CDSS regime and implications for machine learning are also presented.
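
One plausible reading of an entropy-based consensus construct for ordinal expert ratings (a sketch under our own assumptions; the paper's exact CCE formulation may differ): normalized entropy of the rating distribution, inverted so that unanimity scores 1 and a uniform spread of opinions scores 0.

    import math
    from collections import Counter

    def consensus(ratings, n_levels=5):
        """Entropy-based consensus for Likert ratings: 1 - H/H_max."""
        counts = Counter(ratings)
        n = len(ratings)
        h = -sum((c / n) * math.log2(c / n) for c in counts.values())
        return 1 - h / math.log2(n_levels)

    print(consensus([4, 4, 4, 4, 4]))  # unanimous experts -> 1.0
    print(consensus([1, 2, 3, 4, 5]))  # maximal disagreement -> 0.0
    print(consensus([4, 4, 5, 4, 3]))  # mostly agree -> high consensus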

22 pages, 4669 KiB  
Article
An Application of Inverse Reinforcement Learning to Estimate Interference in Drone Swarms
by Keum Joo Kim, Eugene Santos, Jr., Hien Nguyen and Shawn Pieper
Entropy 2022, 24(10), 1364; https://doi.org/10.3390/e24101364 - 27 Sep 2022
Cited by 1 | Viewed by 1963
Abstract
Despite the increasing applications, demands, and capabilities of drones, in practice they have only limited autonomy for accomplishing complex missions, resulting in slow and vulnerable operations and difficulty adapting to dynamic environments. To lessen these weaknesses, we present a computational framework for deducing the original intent of drone swarms by monitoring their movements. We focus on interference, a phenomenon that is not initially anticipated by drones but results in complicated operations due to its significant impact on performance and its challenging nature. We infer interference from predictability by first applying various machine learning methods, including deep learning, and then computing entropy to compare against interference. Our computational framework begins by building a set of computational models called double transition models from the drone movements and revealing reward distributions using inverse reinforcement learning. These reward distributions are then used to compute the entropy and interference across a variety of drone scenarios specified by combining multiple combat strategies and command styles. Our analysis confirmed that drone scenarios experienced more interference, higher performance, and higher entropy as they became more heterogeneous. However, the direction of interference (positive vs. negative) was more dependent on combinations of combat strategies and command styles than homogeneity.
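
A rough sketch of the predictability side of this analysis (our simplification; the paper's double transition models and inverse reinforcement learning step are not reproduced here): the average entropy of the empirical per-state action distribution estimated from observed trajectories.

    import math
    from collections import Counter, defaultdict

    def policy_entropy(trajectories):
        """Average entropy (bits) of the empirical action distribution per
        state: higher entropy means less predictable observed behavior."""
        actions_by_state = defaultdict(Counter)
        for traj in trajectories:
            for state, action in traj:
                actions_by_state[state][action] += 1
        entropies = []
        for counts in actions_by_state.values():
            n = sum(counts.values())
            entropies.append(-sum((c / n) * math.log2(c / n)
                                  for c in counts.values()))
        return sum(entropies) / len(entropies)

    # Homogeneous swarm: every drone repeats the same maneuver per state.
    uniform = [[("s0", "advance"), ("s1", "advance")]] * 4
    # Heterogeneous swarm: drones mix maneuvers, raising entropy.
    mixed = [[("s0", "advance"), ("s1", "flank")],
             [("s0", "retreat"), ("s1", "advance")],
             [("s0", "advance"), ("s1", "retreat")],
             [("s0", "flank"), ("s1", "advance")]]
    print(policy_entropy(uniform), policy_entropy(mixed))  # 0.0 vs ~1.5 bits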

22 pages, 6314 KiB  
Article
Interdependent Autonomous Human–Machine Systems: The Complementarity of Fitness, Vulnerability and Evolution
by William F. Lawless
Entropy 2022, 24(9), 1308; https://doi.org/10.3390/e24091308 - 15 Sep 2022
Cited by 4 | Viewed by 2381
Abstract
For the science of autonomous human–machine systems, traditional causal-time interpretations of reality in known contexts are sufficient for rational decisions and actions to be taken, but not for uncertain or dynamic contexts, nor for building the best teams. First, unlike game theory where the contexts are constructed for players, or machine learning where contexts must be stable, when facing uncertainty or conflict, a rational process is insufficient for decisions or actions to be taken; second, as supported by the literature, rational explanations cannot disaggregate human–machine teams. In the first case, interdependent humans facing uncertainty spontaneously engage in debate over complementary tradeoffs in a search for the best path forward, characterized by maximum entropy production (MEP); however, in the second case, signified by a reduction in structural entropy production (SEP), interdependent team structures make it rationally impossible to discern what creates better teams. In our review of evidence for SEP–MEP complementarity for teams, we found that structural redundancy for top global oil producers, replicated for top global militaries, impedes interdependence and promotes corruption. Next, using UN data for Middle Eastern North African nations plus Israel, we found that a nation’s structure of education is significantly associated with MEP by the number of patents it produces; this conflicts with our earlier finding that a U.S. Air Force education in air combat maneuvering was not associated with the best performance in air combat, but air combat flight training was. These last two results exemplify that SEP–MEP interactions by the team’s best members are made by orthogonal contributions. We extend our theory to find that competition between teams hinges on vulnerability, a complementary excess of SEP and reduced MEP, which generalizes to autonomous human–machine systems.

Other

24 pages, 799 KiB  
Opinion
Artificial Intelligence (AI) Trust Framework and Maturity Model: Applying an Entropy Lens to Improve Security, Privacy, and Ethical AI
by Michael Mylrea and Nikki Robinson
Entropy 2023, 25(10), 1429; https://doi.org/10.3390/e25101429 - 9 Oct 2023
Cited by 10 | Viewed by 8089
Abstract
Recent advancements in artificial intelligence (AI) technology have raised concerns about ethical, moral, and legal safeguards. There is a pressing need to improve metrics for assessing security and privacy of AI systems and to manage AI technology in a more ethical manner. To address these challenges, an AI Trust Framework and Maturity Model is proposed to enhance trust in the design and management of AI systems. Trust in AI involves an agreed-upon understanding between humans and machines about system performance. The framework utilizes an “entropy lens” to root the study in information theory and enhance transparency and trust in “black box” AI systems, which lack ethical guardrails. High entropy in AI systems can decrease human trust, particularly in uncertain and competitive environments. The research draws inspiration from entropy studies to improve trust and performance in autonomous human–machine teams and systems, including interconnected elements in hierarchical systems. Applying this lens to improve trust in AI also highlights new opportunities to optimize performance in teams. Two use cases are described to validate the AI framework’s ability to measure trust in the design and management of AI systems.
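
One simple instance of an entropy lens on trust (our toy example, not the framework's actual metric): treat the Shannon entropy of a model's output distribution as inverse evidence for warranted trust.

    import numpy as np

    def predictive_entropy(probs):
        """Shannon entropy (bits) of a model's output distribution."""
        p = np.asarray(probs, dtype=float)
        p = p[p > 0]
        return float(-np.sum(p * np.log2(p)))

    def trust_score(probs, n_classes):
        """Map to [0, 1]: 1 = decisive (low entropy), 0 = maximal doubt."""
        return 1 - predictive_entropy(probs) / np.log2(n_classes)

    print(trust_score([0.97, 0.01, 0.01, 0.01], 4))  # decisive -> ~0.88
    print(trust_score([0.25, 0.25, 0.25, 0.25], 4))  # uninformative -> 0.0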
