Matrices Based on Descriptors for Analyzing the Interactions between Agents and Humans
Abstract
1. Introduction
2. Background
2.1. Main Existing Criteria (or Descriptors)
- Category 1: an increase in the value associated with the descriptor requires a cooperative situation;
- Category 2: a descriptor with a low value implies a situation of assistance (an intervention is recommended).
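The two categories can be captured by a simple decision rule. The sketch below is purely illustrative: the descriptor names follow Table "Category 1 | Category 2" later in this article, while the 0-to-1 scale and the threshold are hypothetical, not taken from the paper.

```python
# Illustrative mapping of descriptors to the two categories of Section 2.1.
# Descriptor names come from the paper's category table; the value scale and
# threshold are assumptions for the sake of the example.
CATEGORY_1 = {"criticality", "workload", "disability",
              "stochastic environment", "human errors"}
CATEGORY_2 = {"experience level", "privacy", "usability",
              "performance", "reliability"}


def suggests_intervention(descriptor, value, threshold=0.5):
    """Category 1: a HIGH descriptor value calls for a cooperative situation.
    Category 2: a LOW descriptor value implies a situation of assistance."""
    if descriptor in CATEGORY_1:
        return value >= threshold
    if descriptor in CATEGORY_2:
        return value <= threshold
    raise ValueError(f"unknown descriptor: {descriptor}")
```

For instance, a high criticality suggests intervening, while a high experience level suggests leaving the human alone.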
2.2. Methodology Principles
1. The different actors perceive the evolution of the environment and determine the level of criticality: for (in the same way, for );
2. Agent estimates the criticality from the point of view of (denoted ), which may differ from the one determined by (there is no reason why the equality should hold). In the following, we assume that the criticality (based on the evaluation of the environment) of the human and that of the agent are identical;
3. In the same way, estimates the workload and the experience level of each , denoted by . In the following, we assume that these two descriptors of the agent are identical to those of ;
4. Each agent builds the matrix ; we will come back to this below;
5. determines the Nash equilibrium for the matrix computed from the strategies, where is the strategy of Agent and is the strategy of any actor other than ;
6. We assume that there are exchanges between the different actors (for example, informative acts about the chosen strategies);
7. We also assume that there are exchanges of requests between the different actors, for example about the action to be carried out (the strategy that an actor would like another actor to select). Note that these last two phases form a cyclical process that should converge fairly quickly to a consensus;
8. The actors perform their respective actions (doing nothing is also an action), which take a certain time;
9. We assume that the workload and the experience level can be updated by . The most difficult problem is to consider an update of the two descriptors for . Depending on the selected strategy, the experience level may increase, depending on the success or failure of the action chosen by the players (a failure could also bring additional knowledge). Similarly, in the previous steps of estimating the two descriptors of , Agent could propose an estimation of the experience level according to the success/failure of , as well as of the strategy considered optimal. In the end, the updating of these two descriptors leads to their valuation at .
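The nine steps above can be sketched as a single interaction cycle. Everything in the sketch below is a hypothetical illustration of the methodology, not the authors' implementation: the payoff construction, the 0-to-1 descriptor scale, and the experience-update rule are all assumptions.

```python
def one_interaction_cycle(criticality, workload, experience):
    """Illustrative run of steps 1-9 for one agent A and one human H.

    Strategies are encoded as 0 = cooperate, 1 = defect. All payoff values
    and the +0.1 experience update are invented for the example.
    """
    # Steps 1-3: both actors perceive the environment; as assumed in the text,
    # the agent's estimates of the human's descriptors equal the real values.
    est_criticality, est_workload = criticality, workload

    # Step 4: build a 2x2 bimatrix (rows = A's strategy, cols = H's strategy).
    # Illustrative rule: high criticality favors A cooperating; high workload
    # makes H prefer to be assisted.
    agent_payoff = [[est_criticality] * 2, [1 - est_criticality] * 2]
    human_payoff = [[est_workload, 1 - est_workload],
                    [est_workload, 1 - est_workload]]

    # Step 5: pure-strategy Nash equilibria by best-response enumeration.
    eqs = [(i, j) for i in (0, 1) for j in (0, 1)
           if all(agent_payoff[i][j] >= agent_payoff[k][j] for k in (0, 1))
           and all(human_payoff[i][j] >= human_payoff[i][l] for l in (0, 1))]

    # Steps 6-8: exchanges then action; here we simply act on the first
    # equilibrium found (mutual defection as a fallback).
    chosen = eqs[0] if eqs else (1, 1)

    # Step 9: update the descriptors after the action (illustrative rule:
    # a successful cooperative episode slightly raises H's experience level).
    if chosen == (0, 0):
        experience = min(1.0, experience + 0.1)
    return chosen, experience
```

Under these assumed payoffs, a critical situation with an overloaded human yields mutual cooperation, while a calm situation yields mutual non-intervention.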
2.3. Hypothesis and Concepts of Equilibria for a Matrix Game
- : and , knowing that H cooperates, A has an interest in cooperating (for example, the task is complex enough that H felt the need to call on A and A detects an interest in cooperating with the human);
- : and , knowing that H is defecting (does not cooperate), A has an interest in cooperating (for example, the task is complex enough for A to feel an interest in cooperating with the human, even if the latter was acting individually);
- : and , knowing that H cooperates, A has an interest in not cooperating (for example, H felt the need to call on A but A does not consider the task complex enough and, occupied by other tasks, A does not detect an interest in cooperating with the human);
- : and , knowing that H is defecting (does not cooperate), it is in A’s interest not to cooperate (for example, H did not feel the need to call on A, and A does not consider the task complex enough to offer cooperation or assistance).
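The four cases above are exactly the best-response comparisons used to find the pure-strategy Nash equilibria of the 2x2 agent/human game. A minimal enumeration could look like the following (the payoff matrices used in the example are hypothetical):

```python
def pure_nash_2x2(agent, human):
    """Pure-strategy Nash equilibria of a 2x2 bimatrix game.

    agent[i][j] / human[i][j]: payoffs when A plays strategy i and H plays
    strategy j (0 = cooperate, 1 = defect). A profile is an equilibrium when
    neither player gains by deviating alone, i.e., the pairwise comparisons
    spelled out in the four cases above all hold.
    """
    return [(i, j)
            for i in (0, 1) for j in (0, 1)
            if agent[i][j] >= agent[1 - i][j]    # A has no better reply to j
            and human[i][j] >= human[i][1 - j]]  # H has no better reply to i
```

For instance, with prisoner's-dilemma-like payoffs, the only equilibrium is mutual defection.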
3. Building Two-Player Matrix Game
3.1. Representation of the Two-Player Matrix for Category 1
3.1.1. Building the Matrix Game for Category 1
- When the value of the descriptor is equal to for A, the agent should decide to intervene to assist the human. The strategy in this case would be a cooperative situation, denoted (or if this value is low for the human). A high value of the descriptor should therefore lead to a cooperative strategy on the part of the agent. By using the notations of Equation (1), we have: , , , and . Thus, having this inequality (a value greater than or equal to the fixed threshold), will be the chosen strategy for the agent A;
- Similarly, when the value of the descriptor is low for A, it is not in the interest of the agent to intervene. Strategies such as and are then necessary, to indicate its non-intervention. To obtain the strategy, the first inequality is satisfied as soon as the current value of the descriptor, for A, is less than (or equal to) the fixed threshold. The second inequality must also be satisfied. If we consider the current value (), this will be verified for the operation where (H considers the descriptor to be of relative importance). Strategy supposes the satisfaction of , with and (H partially considers the descriptor).
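Since the gain values of Equation (1) are not reproduced here, the sketch below uses hypothetical gains; it only illustrates the thresholding logic of Section 3.1.1, where a descriptor at or above the fixed threshold makes the cooperative strategy A's best reply.

```python
def category1_agent_payoffs(value, threshold, importance_h=1.0):
    """Illustrative Category 1 payoff builder for agent A (gains are invented).

    value:        current descriptor value for A (e.g. criticality in [0, 1])
    threshold:    fixed threshold at or above which A should intervene
    importance_h: assumed weight in [0, 1] for how much H considers the
                  descriptor (standing in for the weights of Section 3.1.1)

    Returns a 2x2 matrix agent[i][j], i = A's strategy, j = H's strategy
    (0 = cooperate, 1 = defect), built so that cooperation dominates for A
    exactly when value >= threshold.
    """
    coop_gain = 2.0 if value >= threshold else 0.0  # high value: cooperate
    defect_gain = 1.0 * importance_h                # scaled by H's view
    return [[coop_gain, coop_gain], [defect_gain, defect_gain]]
```

With these assumed gains, cooperation is A's best reply above the threshold and non-intervention below it, matching the two bullets above.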
3.1.2. Illustration of the Criticality and Workload Descriptors
3.2. Representation of the Two-Player Matrix for Category 2
3.2.1. Building the Matrix Game for Category 2
- When the value of the descriptor is low, the assistant agent may have an interest in intervening (strategies or ), before the global system deteriorates. To respect these constraints, one solution would be to swap the values proposed in Section 3.1.1. Let us then take , , and ;
- When the value of the descriptor tends towards , the agent will select a non-intervention action. The strategies in this case would be and . Similarly, the permutation of values proposed in Section 3.1.1 follows the same analysis. For and , several values are then possible; for example, with two values: and .
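For Category 2, the construction inverts the Category 1 logic: a low descriptor value now favors intervention. A standalone sketch of that permutation, again with hypothetical gains:

```python
def category2_agent_payoffs(value, threshold):
    """Illustrative Category 2 payoffs for agent A (gains are invented).

    A LOW descriptor value (e.g. experience level) makes intervention A's
    best reply; a value tending towards the maximum leads to the
    non-intervention strategies. This simply swaps the roles that the two
    strategies play in the Category 1 construction of Section 3.1.1.
    """
    coop_gain = 2.0 if value <= threshold else 0.0  # low value: assist
    defect_gain = 1.0
    return [[coop_gain, coop_gain], [defect_gain, defect_gain]]
```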
3.2.2. Illustration of Experience Level
- For Category 1, we notice that the agent cooperates when it evaluates the descriptor characterizing the situation more strictly than the human does. Conversely, the agent finds it more useful not to cooperate, even if the human asks for it, when it judges the descriptor weaker than the human does. For example, if the agent judges the situation as critical, it will offer assistance whatever the human says.
- For Category 2, we note that the agent cooperates when it evaluates the descriptor describing the situation as weaker than the human does. Conversely, the agent thinks it is more useful not to cooperate, even if the human asks for it, when it judges the descriptor more strictly than the human does. For example, if the agent judges the user to be very inexperienced, it will offer assistance whatever the human says.
3.3. Combination of Descriptors for the Two-Player Matrix
3.4. Illustration with Combinations of Two and Three Descriptors
3.4.1. Combination of Two Descriptors of the Same Category
3.4.2. Combination of Two Descriptors from Different Categories
3.4.3. Illustration with a Combination of Three Descriptors
4. Generalization for Agents and Humans
4.1. Centralized Approach for the Decision of Agents
4.1.1. Determination of Gains for Assistant Agents
- : Number of agents wishing to intervene
- : Number of agents not wishing to intervene
- : Number of humans wishing to cooperate with agents
- : Number of humans not wishing to be assisted by agents
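The four counts can be read directly off a joint strategy profile. A minimal sketch, where the "C"/"D" encoding of strategies is an assumption rather than the paper's notation:

```python
def profile_counts(agent_strategies, human_strategies):
    """Count, for a joint profile, how many agents wish to intervene and how
    many humans wish to be assisted ('C' = cooperate, 'D' = defect)."""
    n_coop_agents = sum(s == "C" for s in agent_strategies)
    n_defect_agents = len(agent_strategies) - n_coop_agents
    n_coop_humans = sum(s == "C" for s in human_strategies)
    n_defect_humans = len(human_strategies) - n_coop_humans
    return n_coop_agents, n_defect_agents, n_coop_humans, n_defect_humans
```

These counts are the quantities from which the centralized gains of the assistant agents are then derived.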
4.1.2. Determination of Gains for Human Beings
4.2. Distributed Approach for the Decision of Each Agent
4.2.1. Determination of Gains for Assistant Agents
4.2.2. Determination of Gains for Human Beings
5. Case Study: Scenario Based on Two Agents and Two Humans
5.1. Description of Scenario
5.2. Determination of the Initial Two-Player Matrix According to the Three Predefined Descriptors
5.3. Building the Centralized Matrix for Two Agents and Two Humans
5.4. Building the Distributed Matrix According to the Point of View of Each Agent
6. Discussion
7. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
| Category 1 | Category 2 |
| --- | --- |
| Criticality | Experience level |
| Workload | Privacy |
| Disability | Usability |
| Stochastic environment | Performance |
| Human errors | Reliability of the system |
| Configuration | Workload | Experience Level | Criticality | |
| --- | --- | --- | --- | --- |
| 1 | 1 | 1 | 1 | 1 |
| 2 | 4 | 3 | 1 | 5 |
| 2 | 4 | 3 | 2 | 1 |
| 2 | 2 | 3 | 2 | 3 |
| Configuration | Workload | Experience Level | Criticality | | Nash Equilibria |
| --- | --- | --- | --- | --- | --- |
| 1 | 1 | 1 | 1 | 1 | , , , , , |
| 2 | 4 | 3 | 1 | 5 | |
| 2 | 4 | 3 | 2 | 1 | , |
| 2 | 2 | 3 | 2 | 3 | |
| Configuration | Workload | Experience Level | Criticality | | Nash Equilibria |
| --- | --- | --- | --- | --- | --- |
| 1 | 1 | 1 | 1 | 1 | , , , , , |
| 2 | 4 | 3 | 1 | 5 | |
| 2 | 4 | 3 | 2 | 1 | , |
| 2 | 2 | 3 | 2 | 3 | , |
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Adam, E.; Razakatiana, M.; Mandiau, R.; Kolski, C. Matrices Based on Descriptors for Analyzing the Interactions between Agents and Humans. Information 2023, 14, 313. https://doi.org/10.3390/info14060313