The Role of Trust in Dependence Networks: A Case Study
Abstract
1. Introduction
- Firstly, it advances our understanding of the dynamics of collaboration in a hybrid society, where cognitive artificial and human agents interact.
- Secondly, it empirically explores the intricate interplay between dependence and trust in this context.
2. Related Work
2.1. Trust
2.2. Dependence Network
3. Agents and Social Dependence
4. Practical Formulation of the Model
4.1. The Blocks
- Stacks can consist of up to three elements.
- A light element can have at most one light element on top.
- A heavy element can have up to two elements of any kind on top. It is evident that, due to the previous constraint, a combination of heavy–light–heavy blocks cannot be realized.
- Cones and spheres cannot have other blocks on top of them.
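The stacking constraints above can be expressed as a single validity check. The following is a minimal sketch, not the paper's implementation: blocks are modeled as `(shape, weight)` tuples and a stack as a bottom-first list, both of which are assumptions made here for illustration.

```python
def can_place(stack, block):
    """Check whether `block` may be placed on top of `stack` (bottom-first list).

    Encodes the rules above: max height three; cones and spheres support
    nothing; a light element supports at most one element, which must be
    light; a heavy element supports at most two elements of any kind.
    """
    if len(stack) >= 3:                      # stacks hold at most three elements
        return False
    if stack and stack[-1][0] in ("cone", "sphere"):
        return False                         # nothing goes on cones or spheres
    candidate = stack + [block]
    for i, (_shape, weight) in enumerate(candidate[:-1]):
        above = candidate[i + 1:]
        if weight == "light":
            # at most one element on top, and it must be light
            if len(above) > 1 or any(w != "light" for _s, w in above):
                return False
        else:  # heavy
            if len(above) > 2:
                return False
    return True
```

Note that the heavy–light–heavy case is rejected automatically: the light middle block would have a heavy element on top, violating the second rule.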
4.2. The Agents
- A goal: A specific combination of blocks that the agent aims to achieve in the world. This configuration consists of a series of more or less articulated sub-goals.
- A set of plans to achieve its goal/sub-goals (ranging from 0 to n): In the absence of plans, the agent establishes a dependence on someone else for obtaining a plan.
- A defined level of competence, indicating the agent’s ability to perform certain tasks.
- Membership category: We considered two categories—human or artificial agents. The category influences the characteristics of the agent. For instance, we assume that humans can manipulate cylinders and cones, while robots can manipulate cubes and spheres.
- Resources (blocks): Initially, each agent possesses five blocks.
- Beliefs about themselves, the world, and others. Agents’ entire perception of the world, processing, and planning are based on beliefs, thus reflecting their individual interpretation of reality. These beliefs can be more or less accurate or even absent.
- A threshold that determines how trustworthy potential partners must be for the agent to consider their dependence usable. This threshold value, specific to each agent, is used to verify if the partner is capable of performing certain actions. Nonetheless, there remains a certain probability of error.
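The attributes listed above can be summarized in a single data structure. This is an illustrative sketch under assumed field names, not the paper's actual implementation; only the attributes themselves come from the text.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    """Sketch of an agent as characterized above (field names are assumptions)."""
    category: str                  # "human" or "artificial"
    competence: float              # ability to perform certain tasks, in [0, 1]
    trust_threshold: float         # minimum trustworthiness to use a dependence
    blocks: list = field(default_factory=list)   # resources (five blocks at start)
    goal: list = field(default_factory=list)     # target block configuration
    plans: list = field(default_factory=list)    # 0..n plans; empty -> plan dependence
    beliefs: dict = field(default_factory=dict)  # about self, the world, and others

    def considers_usable(self, partner_trustworthiness: float) -> bool:
        """A dependence on a partner is usable only above the agent's threshold."""
        return partner_trustworthiness >= self.trust_threshold
```

As the text notes, the threshold only gates the agent's willingness to rely on a partner; it does not eliminate the residual probability that the partner fails.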
4.3. Goals and Plans
- Atomic: For example, moving a single block onto the table. This kind of task is useful for modeling the presence of simpler tasks in the world that do not require complex planning skills and do not need to be performed in multiple steps.
- Complex: Creating a stack, which is an ordered sequence of blocks. Constructing a stack introduces the requirement to perform a series of actions in a specific sequence (effectively a plan) to achieve a single sub-goal. Accomplishing only part of it is insufficient; all actions must be executed.
- There exists a set of unused blocks that, when properly used, satisfies it.
- There is someone (either Agent A1 or an Agent A2 dependent on A1) who can potentially move these blocks.
- The plan is physically achievable, meaning that it adheres to the rules governing block composition as defined in Section 4.1.
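The three achievability conditions above combine into one predicate. A minimal sketch, with assumed parameter names; the rule check is passed in as a callable standing for the composition rules of Section 4.1.

```python
def subgoal_achievable(required_blocks, unused_blocks, capable_agents, respects_rules):
    """Illustrative predicate for the three conditions above.

    required_blocks : blocks the sub-goal needs
    unused_blocks   : blocks currently unused in the world
    capable_agents  : agents (A1 itself, or some A2 dependent on A1) able to move them
    respects_rules  : callable applying the block-composition rules of Section 4.1
    """
    blocks_available = all(b in unused_blocks for b in required_blocks)  # condition 1
    mover_exists = len(capable_agents) > 0                               # condition 2
    return blocks_available and mover_exists and respects_rules(required_blocks)  # condition 3
```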
4.4. Agents’ Trust and Trustworthiness
- Obtaining a plan.
- Acquisition of a block.
- Repositioning of a block.
4.5. Beliefs
- Their own goals;
- Their own abilities;
- Their own plans;
- The blocks that exist in the world;
- Who the owners of the blocks are;
- The other agents that exist in the world;
- The goals of the other agents;
- The abilities of the other agents;
- The plans of the other agents (they know which ones they possess, but not how these plans are articulated);
- Dependencies on actions;
- Dependencies on resources (blocks);
- Dependencies on plans.
4.6. The Blackboard
4.7. Workflow
- The agent establishes how to proceed in obtaining the required elements, following internal priority criteria.
- It checks previous requests, updating its subjective view of the dependence network, and verifies on the blackboard whether any of the agents having an active dependence on it can provide the needed resource. Where this is the case, a mutual dependence becomes explicit, and the agent attempts a negotiation to formalize the exchange. If a partner is found, both requests are executed. If no partner is found, the agent posts its request and the goal it is pursuing on the blackboard, then waits for a future mutual dependence.
- Abstraction level of the sub-goal: It will prioritize less abstract sub-goals since, as the need for the sub-goal becomes more specific, the availability of blocks in the world that can satisfy this request becomes more restricted.
- Reasoning about others’ goals: Starting from its knowledge of other agents’ goals, an agent estimates which of the blocks it needs are most likely to be used by other agents. It might even find it better to take possession of the final block of a stack before the base has been constructed (a typical market problem: supply/demand dynamics).
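The two prioritization criteria above can be sketched as a sort key. The field names (`abstraction`, `block`) and the contention measure are assumptions made here for illustration, not the paper's implementation.

```python
def prioritize_subgoals(subgoals, other_agents_goals):
    """Order an agent's pending sub-goals by the two criteria above:

    - lower abstraction level first, since more specific requests match
      fewer blocks in the world;
    - among equals, more contested blocks first (supply/demand reasoning
      about which blocks other agents are likely to claim).
    """
    def contention(subgoal):
        # how many other agents' goals mention this sub-goal's block
        return sum(1 for goal in other_agents_goals if subgoal["block"] in goal)

    return sorted(subgoals, key=lambda sg: (sg["abstraction"], -contention(sg)))
```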
5. Simulation Experiments
Comparison Metric
- Successfully placing a block of interest on the table is worth one point;
- Successfully completing a stack earns an additional point;
- Successfully completing all goals grants an additional point.
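The comparison metric above reduces to a simple sum. A minimal sketch with assumed parameter names:

```python
def agent_score(blocks_placed, stacks_completed, all_goals_done):
    """Comparison metric: one point per block of interest placed on the table,
    one extra point per completed stack, one extra point if every goal is achieved."""
    return blocks_placed + stacks_completed + (1 if all_goals_done else 0)
```

For example, an agent that placed three blocks and completed one stack without finishing all its goals scores 4.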
6. Results
6.1. First Simulation
- Number of human agents: 3;
- Number of artificial agents: 3;
- Three blocks per agent;
- Agents’ competence randomly assigned in the range [0, 1];
- Threshold randomly assigned among the values 0.25, 0.5, and 0.75.
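The randomized parameters of this setup can be drawn as follows. This is a sketch under the assumption that the [0, 1] attribute is the agents' competence; the function name is illustrative.

```python
import random

def sample_agent_parameters(rng=random):
    """Draw one agent's randomized parameters for the first simulation:
    competence uniform in [0, 1], threshold from {0.25, 0.5, 0.75}."""
    competence = rng.uniform(0.0, 1.0)
    threshold = rng.choice([0.25, 0.5, 0.75])
    return competence, threshold
```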
6.2. Second Simulation
6.3. Third Simulation
7. Discussion
8. Conclusions and Future Directions
- Dependence networks have a clear impact on agents’ performance.
- A complex relationship with the concept of trust is established, where it is not always better to preclude interaction with less reliable partners.
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
Abbreviations

| BDI | Beliefs, desires, intentions |
|---|---|
References
| Threshold | Average Score | Completed Tasks | Percentage of Delegated Tasks | Success Rate of Delegated Tasks |
|---|---|---|---|---|
| 0.25 | 1.39 | 13.31 | 0.32 | 0.53 |
| 0.5 | 1.35 | 13.0 | 0.18 | 0.68 |
| 0.75 | 1.27 | 12.64 | 0.06 | 0.76 |
| Threshold | Average Score | Completed Tasks | Percentage of Delegated Tasks | Success Rate of Delegated Tasks |
|---|---|---|---|---|
| 0.25 | 1.21 | 12.2 | 0.13 | 0.52 |
| 0.5 | 1.22 | 12.38 | 0.09 | 0.65 |
| 0.75 | 1.25 | 12.5 | 0.04 | 0.68 |
| Threshold | Average Score | Completed Tasks | Percentage of Delegated Tasks | Success Rate of Delegated Tasks |
|---|---|---|---|---|
| 0.25 | 1.55 | 8.45 | 0.22 | 0.63 |
| 0.75 | 1.57 | 8.8 | 0.07 | 0.78 |
Share and Cite
Falcone, R.; Sapienza, A. The Role of Trust in Dependence Networks: A Case Study. Information 2023, 14, 652. https://doi.org/10.3390/info14120652