Exploring Motivators for Trust in the Dichotomy of Human—AI Trust Dynamics
Abstract
1. Introduction
1.1. Background
1.2. Research Purpose
2. Literature Review
2.1. Effects of AI Trust
2.2. Agreement on AI
2.3. Integrity and AI
2.4. Ethics and AI
2.5. AI Customisation
2.6. Trust in AI Customisation
3. Materials and Methods
- Opinions were gathered from all parties involved, along with their underlying assumptions and influences. This diversity of data is what makes the method distinctive, and it was crucial for understanding both faith in and scepticism about AI.
- Engaging a broader range of participants ensured that the research captured not only participants’ direct contact with AI but also the ways their opinions are shaped by advertising, societal beliefs, and personal values.
- The questionnaire’s extensive range of questions enabled the identification of multiple themes, and the larger sample size made it possible to determine the degree to which different groups trust AI. These patterns can be examined across a variety of variables, including age, education level, cultural background, and prior experience with technology.
- The flexibility of comparative research is another benefit of this method. Expressed trust ranged from complete sceptics to strong proponents of AI, and this kind of comparison helps explain why certain groups or individuals appear to trust AI more than others.
- Finally, this method maximises the generalisability of findings across groups by increasing the sample size, which is particularly important for research aiming to understand public attitudes. While the research could have focused solely on those who trust AI in order to understand their motivators, including the broader public gives a more thorough insight into society’s perceptions of AI, including why some people are hesitant to trust it. This comprehensive approach is critical for gaining a holistic understanding of public confidence in AI technology.
3.1. Sampling
3.2. Participant Selection
3.3. Data Collection
3.4. Model
3.5. Explanation of Factors
4. Results and Discussion
- As a machine, it has no interests of its own (41% of responses);
- It can neutrally evaluate situations (61% of responses);
- AI will be smarter than humans (38% of responses);
- It combines all available knowledge, unlike individual humans, who use their knowledge only partially and largely for their own benefit (21% of responses).
- Humans have self-interest that they prioritise (49% of responses);
- Humans lie most of the time (37% of responses);
- One cannot trust the media anymore (52% of responses).
- Trust Level 1 (Full Trust in AI)
  1. Proportion: 0.235, indicating that 23.5% of participants place full trust in AI over humans.
  2. BF10: 4.568 × 10²⁷, extremely strong evidence against the null hypothesis (that the true proportion is 0.5). Full trust is significantly less common than the neutral expectation but notably strong among certain segments of the sample.
- Trust Level 2 (High Trust in AI)
  1. Proportion: 0.439, where 43.9% of respondents show high trust in AI.
  2. BF10: 1.687, only weak (anecdotal) evidence against the null hypothesis: high trust in AI, while not reaching the neutral expectation of 0.5, is relatively substantial.
- Trust Level 5 (Low Trust in AI)
  1. Proportion: 0.038; only 3.8% of participants display a low level of trust in AI.
  2. BF10: 4.699 × 10¹⁰², extraordinarily strong evidence against the null hypothesis: very few respondents have low trust in AI, significantly fewer than would be expected if opinions were neutral.
- Trust Level 6 (No Trust in AI)
  1. Proportion: 0.288, indicating that 28.8% of respondents have no trust in AI at all.
  2. BF10: 7.229 × 10¹⁶, very strong evidence against the null hypothesis: a significant minority of the population explicitly distrusts AI relative to humans, more than expected under a neutral scenario.
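These values are consistent with a default Bayesian binomial test of each observed proportion against the point null of 0.5, assuming a uniform Beta(1, 1) prior on the proportion under the alternative (a common default, e.g., in JASP; the exact prior is our assumption). A minimal Python sketch under that assumption, which reproduces the reported Bayes factors up to rounding (the helper name `bf10_binomial` is ours, introduced for illustration):

```python
import numpy as np
from scipy.special import betaln

def bf10_binomial(k: int, n: int, p0: float = 0.5, a: float = 1.0, b: float = 1.0) -> float:
    """Bayes factor BF10 for a binomial test of H0: theta = p0 against
    H1: theta ~ Beta(a, b); the binomial coefficient cancels in the ratio."""
    log_m1 = betaln(k + a, n - k + b) - betaln(a, b)   # log marginal likelihood under H1
    log_m0 = k * np.log(p0) + (n - k) * np.log1p(-p0)  # log likelihood under H0
    return float(np.exp(log_m1 - log_m0))

# Counts per trust level, taken from the results table (n = 451 respondents)
counts = {1: 106, 2: 198, 5: 17, 6: 130}
for level, k in counts.items():
    print(f"Level {level}: proportion = {k / 451:.3f}, BF10 = {bf10_binomial(k, 451):.3e}")
```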
5. Conclusions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Appendix A
1. What is your age?
   - 18–24
   - 25–34
   - 35–44
   - 45–54
   - 55–64
   - Above 64
2. What is your gender?
   - Female
   - Male
   - Other (please specify)
3. What is your education?
   - High school (freshmen)
   - Trade/vocational/technical
   - Bachelor’s
   - Master’s
   - Doctorate
4. General Trust in Technology:
   - How much do you trust technology to improve your quality of life? (1 = No trust at all, 6 = Complete trust)
   - To what extent do you believe technology has a positive impact on society? (1 = Strongly disagree, 6 = Strongly agree)
5. Specific Trust in AI:
   - What aspects of AI do you find most trustworthy? (Open-ended)
   - Rate your trust in AI to make decisions without human intervention. (1 = No trust at all, 6 = Complete trust)
6. Comparative Trust (AI vs. Humans):
   - In what situations do you trust AI more than human judgment? (Open-ended)
   - Do you perceive AI as more objective than human decision-making? (1 = Strongly disagree, 6 = Strongly agree)
7. Sociocultural Influences:
   - Does your cultural background influence your trust in AI? (1 = Not at all, 6 = Significantly)
   - How do societal norms and values shape your perception of AI? (Open-ended)
8. Psychological Aspects of Trust:
   - Do personal experiences with technology affect your trust in AI? (1 = Not at all, 6 = Significantly)
   - To what degree do media portrayals of AI impact your trust in it? (1 = No impact, 6 = Major impact)
9. Risk Perception:
   - What are your primary concerns about trusting AI? (Open-ended)
   - Rate your level of concern about potential misuse of AI. (1 = No concern, 6 = Extremely concerned)
10. Perceived Benefits of AI:
    - What benefits of AI do you think contribute to its trustworthiness? (Open-ended)
    - How does the potential of AI in solving complex problems influence your trust in it? (1 = Not at all, 6 = Significantly)
11. Ethical Considerations:
    - How do ethical considerations around AI affect your trust in it? (1 = No effect, 6 = Significant effect)
    - Rate your agreement: “AI can be trusted to act ethically”. (1 = Strongly disagree, 6 = Strongly agree)
12. Future Orientation:
    - How optimistic are you about the future developments of AI? (1 = Very pessimistic, 6 = Very optimistic)
    - What is your perception of the long-term implications of trusting AI? (Open-ended)
13. Experiences with Standard Computing Tools:
    - Describe your level of trust in standard computing tools (e.g., software) for accuracy and reliability. (1 = No trust at all, 6 = Complete trust)
    - Have your experiences with standard computing tools influenced your trust in AI? Please explain. (Open-ended)
14. Perception of Neutrality and Bias in AI:
    - To what extent do you believe AI is free from biases compared to human beings? (1 = Strongly disagree, 6 = Strongly agree)
    - In your opinion, how does the perceived neutrality of computers influence your trust in AI? (Open-ended)
15. Influence of Marketing and Influencers:
    - How do marketing and the role of influencers affect your trust in human statements? (1 = No effect, 6 = Significant effect)
    - Compare your trust in information disseminated by AI vs. that shared by human influencers. (Open-ended)
16. Political Factors and Trust in Human Statements:
    - Rate your level of trust in statements made by political figures. (1 = No trust at all, 6 = Complete trust)
    - How do political factors influence your trust in human statements versus statements made by AI? (Open-ended)
17. Projection of Computer Experiences onto AI:
    - To what degree do you think your experiences with computers (like using Excel) shape your expectations and trust in AI? (1 = Not at all, 6 = Significantly)
    - In what ways do you differentiate between your experiences with traditional software and AI systems? (Open-ended)
18. Mistrust in Human Statements:
    - What are the main reasons for your mistrust in human statements (if any)? (Open-ended)
    - Compare your trust in data or information provided by AI systems versus human sources. (1 = Always trust AI more, 6 = Always trust humans more)
References
| Main Factor | Perception | Experiences | Messaging | Data Analytics |
|---|---|---|---|---|
| Subfactor 1 | Quality of Life | Personal Experiences | Marketing | Tools |
| Subfactor 2 | Positive Impact | Media | Influencers | Data |
| Subfactor 3 | Decision Approach | Misuse | Political | Objectivity |
| Subfactor 4 | Potential | Ethical | Office bearers | Biases |
| Subfactor 5 | Futuristic | Cultural | | |
| FIT | GFI | SRMR |
|---|---|---|
| 0.655 | 0.972 | 0.115 |
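SRMR summarises the average discrepancy between the observed correlations and those implied by the fitted model, so a value of 0.115 corresponds to residual correlations of roughly 0.1 in magnitude on average. A minimal sketch of that computation, using hypothetical matrices rather than the study’s data:

```python
import numpy as np

def srmr(observed: np.ndarray, implied: np.ndarray) -> float:
    """Standardized root mean square residual: RMS of the differences between
    observed and model-implied correlations (off-diagonal lower triangle)."""
    rows, cols = np.tril_indices_from(observed, k=-1)
    residuals = observed[rows, cols] - implied[rows, cols]
    return float(np.sqrt(np.mean(residuals ** 2)))

# Hypothetical 3-variable example, not the study's data
observed = np.array([[1.00, 0.62, 0.48],
                     [0.62, 1.00, 0.55],
                     [0.48, 0.55, 1.00]])
implied = np.array([[1.00, 0.58, 0.52],
                    [0.58, 1.00, 0.50],
                    [0.52, 0.50, 1.00]])
print(f"SRMR = {srmr(observed, implied):.3f}")  # ≈ 0.044
```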
| | Perception | Experiential | Messaging | Analytics |
|---|---|---|---|---|
| Perception | 1 | 0.782 | −0.911 | 0.96 |
| Experiential | 0.782 | 1 | −0.849 | 0.826 |
| Messaging | −0.911 | −0.849 | 1 | −0.964 |
| Analytics | 0.96 | 0.826 | −0.964 | 1 |
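A component correlation matrix of this kind is computed directly from participant-level component scores. A short illustration with simulated scores (purely hypothetical, not the study’s data):

```python
import numpy as np

rng = np.random.default_rng(42)
# Simulated component scores for 451 respondents on 4 components (illustrative only)
scores = rng.standard_normal((451, 4))
components = ["Perception", "Experiential", "Messaging", "Analytics"]

corr = np.corrcoef(scores, rowvar=False)  # 4 x 4 Pearson correlation matrix
for name, row in zip(components, corr):
    print(f"{name:<12}", np.round(row, 3))
```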
| | Level | Counts | Total | Proportion | BF10 |
|---|---|---|---|---|---|
| Trust more AI than Humans | 1 | 106 | 451 | 0.235 | 4.568 × 10²⁷ |
| | 2 | 198 | 451 | 0.439 | 1.687 |
| | 5 | 17 | 451 | 0.038 | 4.699 × 10¹⁰² |
| | 6 | 130 | 451 | 0.288 | 7.229 × 10¹⁶ |
Pearson’s Correlations

| Variable | | Trust AI | Distrust Human |
|---|---|---|---|
| 1. Trust AI | Pearson’s r | — | |
| | p-value | — | |
| 2. Distrust Human | Pearson’s r | 0.960 | — |
| | p-value | <0.001 | — |
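The reported r = 0.960 between trust in AI and distrust in humans is an ordinary pairwise Pearson test. A sketch with simulated ratings (not the study’s data; the strong coupling is built in purely to illustrate the call):

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(7)
# Illustrative paired 6-point ratings for 451 respondents (simulated)
trust_ai = rng.integers(1, 7, size=451).astype(float)
distrust_human = trust_ai + rng.normal(0, 0.5, size=451)  # coupled by construction

r, p = pearsonr(trust_ai, distrust_human)
print(f"Pearson's r = {r:.3f}, p = {p:.3g}")
```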