Ethical AI in Financial Inclusion: The Role of Algorithmic Fairness on User Satisfaction and Recommendation
Abstract
1. Introduction
2. Theoretical Background
2.1. AI in Financial Inclusion
2.2. Organizational Justice Theory
2.3. Heuristics–Systematic Model
2.4. Perceived Algorithmic Fairness
3. Hypotheses Development
3.1. Ethical Considerations and Users’ Perceived Algorithmic Fairness
3.2. Perceived Algorithmic Fairness, Users’ Satisfaction, and Recommendation
3.3. Satisfaction with AI-Driven Financial Inclusion and Recommendation
4. Research Methodology and Design
4.1. Questionnaire Design and Measurements
4.2. Sampling and Data Collection
5. Data Analysis and Results
5.1. Measurement Model
5.2. Structural Model
5.3. Mediating Effect of Perceived Algorithmic Fairness between Ethical Considerations, Users’ Satisfaction, and Recommendation
6. Discussion and Implications for Research and Practice
6.1. Discussion of Key Findings
6.2. Implications for Research
6.3. Implications for Practice
7. Limitations and Future Research Directions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
Appendix A. Measurement Items
Constructs | Measurements | Source(s)
---|---|---
Algorithm Transparency | The criteria and evaluation processes of AI-driven financial inclusion services are publicly disclosed and easily understandable to users. | Shin (2021) [52]; Liu and Sun (2024) [29]
 | The AI-driven financial inclusion services provide clear explanations for their decisions and outputs that are comprehensible to affected users. |
 | The AI-driven financial inclusion services provide insight into how their internal processes lead to specific outcomes or decisions. |
Algorithm Accountability | The AI-driven financial inclusion services have a dedicated department responsible for monitoring, auditing, and ensuring the accountability of their algorithmic systems. | Liu and Sun (2024) [29]
 | The AI-driven financial inclusion services are subject to regular audits and oversight by independent third-party entities, such as market regulators and relevant authorities. |
 | The AI-driven financial inclusion services have established clear mechanisms for detecting, addressing, and reporting any biases or errors in their algorithmic decision-making processes. |
Algorithm Legitimacy | I believe that the AI-driven financial inclusion services align with industry standards and societal expectations for fair and inclusive financial practices. | Shin (2021) [52]
 | I believe that the AI-driven financial inclusion services comply with relevant financial regulations, data protection laws, and ethical guidelines for AI use in finance. |
 | I believe that the AI-driven financial inclusion services operate in an ethical manner, promoting fair access to financial services without bias or discrimination. |
Perceived Algorithmic Fairness | I believe the AI-driven financial inclusion services treat all users equally and do not discriminate based on personal characteristics unrelated to financial factors. | Shin (2021) [52]; Liu and Sun (2024) [29]
 | I trust that the AI-driven financial inclusion services use reliable and unbiased data sources to make fair decisions. |
 | I believe the AI-driven financial inclusion services make impartial decisions without prejudice or favoritism. |
Satisfaction with AI-Driven Financial Inclusion | Overall, I am satisfied with the AI-driven financial inclusion services I have experienced. | Shin and Park (2019) [42]
 | The AI-driven financial inclusion services meet or exceed my expectations in terms of accessibility, efficiency, and fairness. |
 | I am pleased with the range and quality of services provided through AI-driven financial inclusion platforms. |
Recommendation of AI-Driven Financial Inclusion | I will speak positively about the benefits and features of AI-driven financial inclusion services to others. | Mukerjee (2020) [53]
 | I would recommend AI-driven financial inclusion services to someone seeking my advice on financial services. |
 | I will encourage my friends, family, and colleagues to consider using AI-driven financial inclusion services. |
References
- Mhlanga, D. Industry 4.0 in finance: The impact of artificial intelligence (AI) on digital financial inclusion. Int. J. Financ. Stud. 2020, 8, 45. [Google Scholar] [CrossRef]
- Shin, D. User perceptions of algorithmic decisions in the personalized AI system: Perceptual evaluation of fairness, accountability, transparency, and explainability. J. Broadcast. Electron. Media 2020, 64, 541–565. [Google Scholar] [CrossRef]
- Martin, K.; Waldman, A. Are algorithmic decisions legitimate? The effect of process and outcomes on perceptions of legitimacy of AI decisions. J. Bus. Ethics 2023, 183, 653–670. [Google Scholar] [CrossRef]
- Colquitt, J.A. On the dimensionality of organizational justice: A construct validation of a measure. J. Appl. Psychol. 2001, 86, 386–400. [Google Scholar] [CrossRef] [PubMed]
- Todorov, A.; Chaiken, S.; Henderson, M.D. The heuristic-systematic model of social information processing. In The Persuasion Handbook: Developments in Theory and Practice; Dillard, J.P., Pfau, M., Eds.; Sage: Thousand Oaks, CA, USA, 2002; pp. 195–211. [Google Scholar] [CrossRef]
- Jejeniwa, T.O.; Mhlongo, N.Z.; Jejeniwa, T.O. AI solutions for developmental economics: Opportunities and challenges in financial inclusion and poverty alleviation. Int. J. Adv. Econ. 2024, 6, 108–123. [Google Scholar] [CrossRef]
- Uzougbo, N.S.; Ikegwu, C.G.; Adewusi, A.O. Legal accountability and ethical considerations of AI in financial services. GSC Adv. Res. Rev. 2024, 19, 130–142. [Google Scholar] [CrossRef]
- Yasir, A.; Ahmad, A.; Abbas, S.; Inairat, M.; Al-Kassem, A.H.; Rasool, A. How Artificial Intelligence Is Promoting Financial Inclusion? A Study on Barriers of Financial Inclusion. In Proceedings of the 2022 International Conference on Business Analytics for Technology and Security (ICBATS), Dubai, United Arab Emirates, 16 February 2022; pp. 1–6. [Google Scholar]
- Kshetri, N. The role of artificial intelligence in promoting financial inclusion in developing countries. J. Glob. Inf. Technol. Manag. 2021, 24, 1–6. [Google Scholar] [CrossRef]
- Max, R.; Kriebitz, A.; Von Websky, C. Ethical considerations about the implications of artificial intelligence in finance. In Handbook on Ethics in Finance; Springer: Cham, Switzerland, 2021; pp. 577–592. [Google Scholar] [CrossRef]
- Aldboush, H.H.; Ferdous, M. Building Trust in Fintech: An Analysis of Ethical and Privacy Considerations in the Intersection of Big Data, AI, and Customer Trust. Int. J. Financ. Stud. 2023, 11, 90. [Google Scholar] [CrossRef]
- Telukdarie, A.; Mungar, A. The impact of digital financial technology on accelerating financial inclusion in developing economies. Procedia Comput. Sci. 2023, 217, 670–678. [Google Scholar] [CrossRef]
- Ozili, P.K. Financial inclusion, sustainability and sustainable development. In Smart Analytics, Artificial Intelligence and Sustainable Performance Management in a Global Digitalised Economy; Springer: Cham, Switzerland, 2023; pp. 233–241. [Google Scholar] [CrossRef]
- Lee, C.C.; Lou, R.; Wang, F. Digital financial inclusion and poverty alleviation: Evidence from the sustainable development of China. Econ. Anal. Policy 2023, 77, 418–434. [Google Scholar] [CrossRef]
- Adeoye, O.B.; Addy, W.A.; Ajayi-Nifise, A.O.; Odeyemi, O.; Okoye, C.C.; Ofodile, O.C. Leveraging AI and data analytics for enhancing financial inclusion in developing economies. Financ. Account. Res. J. 2024, 6, 288–303. [Google Scholar] [CrossRef]
- Owolabi, O.S.; Uche, P.C.; Adeniken, N.T.; Ihejirika, C.; Islam, R.B.; Chhetri, B.J.T. Ethical implication of artificial intelligence (AI) adoption in financial decision making. Comput. Inf. Sci. 2024, 17, 49–56. [Google Scholar] [CrossRef]
- Mhlanga, D. The role of big data in financial technology toward financial inclusion. Front. Big Data 2024, 7, 1184444. [Google Scholar] [CrossRef]
- Akter, S.; McCarthy, G.; Sajib, S.; Michael, K.; Dwivedi, Y.K.; D’Ambra, J.; Shen, K.N. Algorithmic bias in data-driven innovation in the age of AI. Int. J. Inf. Manag. 2021, 60, 102387. [Google Scholar] [CrossRef]
- Ntoutsi, E.; Fafalios, P.; Gadiraju, U.; Iosifidis, V.; Nejdl, W.; Vidal, M.E.; Ruggieri, S.; Turini, F.; Papadopoulos, S.; Krasanakis, E.; et al. Bias in data-driven artificial intelligence systems—An introductory survey. WIREs Data Min. Knowl. Discov. 2020, 10, e1356. [Google Scholar] [CrossRef]
- Munoko, I.; Brown-Liburd, H.L.; Vasarhelyi, M. The ethical implications of using artificial intelligence in auditing. J. Bus. Ethics 2020, 167, 209–234. [Google Scholar] [CrossRef]
- Schönberger, D. Artificial intelligence in healthcare: A critical analysis of the legal and ethical implications. Int. J. Law Inf. Technol. 2019, 27, 171–203. [Google Scholar] [CrossRef]
- Agarwal, A.; Agarwal, H.; Agarwal, N. Fairness Score and process standardization: Framework for fairness certification in artificial intelligence systems. AI Ethics 2023, 3, 267–279. [Google Scholar] [CrossRef]
- Purificato, E.; Lorenzo, F.; Fallucchi, F.; De Luca, E.W. The use of responsible artificial intelligence techniques in the context of loan approval processes. Int. J. Hum.-Comput. Interact. 2023, 39, 1543–1562. [Google Scholar] [CrossRef]
- Greenberg, J. Organizational justice: Yesterday, today, and tomorrow. J. Manag. 1990, 16, 399–432. [Google Scholar] [CrossRef]
- Robert, L.P.; Pierce, C.; Marquis, L.; Kim, S.; Alahmad, R. Designing fair AI for managing employees in organizations: A review, critique, and design agenda. Hum.-Comput. Interact. 2020, 35, 545–575. [Google Scholar] [CrossRef]
- Novelli, C.; Taddeo, M.; Floridi, L. Accountability in artificial intelligence: What it is and how it works. AI Soc. 2023, 39, 1871–1882. [Google Scholar] [CrossRef]
- Busuioc, M. Accountable artificial intelligence: Holding algorithms to account. Public Adm. Rev. 2021, 81, 825–836. [Google Scholar] [CrossRef] [PubMed]
- Morse, L.; Teodorescu, M.H.M.; Awwad, Y.; Kane, G.C. Do the ends justify the means? Variation in the distributive and procedural fairness of machine learning algorithms. J. Bus. Ethics 2021, 181, 1083–1095. [Google Scholar] [CrossRef]
- Liu, Y.; Sun, X. Towards more legitimate algorithms: A model of algorithmic ethical perception, legitimacy, and continuous usage intentions of e-commerce platforms. Comput. Hum. Behav. 2024, 150, 108006. [Google Scholar] [CrossRef]
- Shin, D. Embodying algorithms, enactive artificial intelligence and the extended cognition: You can see as much as you know about algorithm. J. Inf. Sci. 2023, 49, 18–31. [Google Scholar] [CrossRef]
- Shin, D.; Zhong, B.; Biocca, F.A. Beyond user experience: What constitutes algorithmic experiences? Int. J. Inf. Manag. 2020, 52, 102061. [Google Scholar] [CrossRef]
- König, P.D.; Wenzelburger, G. The legitimacy gap of algorithmic decision-making in the public sector: Why it arises and how to address it. Technol. Soc. 2021, 67, 101688. [Google Scholar] [CrossRef]
- Cabiddu, F.; Moi, L.; Patriotta, G.; Allen, D.G. Why do users trust algorithms? A review and conceptualization of initial trust and trust over time. Eur. Manag. J. 2022, 40, 685–706. [Google Scholar] [CrossRef]
- Shulner-Tal, A.; Kuflik, T.; Kliger, D. Fairness, explainability and in-between: Understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol. 2022, 24, 2. [Google Scholar] [CrossRef]
- Narayanan, D.; Nagpal, M.; McGuire, J.; Schweitzer, S.; De Cremer, D. Fairness perceptions of artificial intelligence: A review and path forward. Int. J. Hum.-Comput. Interact. 2024, 40, 4–23. [Google Scholar] [CrossRef]
- Grimmelikhuijsen, S. Explaining why the computer says no: Algorithmic transparency affects the perceived trustworthiness of automated decision-making. Public Adm. Rev. 2023, 83, 241–262. [Google Scholar] [CrossRef]
- Starke, C.; Baleis, J.; Keller, B.; Marcinkowski, F. Fairness perceptions of algorithmic decision-making: A systematic review of the empirical literature. Big Data Soc. 2022, 9, 1–16. [Google Scholar] [CrossRef]
- Qin, S.; Jia, N.; Luo, X.; Liao, C.; Huang, Z. Perceived fairness of human managers compared with artificial intelligence in employee performance evaluation. J. Manag. Inf. Syst. 2023, 40, 1039–1070. [Google Scholar] [CrossRef]
- Sonboli, N.; Smith, J.J.; Cabral Berenfus, F.; Burke, R.; Fiesler, C. Fairness and transparency in recommendation: The users’ perspective. In Proceedings of the 29th ACM Conference on User Modeling, Adaptation and Personalization, Utrecht, The Netherlands, 21–25 June 2021; pp. 274–279. [Google Scholar] [CrossRef]
- Shin, D.; Lim, J.S.; Ahmad, N.; Ibahrine, M. Understanding user sensemaking in fairness and transparency in algorithms: Algorithmic sensemaking in over-the-top platform. AI Soc. 2024, 39, 477–490. [Google Scholar] [CrossRef]
- Kieslich, K.; Keller, B.; Starke, C. Artificial intelligence ethics by design. Evaluating public perception on the importance of ethical design principles of artificial intelligence. Big Data Soc. 2022, 9, 1–15. [Google Scholar] [CrossRef]
- Shin, D.; Park, Y.J. Role of fairness, accountability, and transparency in algorithmic affordance. Comput. Hum. Behav. 2019, 98, 277–284. [Google Scholar] [CrossRef]
- Ababneh, K.I.; Hackett, R.D.; Schat, A.C. The role of attributions and fairness in understanding job applicant reactions to selection procedures and decisions. J. Bus. Psychol. 2014, 29, 111–129. [Google Scholar] [CrossRef]
- Ochmann, J.; Michels, L.; Tiefenbeck, V.; Maier, C.; Laumer, S. Perceived algorithmic fairness: An empirical study of transparency and anthropomorphism in algorithmic recruiting. Inf. Syst. J. 2024, 34, 384–414. [Google Scholar] [CrossRef]
- Wu, W.; Huang, Y.; Qian, L. Social trust and algorithmic equity: The societal perspectives of users’ intention to interact with algorithm recommendation systems. Decis. Support Syst. 2024, 178, 114115. [Google Scholar] [CrossRef]
- Bambauer-Sachse, S.; Young, A. Consumers’ intentions to spread negative word of mouth about dynamic pricing for services: Role of confusion and unfairness perceptions. J. Serv. Res. 2023, 27, 364–380. [Google Scholar] [CrossRef]
- Schinkel, S.; van Vianen, A.E.; Ryan, A.M. Applicant reactions to selection events: Four studies into the role of attributional style and fairness perceptions. Int. J. Sel. Assess. 2016, 24, 107–118. [Google Scholar] [CrossRef]
- Yun, J.; Park, J. The effects of chatbot service recovery with emotion words on customer satisfaction, repurchase intention, and positive word-of-mouth. Front. Psychol. 2022, 13, 922503. [Google Scholar] [CrossRef] [PubMed]
- Jo, H. Understanding AI tool engagement: A study of ChatGPT usage and word-of-mouth among university students and office workers. Telemat. Inform. 2023, 85, 102067. [Google Scholar] [CrossRef]
- Li, Y.; Ma, X.; Li, Y.; Li, R.; Liu, H. How does platform’s fintech level affect its word of mouth from the perspective of user psychology? Front. Psychol. 2023, 14, 1085587. [Google Scholar] [CrossRef] [PubMed]
- Barbu, C.M.; Florea, D.L.; Dabija, D.C.; Barbu, M.C.R. Customer experience in fintech. J. Theor. Appl. Electron. Commer. Res. 2021, 16, 1415–1433. [Google Scholar] [CrossRef]
- Shin, D. Why does explainability matter in news analytic systems? Proposing explainable analytic journalism. Journal. Stud. 2021, 22, 1047–1065. [Google Scholar] [CrossRef]
- Mukerjee, K. Impact of self-service technologies in retail banking on cross-buying and word-of-mouth. Int. J. Retail Distrib. Manag. 2020, 48, 485–500. [Google Scholar] [CrossRef]
- Hair, J.F.; Black, W.C.; Babin, B.J.; Anderson, R.; Tatham, R. Multivariate Data Analysis, 6th ed.; Pearson Prentice-Hall: Upper Saddle River, NJ, USA, 2006. [Google Scholar]
- Hair, J.F.; Gabriel, M.; Patel, V. AMOS Covariance-Based Structural Equation Modeling (CBSEM): Guidelines on its Application as a Marketing Research Tool. Braz. J. Mark. 2014, 13, 44–55. [Google Scholar]
- Raza, S.A.; Qazi, W.; Khan, K.A.; Salam, J. Social isolation and acceptance of the learning management system (LMS) in the time of COVID-19 pandemic: An expansion of the UTAUT model. J. Educ. Comput. Res. 2021, 59, 183–208. [Google Scholar] [CrossRef]
- Fornell, C.; Larcker, D.F. Structural equation models with unobservable variables and measurement error: Algebra and statistics. J. Mark. Res. 1981, 18, 382–388. [Google Scholar] [CrossRef]
- Podsakoff, P.M.; Organ, D.W. Self-reports in organizational research: Problems and prospects. J. Manag. 1986, 12, 531–544. [Google Scholar] [CrossRef]
- Podsakoff, P.M.; MacKenzie, S.B.; Lee, J.Y.; Podsakoff, N.P. Common method biases in behavioral research: A critical review of the literature and recommended remedies. J. Appl. Psychol. 2003, 88, 879–903. [Google Scholar] [CrossRef] [PubMed]
- Newman, D.T.; Fast, N.J.; Harmon, D.J. When eliminating bias isn’t fair: Algorithmic reductionism and procedural justice in human resource decisions. Organ. Behav. Hum. Decis. Process. 2020, 160, 149–167. [Google Scholar] [CrossRef]
- Birzhandi, P.; Cho, Y.S. Application of fairness to healthcare, organizational justice, and finance: A survey. Expert Syst. Appl. 2023, 216, 119465. [Google Scholar] [CrossRef]
- Chen, S.; Chaiken, S. The heuristic-systematic model in its broader context. In Dual-Process Theories in Social Psychology; Chaiken, S., Trope, Y., Eds.; Guilford Press: New York, NY, USA, 1999; pp. 73–96. [Google Scholar]
- Shi, S.; Gong, Y.; Gursoy, D. Antecedents of trust and adoption intention toward artificially intelligent recommendation systems in travel planning: A heuristic-systematic model. J. Travel Res. 2021, 60, 1714–1734. [Google Scholar] [CrossRef]
- Belanche, D.; Casaló, L.V.; Flavián, C. Artificial Intelligence in FinTech: Understanding robo-advisors adoption among customers. Ind. Manag. Data Syst. 2019, 119, 1411–1430. [Google Scholar] [CrossRef]
- Bao, L.; Krause, N.M.; Calice, M.N.; Scheufele, D.A.; Wirz, C.D.; Brossard, D.; Newman, T.P.; Xenos, M.A. Whose AI? How different publics think about AI and its social impacts. Comput. Hum. Behav. 2022, 130, 107182. [Google Scholar] [CrossRef]
- Khogali, H.O.; Mekid, S. The blended future of automation and AI: Examining some long-term societal and ethical impact features. Technol. Soc. 2023, 73, 102232. [Google Scholar] [CrossRef]
Variable | Categories | N | %
---|---|---|---
Gender | Male | 385 | 57.0%
 | Female | 290 | 43.0%
Age | ≤20 | 90 | 13.3%
 | 21–30 | 274 | 40.6%
 | 31–40 | 160 | 23.7%
 | 41–50 | 98 | 14.5%
 | 51–60 | 46 | 6.8%
 | ≥61 | 7 | 1.1%
Education | High school and below | 27 | 4.0%
 | College | 93 | 13.8%
 | Bachelor | 397 | 58.8%
 | Master and above | 158 | 23.4%
Monthly income (RMB) | Less than 5000 | 389 | 57.6%
 | 5000–10,000 | 227 | 33.6%
 | More than 10,000 | 59 | 8.8%
Experience of using AI-driven financial inclusion services | Less than 6 months | 75 | 11.1%
 | 6 months–1 year | 201 | 29.8%
 | More than 1 year | 399 | 59.1%
Residential area | First-tier city | 257 | 38.1%
 | Second-tier city | 271 | 40.1%
 | Third-tier city | 100 | 14.8%
 | Fourth-tier city | 30 | 4.4%
 | Fifth-tier city and others | 17 | 2.5%
Constructs | Items | Item Loadings | Cronbach’s Alpha | AVE | CR
---|---|---|---|---|---
Algorithm Transparency | AT1 | 0.805 | 0.829 | 0.620 | 0.830
 | AT2 | 0.793 | | |
 | AT3 | 0.763 | | |
Algorithm Accountability | AA1 | 0.770 | 0.801 | 0.578 | 0.803
 | AA2 | 0.838 | | |
 | AA3 | 0.662 | | |
Algorithm Legitimacy | AL1 | 0.831 | 0.813 | 0.595 | 0.814
 | AL2 | 0.753 | | |
 | AL3 | 0.726 | | |
Perceived Algorithmic Fairness | PAF1 | 0.756 | 0.816 | 0.598 | 0.817
 | PAF2 | 0.808 | | |
 | PAF3 | 0.755 | | |
Satisfaction | SAT1 | 0.772 | 0.814 | 0.595 | 0.815
 | SAT2 | 0.788 | | |
 | SAT3 | 0.753 | | |
Recommendation | REC1 | 0.731 | 0.838 | 0.625 | 0.832
 | REC2 | 0.894 | | |
 | REC3 | 0.735 | | |
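As a numerical sanity check, the AVE and CR columns follow directly from the standardized item loadings via the Fornell and Larcker [107] formulas. The sketch below (illustrative Python, not the authors' analysis script) reproduces the Algorithm Transparency row from its three reported loadings:

```python
def ave(loadings):
    # Average variance extracted: mean of squared standardized loadings.
    return sum(l ** 2 for l in loadings) / len(loadings)

def composite_reliability(loadings):
    # CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances),
    # where each item's error variance is 1 - loading^2.
    num = sum(loadings) ** 2
    err = sum(1 - l ** 2 for l in loadings)
    return num / (num + err)

at = [0.805, 0.793, 0.763]  # loadings for AT1-AT3 from the table above
print(round(ave(at), 2))                    # 0.62, matching the AVE column
print(round(composite_reliability(at), 2))  # 0.83, matching the CR column
```

Both values clear the conventional thresholds (AVE > 0.5, CR > 0.7) the measurement model section relies on.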
 | AT | AA | AL | PAF | SAT | REC
---|---|---|---|---|---|---
AT | 0.787 | | | | |
AA | 0.513 ** | 0.760 | | | |
AL | 0.515 ** | 0.525 ** | 0.771 | | |
PAF | 0.469 ** | 0.446 ** | 0.483 ** | 0.773 | |
SAT | 0.336 ** | 0.330 ** | 0.352 ** | 0.527 ** | 0.771 |
REC | 0.364 ** | 0.339 ** | 0.354 ** | 0.549 ** | 0.542 ** | 0.791

Note: diagonal elements are the square roots of the AVE.
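The Fornell–Larcker criterion behind this table requires each construct's square root of AVE (the diagonal) to exceed its correlations with all other constructs. A minimal sketch of that check, hand-coding the values reported above (illustrative only, not the study's analysis code):

```python
# Square roots of AVE (diagonal of the table above).
sqrt_ave = {"AT": 0.787, "AA": 0.760, "AL": 0.771,
            "PAF": 0.773, "SAT": 0.771, "REC": 0.791}

# Inter-construct correlations (off-diagonal entries).
corr = {("AT", "AA"): 0.513, ("AT", "AL"): 0.515, ("AA", "AL"): 0.525,
        ("AT", "PAF"): 0.469, ("AA", "PAF"): 0.446, ("AL", "PAF"): 0.483,
        ("AT", "SAT"): 0.336, ("AA", "SAT"): 0.330, ("AL", "SAT"): 0.352,
        ("PAF", "SAT"): 0.527, ("AT", "REC"): 0.364, ("AA", "REC"): 0.339,
        ("AL", "REC"): 0.354, ("PAF", "REC"): 0.549, ("SAT", "REC"): 0.542}

def fornell_larcker_ok(sqrt_ave, corr):
    # Discriminant validity holds if every correlation is smaller than
    # both constructs' square roots of AVE.
    return all(r < sqrt_ave[a] and r < sqrt_ave[b]
               for (a, b), r in corr.items())

print(fornell_larcker_ok(sqrt_ave, corr))  # True for the values reported
```

The largest correlation (0.549, PAF–REC) sits well below the smallest diagonal (0.760), so discriminant validity holds across the board.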
Hypotheses | Path | β | p-Value | R² | Remarks
---|---|---|---|---|---
H1 | AT → PAF | 0.280 | <0.001 | 30.7% | Supported
H2 | AA → PAF | 0.239 | <0.001 | | Supported
H3 | AL → PAF | 0.383 | <0.001 | | Supported
H4 | PAF → SAT | 0.572 | <0.001 | 37.8% | Supported
H5 | PAF → REC | 0.470 | <0.001 | 52.5% | Supported
H6 | SAT → REC | 0.276 | <0.001 | | Supported
Fit Indices | χ²/df | GFI | AGFI | NFI | CFI | PGFI | RMR | RMSEA
---|---|---|---|---|---|---|---|---
Recommended value | <3.0 | >0.9 | >0.8 | >0.9 | >0.9 | >0.6 | <0.08 | <0.08
Actual value | 2.664 | 0.952 | 0.931 | 0.947 | 0.966 | 0.668 | 0.027 | 0.050
Path | Mediating Effect | Bootstrap 95% CI (LLCI) | Bootstrap 95% CI (ULCI)
---|---|---|---
AT → PAF → SAT | 0.2134 *** | 0.1478 | 0.2861
AA → PAF → SAT | 0.2267 *** | 0.1569 | 0.3018
AL → PAF → SAT | 0.2129 *** | 0.1450 | 0.2819
AT → PAF → REC | 0.2131 *** | 0.1468 | 0.2838
AA → PAF → REC | 0.2313 *** | 0.1585 | 0.3100
AL → PAF → REC | 0.2196 *** | 0.1511 | 0.2919
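The confidence intervals above come from percentile bootstrapping of the indirect effect (an effect is significant when the 95% CI excludes zero). The sketch below illustrates the general procedure on simulated data; the variable names, effect sizes, and sample are hypothetical, not the study's dataset:

```python
import random

random.seed(0)

# Simulated mediation data: x -> m -> y with true indirect effect 0.5 * 0.4.
n = 300
x = [random.gauss(0, 1) for _ in range(n)]
m = [0.5 * xi + random.gauss(0, 1) for xi in x]   # mediator
y = [0.4 * mi + random.gauss(0, 1) for mi in m]   # outcome

def slope(u, v):
    # OLS slope of v regressed on u.
    mu, mv = sum(u) / len(u), sum(v) / len(v)
    num = sum((ui - mu) * (vi - mv) for ui, vi in zip(u, v))
    den = sum((ui - mu) ** 2 for ui in u)
    return num / den

# Percentile bootstrap of the indirect effect a*b.
boot = []
for _ in range(1000):
    s = [random.randrange(n) for _ in range(n)]
    xs, ms, ys = [x[i] for i in s], [m[i] for i in s], [y[i] for i in s]
    boot.append(slope(xs, ms) * slope(ms, ys))

boot.sort()
lo, hi = boot[24], boot[974]  # 2.5th and 97.5th percentiles
print(0 < lo)  # a CI excluding zero indicates a significant indirect effect
```

Reported analyses of this kind typically use 5,000 resamples and bias-corrected intervals; the simple percentile version here is kept minimal for clarity.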
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Yang, Q.; Lee, Y.-C. Ethical AI in Financial Inclusion: The Role of Algorithmic Fairness on User Satisfaction and Recommendation. Big Data Cogn. Comput. 2024, 8, 105. https://doi.org/10.3390/bdcc8090105