Article

Ethical AI in Financial Inclusion: The Role of Algorithmic Fairness on User Satisfaction and Recommendation

by Qin Yang and Young-Chan Lee *
Department of Information Management, Dongguk University, Gyeongju 38066, Gyeongbuk, Republic of Korea
* Author to whom correspondence should be addressed.
Big Data Cogn. Comput. 2024, 8(9), 105; https://doi.org/10.3390/bdcc8090105
Submission received: 18 July 2024 / Revised: 17 August 2024 / Accepted: 30 August 2024 / Published: 3 September 2024

Abstract

This study investigates the impact of artificial intelligence (AI) on financial inclusion satisfaction and recommendation, with a focus on the ethical dimensions and perceived algorithmic fairness. Drawing upon organizational justice theory and the heuristic–systematic model, we examine how algorithm transparency, accountability, and legitimacy influence users’ perceptions of fairness and, subsequently, their satisfaction with and likelihood to recommend AI-driven financial inclusion services. Through a survey-based quantitative analysis of 675 users in China, our results reveal that perceived algorithmic fairness acts as a significant mediating factor between the ethical attributes of AI systems and user responses. Specifically, higher levels of transparency, accountability, and legitimacy enhance users’ perceptions of fairness, which, in turn, significantly increases both their satisfaction with AI-facilitated financial inclusion services and their likelihood to recommend them. This research contributes to the literature on AI ethics by empirically demonstrating the critical role of transparent, accountable, and legitimate AI practices in fostering positive user outcomes. Moreover, it addresses a significant gap in the understanding of the ethical implications of AI in financial inclusion contexts, offering valuable insights for both researchers and practitioners in this rapidly evolving field.

1. Introduction

The integration of artificial intelligence (AI) in financial services has catalyzed a paradigm shift in traditional banking and finance, heralding an era of enhanced financial inclusion. AI-driven technologies promise to democratize access to financial services, potentially improving customer satisfaction and expanding economic participation among traditionally underserved populations [1]. However, the increasing prevalence of AI in these critical sectors raises significant ethical concerns, particularly regarding the transparency, accountability, and legitimacy of algorithmic decisions—factors that critically influence user trust and perceptions of fairness [2,3].
While the extant literature has extensively explored the utility and efficiency of AI-driven systems in enhancing access to financial inclusion services [4,5], there remains a significant gap in our understanding of how the ethical dimensions of these algorithms impact user responses. Specifically, the nuanced relationships between algorithmic transparency, accountability, legitimacy, and user satisfaction and recommendation behavior have been underexplored. This oversight is particularly concerning given the growing evidence of algorithmic bias, which can potentially exacerbate rather than alleviate financial exclusion [6,7].
This study is grounded in the following two key theoretical perspectives: organizational justice theory, which posits that perceptions of fairness significantly influence individual attitudes and behaviors [8], and the heuristic–systematic model, which provides a framework for understanding how users process information about AI systems and form perceptions of algorithmic fairness [9].
The primary objectives of this study are to examine how the ethical constructs of algorithm transparency, accountability, and legitimacy influence perceived algorithmic fairness in AI-driven financial inclusion services; to investigate the mediating role of perceived algorithmic fairness between these ethical constructs and user satisfaction and recommendation behavior; and to provide a comprehensive model linking ethical considerations in AI to user responses in the context of financial inclusion.
This study makes several significant contributions to the literature. It quantitatively demonstrates the impact of ethical considerations in AI deployment on perceived fairness and user responses in financial inclusion contexts. It advances the application of organizational justice theory and the heuristic–systematic model in AI and financial inclusion research. It expands existing research on financial technology behavior [10,11] by incorporating ethical dimensions. Lastly, it provides actionable insights for financial institutions and Fintech developers on fostering user satisfaction and positive recommendation behavior through ethical AI practices.

2. Theoretical Background

2.1. AI in Financial Inclusion

Artificial Intelligence (AI) has emerged as a transformative force in the financial sector, particularly in advancing financial inclusion. AI-driven technologies, including machine learning algorithms and automated decision-making systems, have expanded the reach of financial services to previously underserved populations. These technologies enable more efficient and accurate assessments of creditworthiness, fraud detection, and personalization of financial products [12,13].
The potential of AI to bridge the financial inclusion gap is significant. By reducing operational costs, AI makes it economically viable for financial institutions to serve low-income and remote populations [14]. This technological advancement promises to promote economic empowerment for marginalized communities, a key goal of financial inclusion initiatives [15].
However, the integration of AI in financial inclusion is not without challenges. Concerns about fairness, transparency, and accountability have emerged as critical issues [16,17]. AI systems, if not properly designed and managed, can perpetuate or even amplify existing biases. For instance, biased training data can lead to discriminatory outcomes, potentially exacerbating the very inequalities that financial inclusion aims to address [18,19].
This paradox highlights a crucial research gap: while the ethical implications of AI have been studied in various domains [20,21], there is a limited understanding of these issues specifically within the context of AI-driven financial inclusion. The unique characteristics of financial inclusion—its focus on marginalized communities and its potential for significant socioeconomic impact—necessitate a targeted investigation into how ethical principles and algorithmic fairness can be effectively implemented in this field.
Addressing this gap requires a multifaceted approach involving diverse stakeholders, including regulators, financial institutions, and civil society organizations. The development of guidelines and best practices for responsible AI deployment in financial inclusion is essential [22,23]. By prioritizing research into ethical considerations and algorithmic fairness in this context, we can work towards ensuring that the benefits of AI-driven financial inclusion are equitably distributed, protecting the rights and interests of the communities it aims to serve.

2.2. Organizational Justice Theory

Organizational justice theory offers a valuable lens for examining user responses to AI-driven financial inclusion. At its core, this theory proposes that individuals assess the fairness of organizational processes and outcomes based on the following three key dimensions: distributive, procedural, and interactional justice [24].
In the context of AI-driven financial services, these justice dimensions can be mapped onto specific aspects of algorithmic design and implementation.
First, procedural justice aligns with algorithm transparency, reflecting the extent to which users can understand the processes behind algorithmic decisions. Greater transparency in how algorithms operate and make decisions tends to enhance perceptions of fairness [25].
Second, algorithm accountability, which involves mechanisms for auditing and explaining algorithmic decisions, relates to both procedural and interactional justice. It ensures that algorithms operate as intended and that users receive respectful treatment and adequate explanations for decisions affecting them [26,27].
Third, algorithm legitimacy, closely tied to distributive justice, refers to users’ perceptions of algorithms as appropriate and justified within societal norms. When users view algorithmic decisions as legitimate, they are more likely to accept outcomes as fair, even if unfavorable to them [28,29].
By applying these principles of organizational justice to AI-driven financial inclusion, we can better understand how algorithm transparency, accountability, and legitimacy influence user perceptions and responses. This framework suggests that adherence to these justice principles in the design and implementation of AI systems can foster greater trust, satisfaction, and acceptance among users of financial inclusion services.

2.3. Heuristic–Systematic Model

The heuristic–systematic model provides a framework for understanding how individuals process information and make judgments. This model posits two types of cognitive processing—heuristic (quick, based on simple cues) and systematic (slower, involving thorough analysis) [9]. In the context of AI-driven financial inclusion, this model helps explain how users evaluate algorithmic fairness based on transparency, accountability, and legitimacy.
Algorithm transparency can engage both processing types. For heuristic processing, clear summaries or visual representations of the algorithmic operations can serve as quick trust cues. For systematic processing, detailed documentation and access to algorithmic audits allow for in-depth evaluation [2,30].
Algorithm accountability similarly affects both processing routes. Heuristic cues might include visible customer support for AI-related issues, while systematic processing is supported by comprehensive accountability mechanisms like audit trails and clear error rectification protocols [31].
Algorithm legitimacy also influences both processing types. Simple regulatory endorsements can act as heuristic cues, while evidence of bias mitigation efforts and compliance with ethical standards supports systematic evaluation [32].
By addressing both the heuristic and systematic processing routes, financial institutions can foster trust and satisfaction in AI-driven financial services. This approach can lead to a more inclusive user experience, potentially enhancing the success of AI-driven financial inclusion initiatives.

2.4. Perceived Algorithmic Fairness

Perceived algorithmic fairness is a crucial factor in user responses to AI-driven financial inclusion. It encompasses users’ perceptions of the transparency, accountability, and legitimacy of algorithmic decisions. Research indicates that when users perceive algorithms as fair, they are more likely to trust the technology and feel satisfied with the services provided [33,34,35].
The concept of perceived algorithmic fairness in financial inclusion is built upon the following three key pillars:
First, algorithm transparency involves providing users with clear information about decision-making processes, including data used and algorithmic logic. Grimmelikhuijsen [36] noted that transparency is a key aspect of procedural fairness, demonstrating impartial and thoughtful decision-making.
Second, algorithm accountability is crucial in algorithmic decision-making contexts. Starke [37] found that perceived fairness is closely related to attributes such as transparency and accountability, highlighting the interconnected nature of these concepts.
Third, algorithm legitimacy plays a significant role in perceived fairness. Qin et al. [38] demonstrated that favorable attitudes towards AI-driven evaluations enhance perceived legitimacy, which, in turn, fosters a sense of fairness.
In the context of financial inclusion, the importance of perceived algorithmic fairness is beginning to be recognized. Adeoye et al. [15] emphasized the need to address challenges such as data privacy, security, and promoting fairness and transparency in AI algorithms when leveraging AI for financial inclusion.
However, empirical studies on AI-driven financial inclusion fairness and its antecedents remain scarce. As the field evolves, further research is needed to understand the nuances of perceived algorithmic fairness and develop effective strategies for promoting it, ultimately advancing the goal of inclusive financial services.

3. Hypotheses Development

3.1. Ethical Considerations and Users’ Perceived Algorithmic Fairness

The ethical dimensions of AI algorithms—transparency, accountability, and legitimacy—play a crucial role in shaping users’ perceptions of algorithmic fairness. These factors work in concert to create an overall perception of fairness, which is critical for user acceptance and trust in AI-driven financial inclusion services.
Transparency enables users to understand how algorithmic decisions are made [39]. When AI systems provide clear explanations and insights into their decision-making processes, users are more likely to perceive them as fair [40].
Accountability is another critical ethical dimension that ensures AI systems can be audited, questioned, and rectified when necessary [41]. When users know that there are mechanisms in place to hold AI systems accountable for their decisions, they are more likely to perceive them as fair [33]. Accountable AI systems afford users a sense of fairness, which, in turn, promotes a sense of satisfaction and recommendation. In the financial inclusion context, accountability measures such as clear dispute resolution processes and human oversight can enhance the users’ perceptions of algorithmic fairness.
Legitimacy refers to the extent to which users perceive the use of AI algorithms as appropriate and justified within a given context. When AI systems align with societal norms and values, users are more likely to accept their decisions as fair [3]. In the case of AI-driven financial inclusion, legitimacy can be established through compliance with ethical standards, regulatory approval, and alignment with financial inclusion goals. Studies have shown that perceived legitimacy enhances trust and continuous usage intention of AI systems [29]. Hence, based on the above discussion, we propose the following hypotheses:
H1: 
Algorithm transparency positively influences perceived algorithmic fairness.
H2: 
Algorithm accountability positively influences perceived algorithmic fairness.
H3: 
Algorithm legitimacy positively influences perceived algorithmic fairness.
These three ethical considerations—transparency, accountability, and legitimacy—work together to shape users’ overall perceptions of algorithmic fairness. By addressing these aspects in AI system design and implementation, financial institutions can foster positive responses among users of AI-driven financial inclusion services.

3.2. Perceived Algorithmic Fairness, Users’ Satisfaction, and Recommendation

Perceived algorithmic fairness is a critical factor that influences users’ attitudes and behaviors towards AI-driven systems. When users perceive AI algorithms as fair, they are more likely to be satisfied with the services provided and recommend them to others [42,43].
In the context of financial inclusion, perceived algorithmic fairness can significantly impact users’ satisfaction with AI-driven services. Users who view an AI system as fair tend to be more satisfied with its outputs. Conversely, those who perceive the AI system as unfair are likely to be dissatisfied with the results it generates [44]. This satisfaction can stem from the belief that the AI system treats them equitably and makes unbiased decisions [45].
Furthermore, perceived algorithmic fairness can also influence users’ likelihood to recommend AI-driven financial inclusion services to others. Users who perceive an AI system as fair are more inclined to recommend it to others. In contrast, those who view it as unfair tend to respond negatively, potentially spreading unfavorable opinions [46]. This phenomenon may be explained by reciprocity. Users who believe AI-driven financial inclusion systems employ fair evaluation processes are likely to respond positively in return. Conversely, those who feel fairness principles have been violated are prone to express negative reactions [47]. We propose that users’ perception of fairness in the algorithmic evaluation process directly influences their likelihood of reciprocating positively. Specifically, the fairer users perceive the process to be, the more likely they are to recommend the AI-driven financial inclusion system to others. Thus, we hypothesize the following:
H4: 
Perceived algorithmic fairness positively influences users’ satisfaction with AI-driven financial inclusion.
H5: 
Perceived algorithmic fairness positively influences users’ recommendation of AI-driven financial inclusion.
These hypotheses reflect the different ways in which perceived fairness can impact user behavior. While satisfaction is a personal response to the service, recommendation involves sharing one’s positive experience with others, potentially expanding the reach of AI-driven financial inclusion services.

3.3. Satisfaction with AI-Driven Financial Inclusion and Recommendation

Users’ satisfaction with AI-driven financial inclusion services can have a significant impact on their recommendation behavior. When users are satisfied with their experiences with an AI system, they are more likely to engage in positive word-of-mouth and recommend the services to others [48,49].
In the context of financial inclusion, user satisfaction can stem from various factors, such as the ease of access to financial services, the quality of personalized offerings, and the overall experience with the AI-driven system. When users feel that the AI-driven financial services meet their needs and expectations, they are more likely to be satisfied and, in turn, recommend these services to others [50]. This recommendation behavior can help expand the reach of financial inclusion initiatives and attract new users [51]. Therefore, based on the above evidence, we hypothesize the following:
H6: 
Users’ satisfaction with AI-driven financial inclusion positively influences their recommendation of it.
The relationship between users’ satisfaction and their likelihood to recommend completes the logical chain from ethical considerations to perceived fairness, satisfaction, and ultimately, recommendation. It emphasizes the importance of not only ensuring algorithmic fairness but also delivering a satisfying user experience to promote the wider use of AI-driven financial inclusion services. The research model based on the research hypotheses so far is shown in Figure 1.

4. Research Methodology and Research Design

4.1. Questionnaire Design and Measurements

Our investigation began with the development of a comprehensive questionnaire designed to capture the relevant data for our analysis. Recognizing the importance of expert input, we sought evaluations from esteemed Korean and Chinese professors in the Finance and Information Technology departments. Their invaluable feedback led to refinements in the questionnaire, enhancing its precision and relevance.
The questionnaire was structured to assess six key dimensions of AI-driven financial inclusion services. These included the extent to which the algorithms used in the evaluation process were transparent, accountable, and legitimate; users’ perceived fairness of the algorithms; their satisfaction with the AI-driven financial inclusion services; and their likelihood to recommend the services to others (see Appendix A).
The introductory section of the questionnaire outlined the study’s purpose and assured participants of confidentiality and anonymity. It also provided survey instructions. The first part collected basic demographic information such as age, gender, income level, and education to establish a foundational understanding of respondents’ backgrounds. The second part comprised items carefully crafted to assess the six constructs under investigation.
The measurement items for algorithmic transparency, accountability, legitimacy, and perceived fairness [29,52] evaluated the respondents’ perceptions of AI-driven financial inclusion in terms of openness, responsibility, and morality. These items assessed how users viewed the fairness, explainability, and trustworthiness of the AI systems used in financial inclusion services. The satisfaction construct [42] evaluated respondents’ views on the services’ ability to meet their needs and expectations. This included assessing users’ overall contentment, perceived value, and the effectiveness of AI-driven solutions in addressing financial requirements. Lastly, the recommendation construct [53] measured the extent to which the respondents were likely to endorse or suggest these services to others, assessing their willingness to recommend them based on their experiences.

4.2. Sampling and Data Collection

This study targeted users of AI-driven financial inclusion services. To ensure a diverse and representative sample, participants were recruited from various demographics, including different age groups, income levels, educational backgrounds, and geographical locations. The inclusion criteria required participants to have experience with AI-facilitated financial services, ensuring informed responses regarding algorithmic fairness, satisfaction, and recommendation likelihood.
A stratified random sampling technique was employed to ensure representation across key demographic segments, including age, gender, income, education level, and geographical region. This approach helped obtain a balanced sample, reflecting the diversity of the AI-driven financial inclusion services’ user base.
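To make the sampling design concrete, the following minimal sketch shows how a proportional stratified random sample could be drawn with pandas; the sampling frame, file name, and column names (age_group, gender, income_band, education, city_tier) are hypothetical illustrations, not the authors' actual data pipeline.
```python
import pandas as pd

# Hypothetical sampling frame of eligible users of AI-driven financial
# inclusion services; file and column names are illustrative only.
population = pd.read_csv("eligible_users.csv")

def stratified_sample(frame: pd.DataFrame, strata: list[str],
                      n_total: int, seed: int = 42) -> pd.DataFrame:
    """Draw a proportional stratified random sample across the given strata."""
    frac = n_total / len(frame)
    return (
        frame.groupby(strata, group_keys=False)
             .apply(lambda g: g.sample(frac=frac, random_state=seed))
    )

# Stratify jointly by the demographic segments described above.
sample = stratified_sample(population,
                           strata=["age_group", "gender", "income_band",
                                   "education", "city_tier"],
                           n_total=700)
print(sample.groupby("city_tier").size())
```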
Data collection utilized Wenjuanxing, a reliable and widely used professional online survey platform in China. This platform provided wide reach and participant convenience. The survey was distributed through multiple channels to maximize participation: Wenjuanxing’s built-in participant pool; popular Chinese social media platforms, including WeChat and Weibo; partnerships with Chinese financial service providers, who shared the survey with their customers; and e-mail invitations to relevant professional networks in the financial sector.
Prior to participation, respondents were informed about the study’s purpose, the voluntary nature of their participation, and the confidentiality of their responses. Informed consent was obtained from all participants. The survey was designed to take approximately 15–20 min, with assurances that responses would be anonymized to protect privacy.
This rigorous methodology and design aimed to provide robust and reliable insights into the influence of ethical considerations and perceived algorithmic fairness on user satisfaction and recommendation in the context of AI-driven financial inclusion. By carefully structuring the questionnaire, selecting diverse participants, and employing stratified sampling, the study sought to capture a comprehensive and accurate picture of user perceptions and experiences with these innovative financial services.
The survey targeted users with experience in AI-driven financial inclusion services in China and was conducted from late April to early May 2024. Out of the 697 questionnaires received, 675 were deemed valid after excluding 22 with incomplete responses. Table 1 presents the demographic profile of the respondents. The sample comprised 57% male (n = 385) and 43% female (n = 290) participants. Age distribution skewed younger, with 21–30-year-olds representing the largest group (40.6%), followed by 31–40-year-olds (23.7%). These two cohorts accounted for over 64% of the sample, while only 7.9% were over 50. The respondents were predominantly well-educated, with 58.8% holding bachelor’s degrees and 23.4% possessing master’s degrees or higher. Merely 4% had a high school education or below. Regarding monthly income, 57.6% earned less than 5000 RMB, 33.6% earned between 5000 and 10,000 RMB, and 8.8% earned over 10,000 RMB. Most participants demonstrated substantial experience with AI-driven financial services, with 59.1% having used them for over a year, 29.8% for 6–12 months, and 11.1% for less than 6 months. Geographically, respondents were concentrated in first-tier (38.1%) and second-tier (40.1%) cities, totaling 78.2% of the sample, while 21.7% resided in third-tier cities or below.

5. Data Analysis and Results

This study utilized covariance-based structural equation modeling (CB-SEM) to examine the complex relationships among multiple independent and dependent variables. CB-SEM is a sophisticated statistical approach that allows for the concurrent analysis of intricate interrelationships between constructs. This methodology is particularly well-suited for testing theoretical models with multiple pathways and latent variables, offering a comprehensive framework for assessing both direct and indirect effects within a single analytical model.
The research model was evaluated using a two-step approach, comprising a measurement model and a structural model. Factor analysis and reliability tests were conducted to assess the factor structure and dimensionality of the following key constructs: algorithm transparency, accountability, legitimacy, perceived algorithmic fairness, satisfaction with AI-driven financial inclusion, and recommendation of AI-driven financial inclusion services. Convergent validity was examined to determine how effectively items reflected their corresponding factors, while discriminant validity was assessed to ensure statistical distinctiveness between factors. Mediation analysis was employed to investigate the intermediary role of perceived algorithmic fairness. The following sections detail the results of these analyses, providing a comprehensive overview of the model’s validity and the relationships between constructs.
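As a rough illustration of this two-step CB-SEM procedure, the sketch below specifies a measurement model and structural model for the six constructs using the Python package semopy; the item names (tra1 ... rec3) are hypothetical placeholders for the Appendix A items, and semopy stands in for whichever CB-SEM software the authors actually used.
```python
import pandas as pd
from semopy import Model, calc_stats

# Survey responses, one column per questionnaire item (hypothetical names).
data = pd.read_csv("survey_responses.csv")

model_desc = """
# Measurement model: each latent construct reflected by its items
TRANSPARENCY =~ tra1 + tra2 + tra3
ACCOUNTABILITY =~ acc1 + acc2 + acc3
LEGITIMACY =~ leg1 + leg2 + leg3
FAIRNESS =~ fair1 + fair2 + fair3
SATISFACTION =~ sat1 + sat2 + sat3
RECOMMENDATION =~ rec1 + rec2 + rec3

# Structural model: the H1-H6 paths
FAIRNESS ~ TRANSPARENCY + ACCOUNTABILITY + LEGITIMACY
SATISFACTION ~ FAIRNESS
RECOMMENDATION ~ FAIRNESS + SATISFACTION
"""

model = Model(model_desc)
model.fit(data)
print(model.inspect())      # loadings and structural path estimates
print(calc_stats(model).T)  # model fit indices (CFI, TLI, RMSEA, etc.)
```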

5.1. Measurement Model

In the measurement model, we evaluated the convergent and discriminant validity of the measures. As shown in Table 2, standardized item loadings ranged from 0.662 to 0.894, exceeding the minimum acceptable threshold of 0.60 proposed by Hair et al. [54]. Cronbach’s α values for each construct ranged from 0.801 to 0.838, surpassing the recommended 0.7 threshold [55], thus providing strong evidence of scale reliability. Composite reliability (CR) was also employed to assess internal consistency, with higher values indicating greater reliability. According to Raza et al. [56], CR values between 0.6 and 0.7 are considered acceptable, while values between 0.7 and 0.9 are deemed satisfactory to good. In this study, all the CR values exceeded 0.80, indicating satisfactory composite reliability. Furthermore, all the average variance extracted (AVE) values surpassed 0.50, meeting the criteria established by Fornell and Larcker [57] for convergent validity. These results collectively demonstrate that our survey instrument possesses robust reliability and convergent validity.
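For reference, the reliability and convergent-validity statistics reported above follow standard formulas; the sketch below is a minimal illustration with made-up loadings, not the study's data.
```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a respondents x items matrix of one construct."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

def composite_reliability(loadings: np.ndarray) -> float:
    """CR = (sum of loadings)^2 / [(sum of loadings)^2 + sum of error variances]."""
    s = loadings.sum()
    return s ** 2 / (s ** 2 + (1 - loadings ** 2).sum())

def average_variance_extracted(loadings: np.ndarray) -> float:
    """AVE = mean of squared standardized loadings."""
    return float((loadings ** 2).mean())

# Illustrative standardized loadings for one construct (not the study's values).
lam = np.array([0.78, 0.81, 0.74])
print(composite_reliability(lam))          # ~0.82, above the 0.7 benchmark
print(average_variance_extracted(lam))     # ~0.60, above the 0.5 benchmark
```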
Table 3 presents Pearson’s correlation coefficients for all research variables, revealing significant correlations among most respondent perceptions. To establish discriminant validity, we employed the Fornell–Larcker criterion, comparing the square root of the Average Variance Extracted (AVE) with factor correlation coefficients. As illustrated in Table 3, the square root of AVE for each factor substantially exceeds its correlation coefficients with other factors. This aligns with Fornell and Larcker’s [57] assertion that constructs are distinct if the square root of the AVE for a given construct surpasses the absolute value of its standardized correlation with other constructs in the analysis. These findings provide robust evidence of the scale’s discriminant validity, confirming that each construct captures a unique aspect of the phenomenon under investigation and is empirically distinguishable from other constructs in the model.
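The Fornell–Larcker criterion reduces to comparing the square root of each construct's AVE against its correlations with the other constructs; a minimal sketch, with illustrative values rather than the figures in Table 3, follows.
```python
import numpy as np
import pandas as pd

def fornell_larcker(ave: dict[str, float], corr: pd.DataFrame) -> pd.Series:
    """True where sqrt(AVE) exceeds every absolute correlation with other constructs."""
    ok = {}
    for c in corr.columns:
        max_corr = corr.loc[c].drop(c).abs().max()
        ok[c] = np.sqrt(ave[c]) > max_corr
    return pd.Series(ok)

# Illustrative inputs only (not the study's Table 3 values).
ave = {"TRA": 0.58, "ACC": 0.61, "FAIR": 0.63}
corr = pd.DataFrame([[1.00, 0.42, 0.47],
                     [0.42, 1.00, 0.51],
                     [0.47, 0.51, 1.00]],
                    index=list(ave), columns=list(ave))
print(fornell_larcker(ave, corr))  # True for each construct -> discriminant validity
```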
Self-reported data inherently carry the potential for common method bias or variance, which can stem from multiple sources, including social desirability [58,59]. To address this concern, we implemented statistical analyses as recommended by Podsakoff and Organ [58] to assess the presence and extent of common method bias. Specifically, we employed the Harman one-factor test to evaluate whether the measures were significantly affected by common method bias, which can either inflate or deflate intercorrelations among measures depending on various factors. This approach allows us to gauge the potential impact of method effects on our findings and ensure the robustness of our results.
The Harman one-factor test involves conducting an exploratory factor analysis on all relevant variables without rotation. Our analysis reveals that the largest single factor explains only 39.116% of the total variance, well below the critical threshold of 50% that would indicate problematic common method bias. Consequently, we can reasonably conclude that our data are not substantially affected by common method bias. This finding enhances the validity of our results and mitigates concerns about systematic measurement error influencing the observed relationships between constructs in our study.
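A minimal sketch of this Harman single-factor test, using the factor_analyzer package on the pooled item responses (file and column layout are hypothetical), might look as follows.
```python
import pandas as pd
from factor_analyzer import FactorAnalyzer

# All measurement items pooled into one matrix (hypothetical file and prefixes).
items = pd.read_csv("survey_responses.csv").filter(regex="^(tra|acc|leg|fair|sat|rec)")

# Single unrotated factor, as in the Harman one-factor test.
fa = FactorAnalyzer(n_factors=1, rotation=None)
fa.fit(items)

variance, proportion, cumulative = fa.get_factor_variance()
print(f"Variance explained by the single factor: {proportion[0]:.1%}")
# A proportion below 0.50 suggests common method bias is not a serious concern.
```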

5.2. Structural Model

We evaluated the structural model to validate the relationships between constructs in the research model. The analysis revealed that all paths were positive and significant at the 0.05 level, with Table 4 presenting standardized path coefficients, significance levels, and explanatory power (R2) for each construct. The R2 values for perceived algorithmic fairness (30.7%), satisfaction with AI-driven financial inclusion (37.8%), and recommendation of AI-driven financial inclusion (52.5%) indicated acceptable levels of explanation. Our findings supported all hypotheses as follows: algorithm transparency (β = 0.28, p < 0.001), accountability (β = 0.239, p < 0.001), and legitimacy (β = 0.383, p < 0.001) positively influenced perceived algorithm fairness, collectively explaining 30.7% of its variance (H1–H3). Perceived algorithm fairness significantly affected users’ satisfaction (β = 0.572, p < 0.001, R2 = 37.8%) and recommendation (β = 0.47, p < 0.001) of AI-driven financial inclusion services (H4–H5). Additionally, users’ satisfaction positively impacted their recommendation of these services (β = 0.276, p < 0.001), with perceived fairness and satisfaction jointly explaining 52.5% of the variance in recommendations (H6).
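Taken at face value, these standardized coefficients also admit a simple path-tracing decomposition of the association between perceived fairness and recommendation into its direct component and the indirect component via satisfaction; the quick check below uses only the reported values.
```python
# Standardized path coefficients reported in Table 4 / Figure 2.
beta_fair_sat = 0.572   # perceived fairness -> satisfaction (H4)
beta_fair_rec = 0.470   # perceived fairness -> recommendation, direct (H5)
beta_sat_rec = 0.276    # satisfaction -> recommendation (H6)

# Standard path-tracing for standardized coefficients in a recursive model.
indirect = beta_fair_sat * beta_sat_rec   # ~0.158 via satisfaction
total = beta_fair_rec + indirect          # ~0.628 overall association
print(f"indirect = {indirect:.3f}, total = {total:.3f}")
```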
Figure 2 presents a visual representation of the standardized path coefficients and the significance levels for each hypothesis. Furthermore, the structural model demonstrated an acceptable fit, with detailed hypothesis testing results and model fit indices presented in Table 4 and Table 5.

5.3. Mediating Effect of Perceived Algorithmic Fairness between Ethical Considerations, Users’ Satisfaction, and Recommendation

The mediating effect of perceived algorithm fairness was analyzed using the PROCESS macro in SPSS, extending the basic linear regression model by introducing a mediator variable. Our study results, presented in Table 6, indicate a significant mediating effect when the 95% confidence interval does not include zero. We employed the following three-step approach: first, testing the relationship between the independent (X) and dependent (Y) variables; then examining the relationship between X and the mediating (M) variable; and finally assessing the combined effect of X and M on Y. This process determines whether full or partial mediation occurs based on the relative magnitudes of the coefficients (β11, β21, β31). In this study, we investigated the mediating role of perceived algorithm fairness in the relationships between the ethical considerations of AI-driven financial inclusion services and users’ satisfaction and recommendation. Our findings demonstrate that perceived algorithm fairness positively mediated both the relationship between ethical considerations and users’ satisfaction with AI-driven financial inclusion services and the relationship between ethical considerations and users’ recommendation of these services. These results highlight the crucial role of perceived fairness in shaping user attitudes and behaviors towards AI-driven financial inclusion services, emphasizing its importance in the ethical implementation and user acceptance of such technologies.
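The PROCESS-style mediation test described above can be approximated with a percentile bootstrap of the indirect effect a × b; the sketch below is an illustrative approximation (hypothetical file and column names), not the PROCESS macro itself.
```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def bootstrap_mediation(df: pd.DataFrame, x: str, m: str, y: str,
                        n_boot: int = 5000, seed: int = 1) -> tuple[float, float, float]:
    """Percentile-bootstrap estimate and 95% CI for the indirect effect X -> M -> Y."""
    rng = np.random.default_rng(seed)
    effects = []
    for _ in range(n_boot):
        s = df.sample(len(df), replace=True, random_state=rng)
        a = sm.OLS(s[m], sm.add_constant(s[x])).fit().params[x]        # X -> M
        b = sm.OLS(s[y], sm.add_constant(s[[x, m]])).fit().params[m]   # M -> Y, controlling X
        effects.append(a * b)
    lo, hi = np.percentile(effects, [2.5, 97.5])
    return float(np.mean(effects)), float(lo), float(hi)

# Hypothetical composite scores per respondent (ethics, fairness, satisfaction).
df = pd.read_csv("construct_scores.csv")
est, lo, hi = bootstrap_mediation(df, x="ethics", m="fairness", y="satisfaction")
print(f"indirect effect = {est:.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
# Mediation is supported when the confidence interval excludes zero.
```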

6. Discussion and Implications for Research and Practice

6.1. Discussion of Key Findings

Through the lens of the organizational justice theory and the heuristic–systematic model, this study examines the impact of ethical considerations on user perceptions, satisfaction, and recommendation behavior in AI-driven financial inclusion services. We adopt an ethics-centered approach to assess the effects of algorithm transparency, accountability, and legitimacy on perceived algorithmic fairness, user satisfaction, and service recommendation likelihood. This framework utilizes ethical considerations as key determinants of user experience in AI-driven financial inclusion services. Moreover, we posit that perceived algorithmic fairness serves as a crucial psychological mediator between these ethical considerations and user response (satisfaction and recommendation). By investigating these relationships, our study aims to enhance the understanding of how ethical considerations shape user perceptions and behaviors in the rapidly evolving domain of AI-driven financial services. This research contributes to the growing body of knowledge at the intersection of AI ethics, user experience, and financial inclusion, offering valuable insights for both practitioners and policymakers in this field.
This study’s findings align with existing research that underscores the importance of algorithmic fairness, accountability, and transparency as key determinants of individual behavior in the context of AI technology [31,42]. To the best of our knowledge, this research represents the first empirical investigation into ethical considerations within the domain of AI-driven financial inclusion and their impact on user responses. Our study uniquely highlights the mediating role of perceived algorithmic fairness, offering a novel perspective on the relationship between users’ ethical perceptions of algorithms and their subsequent satisfaction with and recommendation of these services. This approach contributes to a more nuanced understanding of the interplay between AI ethics and user experience in the context of financial inclusion technologies.
The results of this study yield several significant findings and contributions to the field. Firstly, our results demonstrate that ethical considerations (algorithm transparency, accountability, and legitimacy) in AI-driven financial inclusion are strong predictors (β = 0.28, 0.239, 0.383; p < 0.001) of perceived algorithm fairness. This aligns with and extends previous research findings [29] in the context of AI-driven technologies. Secondly, our findings reveal that perceived algorithm fairness significantly predicts both users’ satisfaction and their likelihood to recommend AI-driven financial inclusion services (β = 0.572, 0.47; p < 0.001). This provides a valuable contribution to the existing literature [44,45], as few studies have empirically addressed this issue in the context of AI-powered financial inclusion, despite growing interest in AI ethics. Thirdly, we found that perceived algorithm fairness positively mediates the relationship between ethical considerations and user responses. This suggests that ethical considerations are crucial factors affecting users’ perceived algorithm fairness in AI-driven financial inclusion, which, in turn, influences users’ satisfaction and recommendation behavior.
Our research contributes to the literature by demonstrating the value of an ethics-centered approach to AI-driven financial inclusion, developing constructs to measure ethical considerations and empirically validating the role of perceived algorithmic fairness as a psychological mediator. This study provides a foundation for further research into the ethical aspects of AI-driven financial inclusion and offers valuable insights for service providers and policymakers. It highlights the importance of ethical considerations in designing and implementing AI-driven financial inclusion services to enhance user satisfaction and promote wider adoption.

6.2. Implications for Research

This study contributes several theoretical implications for existing AI technology and financial inclusion research. Firstly, this research broadens the application of organizational justice theory [24] beyond traditional settings and into the realm of digital finance. By demonstrating the relevance of justice principles in understanding user perceptions of algorithmic fairness in AI-driven financial services, we extend the theory’s scope and applicability. This aligns with the growing need to understand fairness in technological contexts, as highlighted by recent work on algorithmic fairness [60,61]. Our empirical validation of perceived algorithmic fairness as a mediator between ethical considerations and user responses contributes to this literature, offering insights into how fairness perceptions influence user behavior in AI-driven systems.
Furthermore, our findings reinforce the importance of the heuristic–systematic model [62,63] in explaining how users process information about AI systems. The significant impact of ethical considerations on perceived algorithmic fairness supports the idea that users employ both heuristic and systematic processing when evaluating AI-driven services. This extends our understanding of user cognitive processes in the context of complex technological systems.
Our research also bridges a crucial gap between the user experience studies in Fintech [64] and the AI ethics literature [65]. By providing a holistic framework that connects ethical considerations, user perceptions, and behavioral outcomes in AI-driven financial services, we offer a more comprehensive approach to studying these interconnected aspects. This addresses the need for interdisciplinary research in the rapidly evolving field of AI-driven financial technology. By providing concrete evidence of how ethical considerations in AI design can influence user perceptions and behaviors in the critical area of financial inclusion, we contribute to the growing body of literature on AI’s broader societal implications [66]. This work helps to ground theoretical discussions about AI ethics in empirical reality, offering valuable insights for both researchers and practitioners working to ensure that AI technologies are developed and deployed in socially beneficial ways.
Lastly, the significant relationship we found between perceived algorithmic fairness and user satisfaction/recommendation behavior aligns with and extends previous work [44]. Our results suggest that ethical considerations could be incorporated into these frameworks, particularly for AI-driven services. This provides a more nuanced understanding of the factors influencing technology adoption in ethically sensitive contexts.

6.3. Implications for Practice

In addition to the theoretical implications, several practical implications emerge for stakeholders in the AI-driven financial inclusion sector. First, financial service providers and AI developers should prioritize ethical considerations in the design and implementation of AI-driven financial inclusion services. This includes focusing on algorithm transparency, accountability, and legitimacy. By embedding these ethical principles into their systems from the outset, companies can enhance user trust and satisfaction, potentially leading to higher adoption rates and customer loyalty. Organizations should strive to make their AI algorithms more transparent to users, providing clear, understandable explanations of how AI systems make decisions, particularly in areas such as loan approvals or credit scoring. Implementing user-friendly interfaces that offer insights into the decision-making process can help build trust and improve user perceptions of fairness.
Second, financial institutions should develop robust accountability frameworks for their AI systems, including regular audits of AI decision-making processes, clear channels for users to contest decisions, and mechanisms to rectify errors or biases identified in the system. To enhance perceptions of algorithm legitimacy, organizations should ensure their AI systems comply with relevant regulations and industry standards. They should also actively engage with regulatory bodies and participate in the development of ethical guidelines for AI in financial services. Communicating these efforts to users can further reinforce the legitimacy of their AI-driven services.
Third, developers and providers should adopt a user-centric approach in designing AI-driven financial inclusion services. This involves conducting regular user surveys, focus groups, and usability tests to understand user perceptions of algorithmic fairness and to identify areas for improvement in the user experience. Additionally, financial service providers should invest in training programs for their staff to understand the ethical implications of AI in financial inclusion. This knowledge can then be translated into better customer service and more informed interactions with users, potentially improving overall satisfaction and trust in the services. Organizations should implement comprehensive fairness metrics and monitoring systems for their AI algorithms. Regular assessment and reporting on these metrics can help identify potential biases or unfair practices early, allowing for timely interventions and adjustments to maintain high levels of perceived fairness. Service providers should also develop personalized communication strategies to explain AI-driven decisions to users, especially when those decisions might be perceived as unfavorable. Clear, empathetic, and individualized explanations can help maintain user trust and satisfaction, even in challenging situations.
Lastly, financial institutions, technology companies, and regulatory bodies should collaborate to establish industry-wide standards for ethical AI in financial inclusion. This could include developing shared guidelines for algorithm transparency, accountability, and fairness, which can help create a more consistent and trustworthy ecosystem for users. Furthermore, organizations should conduct regular assessments of the long-term impacts of their AI-driven financial inclusion services on user financial health and overall well-being. This can help ensure that the services are truly beneficial and align with the broader goals of financial inclusion and ethical AI deployment.

7. Limitations and Future Research Directions

Based on our findings and discussions, several limitations of the current study and potential directions for future research can be identified. The sample characteristics, focused on users of AI-driven financial inclusion services in a specific geographic context, may not fully represent the diverse global population that could benefit from such services. The cross-sectional design of our study limits our ability to infer causality and observe changes in perceptions and behaviors over time. Additionally, while we focused on algorithm transparency, accountability, and legitimacy, there may be other ethical dimensions relevant to AI-driven financial inclusion that were not captured in our study.
To address these limitations and further advance the field, we propose several directions for future research. Longitudinal studies could track changes in user perceptions, satisfaction, recommendation behavior, and other responses over time, providing insights into how ethical considerations and perceived fairness evolve as users become more familiar with AI-driven financial services. Cross-cultural comparisons could reveal how cultural values and norms influence perceptions of algorithmic fairness and ethical considerations in AI-driven financial inclusion. Future studies could also explore additional ethical dimensions, such as privacy concerns, data ownership, or the potential for algorithmic discrimination based on protected characteristics. Experimental designs could help establish causal relationships between specific ethical design features and user perceptions or behaviors, providing more concrete guidance for designing ethical AI systems in financial services.
On the other hand, expanding the research to include perspectives from other stakeholders, such as regulators, policymakers, and AI developers, could offer a more comprehensive understanding of the challenges and opportunities in implementing ethical AI in financial inclusion. Finally, investigating the role of user education and AI literacy in shaping the perceptions of algorithmic fairness and ethical considerations could provide insights for developing effective user education programs. Studies exploring how different regulatory approaches to AI in financial services influence user perceptions, provider behaviors, and overall market dynamics could inform policy development in this rapidly evolving sector.

Author Contributions

Conceptualization, Q.Y. and Y.-C.L.; methodology, Q.Y. and Y.-C.L.; software, Q.Y.; validation, Q.Y. and Y.-C.L.; formal analysis, Q.Y. and Y.-C.L.; investigation, Q.Y.; data curation, Q.Y. and Y.-C.L.; writing—original draft preparation, Q.Y.; writing—review and editing, Y.-C.L.; visualization, Q.Y. and Y.-C.L.; supervision, Y.-C.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The data that support the findings of this study are available from the authors upon reasonable request.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Measurement Items

Table A1. Constructs, Measurements, and Sources.
Algorithm Transparency (Source: Shin (2021) [52]; Liu and Sun (2024) [29])
- The criteria and evaluation processes of AI-driven financial inclusion services are publicly disclosed and easily understandable to users.
- The AI-driven financial inclusion services provide clear explanations for its decisions and outputs that are comprehensible to affected users.
- The AI-driven financial inclusion services provide insight into how its internal processes lead to specific outcomes or decisions.
Algorithm Accountability (Source: Liu and Sun (2024) [29])
- The AI-driven financial inclusion services have a dedicated department responsible for monitoring, auditing, and ensuring the accountability of its algorithmic systems.
- The AI-driven financial inclusion services are subject to regular audits and oversight by independent third-party entities, such as market regulators and relevant authorities.
- The AI-driven financial inclusion services have established clear mechanisms for detecting, addressing, and reporting any biases or errors in its algorithmic decision-making processes.
Algorithm Legitimacy (Source: Shin (2021) [52])
- I believe that the AI-driven financial inclusion services align with industry standards and societal expectations for fair and inclusive financial practices.
- I believe that the AI-driven financial inclusion services comply with relevant financial regulations, data protection laws, and ethical guidelines for AI use in finance.
- I believe that the AI-driven financial inclusion services operate in an ethical manner, promoting fair access to financial services without bias or discrimination.
Perceived Algorithmic Fairness (Source: Shin (2021) [52]; Liu and Sun (2024) [29])
- I believe the AI-driven financial inclusion services treat all users equally and does not discriminate based on personal characteristics unrelated to financial factors.
- I trust that the AI-driven financial inclusion services use reliable and unbiased data sources to make fair decisions.
- I believe the AI-driven financial inclusion services make impartial decisions without prejudice or favoritism.
Satisfaction with AI-Driven Financial Inclusion (Source: Shin and Park (2019) [42])
- Overall, I am satisfied with the AI-driven financial inclusion services I have experienced.
- The AI-driven financial inclusion services meet or exceed my expectations in terms of accessibility, efficiency, and fairness.
- I am pleased with the range and quality of services provided through AI-driven financial inclusion platforms.
Recommendation of AI-Driven Financial Inclusion (Source: Mukerjee (2020) [53])
- I will speak positively about the benefits and features of AI-driven financial inclusion services to others.
- I would recommend AI-driven financial inclusion services to someone seeking my advice on financial services.
- I will encourage my friends, family, and colleagues to consider using AI-driven financial inclusion services.

References

  1. Mhlanga, D. Industry 4.0 in finance: The impact of artificial intelligence (AI) on digital financial inclusion. Int. J. Financ. Stud. 2020, 8, 45. [Google Scholar] [CrossRef]
  2. Shin, D. User perceptions of algorithmic decisions in the personalized AI system: Perceptual evaluation of fairness, accountability, transparency, and explainability. J. Broadcast. Electron. Media 2020, 64, 541–565. [Google Scholar] [CrossRef]
  3. Martin, K.; Waldman, A. Are algorithmic decisions legitimate? The effect of process and outcomes on perceptions of legitimacy of AI decisions. J. Bus. Ethics 2023, 183, 653–670. [Google Scholar] [CrossRef]
  4. Colquitt, J.A. On the dimensionality of organizational justice: A construct validation of a measure. J. Appl. Psychol. 2001, 86, 386–400. [Google Scholar] [CrossRef] [PubMed]
  5. Todorov, A.; Chaiken, S.; Henderson, M.D. The heuristic-systematic model of social information processing. In The Persuasion Handbook: Developments in Theory and Practice; Dillard, J.P., Pfau, M., Eds.; Sage: Thousand Oaks, CA, USA, 2002; pp. 195–211. [Google Scholar] [CrossRef]
  6. Jejeniwa, T.O.; Mhlongo, N.Z.; Jejeniwa, T.O. AI solutions for developmental economics: Opportunities and challenges in financial inclusion and poverty alleviation. Int. J. Adv. Econ. 2024, 6, 108–123. [Google Scholar] [CrossRef]
  7. Uzougbo, N.S.; Ikegwu, C.G.; Adewusi, A.O. Legal accountability and ethical considerations of AI in financial services. GSC Adv. Res. Rev. 2024, 19, 130–142. [Google Scholar] [CrossRef]
  8. Yasir, A.; Ahmad, A.; Abbas, S.; Inairat, M.; Al-Kassem, A.H.; Rasool, A. How Artificial Intelligence Is Promoting Financial Inclusion? A Study on Barriers of Financial Inclusion. In Proceedings of the 2022 International Conference on Business Analytics for Technology and Security (ICBATS), Dubai, United Arab Emirates, 16 February 2022; pp. 1–6. [Google Scholar]
  9. Kshetri, N. The role of artificial intelligence in promoting financial inclusion in developing countries. J. Glob. Inf. Technol. Manag. 2021, 24, 1–6. [Google Scholar] [CrossRef]
  10. Max, R.; Kriebitz, A.; Von Websky, C. Ethical considerations about the implications of artificial intelligence in finance. In Handbook on Ethics in Finance; Springer: Cham, Switzerland, 2021; pp. 577–592. [Google Scholar] [CrossRef]
  11. Aldboush, H.H.; Ferdous, M. Building Trust in Fintech: An Analysis of Ethical and Privacy Considerations in the Intersection of Big Data, AI, and Customer Trust. Int. J. Financ. Stud. 2023, 11, 90. [Google Scholar] [CrossRef]
  12. Telukdarie, A.; Mungar, A. The impact of digital financial technology on accelerating financial inclusion in developing economies. Procedia Comput. Sci. 2023, 217, 670–678. [Google Scholar] [CrossRef]
  13. Ozili, P.K. Financial inclusion, sustainability and sustainable development. In Smart Analytics, Artificial Intelligence and Sustainable Performance Management in a Global Digitalised Economy; Springer: Cham, Switzerland, 2023; pp. 233–241. [Google Scholar] [CrossRef]
  14. Lee, C.C.; Lou, R.; Wang, F. Digital financial inclusion and poverty alleviation: Evidence from the sustainable development of China. Econ. Anal. Policy 2023, 77, 418–434. [Google Scholar] [CrossRef]
  15. Adeoye, O.B.; Addy, W.A.; Ajayi-Nifise, A.O.; Odeyemi, O.; Okoye, C.C.; Ofodile, O.C. Leveraging AI and data analytics for enhancing financial inclusion in developing economies. Financ. Account. Res. J. 2024, 6, 288–303. [Google Scholar] [CrossRef]
  16. Owolabi, O.S.; Uche, P.C.; Adeniken, N.T.; Ihejirika, C.; Islam, R.B.; Chhetri, B.J.T. Ethical implication of artificial intelligence (AI) adoption in financial decision making. Comput. Inf. Sci. 2024, 17, 49–56. [Google Scholar] [CrossRef]
  17. Mhlanga, D. The role of big data in financial technology toward financial inclusion. Front. Big Data 2024, 7, 1184444. [Google Scholar] [CrossRef]
  18. Akter, S.; McCarthy, G.; Sajib, S.; Michael, K.; Dwivedi, Y.K.; D’Ambra, J.; Shen, K.N. Algorithmic bias in data-driven innovation in the age of AI. Int. J. Inf. Manag. 2021, 60, 102387. [Google Scholar] [CrossRef]
  19. Ntoutsi, E.; Fafalios, P.; Gadiraju, U.; Iosifidis, V.; Nejdl, W.; Vidal, M.E.; Ruggieri, S.; Turini, F.; Papadopoulos, S.; Krasanakis, E.; et al. Bias in data-driven artificial intelligence systems—An introductory survey. WIREs Data Min. Knowl. Discov. 2020, 10, e1356. [Google Scholar] [CrossRef]
  20. Munoko, I.; Brown-Liburd, H.L.; Vasarhelyi, M. The ethical implications of using artificial intelligence in auditing. J. Bus. Ethics 2020, 167, 209–234. [Google Scholar] [CrossRef]
  21. Schönberger, D. Artificial intelligence in healthcare: A critical analysis of the legal and ethical implications. Int. J. Law Inf. Technol. 2019, 27, 171–203. [Google Scholar] [CrossRef]
  22. Agarwal, A.; Agarwal, H.; Agarwal, N. Fairness Score and process standardization: Framework for fairness certification in artificial intelligence systems. AI Ethics 2023, 3, 267–279. [Google Scholar] [CrossRef]
  23. Purificato, E.; Lorenzo, F.; Fallucchi, F.; De Luca, E.W. The use of responsible artificial intelligence techniques in the context of loan approval processes. Int. J. Hum.-Comput. Interact. 2023, 39, 1543–1562. [Google Scholar] [CrossRef]
  24. Greenberg, J. Organizational justice: Yesterday, today, and tomorrow. J. Manag. 1990, 16, 399–432. [Google Scholar] [CrossRef]
  25. Robert, L.P.; Pierce, C.; Marquis, L.; Kim, S.; Alahmad, R. Designing fair AI for managing employees in organizations: A review, critique, and design agenda. Hum.-Comput. Interact. 2020, 35, 545–575. [Google Scholar] [CrossRef]
  26. Novelli, C.; Taddeo, M.; Floridi, L. Accountability in artificial intelligence: What it is and how it works. AI Soc. 2023, 39, 1871–1882. [Google Scholar] [CrossRef]
  27. Busuioc, M. Accountable artificial intelligence: Holding algorithms to account. Public Adm. Rev. 2021, 81, 825–836. [Google Scholar] [CrossRef] [PubMed]
  28. Morse, L.; Teodorescu, M.H.M.; Awwad, Y.; Kane, G.C. Do the ends justify the means? Variation in the distributive and procedural fairness of machine learning algorithms. J. Bus. Ethics 2021, 181, 1083–1095. [Google Scholar] [CrossRef]
  29. Liu, Y.; Sun, X. Towards more legitimate algorithms: A model of algorithmic ethical perception, legitimacy, and continuous usage intentions of e-commerce platforms. Comput. Hum. Behav. 2024, 150, 108006. [Google Scholar] [CrossRef]
  30. Shin, D. Embodying algorithms, enactive artificial intelligence and the extended cognition: You can see as much as you know about algorithm. J. Inf. Sci. 2023, 49, 18–31. [Google Scholar] [CrossRef]
  31. Shin, D.; Zhong, B.; Biocca, F.A. Beyond user experience: What constitutes algorithmic experiences? Int. J. Inf. Manag. 2020, 52, 102061. [Google Scholar] [CrossRef]
  32. König, P.D.; Wenzelburger, G. The legitimacy gap of algorithmic decision-making in the public sector: Why it arises and how to address it. Technol. Soc. 2021, 67, 101688. [Google Scholar] [CrossRef]
  33. Cabiddu, F.; Moi, L.; Patriotta, G.; Allen, D.G. Why do users trust algorithms? A review and conceptualization of initial trust and trust over time. Eur. Manag. J. 2022, 40, 685–706. [Google Scholar] [CrossRef]
  34. Shulner-Tal, A.; Kuflik, T.; Kliger, D. Fairness, explainability and in-between: Understanding the impact of different explanation methods on non-expert users’ perceptions of fairness toward an algorithmic system. Ethics Inf. Technol. 2022, 24, 2. [Google Scholar] [CrossRef]
  35. Narayanan, D.; Nagpal, M.; McGuire, J.; Schweitzer, S.; De Cremer, D. Fairness perceptions of artificial intelligence: A review and path forward. Int. J. Hum.-Comput. Interact. 2024, 40, 4–23. [Google Scholar] [CrossRef]
  36. Grimmelikhuijsen, S. Explaining why the computer says no: Algorithmic transparency affects the perceived trustworthiness of automated decision-making. Public Adm. Rev. 2023, 83, 241–262. [Google Scholar] [CrossRef]
  37. Starke, C.; Baleis, J.; Keller, B.; Marcinkowski, F. Fairness perceptions of algorithmic decision-making: A systematic review of the empirical literature. Big Data Soc. 2022, 9, 1–16. [Google Scholar] [CrossRef]
  38. Qin, S.; Jia, N.; Luo, X.; Liao, C.; Huang, Z. Perceived fairness of human managers compared with artificial intelligence in employee performance evaluation. J. Manag. Inf. Syst. 2023, 40, 1039–1070. [Google Scholar] [CrossRef]
  39. Sonboli, N.; Smith, J.J.; Cabral Berenfus, F.; Burke, R.; Fiesler, C. Fairness and transparency in recommendation: The users’ perspective. In Proceedings of the 29th ACM Conference on User Modeling, Adaptation and Personalization, Utrecht, The Netherlands, 21–25 June 2021; pp. 274–279. [Google Scholar] [CrossRef]
  40. Shin, D.; Lim, J.S.; Ahmad, N.; Ibahrine, M. Understanding user sensemaking in fairness and transparency in algorithms: Algorithmic sensemaking in over-the-top platform. AI Soc. 2024, 39, 477–490. [Google Scholar] [CrossRef]
  41. Kieslich, K.; Keller, B.; Starke, C. Artificial intelligence ethics by design. Evaluating public perception on the importance of ethical design principles of artificial intelligence. Big Data Soc. 2022, 9, 1–15. [Google Scholar] [CrossRef]
  42. Shin, D.; Park, Y.J. Role of fairness, accountability, and transparency in algorithmic affordance. Comput. Hum. Behav. 2019, 98, 277–284. [Google Scholar] [CrossRef]
  43. Ababneh, K.I.; Hackett, R.D.; Schat, A.C. The role of attributions and fairness in understanding job applicant reactions to selection procedures and decisions. J. Bus. Psychol. 2014, 29, 111–129. [Google Scholar] [CrossRef]
  44. Ochmann, J.; Michels, L.; Tiefenbeck, V.; Maier, C.; Laumer, S. Perceived algorithmic fairness: An empirical study of transparency and anthropomorphism in algorithmic recruiting. Inf. Syst. J. 2024, 34, 384–414. [Google Scholar] [CrossRef]
  45. Wu, W.; Huang, Y.; Qian, L. Social trust and algorithmic equity: The societal perspectives of users’ intention to interact with algorithm recommendation systems. Decis. Support Syst. 2024, 178, 114115. [Google Scholar] [CrossRef]
  46. Bambauer-Sachse, S.; Young, A. Consumers’ intentions to spread negative word of mouth about dynamic pricing for services: Role of confusion and unfairness perceptions. J. Serv. Res. 2023, 27, 364–380. [Google Scholar] [CrossRef]
  47. Schinkel, S.; van Vianen, A.E.; Ryan, A.M. Applicant reactions to selection events: Four studies into the role of attributional style and fairness perceptions. Int. J. Sel. Assess. 2016, 24, 107–118. [Google Scholar] [CrossRef]
  48. Yun, J.; Park, J. The effects of chatbot service recovery with emotion words on customer satisfaction, repurchase intention, and positive word-of-mouth. Front. Psychol. 2022, 13, 922503. [Google Scholar] [CrossRef] [PubMed]
  49. Jo, H. Understanding AI tool engagement: A study of ChatGPT usage and word-of-mouth among university students and office workers. Telemat. Inform. 2023, 85, 102067. [Google Scholar] [CrossRef]
  50. Li, Y.; Ma, X.; Li, Y.; Li, R.; Liu, H. How does platform’s fintech level affect its word of mouth from the perspective of user psychology? Front. Psychol. 2023, 14, 1085587. [Google Scholar] [CrossRef] [PubMed]
  51. Barbu, C.M.; Florea, D.L.; Dabija, D.C.; Barbu, M.C.R. Customer experience in fintech. J. Theor. Appl. Electron. Commer. Res. 2021, 16, 1415–1433. [Google Scholar] [CrossRef]
  52. Shin, D. Why does explainability matter in news analytic systems? Proposing explainable analytic journalism. Journal. Stud. 2021, 22, 1047–1065. [Google Scholar] [CrossRef]
  53. Mukerjee, K. Impact of self-service technologies in retail banking on cross-buying and word-of-mouth. Int. J. Retail Distrib. Manag. 2020, 48, 485–500. [Google Scholar] [CrossRef]
  54. Hair, J.F.; Black, W.C.; Babin, B.J.; Anderson, R.; Tatham, R. Multivariate Data Analysis, 6th ed.; Pearson Prentice-Hall: Upper Saddle River, NJ, USA, 2006. [Google Scholar]
  55. Hair, J.F.; Gabriel, M.; Patel, V. AMOS covariance-based structural equation modeling (CBSEM): Guidelines on its application as a marketing research tool. Braz. J. Mark. 2014, 13, 44–55. [Google Scholar]
  56. Raza, S.A.; Qazi, W.; Khan, K.A.; Salam, J. Social isolation and acceptance of the learning management system (LMS) in the time of COVID-19 pandemic: An expansion of the UTAUT model. J. Educ. Comput. Res. 2021, 59, 183–208. [Google Scholar] [CrossRef]
  57. Fornell, C.; Larcker, D.F. Structural equation models with unobservable variables and measurement error: Algebra and statistics. J. Mark. Res. 1981, 18, 382–388. [Google Scholar] [CrossRef]
  58. Podsakoff, P.M.; Organ, D.W. Self-reports in organizational research: Problems and prospects. J. Manag. 1986, 12, 531–544. [Google Scholar] [CrossRef]
  59. Podsakoff, P.M.; MacKenzie, S.B.; Lee, J.Y.; Podsakoff, N.P. Common method biases in behavioral research: A critical review of the literature and recommended remedies. J. Appl. Psychol. 2003, 88, 879–903. [Google Scholar] [CrossRef] [PubMed]
  60. Newman, D.T.; Fast, N.J.; Harmon, D.J. When eliminating bias isn’t fair: Algorithmic reductionism and procedural justice in human resource decisions. Organ. Behav. Hum. Decis. Process. 2020, 160, 149–167. [Google Scholar] [CrossRef]
  61. Birzhandi, P.; Cho, Y.S. Application of fairness to healthcare, organizational justice, and finance: A survey. Expert Syst. Appl. 2023, 216, 119465. [Google Scholar] [CrossRef]
  62. Chen, S.; Chaiken, S. The heuristic-systematic model in its broader context. In Dual-Process Theories in Social Psychology; Chaiken, S., Trope, Y., Eds.; Guilford Press: New York, NY, USA, 1999; pp. 73–96. [Google Scholar]
  63. Shi, S.; Gong, Y.; Gursoy, D. Antecedents of trust and adoption intention toward artificially intelligent recommendation systems in travel planning: A heuristic-systematic model. J. Travel Res. 2021, 60, 1714–1734. [Google Scholar] [CrossRef]
  64. Belanche, D.; Casaló, L.V.; Flavián, C. Artificial Intelligence in FinTech: Understanding robo-advisors adoption among customers. Ind. Manag. Data Syst. 2019, 119, 1411–1430. [Google Scholar] [CrossRef]
  65. Bao, L.; Krause, N.M.; Calice, M.N.; Scheufele, D.A.; Wirz, C.D.; Brossard, D.; Newman, T.P.; Xenos, M.A. Whose AI? How different publics think about AI and its social impacts. Comput. Hum. Behav. 2022, 130, 107182. [Google Scholar] [CrossRef]
  66. Khogali, H.O.; Mekid, S. The blended future of automation and AI: Examining some long-term societal and ethical impact features. Technol. Soc. 2023, 73, 102232. [Google Scholar] [CrossRef]
Figure 1. Research model.
Figure 2. Standardized path coefficients and the significance levels for each hypothesis. *** p < 0.001.
Table 1. Demographics of respondents.

| Category | Item | N | % |
|---|---|---|---|
| Gender | Male | 385 | 57% |
| | Female | 290 | 43% |
| Age | ≤20 | 90 | 13.3% |
| | 21–30 | 274 | 40.6% |
| | 31–40 | 160 | 23.7% |
| | 41–50 | 98 | 14.5% |
| | 51–60 | 46 | 6.8% |
| | ≥61 | 7 | 1.1% |
| Education | High school and below | 27 | 4% |
| | College | 93 | 13.8% |
| | Bachelor | 397 | 58.8% |
| | Master and above | 158 | 23.4% |
| Monthly income (RMB) | Less than 5000 | 389 | 57.6% |
| | 5000–10,000 | 227 | 33.6% |
| | More than 10,000 | 59 | 8.8% |
| Experience of using AI-driven financial inclusion services | Less than 6 months | 75 | 11.1% |
| | 6 months–1 year | 201 | 29.8% |
| | More than 1 year | 399 | 59.1% |
| Residential area | First-tier city | 257 | 38.1% |
| | Second-tier city | 271 | 40.1% |
| | Third-tier city | 100 | 14.8% |
| | Fourth-tier city | 30 | 4.4% |
| | Fifth-tier city and others | 17 | 2.5% |
Table 2. Factor loadings, Cronbach’s α values, AVE, and CR.

| Constructs | Items | Item Loadings | Cronbach’s α | AVE | CR |
|---|---|---|---|---|---|
| Algorithm Transparency | AT1 | 0.805 | 0.829 | 0.62 | 0.83 |
| | AT2 | 0.793 | | | |
| | AT3 | 0.763 | | | |
| Algorithm Accountability | AA1 | 0.77 | 0.801 | 0.578 | 0.803 |
| | AA2 | 0.838 | | | |
| | AA3 | 0.662 | | | |
| Algorithm Legitimacy | AL1 | 0.831 | 0.813 | 0.595 | 0.814 |
| | AL2 | 0.753 | | | |
| | AL3 | 0.726 | | | |
| Perceived Algorithmic Fairness | PAF1 | 0.756 | 0.816 | 0.598 | 0.817 |
| | PAF2 | 0.808 | | | |
| | PAF3 | 0.755 | | | |
| Satisfaction | SAT1 | 0.772 | 0.814 | 0.595 | 0.815 |
| | SAT2 | 0.788 | | | |
| | SAT3 | 0.753 | | | |
| Recommendation | REC1 | 0.731 | 0.838 | 0.625 | 0.832 |
| | REC2 | 0.894 | | | |
| | REC3 | 0.735 | | | |
Note. AVE: average variance extracted; CR: composite reliability; AT: Algorithm Transparency; AA: Algorithm Accountability; AL: Algorithm Legitimacy; PAF: Perceived Algorithmic Fairness; SAT: Satisfaction with AI-Driven Financial Inclusion; REC: Recommendation of AI-Driven Financial Inclusion.
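For readers unfamiliar with how the reliability statistics in Table 2 are conventionally derived, the standard definitions compute AVE and CR from the standardized item loadings λ_i of the k items measuring a construct. This is a general formula sketch of the usual procedure (cf. Fornell and Larcker [57]), not a restatement of the authors’ exact software output:

```latex
\mathrm{AVE} \;=\; \frac{\sum_{i=1}^{k}\lambda_i^{2}}{k},
\qquad
\mathrm{CR} \;=\; \frac{\bigl(\sum_{i=1}^{k}\lambda_i\bigr)^{2}}
{\bigl(\sum_{i=1}^{k}\lambda_i\bigr)^{2} + \sum_{i=1}^{k}\bigl(1-\lambda_i^{2}\bigr)}
```

As a check, Algorithm Transparency (loadings 0.805, 0.793, 0.763) yields AVE ≈ 0.62 and CR ≈ 0.83, which agrees with the values reported in Table 2.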
Table 3. Discriminant validity.

| | AT | AA | AL | PAF | SAT | REC |
|---|---|---|---|---|---|---|
| AT | 0.787 | | | | | |
| AA | 0.513 ** | 0.76 | | | | |
| AL | 0.515 ** | 0.525 ** | 0.771 | | | |
| PAF | 0.469 ** | 0.446 ** | 0.483 ** | 0.773 | | |
| SAT | 0.336 ** | 0.330 ** | 0.352 ** | 0.527 ** | 0.771 | |
| REC | 0.364 ** | 0.339 ** | 0.354 ** | 0.549 ** | 0.542 ** | 0.791 |
Note. Diagonal values are the square roots of the AVE; ** p < 0.01; AT: Algorithm Transparency; AA: Algorithm Accountability; AL: Algorithm Legitimacy; PAF: Perceived Algorithmic Fairness; SAT: Satisfaction with AI-Driven Financial Inclusion; REC: Recommendation of AI-Driven Financial Inclusion.
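Table 3 applies the Fornell–Larcker criterion [57]: the square root of each construct’s AVE (the diagonal entry) should exceed that construct’s correlation with every other construct. Stated as a simple condition:

```latex
\sqrt{\mathrm{AVE}_i} \;>\; r_{ij} \quad \text{for all } j \neq i
```

For example, the diagonal value for AT (0.787) exceeds its largest inter-construct correlation (0.515 with AL), supporting discriminant validity.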
Table 4. Hypothesis test results.

| Hypotheses | Path | β | p-Value | R² | Remarks |
|---|---|---|---|---|---|
| H1 | AT → PAF | 0.280 | <0.001 | 30.7% | Supported |
| H2 | AA → PAF | 0.239 | <0.001 | | Supported |
| H3 | AL → PAF | 0.383 | <0.001 | | Supported |
| H4 | PAF → SAT | 0.572 | <0.001 | 37.8% | Supported |
| H5 | PAF → REC | 0.470 | <0.001 | 52.5% | Supported |
| H6 | SAT → REC | 0.276 | <0.001 | | Supported |
Note. AT: Algorithm Transparency; AA: Algorithm Accountability; AL: Algorithm Legitimacy; PAF: Perceived Algorithmic Fairness; SAT: Satisfaction with AI-Driven Financial Inclusion; REC: Recommendation of AI-Driven Financial Inclusion.
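Written out as structural equations, the hypothesized paths in Table 4 take the following standardized form. This is a notational restatement of the research model and reported coefficients, not an additional analysis; the ζ terms denote the endogenous constructs’ disturbances:

```latex
\begin{aligned}
\mathrm{PAF} &= 0.280\,\mathrm{AT} + 0.239\,\mathrm{AA} + 0.383\,\mathrm{AL} + \zeta_1 && (R^2 = 30.7\%)\\
\mathrm{SAT} &= 0.572\,\mathrm{PAF} + \zeta_2 && (R^2 = 37.8\%)\\
\mathrm{REC} &= 0.470\,\mathrm{PAF} + 0.276\,\mathrm{SAT} + \zeta_3 && (R^2 = 52.5\%)
\end{aligned}
```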
Table 5. Model fit.

| Fit Indices | χ²/df | GFI | AGFI | NFI | CFI | PGFI | RMR | RMSEA |
|---|---|---|---|---|---|---|---|---|
| Recommended value | <3.0 | >0.9 | >0.8 | >0.9 | >0.9 | >0.6 | <0.08 | <0.08 |
| Actual value | 2.664 | 0.952 | 0.931 | 0.947 | 0.966 | 0.668 | 0.027 | 0.05 |
Table 6. Mediating effect of perceived algorithmic fairness.

| Path | Mediating Effect | LLCI (Bootstrap 95% CI) | ULCI (Bootstrap 95% CI) |
|---|---|---|---|
| AT → PAF → SAT | 0.2134 *** | 0.1478 | 0.2861 |
| AA → PAF → SAT | 0.2267 *** | 0.1569 | 0.3018 |
| AL → PAF → SAT | 0.2129 *** | 0.1450 | 0.2819 |
| AT → PAF → REC | 0.2131 *** | 0.1468 | 0.2838 |
| AA → PAF → REC | 0.2313 *** | 0.1585 | 0.3100 |
| AL → PAF → REC | 0.2196 *** | 0.1511 | 0.2919 |
Note. A bootstrap 95% confidence interval (CI) that does not contain 0 indicates that the mediator has a significant mediating effect between X and Y. *** p < 0.001; AT: Algorithm Transparency; AA: Algorithm Accountability; AL: Algorithm Legitimacy; PAF: Perceived Algorithmic Fairness; SAT: Satisfaction with AI-Driven Financial Inclusion; REC: Recommendation of AI-Driven Financial Inclusion.
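The confidence intervals in Table 6 are of the kind produced by percentile bootstrapping of an indirect effect (a × b). The snippet below is a minimal illustrative sketch of that general procedure in Python/NumPy; the function name, resampling count, and OLS estimation of the a and b paths are assumptions made for illustration, not the authors’ actual analysis pipeline (which relied on covariance-based SEM).

```python
import numpy as np

def bootstrap_indirect_effect(x, m, y, n_boot=5000, seed=42):
    """Percentile-bootstrap 95% CI for the indirect effect x -> m -> y.

    Illustrative sketch only: a = slope of m regressed on x; b = slope of y
    regressed on m while controlling for x; the indirect effect is a * b.
    """
    x, m, y = map(np.asarray, (x, m, y))
    rng = np.random.default_rng(seed)
    n = len(x)
    effects = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)                        # resample cases with replacement
        xb, mb, yb = x[idx], m[idx], y[idx]
        a = np.polyfit(xb, mb, 1)[0]                       # slope of m ~ x
        X = np.column_stack([np.ones(n), xb, mb])          # design matrix for y ~ x + m
        b = np.linalg.lstsq(X, yb, rcond=None)[0][2]       # coefficient of m
        effects[i] = a * b
    llci, ulci = np.percentile(effects, [2.5, 97.5])       # 95% percentile CI bounds
    return effects.mean(), llci, ulci
```

Applied to composite scores for, say, AT, PAF, and SAT, an interval such as [0.1478, 0.2861] that excludes zero would indicate a significant indirect effect, consistent with the first row of Table 6.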
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
