Article

Model Retraining: Predicting the Likelihood of Financial Inclusion in Kiva’s Peer-to-Peer Lending to Promote Social Impact

1 Computer Science Department, Capitol Technology University, Laurel, MD 20708, USA
2 Computer Science Department, Grambling State University, Grambling, LA 71245, USA
* Author to whom correspondence should be addressed.
Algorithms 2023, 16(8), 363; https://doi.org/10.3390/a16080363
Submission received: 17 June 2023 / Revised: 10 July 2023 / Accepted: 21 July 2023 / Published: 28 July 2023

Abstract

The purpose of this study is to show how machine learning can be leveraged as a tool to govern social impact and drive fair and equitable investments. Many organizations today are establishing financial inclusion goals to promote social impact and have been increasing their investments in this space. Financial inclusion is the opportunity for individuals and businesses to have access to affordable financial products including loans, credit, and insurance that they may otherwise not have access to with traditional financial institutions. Peer-to-peer (P2P) lending serves as a platform that can support and foster financial inclusion and influence social impact and is becoming more popular today as a resource to underserved communities. Loans issued through P2P lending can fund projects and initiatives focused on climate change, workforce diversity, women’s rights, equity, labor practices, natural resource management, accounting standards, carbon emissions, and several other areas. With this in mind, AI can be a powerful governance tool to help manage risks and promote opportunities for an organization’s financial inclusion goals. In this paper, we explore how AI, specifically machine learning, can help manage the P2P platform Kiva’s investment risks and deliver impact, emphasizing the importance of prediction model retraining to account for regulatory and other changes across the P2P landscape to drive better decision-making. As part of this research, we also explore how changes in important model variables affect aggregate model predictions.

1. Introduction

The set of use cases for the application of AI continues to grow rapidly. P2P lending is another area where researchers have begun to explore and apply AI. Examples of the use of AI in the financial industry have been highlighted throughout various research, including the use of AI to (1) avoid challenges with related data, such as analyzing complex and large volumes of financial data and processing information at scale with tools such as natural language processing (NLP); to (2) automate credit and loan application decision-making; and to (3) develop deep learning models that deliver increased insights to help address risks of potentially biased decision-making and establish predictive models to help quantify future opportunities, measure outcomes, and increase investments in accountable, responsible, and equitable ways. The latter example is where we focus our research in the area of P2P lending. More specifically, we explore the probability of a successful loan application in P2P lending to fund several social-impact-related activities (food, agriculture, housing, women, refugees, and education) and develop a framework to help companies manage risks with bias and decision-making while increasing investment opportunities, all driven by predictive modeling and model retraining. The idea of model retraining is to make certain that models reflect current data and changes in the underlying environment in order to provide the most up-to-date predictions. As the underlying business environment changes, the accuracy of machine learning models can decline relative to their performance during the testing phase. This concept is known as model drift, referring to the weakening of model performance over time.
Many investments have led to social impact across the world today, from Kiara Nirghin’s research and studies, which provide hope for increasing food security across the globe; to Katherine Johnson’s expertise as a mathematician that advanced U.S. space exploration; to Marcia Barbosa, the physicist whose work on the complex structures of water molecules could help solve water shortage concerns [1]; and Hedy Lamarr, who is credited with communication techniques said to be the foundation of the wireless technologies we experience today [2]. These accomplishments, coupled with so many other contributions made by women to date, continue to highlight the need to provide resources and increase investments in research to deliver impact globally, much of which contributes to environmental, social, and governance (ESG) areas including environmental justice. While there has been notable progress, research to date continues to show that funding for women-owned small businesses is not equivalent to the resources and funding received by men-led start-ups, which has been attributed to structural inequalities and persistent biases [3]. To help address this known gap, many organizations are intentionally creating and increasing access to funding and resources for women-led start-ups, including grants, loans, mentoring, education, and revenue-generating opportunities. In fact, as part of organizations’ aspirational ESG goals, the establishment of funding platforms that give underserved communities, including women, access to capital through an application process is on the rise.
Organizations that are focused on addressing the gap in funding allocated to underserved start-ups include IFundWomen, Visa, Grants for Women, the Small Business Administration, Open Meadows Foundation, the Cartier Women’s Initiative Award, and many others [4]. One organization in particular, Kiva, an international non-profit, is also focused on increasing financial access to help under-resourced communities flourish. Kiva accomplishes this objective through crowdfunding loans—the use of small amounts of capital from a large number of individuals and/or groups to help finance new business ideas for many under-served cohorts including women [5]. To date, 81% of Kiva’s borrowers are women, with an overall repayment rate across all populations of borrowers of 96.4%. Kiva noted in their 2021 annual impact report that women across the globe have less access to fair credit, with 46% of men having access to financial services whereas only 27% of women do [6]. Kiva’s global reach spans from the United States to Europe and the Middle East, to Latin America and the Caribbean, to Africa, to Asia and Oceania—impacting over 4.5 M lives since its establishment and touching at least 10 sectors including food, agriculture, housing, health, and education [6]. Organizations like Kiva continue to measure and highlight their social impact, and today, in some cases, organizations are monitoring financial inclusion through independent ESG rating agencies that assess risks and uncover issues along with opportunities.
For example, private equity investments that incorporate ESG aspects are becoming more and more common as investors look to align their investments with their beliefs and reduce the risks associated with ESG concerns. Using mathematical equations and references, the following is a basic description of how ESG issues can be incorporated into private equity investments:
Scoring ESG: Evaluating the portfolio firms’ ESG performance is the first stage in incorporating ESG criteria into private equity investments. This can be accomplished by employing ESG scoring models, which assign organizations scores based on their ESG performance. An ESG score can be expressed mathematically as follows [7,8]:
ESG Score = w1 ∗ E + w2 ∗ S + w3 ∗ G
where E, S, and G are the scores for environmental, social, and governance factors, respectively, and w1, w2, and w3 are the weights assigned to each factor.
Integrating ESG into valuation: The ESG scores can then be incorporated into the financial models used to evaluate potential investments. The mathematical expression for an ESG-integrated financial model can be written as:
V = (1 + r) ∗ (1 − L) ∗ (1 − C) ∗ (1 − ESG)
where V is the value of the investment, r is the expected return, L is the expected loss, C is the expected cost, and ESG is the ESG score. The ESG score is subtracted from 1 to reflect the negative impact of poor ESG performance on the investment value [9,10].
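To make the two formulas concrete, the following Python sketch evaluates them with hypothetical weights and factor scores; the numbers are illustrative only and are not drawn from any rating methodology.
```python
# Illustrative evaluation of the ESG scoring and ESG-integrated valuation formulas above.
# All weights and input values are hypothetical and chosen only to show the arithmetic.

def esg_score(E, S, G, w1=0.4, w2=0.35, w3=0.25):
    """ESG Score = w1*E + w2*S + w3*G (factor scores and weights assumed to lie in [0, 1])."""
    return w1 * E + w2 * S + w3 * G

def esg_integrated_value(r, L, C, esg):
    """V = (1 + r) * (1 - L) * (1 - C) * (1 - ESG)."""
    return (1 + r) * (1 - L) * (1 - C) * (1 - esg)

score = esg_score(E=0.2, S=0.3, G=0.1)                 # 0.4*0.2 + 0.35*0.3 + 0.25*0.1 = 0.21
value = esg_integrated_value(r=0.08, L=0.02, C=0.01, esg=score)
print(round(score, 3), round(value, 3))                # 0.21 and roughly 0.828
```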
Kiva, as described above, has a vast amount of publicly available (non-personally identifiable) profile data that we leverage using AI to help more broadly establish a sustainable governance framework that offers increased insights, continuous testing for biases, and predictive modeling on timely funding and allocations for Kiva applicants, and that can be expanded for use in other P2P lending institutions. In our previous research, we conducted a process evaluation for scholarship award decision-making utilizing a sample dataset and refined prior algorithms to determine whether our preliminary findings were advanced when examining the types of responses that increase or decrease a student’s chance of receiving a scholarship from a non-profit organization focused on the success of diverse students and professionals [11]. We ultimately provided stakeholders with a set of algorithms that lay the foundation for future automation of the scholarship award decision-making process, supporting this non-profit’s aspirational goals to drive equitable outcomes in higher education. In this research, we introduce the concept of model retraining to drive sustainable goals relative to equitable outcomes for P2P funding. This is especially important given the evolving regulations, laws, and rules regarding equity crowdfunding.

Significance of Research

We focus on addressing the following questions in this research:
(1)
How can a company (specifically in the peer-to-peer lending industry) monitor financial inclusion, given the wide range of investment opportunities to promote social responsibility, with the use of AI?
(2)
How can model retraining contribute to a risk management framework that helps detect issues including bias with social impact investments?
Many organizations are using AI-enabled platforms to automate their P2P lending processes, including data mining, credit scoring, loan decision-making, and predicting loan defaults. This research goes a level deeper: it studies a collective set of select AI algorithms and develops a model to predict the likelihood of loans being funded within a given timeframe based on the nature of the loan request, examined through feature variables. Specifically, this research explores, provides transparency into, and emphasizes the importance of activities ranging from pre-processing (such as data cleansing) to model retraining to increase reliance on the data and the accuracy of the models. With respect to model retraining, the research expands on the need to retrain models to achieve healthy outputs that reflect the current environment. Finally, the research explores the use of several machine learning algorithms, comparing key characteristics of the algorithms to examine model performance (e.g., time to train the model, precision, accuracy). The research ultimately highlights the opportunity to assess ESG factors such as financial inclusion (e.g., gender, sector), particularly in P2P lending, and can help detect areas such as behavioral bias or uncover issues that are not aligned with lending expectations.
To help unpack this research, we have structured the remainder of this paper as follows: Section 2 (Materials and Methods) defines different types of crowdfunding and includes examples of AI and peer-to-peer lending, existing challenges, and the AI connection to broader social impact; this section also discusses predictive models and retraining. Section 3 (Results) includes an analysis of several types of AI algorithms applied to the data from Kiva and the outcomes, which can ultimately inform a risk management framework to drive continuous monitoring of lending behaviors. In Section 4 (Discussion), we highlight different tools that can be leveraged today to detect bias. We wrap up with our conclusions in Section 5.

2. Materials and Methods

I.
CROWDFUNDING
Crowdfunding is a popular method of raising money for a range of initiatives, goods, and services. By optimizing various factors, such as audience targeting, funding goals, and success prediction, artificial intelligence (AI) has the potential to improve the efficiency and effectiveness of crowdfunding campaigns. Here are some mathematical models and references regarding AI-powered crowdfunding:
Predicting campaign success with machine learning: Crowdfunding campaign success has been predicted using machine learning algorithms such as Random Forest and Support Vector Machines (SVMs), based on factors such as campaign duration, funding target, number of backers, and social media activity [12].
Random Forest: Random Forest is an ensemble learning algorithm that combines multiple decision trees to improve the accuracy and stability of the predictions. The mathematical expression for Random Forest can be written as follows [13,14]:
Let D = {(x_1, y_1), (x_2, y_2), …, (x_N, y_N)} be a dataset with N instances, where x_i is the feature vector for the i-th instance and y_i is the corresponding label. A Random Forest model consists of K decision trees T_k, where each tree is trained on a random subset D′ of the dataset D and a random subset of the features F′ = {f_1, f_2, …, f_M′}:
T_k = DecisionTree(D′, F′)
For each instance x_i, the predicted label y_hat_i is obtained by taking the majority vote of the K decision trees:
y_hat_i = argmax_j {1/K ∗ sum_k I(T_k(x_i) = j)}
where I() is the indicator function and T_k(x_i) is the predicted label of the i-th instance by the k-th decision tree.
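The following sketch mirrors this formulation using scikit-learn decision trees on synthetic data; the bootstrap sampling, feature subsetting, and majority vote are a simplified illustration of the equations above, not the exact configuration used later in this study.
```python
# Minimal sketch of the Random Forest majority vote described above (illustrative data).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
rng = np.random.default_rng(0)
K = 25  # number of trees T_k

trees = []
for _ in range(K):
    idx = rng.integers(0, len(X), len(X))               # random subset D' (bootstrap sample)
    tree = DecisionTreeClassifier(max_features="sqrt")  # random feature subset F' at each split
    trees.append(tree.fit(X[idx], y[idx]))

# y_hat_i = argmax_j (1/K) * sum_k I(T_k(x_i) = j): majority vote across the K trees
votes = np.stack([t.predict(X) for t in trees])         # shape (K, N)
y_hat = (votes.mean(axis=0) >= 0.5).astype(int)         # majority vote for binary labels
print("ensemble training accuracy:", (y_hat == y).mean())
```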
Support Vector Machines (SVMs): Support Vector Machines (SVMs) are a supervised learning algorithm used for classification and regression tasks. SVM finds the hyperplane that maximally separates the data points of different classes in a high-dimensional space. The mathematical expression for SVM can be written as follows [15,16]:
Given a training dataset D = {(x_1, y_1), (x_2, y_2), …, (x_N, y_N)}, where x_i is the feature vector for the i-th instance and y_i is the corresponding label (+1 or −1 for binary classification), SVM finds the hyperplane w^T x + b = 0 that maximizes the margin between the two closest points of different classes. The margin is defined as the distance between the hyperplane and the closest points and is given by:
margin = 2/‖w‖
subject to the constraints:
y_i (w^T x_i + b) ≥ 1 for all i
where ‖w‖ is the Euclidean norm of the weight vector w. The optimization problem can be formulated as:
minimize ‖w‖^2/2.
subject to y_i (w^T x_i + b) ≥ 1 for all i.
The solution can be found using Lagrange multipliers, which gives rise to the dual optimization problem:
maximize sum_i alpha_i − 1/2 sum_i sum_j alpha_i alpha_j y_i y_j x_i^T x_j
subject to 0 ≤ alpha_i ≤ C for all i
and sum_i alpha_i y_i = 0
where alpha_i are the Lagrange multipliers, C is the regularization parameter, and x_i^T x_j is the inner product of the feature vectors x_i and x_j.
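As a brief illustration, the sketch below fits a linear SVM with scikit-learn on synthetic data and recovers the margin 2/‖w‖ from the learned weight vector; the data and parameter values are assumptions made only for the example.
```python
# Minimal sketch of a linear SVM and its margin 2/||w|| (illustrative data).
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.svm import SVC

X, y = make_blobs(n_samples=200, centers=2, random_state=0)
clf = SVC(kernel="linear", C=1.0)    # C is the regularization parameter from the dual problem
clf.fit(X, y)

w = clf.coef_[0]                     # weight vector w of the hyperplane w^T x + b = 0
b = clf.intercept_[0]
print("margin 2/||w||:", 2 / np.linalg.norm(w))
print("support vectors per class:", clf.n_support_)
```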
Using Bayesian networks to target the right audience: Based on their connections in social networks and past behavior, Bayesian networks have been used to pinpoint the most influential backers for crowdfunding campaigns [17]. Detecting fraud with deep learning: Deep learning models, such as Convolutional Neural Networks (CNNs) and Long Short-Term Memory (LSTM) networks, have been used to analyze campaign content and social media activity to identify fraudulent crowdfunding projects [18,19,20].
Agent-based models have been used to simulate supporter behavior and forecast the results of crowdfunding campaigns based on variables including the funding goal, incentive structure, and social network effects [12].
By forecasting success, optimizing funding targets, targeting the right audience, detecting fraud, and modeling supporter behavior, these mathematical models and AI techniques can increase the effectiveness and efficiency of crowdfunding campaigns. Crowdfunding platforms and campaign managers can use AI to make data-driven decisions that enhance campaign results and help them meet their fundraising targets.
In the simplest terms, crowdfunding is a community of people putting up money to support a project or a specific cause. There are four types of crowdfunding, of which three are related to raising capital for small businesses or start-ups [21].
(a)
Rewards-based crowdfunding: With this type of crowdfunding, an investor provides an online contribution in return for a reward. This can include providing a product that was launched with the funding for free or at a discount.
(b)
Equity crowdfunding: With this type of crowdfunding, investors support the goal of raising capital online in exchange for a percentage of equity ownership in the business itself.
(c)
Peer-to-peer (P2P) lending: Similar to acquiring a bank loan, this type of lending instead comes from an individual as opposed to a financial institution. The loan is expected to be repaid over a certain time.
(d)
Donations: Similar to GoFundMe, this type of crowdfunding allows individuals or groups to benefit from funding (without repayment) to support an individual cause or project.
Our research focuses on peer-to-peer funding with the assistance of Kiva’s data.
There is an increased demand for investments to fulfill social impact goals, providing many benefits to peer-to-peer lending given the societal impact resulting from such loans. Whether investing in renewable energy projects, creating programs and resources to increase diversity amongst youth and their interest in STEM, or creating programs to address food shortages or a shortage of affordable housing, P2P lending shows promise to those entrepreneurs who wish to contribute and leverage non-traditional sources of funding. One example is LenderKit, which offers the opportunity to launch a customized ESG crowdfunding platform, promoting environmental crowdfunding, socially responsible crowdfunding, and impact investing platforms. With the emerging use of P2P platforms, especially to help drive ESG goals, how does a company implement continuous monitoring procedures to mitigate related risks while promoting the promise of positive global impact?
On a broader ESG level, there is power in pairing ethical AI and financial inclusion. For example, AI solutions can collect and analyze large volumes of data related to ESG risks and opportunities by assessing real-time company impact as well as gaps and discrepancies across several ESG factors. AI can be used to analyze market narratives and conduct sentiment analysis. AI can also be used to assess a company’s compliance with related ESG regulations and disclosures. Furthermore, AI can be applied to publicly available data to help predict material ESG factors and inform future funding needs.
Specific applications of AI to promote social impact initiatives include:
(1)
Natural language processing (NLP): a model trained to read the transcripts of a company’s quarterly earnings and public investment meetings can analyze the CEO’s choice of words, assess which parts of the discussion focus on social-justice-related topics, and, ultimately, develop an understanding of a company’s commitment to ESG factors.
(2)
Peer-to-peer lending has resulted in widely known, impactful societal outcomes across the globe. Challenges still exist in the P2P market, including limited information to assess creditworthiness, volumes of applicant data for humans to review to inform decision-making, fraudulent activities, data privacy, biased decisions, loan defaults, the feasibility of lending platforms collecting information and processing applications in a timely manner, and the pace of evolving regulation. Fortunately, as with social impact, many of these challenges can be addressed with the use of AI, and existing research supports this view. There are several examples demonstrating how AI can play a key and evolving role in peer-to-peer lending markets.
(3)
In the Indian P2P market, Kanwal Anil and Anil Misra explore the use of AI, concluding that implementing AI capabilities would make core financial processes faster and more secure, including the underwriting phase (which at the time was very manual), and offering predictive intelligence as a framework to inform process efficiency, cost optimization, and client engagement [22].
(4)
Turiel and Aste researched the use of AI in the P2P loan acceptance process and default prediction and recommend an automated approach to predict loan defaults and amplify the opportunity to transform the credit screening process [23].
(5)
In research conducted by Klimowicz and Spirzewski, logistic regression is used to automatically build a credit scorecard to inform P2P lending [24].
(6)
Niu Beibei researched P2P lending platforms, specifically the integration of social network information using machine learning to build more effective credit scoring models. To predict the likelihood of a loan default, several machine learning algorithms were used, including random forest, LightGBM, and AdaBoost, along with logistic regression, to understand whether any correlation exists between social network information and loan default [25].
II.
APPLICATION OF AI ALGORITHMS ON KIVA’S FUNDING DATA
Utilizing the public loan dataset from Kiva, which consists of over 2 M observations with over 30 attributes spanning the time frame 2006 through 2021, an important set of pre-processing activities was performed. Next, AI algorithms were applied to extract unique insights into donor behaviors and to inform a governance framework that many P2P platforms can leverage to promote increased funding for ESG-related activities in a timely manner through a predictive lens. Additionally, this research emphasizes the criticality of ML model retraining.
As P2P policies and regulations as well as data collection activities evolve over time, model retraining becomes necessary to reduce bias and negative effects on model performance. Figure 1 below shows the distribution of loans over the period 2006–2021. Some of the data attributes included in the Kiva data are:
  • Loan ID
  • Loan Name
  • Funded Amount
  • Loan Use
  • Sector Name
  • Currency
  • Posted Time
  • Planned Expiration Time
  • Raised Time
  • Tags
  • Borrower Information
  • Status
The following steps were taken to guide this research using Kiva’s loan dataset:
(1)
Given the nature and range of values in Kiva’s raw data, preprocessing activities were performed to account for missing data, potential data quality issues, loans with multiple requestors, data attributes that were added to Kiva’s existing dataset since data inception, and changes in Kiva’s lending policies over time. To address some of this, data cleansing and formatting are applied to the loan dataset.
(2)
Furthermore, several data attributes consist of continuous features that are on different scales, making this a critical area to address before we can apply machine learning techniques to reduce bias in the model. To address this, data scaling, as depicted in Figure 2 below, is performed to incorporate standardization. We also use encoding to transform our categorical/text data into numerical data given that most machine learning models can only interpret numerical data [26].
(3)
To create the target variable “Posted Time Plus Seven Days”, the existing data attribute “Posted Time” is used, and 7 days are added to this value. To create the target variable “Raised by Seven Days”, we denote “True” or “False” depending on whether the existing attribute “Raised Time” is less than “Posted Time Plus Seven Days”. These calculations are important for the model to ultimately predict whether a loan will be funded within 7 days of being posted.
(4)
As depicted in Figure 3 below, time-series data splitting [27] is performed where the following datasets are established:
a.
Training Set (2 months of data);
b.
Validation Set (one month of data);
c.
Test Set (one month of data).
(5)
To achieve better predictions, data transformations are conducted on several data attributes (e.g., Posted Time, Borrower Gender, Video ID, Image ID), and one-hot encoding, another method used to convert categorical data into numerical data (binary features 0 and 1) for use in machine learning, is performed on several data attributes (e.g., Activity Name, Currency, Country Name, Partner ID) [28].
(6)
Min-Max scaling is conducted on several data attributes to normalize our data (e.g., Loan Amount, Lender Term). A minimal sketch of these pre-processing steps is shown below.
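The sketch covers steps (1)-(6) in Python. The file name, exact column names, and split dates are assumptions based on the attribute list above, so it should be read as a minimal outline rather than the exact pipeline used in this study.
```python
# Hypothetical outline of the pre-processing steps on the Kiva loan export.
import pandas as pd
from sklearn.preprocessing import MinMaxScaler

df = pd.read_csv("kiva_loans.csv", parse_dates=["Posted Time", "Raised Time"])  # assumed file/columns

# Steps (1)-(2): basic cleansing -- drop rows missing the fields needed downstream.
df = df.dropna(subset=["Posted Time", "Loan Amount", "Lender Term"])

# Step (3): target variable -- was the loan fully raised within 7 days of posting?
df["Posted Time Plus Seven Days"] = df["Posted Time"] + pd.Timedelta(days=7)
df["Raised by Seven Days"] = df["Raised Time"] < df["Posted Time Plus Seven Days"]

# Step (4): time-series split (illustrative windows: 2 months train, 1 month validation, 1 month test).
df = df.sort_values("Posted Time")
train = df[df["Posted Time"] < "2016-03-01"]
valid = df[(df["Posted Time"] >= "2016-03-01") & (df["Posted Time"] < "2016-04-01")]
test = df[(df["Posted Time"] >= "2016-04-01") & (df["Posted Time"] < "2016-05-01")]

# Step (5): one-hot encode categorical attributes such as Sector Name and Country Name.
X_train = pd.get_dummies(train[["Sector Name", "Country Name"]])

# Step (6): min-max scale continuous attributes, fitting the scaler on the training window only.
scaler = MinMaxScaler()
X_train[["Loan Amount", "Lender Term"]] = scaler.fit_transform(train[["Loan Amount", "Lender Term"]])
```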

3. Results

3.1. Machine Learning to Predict

(1)
Gradient Boosting, a technique leveraged for regression and classification analysis, is used in this research to build the desired predictive model, and reliance is placed on the Area Under the Curve (AUC), given by the equation (Percent Concordant + 0.5 × Percent Tied)/100, to evaluate the performance of the model. Table 1 below shows performance outcomes across the range of AUC values [29].
(2)
To improve model performance, hyperparameter optimization (HPO) is used. Model settings can take on a wide range of values, and one can try a number of combinations to determine the settings that give the best model performance. Six techniques are commonly used to conduct HPO: manual search, random search, grid search, evolutionary algorithms, Bayesian optimization, and gradient-based methods.
(3)
In this research, Bayesian HPO is applied. Leveraging this algorithm results in the predictions on the training, validation, and test sets shown in Figure 4 and Figure 5:
As shown, the AUC scores for training, validation, and testing are very similar.
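To make the modeling step concrete, the sketch below trains a gradient boosting classifier and reports AUC on the three splits. The variable names (X_train, y_train, and so on) are assumed to come from the pre-processing sketch above, and the search space shown for Bayesian HPO is illustrative rather than the exact configuration used in this study.
```python
# Sketch of gradient boosting training and AUC evaluation on the time-based splits.
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score

model = GradientBoostingClassifier(n_estimators=200, learning_rate=0.1, max_depth=3, random_state=0)
model.fit(X_train, y_train)

for name, X_, y_ in [("train", X_train, y_train),
                     ("validation", X_valid, y_valid),
                     ("test", X_test, y_test)]:
    auc = roc_auc_score(y_, model.predict_proba(X_)[:, 1])
    print(f"{name} AUC: {auc:.3f}")

# For Bayesian HPO, a search space like the one below could be explored with a Bayesian
# optimizer (e.g., scikit-optimize's BayesSearchCV) instead of manual or random search.
search_space = {"n_estimators": (100, 500), "learning_rate": (0.01, 0.3), "max_depth": (2, 6)}
```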

3.2. Model Retraining

Retraining models is fundamental to make certain that the output is reliable and fair. With the increasing number of ESG commitments and a company’s reliance on data and KPIs to drive outcomes and progress, the intersection of AI and ESG, while promising, comes with important continuous monitoring techniques which include model retraining.
Kiva’s data have evolved over time. Data attributes such as “Planned Expiration Time” and “Tags” were added to the dataset in late 2012 and 2013, respectively. The number of countries (“Country Name”) increased over time. The maximum “Loan Amount” also increased over time, and there are several more examples of such additions and changes. With these changes in mind, techniques discussed earlier, including encoding and scaling, are part of the retraining process.
Consider the following min/max scaling example. Suppose that in 2008 the maximum loan value was 125, and the data were scaled by dividing all loan amounts by 125. Suppose that in 2009 there was a new maximum loan value of 1000. We should not scale the 2009 data with the maximum loan value from 2008, which was 125; the correct approach is to scale the data according to the new maximum loan value of 1000, as depicted in Figure 6 below.
The same applies to the set of countries: suppose that in 2008 “Country Name” consisted of Zimbabwe, Uganda, and South Africa, and that in 2009 two more countries, Vietnam and Congo, were reflected in the data. Re-encoding is then necessary to capture the new values reflected under “Country Name”.
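The sketch below illustrates both points with hypothetical values: a min-max scaler fit only on the 2008 data scales the 2009 loans outside [0, 1] until it is refit, and re-encoding picks up the newly appearing country names.
```python
# Illustrative re-scaling and re-encoding when new data arrive (values are hypothetical).
import pandas as pd
from sklearn.preprocessing import MinMaxScaler

loans_2008 = pd.DataFrame({"Loan Amount": [25, 50, 125]})      # maximum seen in 2008 is 125
loans_2009 = pd.DataFrame({"Loan Amount": [100, 400, 1000]})   # maximum grows to 1000 in 2009

stale_scaler = MinMaxScaler().fit(loans_2008)
print(stale_scaler.transform(loans_2009))                      # wrong: values scale above 1.0

refit_scaler = MinMaxScaler().fit(pd.concat([loans_2008, loans_2009]))
print(refit_scaler.transform(loans_2009))                      # correct: values fall back into [0, 1]

# Re-encoding "Country Name" captures the new categories (Vietnam, Congo).
countries = pd.concat([pd.Series(["Zimbabwe", "Uganda", "South Africa"]),
                       pd.Series(["Zimbabwe", "Vietnam", "Congo"])])
print(pd.get_dummies(countries).columns.tolist())
```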
With this example in mind, re-scaling and re-encoding are performed on some attributes, due to changes in the dataset, to compare model performance when a model is trained once versus retrained monthly. Applying these techniques with the Gradient Boosting algorithm and extracting the AUC for both the single-trained model and the monthly retrained model yields the outcomes in Figure 7 and Figure 8.
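A sketch of this comparison is shown below; monthly_batches is a hypothetical iterator over (month, features, labels) in chronological order, and X_first_window/y_first_window stand for the initial training window, so the loop is an outline rather than the exact experiment behind Figures 7 and 8.
```python
# Outline of single-trained vs. monthly retrained model evaluation.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score

static_model = GradientBoostingClassifier(random_state=0).fit(X_first_window, y_first_window)
retrained_model = GradientBoostingClassifier(random_state=0).fit(X_first_window, y_first_window)
X_seen, y_seen = X_first_window, y_first_window

for month, X_month, y_month in monthly_batches:
    auc_static = roc_auc_score(y_month, static_model.predict_proba(X_month)[:, 1])
    auc_retrained = roc_auc_score(y_month, retrained_model.predict_proba(X_month)[:, 1])
    print(f"{month}: single-trained AUC {auc_static:.3f} vs monthly retrained AUC {auc_retrained:.3f}")

    # Retrain on all data observed so far (with refit scalers/encoders) before the next month.
    X_seen = pd.concat([X_seen, X_month])
    y_seen = pd.concat([y_seen, y_month])
    retrained_model = GradientBoostingClassifier(random_state=0).fit(X_seen, y_seen)
```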
As part of this analysis, we aim to understand data attribute (feature) importance. Understanding the data attributes that are most important to the model can be insightful when it comes to predicting the likelihood of a loan being funded promptly (in this research, within 7 days). Using topic modeling, specifically BERTopic, the research highlights the most important features of the model, while also incorporating Density-Based Spatial Clustering of Applications with Noise (DBSCAN), a technique that detects outliers in the dataset. With model retraining in mind, Figure 9 below shows a ranking of important attributes in 2010, while Figure 10 shows the ranking of important attributes in 2016.
As Figure 9 and Figure 10 show, “Loan Amount” and “Lender Term” remain the most important attributes of the model in both 2010 and 2016. The Philippines was an important feature in the model during 2010, whereas in 2016 four other countries (not including the Philippines), namely Paraguay, Nicaragua, Peru, and Kenya, were among the top 20 most important features of the model.
Figure 11 below depicts an overall process from data cleansing to model retraining that can be incorporated into a governance process that relies on predictability and increased accuracy in model performance.
In summary, the process starts with data pre-processing and the creation of the target variable to predict whether a loan will be raised within several days; then, due to data drift, the accuracy of the model decreases over time, informing the need for model retraining. This ultimately allows for bias tracking through assessing feature (data attribute) importance.
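For instance, a retrained gradient boosting model exposes per-feature importances that can be re-ranked after each cycle; the sketch below assumes the model and feature_columns variables from the earlier steps and is only meant to show how such a ranking could feed bias tracking.
```python
# Rank feature importances after a retraining cycle to support bias monitoring.
import pandas as pd

importances = pd.Series(model.feature_importances_, index=feature_columns)
top_20 = importances.sort_values(ascending=False).head(20)
print(top_20)  # e.g., Loan Amount and Lender Term near the top, plus country indicators

# Comparing this ranking across retraining cycles (as with 2010 vs. 2016 above) surfaces shifts
# in which attributes, including gender- or geography-related ones, drive the predictions.
```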
The analysis above was expanded to understand and compare model performance across several different algorithms on a smaller sample of data. In our initial analysis, we selected Gradient Boosting as the algorithm of choice, and as Table 2 below shows, there are several other algorithms we could select from, comparing the time it takes to train a model against the resulting AUC scores. For example, while Gradient Boosting takes a bit longer to train than Linear Regression, its AUC score is higher. Logistic regression, in contrast, takes longer to train but performs slightly better than Gradient Boosting in terms of AUC. This highlights the importance of understanding the tasks that we want a model to perform. Some key factors to consider when selecting an algorithm include the format of the data, interpretability, the number of data features, training time, linear relationships, and prediction time.
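In the spirit of Table 2, the sketch below times the training of several scikit-learn candidates and compares their AUC on the test split; absolute timings and scores will differ from the table depending on hardware and the data sample, and X_train/y_train/X_test/y_test are assumed from the earlier pre-processing.
```python
# Compare candidate classifiers on training time and AUC (illustrative harness).
import time
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.naive_bayes import MultinomialNB
from sklearn.neighbors import KNeighborsClassifier

candidates = {
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "K-Nearest Neighbors": KNeighborsClassifier(),
    "Multinomial Naive Bayes": MultinomialNB(),   # expects non-negative (e.g., min-max scaled) features
    "Gradient Boosting": GradientBoostingClassifier(),
}

for name, clf in candidates.items():
    start = time.perf_counter()
    clf.fit(X_train, y_train)
    train_ms = (time.perf_counter() - start) * 1000
    auc = roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1])
    print(f"{name}: train {train_ms:.0f} ms, AUC {auc:.3f}")
```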

4. Discussion

Collaboration served as the starting point for the collection of empirical data from Kiva, which consists of over 2 M observations with over 30 attributes spanning the time frame 2006 through 2021, and subjects evolved based on practitioners’ real-world experiences.
The present literature in this field acknowledges the existence of bias in client-related decision-making, but there is a lack of specificity in the application of bias theory. Cognitive and motivational biases in decisions involving clients were acknowledged, and this gap was filled. While we selected sample data in several cases to demonstrate the importance of model retraining in this research, a leading practice is to use all available data.
There are several techniques and algorithms as highlighted in this research that, when considered together, can contribute to a governance framework for continuously monitoring the decisions and outcomes of P2P lending and addressing socially responsible questions, including:
(1)
Are more loans being funded to men in any given month, and what attributes are highly correlated with men being funded a loan versus women being funded a loan?
(2)
Are more loans being funded more quickly for a given sector, and what factors are driving those quick decisions (e.g., market trends)? What key attributes are highly correlated with a loan request not being funded?
(3)
In any given set of loan requests, how many are likely to be funded in 7, 10, or 20 days?
(4)
With the introduction of new data and/or policies, does re-training the model and model output highlight any risks or inconsistencies in achieving P2P lending expectations (e.g., does the number of funded loans suddenly decline)?
(5)
How is a company managing behavior biases in P2P lending, for example, familiarity bias, where a lender is likely to fund a borrower who shares a similar background, experiences, or ethnicity? How is feature importance contributing to potential bias?
With respect to bias, there are many tools available today to promote bias detection. In this section, we describe three of these available tools below:
(a)
AWS Clarify is an Amazon Web Services (AWS) machine learning (ML) service that streamlines the process of developing, training, and deploying correct computer vision models. It provides a collection of tools for data labeling, data management, and model training, as well as a suite of pre-built models, to let developers quickly design and deploy unique image and video analysis applications [30,31].
Pre-built models: Pre-built models for popular computer vision tasks such as object detection, semantic segmentation, and image classification are available through AWS Clarify. These models are trained on large-scale datasets and can be tailored to specific use cases with custom data. Model deployment: AWS Clarify offers a simple deployment mechanism for models created using the service. Models can be deployed either on AWS or on premises [30,31].
Developers can use AWS Clarify through following these steps:
1.
Prepare the information: Collect and label the data used to train the model.
2.
Train the model: On the labeled data, use AWS Clarify to train the model.
3.
Examine the model: To determine the model’s accuracy, use the evaluation metrics offered by AWS Clarify.
4.
Deploy the model: Deploy the model on AWS infrastructure or your servers.
AWS Clarify can be used for several tasks such as object detection, content moderation, and medical image analysis.
(b)
Google’s What-If Tool is an open-source software system that is useful for investigating and visualizing the performance of ML models. The application can be used to investigate both structured and unstructured data, such as photos, text, and tabular data [32,33].
Data exploration: Google What-If enables developers to investigate the feature distributions, feature correlations, and sample data points that were used to train the model. Exploration of the model’s output, such as predicted labels, probabilities, and scores, is also possible, and developers can compare the model’s predictions to the dataset’s actual labels. What-if evaluation: Google What-If enables developers to examine many scenarios and understand how changes to the input data or the model’s parameters affect the results. For instance, to understand how the model might react, developers can simulate changes to specific features or apply perturbations to the input data [32,33].
(c)
Developed by IBM, AI Fairness 360 is an open-source solution that can help detect and remove bias in large datasets and machine learning models and can be applied throughout the AI development lifecycle. This open-source Python toolkit includes several fairness metrics and bias mitigation algorithms that can be applied in many industries, including finance, talent and recruiting, healthcare, and law enforcement. The toolkit also provides a good amount of transparency and explainability around the metrics and algorithms that can be applied to training data. IBM provides an on-demand user experience to explore these algorithms, along with user guidance, to gain an understanding of fairness and of the capabilities that can be leveraged to address bias [34].
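As a minimal sketch, assuming the open-source aif360 package, the snippet below computes two of its group fairness metrics on a tiny, hypothetical set of loan outcomes; the column names, group definitions, and data are illustrative only.
```python
# Hypothetical fairness check on loan outcomes using AI Fairness 360.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

df = pd.DataFrame({
    "funded_in_7_days": [1, 0, 1, 1, 0, 1],   # 1 = funded within 7 days (illustrative labels)
    "borrower_female": [1, 1, 0, 1, 0, 0],    # protected attribute (1 = female borrower)
})

dataset = BinaryLabelDataset(df=df,
                             label_names=["funded_in_7_days"],
                             protected_attribute_names=["borrower_female"],
                             favorable_label=1, unfavorable_label=0)

metric = BinaryLabelDatasetMetric(dataset,
                                  privileged_groups=[{"borrower_female": 0}],
                                  unprivileged_groups=[{"borrower_female": 1}])
print("disparate impact:", metric.disparate_impact())
print("statistical parity difference:", metric.statistical_parity_difference())
```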

5. Conclusions

As noted previously, this research aimed to address the following two questions:
(1)
How can a company (specifically in the peer-to-peer lending industry) monitor financial inclusion, given the wide range of investment opportunities to promote social responsibility, with the use of AI?
Predicting the likelihood of a loan being funded in a timely manner, before the loan request expires, is an important assessment in P2P lending. While other research focuses on credit scoring and predicting loan defaults in lending practices, we examined feature importance to help lenders understand the underlying factors that contribute to a loan being funded in a timely manner. We introduced a collective set of algorithms to prepare data and dove deeply into model performance to derive predictions. Deploying this type of predictive analysis could help lenders proactively consider refinements to their lending processes, especially if lenders begin to discover gaps or issues with promoting financial inclusion.
(2)
How can model retraining contribute to a risk management framework that helps detect issues including bias with social-impact-related investments?
We demonstrated in this research that as data gets updated, executing a predictive analysis on models without retraining them can degrade model performance and can lead to unreliable results for driving decision-making. Hence, this becomes an important aspect of the continuous monitoring of ML models to help address some of the questions identified in Section 4, along with other social impact performance-related questions.
Expanded research can include analyzing Kiva’s or other P2P datasets by sector to obtain increased insights on those P2P loans that are being funded to address certain ESG factors such as food and housing shortages and educational gaps. In this analysis, we consider all sectors for which Kiva supports P2P lending.
A company being transparent about the intersection of AI and their ESG commitments—including automated strategies—to deliver a sustainable governance model that incorporates increased insights and bias reduction (in this case in P2P lending) can be a strategic way to gain trust from customers and lenders as well as promote increased lender interest to ultimately support more causes that positively impact the world.

Author Contributions

Investigation, T.A.; Writing—review & editing, B.S.R.; Supervision, B.S.R. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Kiva’s data is available via http://kivatools.com/downloads (accessed on 20 July 2023).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Netz, P.A.; Starr, F.W.; Stanley, H.E.; Barbosa, M.C. Static and dynamic properties of stretched water. J. Chem. Phys. 2001, 115, 344–348. [Google Scholar] [CrossRef] [Green Version]
  2. Robbins, T. Hedy Lamarr and a Secret Communication System; Capstone: North Mankato, MN, USA, 2007. [Google Scholar]
  3. Bernhardt, S.; Braun, P.; Thomason, J. (Eds.) Gender Inequality and the Potential for Change in Technology Fields; IGI Global: Hershey, PA, USA, 2018. [Google Scholar]
  4. Pathak, S.; Raees, S.A. Digital Innovation for Financial Inclusion: With reference to Indian Women Entrepreneurs. Annu. Res. J. Scms Pune 2023, 11, 29. [Google Scholar]
  5. Uddin, M.J.; Vizzari, G.; Bandini, S.; Imam, M.O. A case-based reasoning approach to rate microcredit borrower risk in online Kiva P2P lending model. Data Technol. Appl. 2018, 52, 58–83. [Google Scholar] [CrossRef]
  6. Tedeschi, C. The Social Impact of Crowdfunding and the Increasing Microlending Potential: The Case Study of Kiva. 2023. Available online: http://dspace.unive.it/bitstream/handle/10579/22991/883594-1264955.pdf?sequence=2 (accessed on 20 July 2023).
  7. Eccles, R.G.; Ioannou, I.; Serafeim, G. The impact of corporate sustainability on organizational processes and performance. Manag. Sci. 2014, 60, 2835–2857. [Google Scholar] [CrossRef] [Green Version]
  8. Grewal, J.S.; Rohatgi, P. Incorporating ESG factors in private equity investments: Opportunities and challenges. J. Appl. Financ. Bank. 2019, 9, 95–105. [Google Scholar]
  9. Eccles, R.G.; Serafeim, G.; Seth, D.; Ming, C.C.Y. The Performance Frontier: Innovating for a Sustainable Strategy: Interaction. Harv. Bus. Rev. 2013, 91, 17–18. [Google Scholar]
  10. Eccles, R.G.; Ioannou, I.; Serafeim, G. The impact of a corporate culture of sustainability on corporate behavior and performance. Natl. Bur. Econ. Res. 2012, 17950, 2835–2857. [Google Scholar] [CrossRef]
  11. Austin, T.; Rawal, B.S.; Diehl, A.; Cosme, J. AI for Equity: Unpacking Potential Human Bias in Decision Making in Higher Education. In AI, Computer Science and Robotics Technology; IntechOpen: London, UK, 2023. [Google Scholar]
  12. Chen, Y.; Guo, L.; Zhang, Y. A comparison of machine learning algorithms for crowdfunding success prediction. J. Bus. Res. 2019, 104, 23–34. [Google Scholar]
  13. Breiman, L. Random forests. Mach. Learn. 2001, 45, 5–32. [Google Scholar] [CrossRef] [Green Version]
  14. Liaw, A.; Wiener, M. Classification and regression by randomForest. R News 2002, 2, 18–22. [Google Scholar]
  15. Cortes, C.; Vapnik, V. Support-vector networks. Mach. Learn. 1995, 20, 273–297. [Google Scholar] [CrossRef]
  16. Vapnik, V. Statistical Learning Theory; John Wiley & Sons: Hoboken, NJ, USA, 1998. [Google Scholar]
  17. Wang, B.; Wang, Y.; Chen, W.; Wu, J. Identifying influential backers in crowdfunding using Bayesian networks. Inf. Sci. 2016, 372, 78–94. [Google Scholar]
  18. Han, X.; Tang, Y.; Wang, Z. Optimal crowdfunding strategy based on reinforcement learning. IEEE Trans. Eng. Manag. 2020, 67, 843–855. [Google Scholar]
  19. Wang, L.; Wang, Y.; Zhao, X. Detecting fraudulent crowdfunding campaigns with deep learning. J. Bus. Res. 2020, 108, 186–198. [Google Scholar]
  20. Estrada, M.; Vargas-Quesada, B.; Chen, Z. Crowdfunding behavior modeling: An agent-based approach. J. Bus. Res. 2019, 100, 67–79. [Google Scholar]
  21. Cox, J.; Nguyen, T. Does the crowd mean business? An analysis of rewards-based crowdfunding as a source of finance for start-ups and small businesses. J. Small Bus. Enterp. Dev. 2018, 25, 147–162. [Google Scholar] [CrossRef] [Green Version]
  22. Anil, K.; Misra, A. Artificial intelligence in Peer-to-peer lending in India: A cross-case analysis. Int. J. Emerg. Mark. 2022, 17, 1085–1106. [Google Scholar] [CrossRef]
  23. Turiel, J.D.; Aste, T. Peer-to-peer loan acceptance and default prediction with artificial intelligence. R. Soc. Open Sci. 2020, 7, 191649. [Google Scholar] [CrossRef]
  24. Klimowicz, A.; Spirzewski, K. Concept of peer-to-peer lending and application of machine learning in credit scoring. J. Bank. Financ. Econ. 2021, 2, 25–55. [Google Scholar] [CrossRef]
  25. Niu, B.; Ren, J.; Zhao, A.; Li, X. Lender trust on the P2P lending: Analysis based on sentiment analysis of comment text. Sustainability 2020, 12, 3293. [Google Scholar] [CrossRef] [Green Version]
  26. Fitkov-Norris, E.; Vahid, S.; Hand, C. Evaluating the impact of categorical data encoding and scaling on neural network classification performance: The case of repeat consumption of identical cultural goods. In Proceedings of the Engineering Applications of Neural Networks: 13th International Conference, EANN 2012, London, UK, 20–23 September 2012; Proceedings 13. Springer: Berlin/Heidelberg, Germany, 2012; pp. 343–352. [Google Scholar]
  27. Cox, D.R. A note on data-splitting for the evaluation of significance levels. Biometrika 1975, 62, 441–444. [Google Scholar] [CrossRef]
  28. Jie, L.; Chen, J.; Zhang, X.; Zhou, Y.; Lin, J. One-hot encoding and convolutional neural network based anomaly detection. J. Tsinghua Univ. (Sci. Technol.) 2019, 59, 523–529. [Google Scholar]
  29. Huang, J.; Ling, C.X. Using AUC and accuracy in evaluating learning algorithms. IEEE Trans. Knowl. Data Eng. 2005, 17, 299–310. [Google Scholar] [CrossRef] [Green Version]
  30. AWS Clarify Documentation. Use Amazon SageMaker Clarify Bias Detection and Model Explainability—Amazon SageMaker. Available online: https://docs.aws.amazon.com/sagemaker/latest/dg/clarify-configure-processing-jobs.html (accessed on 25 April 2023).
  31. AWS Clarify Blog Post. Available online: https://aws.amazon.com/sagemaker/clarify/?sagemaker-data-wrangler-whats-new.sort-by=item.additionalFields.postDateTime&sagemaker-data-wrangler-whats-new.sort-order=desc (accessed on 25 April 2023).
  32. What-If Tool. Available online: https://pair-code.github.io/what-if-tool/ (accessed on 25 April 2023).
  33. Kazemi, S.M.; Goel, R.; Eghbali, S.; Ramanan, J.; Sahota, J.; Thakur, S.; Wu, S.; Smyth, C.; Poupart, P.; Brubaker, M. Time2Vec: Learning a Vector Representation of Time. arXiv 2017, arXiv:1907.05321. [Google Scholar]
  34. Bellamy, R.K.E.; Dey, K.; Hind, M.; Hoffman, S.C.; Houde, S.; Kannan, K.; Lohia, P.; Martino, J.; Mehta, S.; Mojsilovic, A.; et al. AI Fairness 360: An extensible toolkit for detecting and mitigating algorithmic bias. IBM J. Res. Dev. 2019, 63, 4:1–4:15. [Google Scholar] [CrossRef]
Figure 1. Distribution of Kiva’s loans from 2006–2021.
Figure 2. Data scaling.
Figure 3. Data splitting (Time Series).
Figure 4. AUC results from training and validation sets.
Figure 5. AUC results from test set.
Figure 6. Illustrative Example: Rescaling data with updated loan values (2009).
Figure 7. Single-trained model performance.
Figure 8. Monthly retrained model performance.
Figure 9. Ranking of feature (data attributes) importance (2010).
Figure 10. Ranking of feature (data attributes) importance (2016).
Figure 11. The flow of pre-processing to model retraining on select data.
Table 1. Test quality levels for AUC values.

Area Under the Curve (AUC) Value | Test Quality
0.90–1.00 | Excellent
0.80–0.90 | Very Good
0.70–0.80 | Good
0.60–0.70 | Satisfactory
0.50–0.60 | Unsatisfactory
Table 2. Comparison of model performance across select algorithms.

Algorithm | Time to Train (ms) | Inference Time | F1-Score | AUC | Recall | Precision | Accuracy
Linear Regression | 1020 | 98 | 0.974 | 0.51 | 0.948 | 0.948 | –
Logistic Regression | 4750 | 87 | 0.797 | 0.752 | 0.668 | 0.987 | 0.677
K-Nearest Neighborhood | 151 | 16,300 | 0.854 | 0.618 | 0.766 | 0.964 | 0.751
Multinomial Naive Bayes | 120 | 86 | 0.792 | 0.655 | 0.669 | 0.972 | 0.667
Gradient Boosting | 1030 | 101 | 0.893 | 0.74 | 0.822 | 0.978 | 0.814
