Article

Predicting Mutual Fund Stress Levels Utilizing SEBI’s Stress Test Parameters in MidCap and SmallCap Funds Using Deep Learning Models

by
Suneel Maheshwari
1,* and
Deepak Raghava Naik
2
1
Department of Accounting and Information Systems, Indiana University of Pennsylvania, Indiana, PA 15705, USA
2
Department of Management Studies, Ramaiah Institute of Technology, Bengaluru 560054, India
*
Author to whom correspondence should be addressed.
Risks 2024, 12(11), 179; https://doi.org/10.3390/risks12110179
Submission received: 8 October 2024 / Revised: 2 November 2024 / Accepted: 6 November 2024 / Published: 13 November 2024

Abstract
The Association of Mutual Funds in India (AMFI), under the direction of the Securities and Exchange Board of India (SEBI), provided open access to various risk parameters for MidCap and SmallCap funds for the first time in February 2024. Our study utilizes AMFI datasets from February 2024 to September 2024, which consist of 14 variables. Among these, the primary variable identified in grading mutual funds is the stress test parameter, expressed as the number of days required to liquidate 50% and 25% of the portfolio, respectively, on a pro-rata basis under stress conditions; this serves as the response variable. The objective of our paper is to build and test various neural network models that can predict stress levels with the highest accuracy and specificity in MidCap and SmallCap mutual funds, using AMFI’s 14 parameters as predictors. The results suggest that simpler neural network architectures show higher accuracy. We used Artificial Neural Networks (ANNs) over other machine learning methods because of their ability to analyze the impact of dynamic interrelationships among the 14 variables on the dependent variable, independent of the statistical distribution of the parameters considered. Predicting stress levels with high accuracy in MidCap and SmallCap mutual funds will benefit investors by reducing information asymmetry when allocating investments based on their risk tolerance. It will also help policy makers design controls to protect smaller investors and provide warnings for funds with unusually high risk.

1. Introduction

The growth of mutual funds in India has been remarkable in recent years. Growing participation from retail investors can be attributed to several factors, including increasing financial awareness, favourable regulatory measures, and the attractiveness of mutual funds as a convenient and accessible investment avenue. The proliferation of mutual fund offerings and the democratization of investment access have opened new avenues for investors to participate in capital markets. As a result, mutual funds have emerged as preferred investment vehicles, catering to the diverse investment objectives and risk profiles of investors across the spectrum. The mutual fund sector in India has witnessed a surge in retail investor participation, as per the AMFI data released for January 2024. MidCap funds have 13 million portfolios with an investment of INR 29 million. SmallCap funds account for 17.8 million portfolios with a value of INR 24.7 million.1 One of the key drivers behind the surge in retail investor participation in mutual funds is the advent of systematic investment plans (SIPs), which have revolutionized the way retail investors approach mutual fund investments (Kavya and Chokkamreddy 2024). SIPs offer a disciplined and hassle-free approach to wealth accumulation, allowing seasoned investors and newcomers alike to contribute small amounts at regular intervals. This systematic approach not only helps in mitigating market volatility but also inculcates financial discipline among investors, encouraging them to stay invested for the long term. As a result, mutual fund Assets Under Management (AUM) have witnessed a steady uptrend, reflecting the evolving preferences of investors seeking diversified and professionally managed investment avenues (Narasimha 2024; Joshi and Arora 2022; Sukumar 2020).
Furthermore, regulatory initiatives aimed at enhancing transparency and investor protection have bolstered investor confidence in mutual funds. Measures such as the categorization and rationalization of mutual fund schemes, along with stringent disclosure norms, have contributed to greater trust and credibility in the mutual fund industry. As a result of these concerted efforts, India has witnessed a significant uptick in retail investor participation in mutual funds. The democratization of investment access, coupled with robust investor education initiatives, has empowered individuals from diverse backgrounds to take charge of their financial futures and embark on the journey of wealth creation through mutual funds (Berk and van Binsbergen 2012; Li and Rossi 2020; Patton and Timmermann 2010). The onset of the global COVID-19 pandemic brought about unprecedented volatility and uncertainty in financial markets worldwide, including in India. Despite initial market turbulence, the mutual fund industry in India showcased resilience and adaptability, as evidenced by the sustained growth in AUM across various schemes. This resilience is attributable to the robust regulatory framework, proactive measures by fund managers, and the enduring trust of investors in the mutual fund ecosystem (Narasimha 2024).
However, despite the progress made, there remains a pressing need for continued awareness and education initiatives to ensure that investors make informed investment decisions. As retail investors navigate the complexities of the financial markets, it is imperative to equip them with the knowledge and tools necessary to identify suitable investment opportunities and manage risk effectively (Elton and Gruber 2020; Jones and Mo 2020; Pástor et al. 2017; Feng et al. 2020).
This investment frenzy among retail investors in MidCap and SmallCap funds has not gone unnoticed by market regulators such as the Securities and Exchange Board of India (SEBI). Concerned about potential risks to retail investors amidst soaring market valuations, SEBI has initiated various steps to safeguard investor interests in these funds (Joshi and Arora 2022). In response to SEBI’s directives, AMFI has taken some bold steps, such as tasking the trustees of all Asset Management Companies (AMCs) with framing policies to protect investors in MidCap and SmallCap schemes. The policies being formulated by the trustees, in consultation with the Unitholder Protection Committees of the AMCs, are designed to incorporate proactive measures to shield investors from potential risks. These measures also include moderating inflows into MidCap and SmallCap funds, portfolio rebalancing, and enhancing the disclosure of risk parameters and stress tests (Hoberg et al. 2018). The rationale behind these regulatory measures lies in the unprecedented returns witnessed by MidCap and SmallCap indices in recent times, coupled with heightened market volatility due to global events. The surge in investor inflows has led to inflated valuations, raising concerns about market stability and investor protection (Irvine et al. 2018). Hence, SEBI and AMFI are taking proactive steps to ensure that fund houses are adequately prepared to navigate market uncertainties and protect the interests of retail investors. Through these measures, they aim to uphold market integrity and promote investor confidence in India’s mutual fund industry.
Given this outlook, the landscape of mutual fund investments in India has recently witnessed a notable transformation, marked by the introduction of innovative methodologies for evaluating fund performance and risk. One such significant development is the initiative undertaken by the Association of Mutual Funds in India (AMFI), Mumbai, India to assess stress levels in MidCap and SmallCap mutual funds, referred to as the “Stress Test”. The test involves simulating scenarios where a significant number of investors demand redemption, thereby assessing how quickly a fund can meet redemption requests from investors. For both MidCap and SmallCap funds, liquidation of either 25 percent or 50 percent of the portfolio on a pro-rata basis and the time taken to meet the liquidation request are considered in the dataset. Importantly, the stress test allows fund managers to exclude the bottom 20 percent of the portfolio based on liquidity considerations. This provision enables fund managers to retain stocks deemed essential for long-term gains, enhancing the flexibility and strategic management of the portfolio. This initiative represents a proactive step towards enhancing transparency and accountability within the mutual fund industry, aiming to provide investors with deeper insights into the resilience of their investment portfolios.
The volatility and unpredictability of financial markets pose significant challenges for investors, particularly in assessing the stress levels of mutual funds. While SEBI has outlined a methodology for stress testing mutual funds, there remains a need for advanced analytical tools to accurately predict the stress levels of mutual funds, particularly in the MidCap and SmallCap segments. Traditional approaches may lack the sophistication and predictive power needed to navigate the complexities of modern financial markets. To overcome that limitation, we test various neural network models which can help in predicting stress levels with the highest accuracy and specificity in MidCap and SmallCap mutual funds based on AMFI’s parameters as predictors.
We also test the effectiveness and reliability of the models to provide actionable insights and recommendations that aid investors and fund managers in managing risk and optimizing portfolio strategies. The next two sections provide details on data collection and research methodology, followed by data analysis and interpretation. Finally, the last section provides conclusions and the scope for future research.

2. Data Collection

For our study, the stress test data were sourced from the AMFI website’s dedicated section, “Disclosure of Stress Test & Liquidity Analysis in respect of MidCap & SmallCap Funds”. Specifically, stress test data pertaining to MidCap and SmallCap mutual funds for the period of eight months from February 2024 to September 2024 were collected. The number of mutual funds that were part of the dataset from February 2024 to September 2024 is provided in Table 1.
Table 1 shows that, between February and September 2024, the dataset included information on 29 MidCap funds and 27 SmallCap funds. In May 2024, data were available for only 13 MidCap funds and 10 SmallCap funds. The MidCap and SmallCap mutual funds that were part of the dataset from February 2024 to September 2024 are provided in Table 2 and Table 3, respectively.
Upon collecting the stress test data, a thorough data pre-processing step was conducted to ensure its quality and suitability for analysis. This involved excluding schemes with missing values and inconsistencies within the dataset to enhance its integrity and reliability for subsequent modelling efforts. We did not impute missing values for any of the identified parameters. Table 4 lists the 14 parameters which were identified as features for model development and evaluation purposes. These parameters were deemed essential for assessing mutual fund stress levels based on their presence in the stress test template, and were subsequently used as features in the modelling process.
The stress test pro-rata liquidation variables for the 50% portfolio and the 25% portfolio were considered separately for building the models, and were binned based on the categorization shown in Table 5. Separate models were built for the stress test with the 50% portfolio and the 25% portfolio for each monthly dataset across the eight months, and they were evaluated for their predictive power.
Regarding the data exclusion criteria, a few companies were excluded from the modelling process due to potential data incompleteness or inconsistency, to ensure the robustness and reliability of the analysis. This step was crucial for maintaining the integrity of the dataset and mitigating the risk of bias in the modelling results. The companies excluded from the dataset for non-availability of data on a few variables are also shown in Table 2 and Table 3.
For the response variable, binning was conducted by considering a stress level of 7 days or more as indicating a high-stress company.2 Thus, companies were categorized into two levels, namely high-stress and low-stress companies, across all of the months. Table 6 shows the summary of companies further categorized for the 50% and 25% portfolio liquidation from February to September 2024.
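As a minimal sketch, the two-level binning described above can be expressed as follows. The 7-day threshold comes from the study; the function name and the sample values are illustrative assumptions, not taken from the AMFI dataset:

```python
# Two-level binning of the stress test response variable: funds needing
# 7 or more days to liquidate are labelled high-stress, all others low-stress.
def bin_stress_levels(days_to_liquidate, threshold=7):
    return ["High" if d >= threshold else "Low" for d in days_to_liquidate]

days = [2, 5, 7, 12, 3]              # hypothetical days-to-liquidate values
print(bin_stress_levels(days))       # ['Low', 'Low', 'High', 'High', 'Low']
```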

3. Research Methodology

This study investigates predicting mutual fund stress levels based on the 14 parameters identified as features for model development and evaluation purposes. All of the variables were standardized before being used in model building. To build the models, data for the eight months from February 2024 to September 2024 were obtained, and an Artificial Neural Network (ANN) method was proposed for prediction. We used Artificial Neural Networks (ANNs) over other machine learning methods due to their ability to analyze the impact of dynamic interrelationships among the 14 variables on the stress level, even when detailed information about the system was not available. The relationships among the 14 parameters considered for building the classification model are complex, and ANNs can model such complex relationships (Alzubaidi et al. 2021). ANNs are a preferred method for defining such complex relations, and they have the capability to establish relationships between the parameters considered and the desired output in our study (Du et al. 2019; Degadwala and Vyas 2024). In recent years, ANNs have been applied to various problems such as prediction, optimization, and control systems, which can help in decision making (D’Amour et al. 2022). In our study, the main advantage of using an ANN is that it is independent of the statistical distribution of the parameters considered. We built ANNs consisting of between three and five layers. The first (input) layer consisted of the 14 input parameters. The middle layers were used to model the complex relationships in the study. Different numbers of neurons were used in the layers to model the complexity of the problem (Harvey and Liu 2018). As shown in Equation (1), the weight connecting input node x_k to hidden node z_j is labelled w_kj.
Each node in the hidden layer calculates the weighted sum of the neurons in the input layer.
Input_j = Σ_k w_kj x_k  (1)
Outputs corresponding to these hidden layers are obtained as a result of implementing the activation function or the transfer function. The backpropagation method was used as the learning algorithm in the ANN, which initially starts with random weights. The network “learns” by gradually adjusting its weights until it can produce the target outputs specified for the 14 parameters considered in the study.
The calculation of the error in the network was carried out using Equation (2):
E = (1/2) Σ_j (t_j − O_j)²  (2)
where t_j and O_j are the desired (target) and actual output values of unit j in the output layer, respectively. The weights are updated according to the delta rule of learning, as shown in Equation (3):
Δw_ij = η δ_i o_i  (3)
where η > 0 is the learning rate, δ_i is a correction term, and o_i is the output of unit i in the previous layer. The correction term is proportional to the output error. In the backpropagation algorithm, the correction term is calculated using the gradient descent method, resulting in the following expression for the delta term of an output unit:
δ_j = −(∂E/∂o_j)(∂o_j/∂I_j) = (t_j − o_j) o_j (1 − o_j)  (4)
The correction term for the hidden nodes is then calculated by applying the recursive formula in Equation (5):
δ_j = o_j (1 − o_j) Σ_k δ_k w_kj  (5)
It is also often observed that adding a momentum term to the learning rule, as given in Equation (6), can enhance both the performance and the stability of the training process:
Δw_ij = η δ_i o_i + μ Δw_ij^prev  (6)
where 0 < μ < 1 is a constant called the momentum and Δw_ij^prev is the weight adjustment from the previous iteration.
When the difference between the desired outputs and the actual outputs reaches an acceptable threshold, the process is complete. If not, the weights are adjusted to minimize the gap between the target and the actual values.
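The update rules above can be sketched in a minimal NumPy implementation. This is an illustrative toy, not the paper's actual R/JMP models: the XOR data, the 2-3-1 network with bias units, and the fixed seed are our assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# A one-hidden-layer network trained with backpropagation, the delta rule,
# and a momentum term, mirroring Equations (1)-(6). Toy sizes and data,
# not the paper's 14-feature AMFI dataset.
class TinyANN:
    def __init__(self, n_in, n_hidden, eta=0.5, mu=0.5, seed=0):
        rng = np.random.default_rng(seed)
        self.eta, self.mu = eta, mu                       # learning rate, momentum
        self.W1 = rng.normal(0, 0.5, (n_in + 1, n_hidden))  # +1 row: bias weights
        self.W2 = rng.normal(0, 0.5, (n_hidden + 1, 1))
        self.dW1_prev = np.zeros_like(self.W1)            # previous adjustments
        self.dW2_prev = np.zeros_like(self.W2)

    @staticmethod
    def _add_bias(A):
        return np.hstack([A, np.ones((A.shape[0], 1))])

    def forward(self, X):
        self.Xb = self._add_bias(X)
        self.h = sigmoid(self.Xb @ self.W1)               # Eq. (1) + sigmoid
        self.hb = self._add_bias(self.h)
        self.o = sigmoid(self.hb @ self.W2)
        return self.o

    def train_step(self, X, t):
        o = self.forward(X)
        delta_o = (t - o) * o * (1 - o)                               # Eq. (4)
        delta_h = self.h * (1 - self.h) * (delta_o @ self.W2[:-1].T)  # Eq. (5)
        dW2 = self.eta * self.hb.T @ delta_o + self.mu * self.dW2_prev  # Eq. (6)
        dW1 = self.eta * self.Xb.T @ delta_h + self.mu * self.dW1_prev
        self.W2 += dW2
        self.W1 += dW1
        self.dW2_prev, self.dW1_prev = dW2, dW1
        return 0.5 * np.sum((t - o) ** 2)                 # Eq. (2)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t = np.array([[0], [1], [1], [0]], dtype=float)           # XOR targets
net = TinyANN(n_in=2, n_hidden=3)
losses = [net.train_step(X, t) for _ in range(1000)]      # 1000 iterations
print(f"loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

Training stops in practice when the error in Equation (2) falls below an acceptable threshold; here a fixed iteration count keeps the sketch short.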
Thus, in the neural network built, each hidden layer consists of multiple nodes (neurons), each with its own activation function, which introduces non-linearity into the model. The activation of each node is sigmoidal: the sigmoid function squashes input values to a range between 0 and 1. To build a classification model, the neural network used the sigmoid as the activation function. The neural networks had an input layer consisting of nodes that accept the 14 predictor values in our study, with successive layers of nodes receiving input from the previous layers (Bergstra and Bengio 2012; Hastie et al. 2009). Various configurations of neural networks were explored, including models with one, two, and three hidden layers, with the number of nodes ranging from 2 to 10. This approach allowed for flexibility and adaptability in capturing the underlying patterns and relationships within the dataset, ultimately facilitating accurate predictions of mutual fund stress levels (Chen et al. 2020; Gu et al. 2020). For model evaluation, the trained neural network models were assessed using appropriate performance metrics on a validation set comprising a 20% holdout proportion. The models were evaluated on both the training and validation sets using performance measures such as total accuracy, sensitivity, specificity, and F-score (Crone et al. 2011; Hyndman and Koehler 2006; Makridakis et al. 2019). Some of the important measures are defined below. Sensitivity measures the ability of a model to classify an observation as positive given that it was positive. It is given by the following formula:
Sensitivity (True Positive Rate) = TP / (TP + FN)
Specificity, on the other hand, measures the ability of a model to classify the observation as negative, given that it was negative in nature. It is given by the following formula:
Specificity (True Negative Rate) = TN / (TN + FP)
Precision measures the accuracy of positives classified using the model, and is given by the following formula:
Precision = TP / (TP + FP)
The F-score (F-measure) is used in binary classification models and combines both precision and recall. It is given as follows:
F-score = (2 × Precision × Recall) / (Precision + Recall)
Finally, encompassing all of the measures, we look into the misclassification rate, which is the proportion of incorrect predictions (both false positives and false negatives) to the total number of predictions. It is given as follows:
Misclassification Rate = (FP + FN) / (TP + TN + FP + FN) = 1 − Total Accuracy
As we know, the total accuracy and misclassification rate are inversely related. As accuracy increases, the misclassification rate decreases, and vice versa.
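The metrics above can be computed directly from raw confusion-matrix counts, as in the following sketch; the counts in the example are made up for illustration and are not the paper's results:

```python
# Computes the evaluation metrics defined above from raw confusion-matrix
# counts (true/false positives and negatives).
def classification_metrics(tp, tn, fp, fn):
    total = tp + tn + fp + fn
    sensitivity = tp / (tp + fn)                 # true positive rate (recall)
    specificity = tn / (tn + fp)                 # true negative rate
    precision = tp / (tp + fp)
    f_score = 2 * precision * sensitivity / (precision + sensitivity)
    accuracy = (tp + tn) / total
    return {
        "sensitivity": sensitivity,
        "specificity": specificity,
        "precision": precision,
        "f_score": f_score,
        "accuracy": accuracy,
        "misclassification": 1 - accuracy,       # equals (fp + fn) / total
    }

m = classification_metrics(tp=40, tn=30, fp=20, fn=10)   # made-up counts
print(m["sensitivity"], m["specificity"], m["accuracy"])  # 0.8 0.6 0.7
```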
For the overall examination of the results, metrics were compared between the training and validation datasets. A model can achieve high accuracy on the training set but perform poorly on the validation set due to overfitting. Similarly, if the model achieves low accuracy on the training set but performs well on the validation set, this is an indication of underfitting (Srivastava et al. 2014). Thus, inferences were drawn based on both training and validation performance metrics to ensure that the model generalizes well. Since the launch of the AMFI datasets, this is the first research paper to investigate modelling the stress level of mutual funds using deep learning models. For each architecture, the datasets for training and validation were split in the ratio 80:20, chosen randomly, for the period from February 2024 to September 2024. For the analysis, R (version 4.4.2) and SAS JMP Pro 18 were used.
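The 80:20 random holdout and the train-versus-validation comparison described above can be sketched as follows; the function names and the 10-point accuracy margin are illustrative assumptions, not the paper's procedure:

```python
import random

# Sketch of an 80:20 random holdout split and a simple train-vs-validation
# accuracy comparison used to flag over- or underfitting.
def split_80_20(records, seed=42):
    shuffled = records[:]
    random.Random(seed).shuffle(shuffled)
    cut = int(0.8 * len(shuffled))
    return shuffled[:cut], shuffled[cut:]   # training set, validation set

def diagnose_fit(train_acc, val_acc, margin=0.10):
    if train_acc - val_acc > margin:
        return "possible overfitting"       # strong on training, weak on validation
    if val_acc - train_acc > margin:
        return "possible underfitting"      # weak on training, strong on validation
    return "generalizes well"

train, val = split_80_20(list(range(100)))
print(len(train), len(val))                 # 80 20
print(diagnose_fit(0.99, 0.72))             # possible overfitting
```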

4. Data Analysis and Interpretation

For each month, and with response variables having either 50% pro-rata liquidation or 25% pro-rata liquidation, ANNs with different architectures were used, and each model was formed as follows:
  • Model 1: ANN with one hidden layer with two nodes, one input layer with fourteen variables, and an output layer with one variable which is categorical in nature.
  • Model 2: ANN with one hidden layer with three nodes, one input layer with fourteen variables, and an output layer with one variable which is categorical in nature.
  • Model 3: ANN with one hidden layer with nodes ranging from four to ten nodes, one input layer with fourteen variables, and an output layer with one variable which is categorical in nature (only the best model based on performance metrics in training and validation is shown).
  • Model 4: ANN with two hidden layers with two nodes each for a layer, one input layer with fourteen variables, and an output layer with one variable which is categorical in nature.
  • Model 5: ANN with two hidden layers with three nodes each for a layer, one input layer with fourteen variables, and an output layer with one variable which is categorical in nature.
  • Model 6: ANN with two hidden layers with nodes ranging from four to ten for each layer, one input layer with fourteen variables, and an output layer with one variable which is categorical in nature (only the best model based on performance metrics in training and validation is shown).
As mentioned before, the activation function used between the input layer and the hidden layer, as well as between the hidden layer and the output layer, is the sigmoid function. The learning rate and momentum were both set to 0.5. Training was run for 1000 iterations. The performance measures obtained for the training and validation periods for each month were tabulated separately.
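A grid like the six architectures above can be sketched with scikit-learn's MLPClassifier as a stand-in (the paper used R and SAS JMP, not scikit-learn); the synthetic 14-feature dataset and the specific node counts chosen for Models 3 and 6 are illustrative assumptions:

```python
# Illustrative sweep over the six ANN architectures described above,
# using sigmoid ("logistic") activation, learning rate 0.5, momentum 0.5,
# and up to 1000 iterations. The 14-feature data is synthetic.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=200, n_features=14, random_state=0)
X = StandardScaler().fit_transform(X)        # variables standardized first
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.2, random_state=0)

architectures = {
    "Model 1": (2,), "Model 2": (3,), "Model 3": (10,),      # one hidden layer
    "Model 4": (2, 2), "Model 5": (3, 3), "Model 6": (10, 10),  # two hidden layers
}
scores = {}
for name, hidden in architectures.items():
    ann = MLPClassifier(hidden_layer_sizes=hidden, activation="logistic",
                        solver="sgd", learning_rate_init=0.5, momentum=0.5,
                        max_iter=1000, random_state=0)
    ann.fit(X_tr, y_tr)
    scores[name] = ann.score(X_val, y_val)

for name, acc in scores.items():
    print(f"{name}: validation accuracy {acc:.2f}")
```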
The structure of the ANN proposed for the months of February 2024 using six models is shown below in Figure 1, Figure 2, Figure 3, Figure 4, Figure 5 and Figure 6.
Initially, as shown in Figure 7, Figure 8 and Figure 9, models with one to three hidden layers and varying numbers of nodes were estimated to predict the pro-rata basis liquidation with the stress level at the 50% portfolio for the MidCap funds for February 2024. The result of the analysis is shown in Table 7:
(a) Model building for MidCap funds for February 2024 with pro-rata basis liquidation of 50% portfolio.
In Table 7, consider the first row, which reports the results for Model 1, with one hidden layer and two nodes, on the dataset. We observe that, while the model demonstrates high sensitivity in correctly identifying instances of low stress levels, it suffers from low specificity and overall poor performance in accurately predicting stress levels. The accuracy of the model is calculated to be 0.8, indicating that it correctly classified 80% of the instances in the dataset. Upon splitting the dataset, 20 observations fall into the training dataset and 5 observations into the test dataset. The performance metrics of Model 1 are depicted in Table 8.
As observed, the sensitivity of the model, also known as the true positive rate, is 100%, with all 14 observations in the training data and 4 observations in the test data correctly classified. The specificity of the model, which measures the true negative rate, is also 100%, with 6 observations in the training dataset and 1 observation in the test dataset correctly classified. The misclassification rate is 0.00, indicating that the model identified all instances correctly, with a total accuracy of 100%.
In order to compare the results obtained across the various ANN models in the training and validation datasets for the time period considered at different pro-rata basis liquidation rates, total accuracy metrics were tabulated and compared between the training and validation datasets across all combinations. The results are summarized in Table 9 for MidCap funds and in Table 10 for SmallCap funds, respectively. As observed, the performance metrics offer valuable insights into the models’ effectiveness in predicting the liquidation strategies for different types of funds and time periods. Across all scenarios, the best-performing NN model consistently achieved perfect accuracy, indicating that it correctly predicts all instances in the dataset. This exceptional accuracy underscores the effectiveness of the model in capturing the underlying patterns and relationships in the data. Furthermore, the models consistently outperform the no-information rate, which serves as a baseline performance metric, indicating that the models provide substantial value beyond random guessing. The high Kappa value of 1 for all scenarios suggests a strong agreement between the model’s predictions and the actual outcomes, correcting for agreement occurring by chance. This indicates robust performance across different liquidation strategies and fund types. The consistently high total accuracy suggests that the performance of the best NN model remains stable and reliable. Moreover, the sensitivity and specificity metrics indicate that the model performs well in correctly identifying both positive and negative instances, respectively.
The perfect sensitivity and specificity scores further validate the model’s ability to accurately predict the liquidation strategies for both MidCap and SmallCap funds across different time periods and portfolio liquidation percentages. Overall, the results demonstrate the effectiveness of the NN model in predicting optimal liquidation strategies for MidCap and SmallCap funds, highlighting its potential utility in financial decision-making processes.

5. Conclusions and Scope for Future Research

The recent introduction of innovative methodologies for evaluating mutual fund performance and risk in India, exemplified by the Association of Mutual Funds in India’s (AMFI) “Stress Test” initiative, marks a significant transformation in the investment landscape. This initiative, supported by SEBI’s outlined methodology, aims to assess the stress levels in MidCap and SmallCap mutual funds by simulating scenarios of significant redemption requests. The proactive approach taken by AMFI and SEBI reflects a commitment to enhancing transparency and accountability within the mutual fund industry, ultimately empowering investors with deeper insights into the resilience of their investment portfolios. The results obtained from the analysis highlight the effectiveness of neural network models in predicting optimal liquidation strategies for MidCap and SmallCap funds under different scenarios. The consistently high accuracy, Kappa value, sensitivity, and specificity metrics underscore the reliability and robustness of these models in assessing fund performance and risk.
In conclusion, the integration of innovative methodologies such as stress testing into mutual fund evaluation frameworks represents a positive step towards bolstering investor confidence and promoting informed decision making. By providing insights into how funds perform under stress scenarios, investors can better understand the potential risks associated with their investments and make more informed choices.
Moving forward, there are several avenues for future research. Firstly, the ongoing monitoring and refinement of stress testing methodologies will be essential to ensure their effectiveness in capturing evolving market dynamics. Additionally, exploring advanced machine learning techniques beyond the neural networks used here, such as ensemble methods, hybrid models that combine neural networks with other statistical methods, or more sophisticated deep learning architectures, could further enhance the accuracy and predictive power of fund evaluation models. Furthermore, conducting comprehensive studies to evaluate the impact of stress testing initiatives on investor behaviour, market stability, and fund performance over the long term would provide valuable insights for industry stakeholders and regulators. Future research could also investigate the behavioural patterns of retail investors when they are presented with risk-related metrics such as stress test results and liquidity parameters.
Overall, the introduction of stress testing initiatives represents a significant milestone in the evolution of mutual fund evaluation practices in India. By embracing innovation and adopting proactive measures to enhance transparency and accountability, the mutual fund industry can continue to foster investor trust and contribute to the development of a resilient and sustainable financial ecosystem. By leveraging advanced computational techniques, this study contributes to the ongoing discourse surrounding risk management and decision-making in the realm of mutual fund investments. The insights garnered from this research have practical implications for investors, fund managers, and regulatory bodies, facilitating more informed investment strategies and risk mitigation measures in the mutual fund industry.

Author Contributions

Conceptualization, S.M.; Methodology, D.R.N.; Formal analysis, D.R.N.; Data curation, D.R.N.; Writing—original draft, S.M. and D.R.N.; Writing—review and editing, S.M. and D.R.N.; Project administration, S.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The data are freely available on the AMFI website’s dedicated section with the heading “Disclosure of Stress Test & Liquidity Analysis in respect of MidCap & SmallCap Funds”. We encourage interested researchers to download the data and test the models.

Conflicts of Interest

The authors declare no conflicts of interest.

Notes

1
2
In the article titled “Revealed! No. of days Nippon India, biggest small-cap fund, will need to sell off 50% of its portfolio”, the authors emphasized that more than 3–6 days would suggest stress in the mutual fund. https://www.businesstoday.in/mutual-funds/story/revealed-no-of-days-nippon-india-biggest-small-cap-fund-will-need-to-sell-off-50-of-its-portfolio-421556-2024-03-15, accessed on 20 April 2024.

Figure 1. Model 1: ANN with one hidden layer and two nodes for February 2024.
Figure 2. Model 2: ANN with one hidden layer and three nodes for February 2024.
Figure 3. Model 3: ANN with many nodes in the hidden layer for February 2024.
Figure 4. Model 4: ANN with two hidden layers with two nodes each for February 2024.
Figure 5. Model 5: ANN with two hidden layers with three nodes each for February 2024.
Figure 6. Model 6: ANN with multiple nodes in each of two hidden layers for February 2024.
Figure 7. Model 1 for February 2024, with estimates.
Figure 8. Model 4 for February 2024, with estimates.
Figure 9. Model 6 for February 2024, with estimates.
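The six architectures shown in Figures 1–6 differ only in the number of hidden layers and nodes per layer. The paper does not publish its model code, so the following is a minimal, hypothetical Python sketch using scikit-learn's `MLPClassifier`, with synthetic data standing in for AMFI's 14 stress-test parameters; the layer tuples mirror the figures, but node counts for the "many nodes" models are assumed.

```python
# Hypothetical sketch of the six ANN architectures (Figures 1-6),
# expressed as scikit-learn hidden-layer tuples. Synthetic data stands
# in for the 14 AMFI parameters; 10 nodes per "many nodes" layer is an
# assumption, not taken from the paper.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

ARCHITECTURES = {
    "Model 1": (2,),      # one hidden layer, two nodes
    "Model 2": (3,),      # one hidden layer, three nodes
    "Model 3": (10,),     # one hidden layer, many nodes
    "Model 4": (2, 2),    # two hidden layers, two nodes each
    "Model 5": (3, 3),    # two hidden layers, three nodes each
    "Model 6": (10, 10),  # two hidden layers, multiple nodes
}

rng = np.random.default_rng(0)
X = rng.normal(size=(29, 14))            # 29 funds x 14 parameters (synthetic)
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # placeholder high/low stress label
X = StandardScaler().fit_transform(X)    # standardize features before training

scores = {}
for name, layers in ARCHITECTURES.items():
    clf = MLPClassifier(hidden_layer_sizes=layers, max_iter=2000,
                        random_state=0).fit(X, y)
    scores[name] = clf.score(X, y)       # training accuracy per architecture
    print(name, round(scores[name], 2))
```

In this framing, comparing the models reduces to sweeping `hidden_layer_sizes`, which is how the simpler-versus-deeper comparison in Tables 9 and 10 can be reproduced on the public AMFI data.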
Table 1. Number of mutual funds as part of monthly stress test datasets.

Month | MidCap Funds | SmallCap Funds
February to September 2024 (except May 2024) | 29 | 27
May 2024 | 13 | 10

Source: Compiled from www.amfiindia.com.
Table 2. List of MidCap mutual funds, part of dataset (from February to September 2024) 1.

Sl. No | MidCap Mutual Fund | Excluded (X)
1 | Aditya Birla Sun Life MidCap Fund | X
2 | Axis MidCap Fund | X
3 | Bandhan MidCap Fund |
4 | Baroda BNP Paribas MidCap Fund | X
5 | Canara Robeco MidCap Fund |
6 | DSP MidCap Fund | X
7 | Edelweiss MidCap Fund |
8 | Franklin India Prima Fund | X
9 | HDFC MidCap Opportunities Fund | X
10 | HSBC MidCap Fund |
11 | ICICI Prudential MidCap Fund | X
12 | Invesco India MidCap Fund | X
13 | ITI MidCap Fund |
14 | JM MidCap Fund |
15 | Kotak Emerging Equity Fund |
16 | LIC MF MidCap Fund | X
17 | Mahindra Manulife MidCap Fund | X
18 | Mirae Asset MidCap Fund | X
19 | Motilal Oswal MidCap Fund | X
20 | Nippon India Growth Fund |
21 | PGIM India MidCap Opportunities Fund | X
22 | Quant MidCap Fund |
23 | SBI Magnum MidCap Fund | X
24 | Sundaram MidCap Fund |
25 | Tata MidCap Growth Fund | X
26 | Taurus MidCap Fund |
27 | Union MidCap Fund | X
28 | UTI MidCap Fund |
29 | WhiteOak Capital MidCap Fund |

1 The “X” mark in the table indicates the exclusion of the MidCap fund for the non-availability of the data.
Table 3. List of SmallCap mutual funds, part of dataset (from February to September 2024) 1.

Sl. No | SmallCap Mutual Fund | Excluded (X)
1 | Aditya Birla Sun Life SmallCap Fund |
2 | Axis SmallCap Fund |
3 | Bandhan SmallCap Fund | X
4 | Bank of India SmallCap Fund |
5 | Baroda BNP Paribas SmallCap Fund |
6 | Canara Robeco SmallCap Fund | X
7 | DSP SmallCap Fund |
8 | Edelweiss SmallCap Fund | X
9 | Franklin India Smaller Companies Fund |
10 | HDFC SmallCap Fund |
11 | HSBC SmallCap Fund | X
12 | ICICI Prudential SmallCap Fund |
13 | Invesco India SmallCap Fund |
14 | ITI SmallCap Fund | X
15 | JM SmallCap Fund |
16 | Kotak SmallCap Fund | X
17 | LIC MF SmallCap Fund |
18 | Mahindra Manulife SmallCap Fund |
19 | Motilal Oswal SmallCap Fund |
20 | Nippon India SmallCap Fund | X
21 | PGIM India SmallCap Fund |
22 | Quant SmallCap Fund | X
23 | Quantum SmallCap Fund |
24 | SBI SmallCap Fund |
25 | Sundaram SmallCap Fund | X
26 | Tata SmallCap Fund |
27 | Union SmallCap Fund |
28 | UTI SmallCap Fund | X

1 The “X” mark in the table indicates the exclusion of the SmallCap fund for the non-availability of the data.
Table 4. The 14 parameters as features for model development.

Sl. No | Independent Variable | Description
1 | AUM (INR in crores) | Asset Under Management in crores of INR (1 crore equals 10 million).
2 | Liability-side top 10 investors (%) | Indicates the % of AUM held by the top 10 investors of the scheme.
3 | Asset-side (AUM held in) LargeCap (%) | Rows 3–6 indicate the % of scheme AUM invested in LargeCap, MidCap, and SmallCap securities, and the % held in cash.
4 | Asset-side (AUM held in) MidCap (%) |
5 | Asset-side (AUM held in) SmallCap (%) |
6 | Asset-side (AUM held in) cash (%) |
7 | Portfolio annualized standard deviation (%) | Standard deviation indicates how widely a stock’s or portfolio’s returns vary from the mean over a given period; a higher standard deviation indicates greater variability in returns.
8 | Benchmark annualized standard deviation (%) |
9 | Portfolio beta | Beta is a measure of the volatility—or systematic risk—of a security or portfolio relative to the market (usually a broad market index such as the BSE-500 or NSE-500). Stocks with betas above 1.0 can be interpreted as more volatile than the broad market index.
10 | Portfolio trailing 12 m P/E | The price-to-earnings (P/E) ratio is one of the most widely used valuation measures, as it accounts for a company’s actual earnings rather than projected earnings. The P/E ratio indicates how much an investor is willing to pay for one unit of earnings. For a given company, whether the current P/E is suitable depends on various factors, including sector, growth prospects, and the business cycle.
11 | Benchmark trailing 12 m P/E |
12 | Benchmark trailing 12 m P/E, 1 year ago |
13 | Benchmark trailing 12 m P/E, 2 years ago |
14 | Portfolio turnover ratio (%) | Portfolio turnover measures how frequently assets within a mutual fund scheme are bought and sold by the fund manager over a given period. It is calculated by taking either the total amount of new securities purchased or the amount of securities sold (whichever is less) over a particular period, divided by the total net asset value (NAV) of the fund, and is usually reported for a 12-month period. For example, a 5% portfolio turnover ratio suggests that 5% of the portfolio holdings changed over a one-year period.
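Three of the Table 4 parameters are simple computations on return and trading data. As an illustration only, the sketch below works through them with synthetic daily returns and hypothetical purchase/sale figures (none of these numbers come from the AMFI disclosures):

```python
# Illustrative calculations for three Table 4 parameters on
# hypothetical data (not the AMFI figures).
import numpy as np

rng = np.random.default_rng(1)
portfolio_r = rng.normal(0.0005, 0.01, 252)                 # one year of daily returns
benchmark_r = portfolio_r * 0.8 + rng.normal(0, 0.005, 252) # correlated benchmark

# Annualized standard deviation (%): daily std scaled by sqrt(252 trading days)
ann_std = portfolio_r.std(ddof=1) * np.sqrt(252) * 100

# Portfolio beta: cov(portfolio, benchmark) / var(benchmark)
beta = np.cov(portfolio_r, benchmark_r, ddof=1)[0, 1] / np.var(benchmark_r, ddof=1)

# Portfolio turnover (%): min(purchases, sales) / net asset value
purchases, sales, nav = 120.0, 90.0, 1800.0  # INR crores, hypothetical
turnover = min(purchases, sales) / nav * 100  # rounds to the 5% of Table 4's example

print(round(ann_std, 2), round(beta, 2), round(turnover, 1))
```

The turnover line reproduces the 5% example given in the table; the standard-deviation and beta values depend on the random returns and are not meaningful in themselves.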
Table 5. Dependent variables for stress test.

Dependent Variable | Binning Categorization
Stress test pro-rata liquidation after removing bottom 20% of portfolio based on scrip liquidity (considering 10% PV with 3x volumes): 50% portfolio | Stress level ≥ 7 days = high stress; stress level < 7 days = low stress
Stress test pro-rata liquidation after removing bottom 20% of portfolio based on scrip liquidity (considering 10% PV with 3x volumes): 25% portfolio | Stress level ≥ 7 days = high stress; stress level < 7 days = low stress
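The Table 5 binning rule reduces each liquidation figure to a binary label at a 7-day cutoff. A minimal sketch (the function name and example values are our own):

```python
# Table 5 binning: days-to-liquidate under stress -> binary label,
# with the 7-day cutoff applied to both the 50% and 25% portfolio figures.
def stress_label(days_to_liquidate: float, cutoff: float = 7.0) -> str:
    """Return 'high stress' if liquidation takes >= cutoff days, else 'low stress'."""
    return "high stress" if days_to_liquidate >= cutoff else "low stress"

# e.g. a fund needing 9 days to liquidate 50% of its portfolio is high stress
labels = [stress_label(d) for d in (2.0, 6.9, 7.0, 9.0)]
print(labels)  # ['low stress', 'low stress', 'high stress', 'high stress']
```

The cutoff follows note 2 above, where liquidation taking more than a few days is read as a sign of stress; the boundary case (exactly 7 days) falls into the high-stress bin per the "≥ 7 days" rule.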
Table 6. Companies categorized based on stress levels (February–September 2024).

Month | Category | Low Stress @ 50% Portfolio | High Stress @ 50% Portfolio | Low Stress @ 25% Portfolio | High Stress @ 25% Portfolio
February 2024 | MidCap | 19 | 8 | 23 | 4
February 2024 | SmallCap | 8 | 13 | 13 | 8
March 2024 | MidCap | 19 | 8 | 23 | 4
March 2024 | SmallCap | 8 | 13 | 13 | 8
April 2024 | MidCap | 20 | 5 | 22 | 3
April 2024 | SmallCap | 8 | 13 | 13 | 8
May 2024 | MidCap | 7 | 2 | 8 | 1
May 2024 | SmallCap | 4 | 5 | 6 | 3
June 2024 | MidCap | 23 | 6 | 26 | 3
June 2024 | SmallCap | 16 | 12 | 20 | 8
July 2024 | MidCap | 22 | 7 | 26 | 3
July 2024 | SmallCap | 16 | 12 | 20 | 8
August 2024 | MidCap | 21 | 8 | 26 | 3
August 2024 | SmallCap | 17 | 11 | 20 | 8
September 2024 | MidCap | 20 | 9 | 26 | 3
September 2024 | SmallCap | 16 | 13 | 21 | 8
Table 7. Performance metrics of ANN models for MidCap funds during February 2024 (with pro-rata basis liquidation of 50% portfolio).

Model | Accuracy | No Information Rate | Kappa | McNemar’s Test p-Value | Sensitivity | Specificity
1 hidden layer with 2 nodes | 0.80 | 0.80 | 0 | 0.073 | 0.0 | 1.0
1 hidden layer with 3 nodes | 1.00 | 0.64 | 1 | NA | 1.0 | 1.0
2 hidden layers with 2 nodes | 0.96 | 0.64 | 0.911 | 1.00 | 1.0 | 0.89
2 hidden layers with 3 nodes | 0.72 | 0.72 | 0 | 0.023 | 1.0 | 0.0
2 hidden layers with 10 nodes | 0.72 | 0.72 | 0 | 0.023 | 1.0 | 0.0
3 hidden layers with 10 nodes | 0.72 | 0.72 | 0 | 0.023 | 1.0 | 0.0
Table 8. Performance metrics of Model 1 for MidCap funds during February 2024 (with pro-rata basis liquidation of 50% portfolio).

Confusion matrix for training dataset (Stress_Level, 50% portfolio):
Actual 0 | Predicted 0: 14 | Predicted 1: 0
Actual 1 | Predicted 0: 0 | Predicted 1: 6
Misclassification rate: 0.00

Confusion matrix for validation dataset (Stress_Level, 50% portfolio):
Actual 0 | Predicted 0: 4 | Predicted 1: 0
Actual 1 | Predicted 0: 0 | Predicted 1: 1
Misclassification rate: 0.00
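The Table 7 metrics all derive from confusion matrices like the one in Table 8. The sketch below recomputes them from the training matrix above; which class counts as "positive" is our assumption (we take class 1, high stress), since the paper's convention is not stated.

```python
# Deriving Table 7-style metrics from the Table 8 training confusion matrix.
# Assumption: class 1 (high stress) is the positive class.
import numpy as np

cm = np.array([[14, 0],   # actual 0 (low stress):  predicted 0, predicted 1
               [0,  6]])  # actual 1 (high stress): predicted 0, predicted 1

tn, fp = cm[0]
fn, tp = cm[1]
total = cm.sum()

accuracy = (tp + tn) / total                        # correct predictions / all
misclassification = (fp + fn) / total               # the rate reported in Table 8
sensitivity = tp / (tp + fn)                        # true-positive rate
specificity = tn / (tn + fp)                        # true-negative rate
no_information_rate = cm.sum(axis=1).max() / total  # majority-class share

print(accuracy, misclassification, sensitivity, specificity, no_information_rate)
```

For this matrix, every fund is classified correctly, so accuracy is 1.0 and the misclassification rate is 0.00, matching Table 8; the no-information rate (14/20 = 0.7) is the accuracy a trivial majority-class predictor would achieve.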
Table 9. Total accuracy of ANN across various architectures (Model 1 to Model 6) across training and validation datasets from February 2024 to September 2024 for MidCap.

Month | Target | Model 1 (Train/Val) | Model 2 (Train/Val) | Model 3 (Train/Val) | Model 4 (Train/Val) | Model 5 (Train/Val) | Model 6 (Train/Val)
February 2024 | STPRL@ 50% portfolio | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 0.90/1.00 | 0.90/0.80 | 1.00/0.80
February 2024 | STPRL@ 25% portfolio | 1.00/1.00 | 0.70/1.00 | 0.95/1.00 | 1.00/1.00 | 0.80/1.00 | 0.80/1.00
March 2024 | STPRL@ 50% portfolio | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 0.95/1.00 | 1.00/1.00 | 1.00/1.00
March 2024 | STPRL@ 25% portfolio | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00
April 2024 | STPRL@ 50% portfolio | 1.00/0.80 | 1.00/0.80 | 1.00/0.80 | 0.85/0.60 | 0.70/1.00 | 0.65/1.00
April 2024 | STPRL@ 25% portfolio | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/0.80 | 1.00/1.00 | 1.00/1.00
May 2024 | STPRL@ 50% portfolio | 0.86/0.50 | 0.86/1.00 | 0.57/1.00 | 0.71/1.00 | 0.71/0.50 | 0.30/1.00
May 2024 | STPRL@ 25% portfolio | 0.71/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/0.50 | 1.00/0.50 | 1.00/1.00
June 2024 | STPRL@ 50% portfolio | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00
June 2024 | STPRL@ 25% portfolio | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00
July 2024 | STPRL@ 50% portfolio | 0.95/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00
July 2024 | STPRL@ 25% portfolio | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00
August 2024 | STPRL@ 50% portfolio | 0.57/0.60 | 1.00/1.00 | 0.91/0.40 | 1.00/1.00 | 0.38/0.40 | 0.91/0.80
August 2024 | STPRL@ 25% portfolio | 0.86/1.00 | 1.00/1.00 | 1.00/1.00 | 0.86/1.00 | 1.00/1.00 | 1.00/1.00
September 2024 | STPRL@ 50% portfolio | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00
September 2024 | STPRL@ 25% portfolio | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00
Table 10. Total accuracy of ANN across various architectures (Model 1 to Model 6) across training and validation datasets from February 2024 to September 2024 for SmallCap.

Month | Target | Model 1 (Train/Val) | Model 2 (Train/Val) | Model 3 (Train/Val) | Model 4 (Train/Val) | Model 5 (Train/Val) | Model 6 (Train/Val)
February 2024 | STPRL@ 50% portfolio | 0.88/1.00 | 0.35/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/0.75 | 1.00/1.00
February 2024 | STPRL@ 25% portfolio | 0.53/0.75 | 0.53/1.00 | 0.47/1.00 | 0.71/0.75 | 0.64/0.50 | 0.29/0.75
March 2024 | STPRL@ 50% portfolio | 1.00/0.75 | 1.00/0.75 | 1.00/0.75 | 1.00/0.75 | 1.00/0.75 | 1.00/1.00
March 2024 | STPRL@ 25% portfolio | 1.00/1.00 | 1.00/1.00 | 0.44/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00
April 2024 | STPRL@ 50% portfolio | 0.35/0.75 | 1.00/1.00 | 1.00/1.00 | 0.88/0.75 | 0.53/1.00 | 0.94/1.00
April 2024 | STPRL@ 25% portfolio | 0.94/0.75 | 1.00/1.00 | 0.59/1.00 | 0.53/1.00 | 1.00/1.00 | 0.71/1.00
May 2024 | STPRL@ 50% portfolio | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00
May 2024 | STPRL@ 25% portfolio | 1.00/1.00 | 1.00/1.00 | 0.72/1.00 | 0.57/1.00 | 0.43/0.50 | 0.71/1.00
June 2024 | STPRL@ 50% portfolio | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00
June 2024 | STPRL@ 25% portfolio | 1.00/1.00 | 1.00/0.75 | 0.53/1.00 | 0.88/1.00 | 1.00/1.00 | 1.00/1.00
July 2024 | STPRL@ 50% portfolio | 1.00/1.00 | 1.00/1.00 | 0.62/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00
July 2024 | STPRL@ 25% portfolio | 1.00/1.00 | 0.67/0.75 | 0.61/0.75 | 1.00/1.00 | 0.56/0.75 | 0.22/0.75
August 2024 | STPRL@ 50% portfolio | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00
August 2024 | STPRL@ 25% portfolio | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 0.78/1.00 | 1.00/1.00
September 2024 | STPRL@ 50% portfolio | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00 | 1.00/1.00
September 2024 | STPRL@ 25% portfolio | 0.44/1.00 | 0.66/1.00 | 0.55/0.75 | 0.55/1.00 | 0.61/1.00 | 0.72/1.00
