1. Introduction
Surveys are commonly employed in the social sciences for several compelling reasons: survey data can address a variety of research questions, each associated with multiple corresponding hypotheses. Typically, these hypotheses are derived from existing theories, in line with deductive reasoning, and are subjected to various quantitative (primarily statistical) research methods. However, there are situations in which an inductive approach to hypothesis development is more suitable, such as when established theory is lacking, empirical data on the subject are limited, or the phenomenon under investigation is intricate and involves many variables. When a deductive approach proves inadequate, researchers may need to adopt an inductive, exploratory approach to hypothesis generation. This inductive approach is often synonymous with qualitative research methods, relying on subjective data interpretation. However, Kell and Oliver (2004) argue that an inductive approach can also be founded on quantitative research methods when formulating hypotheses. Kell and Oliver (2004) contend that quantitative data can give rise to new hypotheses if the data are allowed to speak for themselves, without preconceived notions or prior beliefs shaping them.
Selecting a correctly specified model is essential to enabling an inductive approach to quantitative data. Correct model selection frees the data from preconceptions and allows them to reveal insights independently; in other words, a properly chosen model minimises omitted variable bias. As Clarke (2005) points out, omitting a critical explanatory variable increases omitted variable bias, and the risk is particularly high when numerous non-explanatory variables are included in the equation while a genuine explanatory variable is left out. Model selection should therefore be conducted with care. There are two main approaches to appropriate model selection: (1) selection through information criteria and (2) selection through penalising models. Information criteria become impractical when dealing with many potential covariates (Arnold 2010), so penalising models are needed when the data have many potential covariates. The strength of penalising models is their ability to identify the most relevant explanatory variables in data with many covariates. Nevertheless, classic penalised models have their limitations: for instance, lasso and elastic net regression are inconsistent with ordinal data, commonly encountered in surveys, or only consistent under specific constraints (Jia and Yu 2010). Several studies have examined different ways of selecting models (Desboulets 2018; Negrín et al. 2010; Pacifico and Pilone 2024).
Because surveys typically yield ordinal data with many covariates, a penalising model that can handle ordinal data is needed. Since the classic penalising models cannot consistently model ordinal data, new penalising statistical models have been developed to accommodate ordinal data with many covariates. Two such models are the cumulative generalised monotone incremental forward stagewise (GMIFS) method and the parallel element linked multinomial-ordinal (ELMO) model, both of which are adept at handling ordinal data (Archer et al. 2014; Wurm et al. 2021). These models were originally developed for biostatistical applications in cancer research and have yet to be demonstrated to work on survey data.
What if there is a situation with numerous covariates but no established theory to inform hypothesis formulation? In this paper, we tackle this concern by proposing a novel method for conducting quantitative inductive research on survey data when the variable of interest follows an ordinal distribution. The primary contribution of this paper thus lies in presenting a new approach for conducting quantitative inductive research on survey data featuring an ordinal variable. To achieve this, we investigate the application of novel and classic penalising models to select explanatory variables affecting the target debt levels of Swedish listed firms. The primary aim of this study is the methodological description and application of the novel GMIFS and ELMO models to ordinal survey data in order to achieve quantitative inductive reasoning, in other words, to generate valid quantitatively based hypotheses. The chosen case (target debt levels in Swedish listed firms) serves the pedagogical purpose of outlining the methodological application, and through it this study aspires to demonstrate the methodology transparently for future application in other settings. As this is a methodological paper, the case's findings are not its main purpose; they are nevertheless discussed for the sake of transparency. The secondary aim therefore follows from the application to this specific case: to discover variables that can predict a company's target debt level and to establish a corresponding hypothesis.
Section 2 provides an overview of prior research in the field.
Section 3 outlines the methodology.
Section 4 presents the case data.
Section 5 reports the findings obtained from various penalised models. Finally,
Section 6 discusses the findings obtained by the different methods and concludes the study.
2. The Case
Studying the determinants and consequences shaping capital structure decisions is not new: in 1984, Myers asked, “How do firms choose their capital structures?” (Myers 1984, p. 575). This query marked a starting point for what was to become the capital structure research field. Building on this inquiry, Graham and Harvey (2001) conducted a seminal study that unveiled a significant finding: 81 per cent of the surveyed U.S. firms adhered to either a somewhat strict or flexible target debt level. Subsequently, a replication study spanning European countries (the UK, the Netherlands, France, and Germany) by Brounen et al. (2004) reinforced this discovery, with 76 per cent of firms reporting a somewhat strict or flexible target debt level.
The existing literature offers valuable insights into the determinants of adopting target debt. Mielcarz et al. (2018) revealed that firms prefer debt-leverage-ratio targets. On the other hand, Flannery and Rangan (2006) observed a tendency among firms to align their debt with long-term capital structure targets. Lemmon et al. (2008) uncovered that leverage tends to remain constant over time, persisting for over two decades. These findings suggest that the determinants of target debt levels may be factors with enduring stability.
Miglo (2020) discovered that firms that deviate further from their target debt levels are less inclined to adopt a zero-leverage policy than those closer to their targets. According to their model, this reluctance stems from the potential for a significant tax shield when moving towards the target, all else being equal. Similarly, Gungoraydinoglu and Öztekin (2021) revealed that shocks affecting firms may result in fluctuations in target debt ratios over time without necessarily causing observable changes in debt ratios. They also found that while leverage targets are influenced by observed leverage ratios, the degree to which cost and benefit considerations manifest in observed leverage versus leverage targets and/or target deviations may vary among firms and over time. Additionally, Zhou et al. (2016) unveiled a positive correlation between firms’ target debt levels and the cost of their equity. In a related vein, Hovakimian et al. (2001) identified the median leverage within industry categories as a pivotal metric influencing the target debt levels of all companies operating within the same category.
Furthermore, Harford et al. (2009) uncovered the usage of capital structure targets among U.S. firms to facilitate substantial acquisitions. Antoniou et al. (2008) delved into the influence of solvency and firm size, revealing their positive effects on financial leverage, while increased profitability, growth prospects, and share prices negatively impacted financial leverage. This underscores the notion that the market environment within which firms operate can either increase or decrease their leverage targets. Campello (2003) shed light on the impact of economic downturns, highlighting that firms with high debt burdens fare more poorly than their low-debt counterparts during recessions. This relationship particularly resonates in industry sectors with low debt exposure, whereas high-debt sectors exhibit greater resilience. Thus, maintaining a debt leverage target conforming with other firms within the same industry category emerges as a logical strategy during both economic downturns and upturns.
Marchica and Mura (2010) concurred, emphasising the industry median leverage as a critical determinant of companies’ target debt levels. Memon et al. (2021), Touil and Mamoghli (2020), and Vo et al. (2022) studied how fast firms adjusted their capital structures after a target debt ratio was set.
In addition, this case (the secondary aim) contributes by presenting the determinants of target debt levels within firms. Specifically, we delve into the relationship between estimated target debt levels and accounting-based data, a subject explored in greater detail in Section 4.
3. Method
In line with the primary aim of this study, the method used in the case is outlined below.
These questionnaires were dispatched in 2005 and 2008 to the CFOs of all companies listed on the Stockholm Stock Exchange. In cases where a CFO was absent, the questionnaire was directed to another senior executive responsible for financial management. The questionnaire comprised 12 questions, ten of which featured subqueries, and respondents were prompted to rank each query on a scale from zero (never/not important) to four (always/very important)1. The consolidated dataset encompasses responses from both 2005 and 2008, featuring 292 companies; of these, 42 remained active throughout both years. The overall adjusted response rate for the two years stood at 39.1 per cent, with non-responses accounting for 60.9 per cent; see Table A1 for the descriptive statistics of the survey’s main questions. An important reason for utilising these survey data is the high response rate of approximately 40 per cent (in contrast to similar studies, which typically have response rates of around 10 per cent2). Another argument is, as mentioned, that the survey data should be viewed only as a single case: the article’s main aim is not to analyse the usage of target debt levels in Sweden today but rather to engage in a general discussion of how to apply an inductive method using quantitative data.
The other variables were drawn from the Swedish Companies Registration Office (Bolagsverket), to which all limited companies are mandated to submit annual reports. This database offers over 170 variables for all such companies. After matching with the Stockholm Stock Exchange listings using corporate identification numbers, each company had more than 170 potential auxiliary variables. Following the elimination of unusable variables, 167 variables were retained for further analysis.
3.1. Multiple Imputations via Classification and Regression Trees (CARTs)
If data are missing, a penalising variable selection method cannot be applied without first addressing the issue; the method would likely break down (Long and Johnson 2015). Therefore, it is imperative to estimate the missing data before applying a penalising method.
In this study, we adopted multiple imputation by chained equations (MICE) to estimate missing data, following the approach recommended by van Buuren and Groothuis-Oudshoorn (2011). However, using linear and logistic regressions for imputation presents challenges, such as multicollinearity in the linear regression model and separation in the logistic regression model (Albert and Anderson 1984). A solution to these problems was proposed by Burgette and Reiter (2010), who advocated the use of classification and regression trees (CARTs) (Breiman et al. 1984).
Burgette and Reiter (2010) conducted a simulation study comparing the CART-based MICE algorithm to MICE using other algorithms. Their findings indicated that, with “quadratic and interaction terms, CART-based MICE results in notably lower mean squared errors and biases. Even the estimated main effects are somewhat closer to the truth… Across all β elements, approximately 70% of the intervals cover the truth when using CART-based MICE, compared with 53% for standard MICE” (Burgette and Reiter 2010, pp. 1072–73). Given that CART-based MICE outperformed other methods in the simulation study, we employed CART-based MICE to address non-response and missing values in both the survey and the register data.
It is important to note that the imputation of non-response in the survey data was carried out without using information obtained from the register data, and vice versa: CART-based MICE was performed separately on the two datasets, with one imputation of the survey data and another of the register data. One downside of CART-based MICE is its hierarchical tree structure, which could exacerbate cross-correlations between different inputs. We opted for ten imputations; as Schafer (1999) notes, when using Rubin’s (1987) formula for relative efficiency, five to ten imputations are typically sufficient. Three variables had more than 70 per cent missing values and were subsequently removed. Before CART-based MICE, all 162 register variables contained incomplete information; after the imputation, 147 variables had complete information. If a variable still had missing values after CART-based MICE, it was removed to facilitate lasso regression; in total, 15 variables were deleted. After imputation, the survey data retained 109 variables with complete information, including our primary variable of interest, the target debt level.
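For readers who wish to reproduce this step outside R, the chained-equations idea with regression trees as the conditional models can be approximated with scikit-learn. The Python sketch below is a simplified stand-in (it does not draw imputations from a Bayesian predictive distribution as the mice algorithm does), and the data frames, tree settings, and number of imputations are illustrative assumptions only.

# Rough analogue of CART-based MICE (assumption: a simplified stand-in for
# the R package 'mice' with method = "cart", not the authors' exact pipeline).
import numpy as np
import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.tree import DecisionTreeRegressor

def cart_multiple_impute(df: pd.DataFrame, m: int = 10) -> list[pd.DataFrame]:
    """Return m completed copies of df, imputed column by column with
    regression trees in a chained-equations fashion."""
    completed = []
    for seed in range(m):
        imputer = IterativeImputer(
            estimator=DecisionTreeRegressor(min_samples_leaf=5, random_state=seed),
            sample_posterior=False,  # trees cannot sample a posterior; copies differ only via tree randomness
            max_iter=10,
            random_state=seed,
        )
        filled = imputer.fit_transform(df.to_numpy(dtype=float))
        completed.append(pd.DataFrame(filled, columns=df.columns, index=df.index))
    return completed

# Hypothetical usage: impute survey and register data separately, as in the study.
# survey_completed = cart_multiple_impute(survey_df, m=10)
# register_completed = cart_multiple_impute(register_df, m=10)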
3.2. Multinomial Lasso
Standard methods for variable selection are the lasso (Tibshirani 1996), ridge, and elastic net (Zou and Hastie 2005) regressions; of the three, the lasso is the most widely used for variable selection. The variable of interest in this study has an ordinal distribution. The multinomial lasso model can be fitted to ordinal data, but its performance may be inferior to that of a penalised model designed for an ordinal response (see Wurm et al. 2021).
Let $n$ be the sample size and $(x_i, y_i)$, $i = 1, \ldots, n$, the observations, where $x_i$ is a $p$-dimensional vector of covariates and $y_i$ is the dependent variable following a multinomial distribution over $K$ classes. The negative log-likelihood is minimised subject to the constraint
$$\sum_{k=1}^{K} \sum_{j=1}^{p} \lvert \beta_{jk} \rvert \le t .$$
This constraint is an $\ell_1$-norm constraint (Hastie et al. 2015); in the equivalent penalised (Lagrangian) form, the tuning parameter $\lambda$ controls the shrinkage of the models: if $\lambda = 0$, the models give the unpenalised (OLS) estimates, and the shrinkage increases as $\lambda$ increases (Archer et al. 2014). Some of the coefficients will shrink to precisely zero when $\lambda$ is sufficiently large, and the lasso solution is unique under mild conditions.
The ungrouped multinomial lasso model above allows the lasso to select different variables for different outcomes. A different variety of the model described above is the grouped multinomial lasso: the grouping is performed on the coefficient vectors $\beta_j = (\beta_{j1}, \ldots, \beta_{jK})$, and the penalised objective is rewritten as
$$\min_{\beta_0,\, \beta} \; -\frac{1}{n}\, \ell(\beta_0, \beta) + \lambda \sum_{j=1}^{p} \lVert \beta_j \rVert_2 .$$
The constraint now uses the $\ell_2$-norm of each coefficient group. The model penalises and selects the variables that should be included later in a regression model. Both lasso methods are fitted using a coordinate descent algorithm (Hastie et al. 2015).
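As an illustration of the ungrouped multinomial lasso, the following Python sketch fits an L1-penalised multinomial logistic regression with scikit-learn and reads off the selected covariates. The data are simulated placeholders and the penalty strength is arbitrary; the grouped variant would additionally require a dedicated group-lasso solver (in practice, both variants are available in the R package glmnet).

# Ungrouped multinomial lasso: L1-penalised multinomial logistic regression.
# A minimal sketch; X (n x p covariates) and y (responses coded 0-3) are
# placeholders for the imputed survey/register data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 25))      # stand-in for the covariate matrix
y = rng.integers(0, 4, size=200)    # stand-in for the 0-3 target-debt answer

X_std = StandardScaler().fit_transform(X)  # penalised fits need standardised inputs

# C is the inverse of the penalty strength lambda; smaller C means more shrinkage.
model = LogisticRegression(penalty="l1", solver="saga", C=0.1, max_iter=5000)
model.fit(X_std, y)

# Variables with a non-zero coefficient for at least one outcome class are
# the ones the ungrouped lasso "selects".
selected = np.where(np.any(model.coef_ != 0.0, axis=0))[0]
print("selected covariate indices:", selected)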
3.3. Element Linked Multinomial-Ordinal (ELMO) Models
Wurm et al. (2021) proposed a class of models called the element linked multinomial-ordinal (ELMO) models. The ELMO class is a subset of vector generalised linear models and is fitted for ordinal and multinomial regression with an elastic net penalty using a coordinate descent algorithm. In simulations and on real data, the elastic net has been shown to outperform the lasso and ridge regressions (Zou and Hastie 2005). Each of the three ELMO models operates with a link function consisting of two parts: the first part determines the model family3, and the second part is an ordinary link function4. This construction makes the ELMO class suitable for both ordinal data and unordered categorical data, through the parallel and nonparallel forms of the model, respectively. The nonparallel model can also be shrunk towards the parallel model using an over-parameterised version called the semi-parallel model (Wurm et al. 2021).
Let $y_i$ be a vector of length $K + 1$, where $y_{ik} = 1$ if observation $i$ belongs to class $k$ and 0 otherwise, with $K + 1$ classes. Let $X$ be a covariate matrix of size $n \times p$. The probability that observation $i$ with covariates $x_i$ belongs to class $k$ will be denoted by $p_{ik}$. Let $B$ be a $p \times K$ matrix of regression coefficients and $b_0$ be a vector of $K$ intercept values. The covariates’ corresponding linear predictors are recorded in the vector $\eta_i$. The class probabilities $p_i$ are connected to the linear predictors by $g(p_i) = \eta_i$, where $g$ consists of two parts: a function over the distribution family and an elementwise link function (see Wurm et al. 2021).
The specification of the three ELMO models is given below. Common to all models is the penalising parameter $\lambda$. The elastic net mixing parameter $\alpha$ makes the penalty a weighted average of the lasso and ridge penalties; furthermore, it can either be set manually or selected automatically from the data (Wurm et al. 2021). The elastic net penalty behaves as the lasso in that it shrinks coefficients to zero when no relationship to the dependent variable can be found (Zou and Hastie 2005). Usually, the mixing parameter $\alpha$ is set first, and then the tuning parameter $\lambda$ is used to select the best value (Wurm et al. 2021).
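Concretely, the penalty added to the (scaled) negative log-likelihood takes the form $\lambda \sum_j \left( \alpha \lvert b_j \rvert + \tfrac{1}{2}(1-\alpha) b_j^{2} \right)$. The small Python sketch below, with illustrative coefficient values only, computes this weighted average of the lasso and ridge terms.

# Elastic net penalty as a weighted average of the lasso (L1) and ridge (L2)
# penalties; a minimal sketch of the formula described above.
import numpy as np

def elastic_net_penalty(beta: np.ndarray, lam: float, alpha: float) -> float:
    """lam >= 0 controls overall shrinkage; alpha in [0, 1] mixes the two:
    alpha = 1 gives the lasso penalty, alpha = 0 the ridge penalty."""
    l1 = np.sum(np.abs(beta))
    l2 = 0.5 * np.sum(beta ** 2)
    return lam * (alpha * l1 + (1.0 - alpha) * l2)

# Example: the penalty added to the negative log-likelihood for one coefficient vector.
print(elastic_net_penalty(np.array([0.0, 0.5, -1.2]), lam=0.1, alpha=0.5))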
In the parallel model, the columns of the $B$ matrix are restricted to being identical. Therefore, a new variable, $b$, is introduced, which stands for the common column vector and ensures that all cumulative class probabilities move in the same direction. The penalised objective can be written as
$$M(b_0, b) = -\frac{1}{N}\, \ell(b_0, b) + \lambda \sum_{j=1}^{p} \left( \alpha\, \lvert b_j \rvert + \tfrac{1}{2}(1 - \alpha)\, b_j^{2} \right),$$
where $N$ is the sum of the multinomial trials and $\ell$ is the log-likelihood (Wurm et al. 2021).
In the nonparallel model, there are no restrictions on $B$, and as a result, not all the cumulative class probabilities are compelled to move in the same direction. Due to these properties, the nonparallel model is more suitable for unordered multinomial data, but it can still be used on ordinal data (Wurm et al. 2021).
The semi-parallel model can be used on both ordinal response data and unordered multinomial data: it is an over-parameterised nonparallel model containing both parallel and nonparallel coefficients. Depending on $\rho$, the semi-parallel fit may retain only the parallel or only the nonparallel coefficients, where $\rho$ is a third tuning parameter that scales the penalty on the parallel terms (Wurm et al. 2021).
The semi-parallel model uses the linear predictor
$$\eta_i = b_0 + (b^{T} x_i)\,\mathbf{1} + B^{T} x_i ,$$
compared with the nonparallel model $\eta_i = b_0 + B^{T} x_i$ and the parallel model $\eta_i = b_0 + (b^{T} x_i)\,\mathbf{1}$, where $\mathbf{1}$ represents a vector of $K$ ones. All three ELMO models are optimised using a coordinate descent algorithm. Because the semi-parallel model combines parallel and nonparallel coefficients, it is over-parameterised compared to a purely nonparallel model; with the elastic net penalty, however, the penalised likelihood generally converges to a unique solution. Some covariates in the penalised semi-parallel model may end up with only parallel coefficients, their nonparallel coefficients being set to zero, while other covariates retain both parallel and nonparallel coefficients (Wurm et al. 2021).
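To make the three forms concrete, the sketch below writes out their linear predictors under the notation above. The dimensions and values are illustrative only; in practice, the models would be fitted with software such as the ordinalNet R package accompanying Wurm et al. (2021) rather than assembled by hand.

# Linear predictors for the three ELMO forms (illustrative dimensions only).
import numpy as np

rng = np.random.default_rng(1)
p, K = 5, 3                    # p covariates, K linear predictors (K + 1 classes)
x = rng.normal(size=p)         # one observation's covariates
b0 = rng.normal(size=K)        # intercepts
B = rng.normal(size=(p, K))    # nonparallel coefficient matrix
b = rng.normal(size=p)         # common (parallel) coefficient vector
ones = np.ones(K)

eta_parallel = b0 + (b @ x) * ones                  # all predictors shift together
eta_nonparallel = b0 + B.T @ x                      # each predictor has its own coefficients
eta_semiparallel = b0 + (b @ x) * ones + B.T @ x    # parallel plus nonparallel terms

print(eta_parallel, eta_nonparallel, eta_semiparallel, sep="\n")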
3.4. Cumulative Generalised Monotone Incremental Forward Stagewise (GMIFS) Method
The cumulative generalised monotone incremental forward stagewise (GMIFS) method was developed by Archer et al. (2014) to fit a penalised model to ordinal data. The method builds on the incremental forward stagewise (IFS) procedure, which gives a penalised solution for non-ordinal data. IFS for linear regression resembles forward stepwise regression, which follows a greedy procedure; the difference is that the IFS coefficient updates are smaller and made more cautiously.
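To illustrate the incremental forward stagewise idea that GMIFS builds on, the sketch below implements plain IFS for a simulated linear regression problem: at each step, the coefficient of the predictor most correlated with the current residual is nudged by a small amount ε. The data, step size, and number of steps are illustrative assumptions; the full GMIFS method of Archer et al. (2014) applies the same logic to the cumulative logit likelihood given below.

# Incremental forward stagewise (IFS) for linear regression: small, repeated
# coefficient updates in the direction of the predictor most correlated with
# the residual. Simulated data; a sketch of the principle behind GMIFS.
import numpy as np

rng = np.random.default_rng(42)
n, p = 200, 20
X = rng.normal(size=(n, p))
true_beta = np.zeros(p)
true_beta[:3] = [2.0, -1.5, 1.0]                  # only three covariates matter
y = X @ true_beta + rng.normal(scale=0.5, size=n)

X = (X - X.mean(axis=0)) / X.std(axis=0)          # standardise predictors
r = y - y.mean()                                  # residual (centred response)
beta = np.zeros(p)
eps, n_steps = 0.01, 5000

for _ in range(n_steps):
    corr = X.T @ r                                # correlation with the residual
    j = np.argmax(np.abs(corr))                   # most correlated predictor
    delta = eps * np.sign(corr[j])
    beta[j] += delta                              # tiny update, unlike stepwise
    r -= delta * X[:, j]                          # refresh the residual

print("non-zero coefficients:", np.flatnonzero(np.round(beta, 2)))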
Let $y_i$ be a vector of length $K$, where $y_{ik} = 1$ if observation $i$ belongs to class $k$ and 0 otherwise, and the response has $K$ ordered levels. Let $X$ be a covariate matrix of size $n \times p$. The probability that observation $i$ with covariates $x_i$ belongs to class $k$ is denoted by $\pi_k(x_i)$ (Archer et al. 2014). Hence, the likelihood of an ordinal response model can be written as
$$L = \prod_{i=1}^{n} \prod_{k=1}^{K} \pi_k(x_i)^{y_{ik}} .$$
Consequently, the log-likelihood can be expressed as
$$\log L = \sum_{i=1}^{n} \sum_{k=1}^{K} y_{ik}\, \log \pi_k(x_i) .$$
For the cumulative logit model, where $P(Y_i \le k) = \exp(\alpha_k + x_i^{T}\beta) \,/\, \bigl(1 + \exp(\alpha_k + x_i^{T}\beta)\bigr)$, the corresponding log-likelihood with respect to $\alpha$ and $\beta$ is written as
$$\log L = \sum_{i=1}^{n} \sum_{k=1}^{K} y_{ik}\, \log\!\left( \frac{\exp(\alpha_k + x_i^{T}\beta)}{1 + \exp(\alpha_k + x_i^{T}\beta)} - \frac{\exp(\alpha_{k-1} + x_i^{T}\beta)}{1 + \exp(\alpha_{k-1} + x_i^{T}\beta)} \right),$$
with $\alpha_0 = -\infty$ and $\alpha_K = \infty$.
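A minimal numerical sketch of this cumulative logit log-likelihood is given below; the threshold values, coefficients, and simulated data are illustrative only, and the class probabilities are obtained as differences of consecutive cumulative probabilities, exactly as in the expression above.

# Cumulative-logit ordinal log-likelihood, following the formula above.
# A minimal sketch: alpha holds K - 1 increasing thresholds, beta the slopes,
# and y is coded 0, ..., K - 1. All values below are illustrative.
import numpy as np

def cumlogit_loglik(alpha: np.ndarray, beta: np.ndarray,
                    X: np.ndarray, y: np.ndarray) -> float:
    eta = X @ beta                                                  # shape (n,)
    # Cumulative probabilities P(Y <= k), padded with 0 and 1 at the ends.
    cum = 1.0 / (1.0 + np.exp(-(alpha[None, :] + eta[:, None])))    # (n, K-1)
    cum = np.hstack([np.zeros((len(y), 1)), cum, np.ones((len(y), 1))])
    pi = np.diff(cum, axis=1)                                       # class probabilities (n, K)
    return float(np.sum(np.log(pi[np.arange(len(y)), y])))

rng = np.random.default_rng(7)
X = rng.normal(size=(50, 4))
y = rng.integers(0, 4, size=50)                                     # K = 4 classes
print(cumlogit_loglik(np.array([-1.0, 0.0, 1.0]), np.zeros(4), X, y))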
One of the advantages of the GMIFS method is the estimation of a cross-correlation matrix that addresses the problem of cross-correlations between the different input variables. Because GMIFS estimates this matrix and proceeds through many small incremental steps, it requires numerous computationally expensive iterations and consequently has the longest computational time of the models considered (Archer et al. 2014).
4. Data
In line with the secondary aim of the current study, we investigate companies’ target debt levels. An inductive approach requires utilising a wide array of variables; this study’s survey data were therefore combined with the extensive database of the Swedish Companies Registration Office (Bolagsverket), including all variables accessible through the companies’ annual reports.
4.1. Survey Data
The survey was administered to the Chief Financial Officers (CFOs) of companies with primary listings on the Stockholm Stock Exchange in 2005 and 2008. In cases where no individual held the title of CFO, the survey was directed to another senior executive responsible for the company’s financial management. The questionnaire closely replicated the survey initially developed by Graham and Harvey (2001). It comprised 12 main questions, resulting in 112 variables, with one question focusing on the target debt level. For an English translation of the survey questionnaire, please refer to Appendix 1 of Daunfeldt and Hartwig (2014). The dependent variable under examination corresponded to one of the survey questions, which was as follows:
Does your company have a target range for the solvency (or the debt-to-equity) ratio? Please, choose one of the alternatives.
No target debt level.
Yes, a flexible target range (=the aim is that the solvency/debt-to-equity ratio should be within a wide range).
Yes, a somewhat tight target range (=the aim is that the solvency/debt-to-equity ratio should be within a relatively narrow range).
Yes, a strict target range (=the aim is that the solvency/debt-to-equity ratio should be at, or very close to, a certain percentage figure).
In 2005, the survey was distributed to all listed companies on the Stockholm Stock Exchange, a total of 244 companies. The survey was mailed on three occasions: the 8th of January, the 14th of March, and the 23rd of May. For those companies that did not respond in the initial round, follow-up contact was made via phone, with a polite encouragement to participate in the survey. Of the 244 companies surveyed, 112 returned the completed survey. Seven of these responses were deemed unusable, resulting in an adjusted response rate of 43.0 per cent.
In 2008, the survey was again dispatched to all 249 listed companies on the Stockholm Stock Exchange. The survey was mailed on four occasions: the 18th of February, the 10th of March, the 3rd of April, and the 16th of June. As in the previous survey, non-respondents in the initial round were contacted via phone. Out of the 249 initial surveys, 92 were returned. However, four of these responses could not be utilised, resulting in an adjusted response rate of 35.3 per cent. When combining the response rates from both surveys, the overall adjusted response rate amounted to 39.1 per cent. It is worth noting that these survey data have been previously used in studies conducted by Hartwig (2012) and Daunfeldt and Hartwig (2014).
4.2. Register Data
In Sweden, all limited companies are legally obliged to submit their annual reports to the Swedish Companies Registration Office (Bolagsverket). This information is subsequently cross-referenced with each company’s unique registration number. The data utilised in this study were sourced from PAR, a private consultancy agency. PAR has acquired accounting data from the Swedish Companies Registration Office and organised it into a comprehensive database. The PAR database comprises 162 accounting variables for all limited companies in Sweden. However, it is important to note that some values may be missing for certain companies.
This study used register data from the years preceding each survey, namely, 2004 and 2007. This choice ensures that the register data temporally precede the survey responses, supporting a causal interpretation. Subsequently, following multiple imputation, these register data were matched with the questionnaire responses using each company’s unique registration number. This matching process allowed us to identify the variables that may influence the adoption of a target debt level.
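For transparency about this step, the matching can be expressed as a simple key-based merge. The Python sketch below assumes hypothetical column names (e.g., "registration_number", "quick_asset_ratio"); the actual variable names in the PAR and survey files may differ.

# Matching register data (year t - 1) to survey responses (year t) by the
# company's registration number. Column names and values are hypothetical.
import pandas as pd

survey = pd.DataFrame({
    "registration_number": ["5560000001", "5560000002"],
    "survey_year": [2005, 2008],
    "target_debt_level": [2, 0],
})
register = pd.DataFrame({
    "registration_number": ["5560000001", "5560000002"],
    "fiscal_year": [2004, 2007],
    "quick_asset_ratio": [1.4, 0.9],
})

# Keep only survey firms that can be matched to the preceding year's accounts.
matched = survey.merge(register, on="registration_number", how="inner")
print(matched)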
6. Discussion and Conclusions
The primary aim of this study was to outline a new inductive approach based on a quantitative research method. This inductive approach could lay the foundation for further exploration of quantitative hypothesis formation. The outcomes of our case study support this assertion. Data with over 170 different variables were analysed with the novel and traditional penalising methods. Subsequently, a set of potential explanatory variables emerged from the penalising models, each giving rise to a hypothesis that was then tested in the regression phase. The results of this regression can be found in Appendix A and are discussed below.
The six penalising models identified the following variables: company age, machinery, bank overdraft facility utilised, operating profit (loss) per employee, equity ratio, and quick asset ratio. Consequently, a key question emerged: Which among the different models should be considered the most valid?
Wurm et al. (2021) investigated which penalised model performed best on real ordinal data. They tested seven different models: three versions of ELMO (parallel, nonparallel, and semi-parallel), two versions of the multinomial logistic lasso (ungrouped and grouped), GMIFS, and a cumulative logit model with forward stepwise variable selection using the AIC. The findings of Wurm et al. (2021) indicated that GMIFS outperformed all the other models, achieving a mean misclassification rate of 0.073. The parallel and semi-parallel ELMO were ranked second best, with a mean misclassification of 0.091. The ungrouped multinomial logistic lasso had a mean misclassification of 0.108, and the grouped multinomial logistic lasso’s mean misclassification was 0.158. The worst performing was the nonparallel ELMO, with a mean misclassification rate of 0.373.
In this study, the novel penalising models ELMO and GMIFS significantly improved selectivity in identifying potential explanatory variables: they yielded one explanatory variable, the quick asset ratio (see Table 3 and Table 4). The traditional models generated multiple hypotheses, which, in the regression phase, were not statistically significant. Therefore, the novel ELMO and GMIFS outperformed the traditional models, aligning with the findings of Wurm et al. (2021). Such selectivity towards statistically significant hypotheses is of great value for large-scale applications in future studies: when there is a large number of possible variables, or combinations of variables, for deductive testing, manual inductive reasoning followed by testing is not feasible. Hence, a selective penalising model could be vital for efficient quantitative hypothesis formation for deductive testing.
Hence, it is evident that an inductive approach can be grounded in quantitative methods, provided that the data can autonomously offer insights, as emphasised by Kell and Oliver (2004). Nevertheless, to enable the data to speak for themselves, it is imperative to employ a correctly specified model, thereby reducing the risk of omitted variable bias interfering with the hypothesis formulation process, as Clarke (2005) asserts. Considering this, the authors of this study advocate the use of the new penalising models, ELMO and GMIFS, especially when dealing with ordinal data. These models prove invaluable for selecting optimal explanatory variables, a critical step in hypothesis development. However, the novelty of this study is also a limitation: to the authors’ knowledge, this is the first methodological paper to apply the ELMO and GMIFS models inductively to survey data. Consequently, this study utilises penalising models that have never previously been shown to work on survey data, which creates uncertainty about whether the novel ELMO and GMIFS might produce unexpected errors or incorrect results when applied in a new setting. In concluding the primary aim, this paper demonstrates that these novel penalising models work on this dataset. However, future studies are needed to further explore quantitative inductive research based on novel penalising models.
Aligning with the secondary aim of this study, the generated hypothesis is that the quick asset ratio affects the target debt level; it was the sole variable with a significant impact. This hypothesis was subsequently tested and, within the boundaries of the case, supported. These conclusions are further reinforced by the cumulative ordinal logit model, which reveals a significant negative effect of the quick asset ratio on the target debt level (for additional details, refer to Table A2, Table A3, Table A6, and Table A7).
Because the quick asset ratio was later confirmed to be the only variable with a statistically significant effect on the target debt level, the novel ELMO and GMIFS were superior to the older penalising models. The older penalising models generated a larger list of potential explanatory variables, including non-significant ones. Furthermore, the ungrouped multinomial lasso did not include the significant explanatory variable, the quick asset ratio; this method thus included several non-explanatory variables while excluding the sole explanatory variable, an error that increases omitted variable bias, as Clarke (2005) highlighted.
There are several limitations to our paper. The first limitation of the case is that the survey data are old. Arguably, the age of the survey data does not affect the main purpose of this study, as the case is mainly employed to outline the method pedagogically; its age is therefore of less importance for the primary aim. Nevertheless, the age of the survey data impairs the certainty of drawing real-world conclusions regarding the generated hypothesis on the quick asset ratio and target debt levels. The case also has an important strength: the high response rate in the survey and the large number of possible explanatory variables. The high response rate is relevant for the case application and for outlining the method, in line with the primary aim of this study, which demonstrates the novel penalising models’ inductive application to survey data. However, future research with contemporary data must verify the real-world association between the quick asset ratio and target debt levels.
Another limitation concerns this study’s primary aim. While the novel penalising models have proved effective in this dataset, their application to survey data is new and not without uncertainties. Unexpected errors or misclassifications could occur, particularly when these models are employed in new settings. Thus, future research must investigate these models’ robustness across different datasets and conditions.
Furthermore, several variables identified by traditional models were not statistically significant, which invites reflection on potential influential factors not considered in this analysis. The iterative nature of model building, especially in an inductive approach, suggests that the inclusion of additional variables could provide deeper insights.
In conclusion, this study has demonstrated the potential of novel penalising models like ELMO and GMIFS in a quantitative inductive framework. These models have shown superior performance in identifying key variables that influence the target debt level in Swedish listed companies. However, the research’s inductive nature and the application’s novelty also suggest a cautious approach, advocating for further empirical testing and refinement of these models.
By outlining these potential variables and methodological enhancements for future research, this paper provides a roadmap for subsequent empirical inquiries. The groundwork laid in this study encourages an iterative quantitative inductive research strategy.