Article

Particle MCMC in Forecasting Frailty-Correlated Default Models with Expert Opinion

Department of Actuarial Studies and Business Analytics, Macquarie Business School, Macquarie University, Sydney, NSW 2109, Australia
J. Risk Financial Manag. 2023, 16(7), 334; https://doi.org/10.3390/jrfm16070334
Submission received: 24 April 2023 / Revised: 10 July 2023 / Accepted: 11 July 2023 / Published: 14 July 2023
(This article belongs to the Special Issue Financial Econometrics and Models)

Abstract

Predicting corporate default risk has long been a crucial topic in finance, as bankruptcies impose enormous costs on market participants and on the economy as a whole. This paper forecasts defaults with frailty-correlated models that incorporate subjective judgements, using a sample of U.S. public non-financial firms spanning January 1980–June 2019. We consider a reduced-form model and adopt a Bayesian approach coupled with the Particle Markov Chain Monte Carlo (Particle MCMC) algorithm to scrutinize this problem. The findings show that the 1-year prediction for frailty-correlated default models with different prior distributions is relatively good, whereas the prediction accuracy ratios for frailty-correlated default models with non-informative and subjective prior distributions over various prediction horizons are not significantly different.

1. Introduction

Default forecasting is crucial for financial institutions and investors. Prior to investing in or extending credit to a company, investors and creditors must assess the company’s financial distress risk in order to avoid incurring a significant loss. In the literature on financial distress, default risk modelling can be grouped into two main categories: structural and reduced-form approaches. This paper uses the reduced-form method of correlated default timing. Interested readers may refer to Nguyen and Zhou (2023) for a general view of the literature on reduced-form models of correlated default timing.
Accounting-based measures are the first generation of reduced-form models for predicting the failure of a company. The earliest works predicting this type of financial distress are univariate analyses (Beaver 1966, 1968), which employ financial ratios independently and adopt a cut-off point for each financial ratio in order to improve the precision of classifications for a distinct sample. Altman (1968) conducted a multivariate analysis of business failure based on multiple discriminant analysis, combining numerous financial ratios from the financial statements into a single weighted index. The second generation of default literature is the logistic model (Ohlson 1980), developed to address the shortcomings of the Altman Z-score method. Shumway (2001) attempts to predict defaults and shows that half of the accounting ratios utilized by Altman (1968) and Zmijewski (1984) are poor predictors of default, while a large number of market-driven independent variables are significantly associated with default probability. The recent expansion of reduced-form default risk models has centred on duration analysis. Jarrow and Turnbull (1995) and Jarrow et al. (1997) are the pioneers of term structure and credit spread modelling.
With regard to duration analysis, recent research indicates that observable macroeconomic and firm-specific factors may not be sufficient to characterize the variation in default risk, as corporate default rates are strongly correlated with latent factors. The need for and importance of the hidden factor in a default model are discussed in several recent studies, such as Koopman and Lucas (2008), Duffie et al. (2009), Chava et al. (2011), Koopman et al. (2011, 2012), Creal et al. (2014), Azizpour et al. (2018), and Nguyen (2023).
To improve the prediction accuracy of default models, the use of expert judgement in the decision-making process is common in practice, as there may not be enough statistically significant empirical evidence to reliably estimate the parameters of complicated models. This problem has stimulated a number of debates in the empirical literature on Bayesian inference. In the process of inference, however, the majority of Bayesian analyses utilize non-informative priors formed by formal principles. The theoretical foundation adopted by most Bayesians is that of Savage (1971, 1972) and De Finetti (2017). Although non-informative prior distributions play a crucial role in defining the model for certain problems, they have an unavoidable drawback: it is sometimes impossible to specify only non-informative priors and disregard informative ones. Bayes factors are sensitive to the choice of unknown parameters of informative prior distributions, which can strongly influence the posterior distribution; as a consequence, the selection of priors remains contentious. Moreover, real prior information is beneficial for specific applications, and non-informative priors cannot exploit it; such circumstances require informative priors. In other words, this is where subjective views and expert opinion enter. Given a complex, high-dimensional posterior distribution, it is uncertain whether it has been exhaustively summarized; this task is best carried out by an experienced statistician. Choosing informative priors and connecting them with expert opinion remain active subjects of debate in academic research.
Recently, there has been research on default prediction combined with expert opinion using machine learning techniques, such as Lin and McClean (2001), Kim and Han (2003), Zhou et al. (2015), and Gepp et al. (2018). However, these studies adopt machine learning techniques with single classifiers.
Motivated by these findings, this paper aims to answer the research question of whether adding expert opinions to the frailty-correlated default risk model yields better prediction results. To do so, we combine prior distributions with the frailty-correlated default model of Duffie et al. (2009) and adopt the Particle MCMC approach of Nguyen (2023) to estimate the unknown parameters and predict default risk in the model, using the dataset of U.S. public non-financial firms spanning 1980–2019. Our findings show that the 1-year prediction for frailty-correlated default models with different prior distributions is relatively good, whereas the prediction accuracy of the models decreases significantly as the prediction horizon increases. The results also indicate that prediction accuracy ratios for frailty-correlated default models with non-informative and subjective prior distributions over various prediction horizons are not significantly different. Specifically, the out-of-sample prediction accuracy for the frailty-correlated default model with a subjective prior distribution is slightly higher than that of the model with a uniform prior distribution (95.00% for 1-year, 85.23% for 2-year, and 83.18% for 3-year prediction of the frailty default model with a uniform prior; and 96.05% for 1-year, 86.32% for 2-year, and 84.71% for 3-year prediction of the frailty default model with a subjective prior).
To achieve this research objective, the remainder of the paper is organized as follows: Section 2 presents the econometric model and the estimation methodology for the frailty-correlated default models with the different prior distributions. Section 3 reports the major results; the data and the choice of covariates are also presented in this section. Section 4 provides the model performance evaluation. Section 5 presents the concluding remarks and limitations of the research.

2. Econometric Model

This section outlines the econometric model of Duffie et al. (2009), our extension of it, and the improved method we use to examine and forecast default risk at the firm level. We first introduce the notation used in this study. We consider a complete filtered probability space $(\Omega, \mathcal{F}, \mathbb{G}, P)$, where the filtration $\mathbb{G} := \{\mathcal{G}_t\}_{t \in [0,T]}$ describes the flow of information over time and $P$ is a real-world probability measure. We use the standard convention that capital letters denote random variables, whereas lower-case letters denote their values.
The complete Markov state vector is $W_t = (X_{it}, Y_t, H_t)$, where $W_t$ is a Markov state vector of firm-specific and macroeconomic covariates, $X_{it}$ is a vector of observable firm-specific covariates for firm $i$ from its first observation time $t_i$ until its last observation time $T_i$, $V_i$ is an unobservable firm-specific covariate, $Y_t$ is a vector of observable macroeconomic variables at all times, and $H_t$ is an unobservable frailty (latent macroeconomic factor) variable. Let $Z_{it} = (1, X_{it}, Y_t)$ denote the vector of observable covariates for firm $i$ at time $t$, where 1 is a constant.
On the event $\{s > t\}$ of survival to $t$, given the information set $\mathcal{F}_t$, the conditional probability of survival to time $t + \tau$ is
$$q(W_t, \tau) = P(s > t + \tau \mid \mathcal{F}_t) = E\!\left[ e^{-\int_t^{t+\tau} \lambda(z)\,dz} \,\middle|\, W_t \right],$$
and the conditional default probability at time $t + \tau$ is of the form
$$p(W_t, \tau) = P(T < t + \tau \mid \mathcal{F}_t) = E\!\left[ \int_t^{t+\tau} e^{-\int_t^{u} \lambda(z)\,dz}\, \lambda(u)\,du \,\middle|\, W_t \right].$$
The information filtration $\{\mathcal{F}_t\}_{t \in [0,T]}$ includes the information set of the observed macroeconomic and firm-specific variables:
$$\{Y_\tau\}_{\tau \in [0,t]} \cup \{(X_{i\tau}, D_{i\tau})\}_{i \in [1,m],\, \tau \in [t_i, \min(t, T_i)]}.$$
The complete information filtration $\{\mathcal{G}_t\}_{t \in [0,T]}$ contains the variables in $\{\mathcal{F}_t\}_{t \in [0,T]}$ together with the frailty process $\{H_\tau\}_{\tau \in [0,t]}$.
The assumptions are imposed as follows:
Assumption 1. 
All firms’ default intensities at time t depend on a Markov state vector W t which is only partially observable.
Assumption 2. 
Conditional on the path of the Markov state process W determining the default intensities, the firm default times are the first event times of an independent Poisson process with time-varying intensities determined by the path of W. This is referred to as a doubly stochastic assumption.
Assumption 3. 
Setting the level of mean reversion of $H$ to $\alpha = 0$, the unobserved frailty process $H$ is a mean-reverting Ornstein–Uhlenbeck (OU) process given by the stochastic differential equation
$$dH_t = -\eta H_t\, dt + \sigma\, dB_t,$$
where $\eta$ and $\sigma$ are parameters; $\{B_t\}_{t \in [0,T]}$ is a standard Brownian motion with respect to $(\Omega, \mathcal{F}, \mathbb{G}, P)$ (written $B$ to avoid a clash with the state vector $W_t$); $\eta$ is a nonnegative constant, the speed of mean reversion of $H$; and $\sigma$ is the volatility of the Brownian motion.
In the general case, without Assumption 3, we would need extremely numerically intensive Monte Carlo integration in a high dimensional space due to our large dataset from 1980 to 2019. Thus, we assume process H is an OU process, as in Duffie et al. (2009).
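To make Assumption 3 concrete, the OU frailty path can be simulated with its exact one-step transition. The sketch below is illustrative only; the function name, the monthly grid, and the parameter values (e.g., $\eta = 0.436$, echoing the monthly mean-reversion estimate reported later) are our assumptions, not the paper's code.

```python
import numpy as np

def simulate_ou(eta, sigma, h0=0.0, dt=1/12, n_steps=474, seed=0):
    """Simulate the zero-mean OU frailty dH_t = -eta*H_t dt + sigma*dB_t
    using its exact discretization (no Euler discretization error)."""
    rng = np.random.default_rng(seed)
    h = np.empty(n_steps + 1)
    h[0] = h0
    decay = np.exp(-eta * dt)
    # standard deviation of the exact one-step transition density
    step_sd = sigma * np.sqrt((1.0 - decay**2) / (2.0 * eta))
    for t in range(n_steps):
        h[t + 1] = decay * h[t] + step_sd * rng.standard_normal()
    return h

path = simulate_ou(eta=0.436, sigma=1.0)  # monthly grid, 1980-2019 is roughly 474 months
```

The exact discretization is preferable to an Euler scheme here because the sample is long and the transition density of an OU process is available in closed form.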
The default intensity of firm $i$ at time $t$ is $\lambda_{it} = \Lambda(S_i(W_t), \theta)$, where $S_i(W_t) = (Z_{it}, H_t)$ is the component of the state vector at time $t$ and $\theta = (\kappa, \xi, \eta, \sigma)$ is the parameter vector to be estimated; $\kappa$ is a parameter vector for the observable covariates $Z$; $\xi$ is the parameter of the frailty variable $H_t$; $\eta$ is the speed of mean reversion of $H_t$; and $\sigma$ is the Brownian motion parameter of $H_t$. The parameters $\eta$ and $\sigma$ are estimated through the mean-reverting OU process that the unobserved frailty process $H$ is assumed to follow. The proportional hazards form is expressed by
$$\Lambda((z, h), \theta) = e^{\kappa_1 z_1 + \cdots + \kappa_n z_n + \xi h}.$$
$D_m$ denotes the default indicators of the $m$ firms. The default indicator $D_{it}$ of firm $i$ at time $t$ is defined as
$$D_{it} = \begin{cases} 1 & \text{if firm } i \text{ defaulted at time } t, \\ 0 & \text{otherwise.} \end{cases}$$
We now derive the conditional probability for the $m$ companies. As mentioned above, let $t_i$ be the first observation time and $T_i$ the last observation time for firm $i$. For each firm $i$ and fixed time $t$, we have
$$P(D_{it} = 1 \mid \gamma, \theta) = \lambda_{it} \Delta t\, e^{-\lambda_{it} \Delta t},$$
$$P(D_{it} = 0 \mid \gamma, \theta) = e^{-\lambda_{it} \Delta t},$$
and then, in our case, the conditional probability for an individual firm is given by
$$p(Z_{it}, D_{it}, H \mid \theta) = \lambda_{it} \Delta t\, e^{-\lambda_{it} \Delta t}\, D_{it} + e^{-\lambda_{it} \Delta t} (1 - D_{it}),$$
$$\prod_{t = t_i}^{T_i} p(Z_{it}, D_{it}, H \mid \theta) = e^{-\sum_{t = t_i}^{T_i} \lambda_{it} \Delta t} \prod_{t = t_i}^{T_i} \left( D_{it} \lambda_{it} \Delta t + (1 - D_{it}) \right).$$
Thus, the conditional probability for the $m$ firms is expressed as
$$\prod_{i=1}^{m} \prod_{t = t_i}^{T_i} p(Z_{it}, D_{it}, H \mid \theta) = e^{-\sum_{i=1}^{m} \sum_{t = t_i}^{T_i} \lambda_{it} \Delta t} \prod_{i=1}^{m} \prod_{t = t_i}^{T_i} \left( D_{it} \lambda_{it} \Delta t + (1 - D_{it}) \right).$$
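Given a frailty path and intensity parameters, the product above becomes a simple sum in logs. A minimal sketch (the array shapes and the function name are our own conventions, not the paper's code):

```python
import numpy as np

def complete_data_loglik(kappa, xi, Z, H, D, dt=1/12):
    """Log of the conditional probability of the m firms:
    sum over firms/months of -lam*dt + log(D*lam*dt + (1-D)),
    with lam[i,t] = exp(Z[i,t,:] @ kappa + xi * H[t]).
    Z: (m, T, n) observable covariates (first column a constant),
    H: (T,) frailty path, D: (m, T) default indicators in {0, 1}."""
    lam = np.exp(Z @ kappa + xi * H[None, :])
    return np.sum(-lam * dt + np.log(D * lam * dt + (1 - D)))
```

Note that each non-default month contributes $-\lambda_{it}\Delta t$ and each default month contributes $-\lambda_{it}\Delta t + \log(\lambda_{it}\Delta t)$, matching the product form above.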
Applying Bayes' theorem:
$$p(\theta \mid Z, D, H) \propto L(\theta \mid Z, D, H)\, p(\theta).$$
We consider two cases for the prior distribution $p(\theta)$: (i) a uniform prior and (ii) a subjective prior.
  • Uniform prior:
    $$p(\theta \mid Z, D, H) \propto L(Z, D, H \mid \theta),$$
    where $p(\theta) \propto 1$ (a non-informative prior distribution). This case is exactly the one studied by Duffie et al. (2009). Our extension of the model by combining it with priors is given below.
  • Subjective prior:
    $$p(\theta \mid Z, D, H) \propto L(\theta \mid Z, D, H)\, N(\kappa, \xi \mid \mu, \Sigma),$$
    where $N(\mu, \Sigma)$ is the multivariate normal prior with mean vector $\mu$ and covariance matrix $\Sigma$.
If the observable covariate process $Z$ is independent of the frailty process $H$, the likelihood function of the intensity parameter vector $\theta$ is given by
$$L(\theta \mid Z, D) = \int L(\theta \mid Z, D, h)\, p_H(h)\, dh \times N(\kappa, \xi \mid \mu, \Sigma) = E\!\left[ \prod_{i=1}^{m} e^{-\sum_{t=t_i}^{T_i} \lambda_{it} \Delta t} \prod_{t=t_i}^{T_i} \left( D_{it} \lambda_{it} \Delta t + (1 - D_{it}) \right) \,\middle|\, Z, D \right] \times N(\kappa, \xi \mid \mu, \Sigma),$$
where $p_H(\cdot)$ is the unconditional probability density of the unobservable frailty process $H$.
We now show how to transform the frailty-correlated default model into the one combined with the subjective prior distribution. The posterior probability density found earlier is
$$p(\theta \mid Z, D, H) \propto L(\theta \mid Z, D, H)\, p(\theta) = L(\theta \mid Z, D, H)\, N(\kappa, \xi \mid \mu, \Sigma) = L(\theta \mid Z, D, H)\, \frac{\exp\!\left( -\tfrac{1}{2} ((\kappa, \xi) - \mu)^{\mathsf T} \Sigma^{-1} ((\kappa, \xi) - \mu) \right)}{\sqrt{(2\pi)^n |\Sigma|}}.$$
Taking the logarithm of Equation (11),
$$\log\!\left[ L(\theta \mid Z, D, H)\, \frac{\exp\!\left( -\tfrac{1}{2} ((\kappa, \xi) - \mu)^{\mathsf T} \Sigma^{-1} ((\kappa, \xi) - \mu) \right)}{\sqrt{(2\pi)^n |\Sigma|}} \right] = \log L(\theta \mid Z, D, H) + \log \frac{\exp\!\left( -\tfrac{1}{2} ((\kappa, \xi) - \mu)^{\mathsf T} \Sigma^{-1} ((\kappa, \xi) - \mu) \right)}{\sqrt{(2\pi)^n |\Sigma|}}.$$
Recall that the log-likelihood of a parameter value $\theta$ given the observable and hidden variables is
$$\ell(\theta \mid Z, D, H) = \log\!\left[ e^{-\sum_{i=1}^{m} \sum_{t=t_i}^{T_i} \lambda_{it} \Delta t} \prod_{i=1}^{m} \prod_{t=t_i}^{T_i} \left( D_{it} \lambda_{it} \Delta t + (1 - D_{it}) \right) \right] = -\sum_{i=1}^{m} \sum_{t=t_i}^{T_i} \lambda_{it} \Delta t + \sum_{i=1}^{m} \sum_{t=t_i}^{T_i} \log\!\left( D_{it} \lambda_{it} \Delta t + (1 - D_{it}) \right).$$
We proceed to take the logarithm of the second term of Equation (12):
$$\log \frac{\exp\!\left( -\tfrac{1}{2} ((\kappa, \xi) - \mu)^{\mathsf T} \Sigma^{-1} ((\kappa, \xi) - \mu) \right)}{\sqrt{(2\pi)^n |\Sigma|}} = -\frac{1}{2} ((\kappa, \xi) - \mu)^{\mathsf T} \Sigma^{-1} ((\kappa, \xi) - \mu) - \frac{1}{2} \log\!\left( (2\pi)^n |\Sigma| \right).$$
In the second term, the covariance matrix is of central interest. For notational simplicity, set $\gamma = (\kappa, \xi)$ and let $c_{j,k}$ denote the $(j,k)$ entry of $\Sigma^{-1}$. Expanding the matrix–vector product term by term, the quadratic form can be rewritten as
$$(\gamma - \mu)^{\mathsf T} \Sigma^{-1} (\gamma - \mu) = \sum_{j=1}^{n+1} \sum_{k=1}^{n+1} c_{j,k} (\gamma_j - \mu_j)(\gamma_k - \mu_k).$$
Then, the second term can be rewritten as
$$\log \frac{\exp\!\left( -\tfrac{1}{2} (\gamma - \mu)^{\mathsf T} \Sigma^{-1} (\gamma - \mu) \right)}{\sqrt{(2\pi)^n |\Sigma|}} = -\frac{1}{2} \sum_{j=1}^{n+1} \sum_{k=1}^{n+1} c_{j,k} (\gamma_j - \mu_j)(\gamma_k - \mu_k) - \frac{1}{2} \log\!\left( (2\pi)^n |\Sigma| \right).$$
We combine the terms of Equation (11) to obtain the overall log-likelihood function given the filtration $\mathbb{G}$:
$$\log L(\theta \mid Z, D, H) + \log \frac{\exp\!\left( -\tfrac{1}{2} (\gamma - \mu)^{\mathsf T} \Sigma^{-1} (\gamma - \mu) \right)}{\sqrt{(2\pi)^n |\Sigma|}} = -\sum_{i=1}^{m} \sum_{t=t_i}^{T_i} \lambda_{it} \Delta t + \sum_{i=1}^{m} \sum_{t=t_i}^{T_i} \log\!\left( D_{it} \lambda_{it} \Delta t + (1 - D_{it}) \right) - \frac{1}{2} \sum_{j=1}^{n+1} \sum_{k=1}^{n+1} c_{j,k} (\gamma_j - \mu_j)(\gamma_k - \mu_k) - \frac{1}{2} \log\!\left( (2\pi)^n |\Sigma| \right).$$
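Once a frailty path is fixed, the objective of Equation (17) can be evaluated directly. A hedged sketch (we drop the additive constant $-\tfrac{1}{2}\log((2\pi)^n|\Sigma|)$, which does not depend on $\gamma$; the function name and array shapes are our assumptions):

```python
import numpy as np

def penalized_loglik(gamma, mu, Sigma, Z, H, D, dt=1/12):
    """Data log-likelihood plus the Gaussian log-prior on gamma = (kappa, xi).
    The quadratic form equals sum_{j,k} c_{j,k}(gamma_j - mu_j)(gamma_k - mu_k)
    with c_{j,k} the entries of inv(Sigma); we solve rather than invert."""
    kappa, xi = gamma[:-1], gamma[-1]
    lam = np.exp(Z @ kappa + xi * H[None, :])
    loglik = np.sum(-lam * dt + np.log(D * lam * dt + (1 - D)))
    diff = gamma - mu
    log_prior = -0.5 * diff @ np.linalg.solve(Sigma, diff)
    return loglik + log_prior
```

Setting the prior covariance very large recovers (up to a constant) the uniform-prior case, which is one way to check an implementation against the Duffie et al. (2009) baseline.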
The central task now is to estimate Equation (17). We use a Bayesian approach coupled with the Particle MCMC algorithm to estimate and forecast the frailty-correlated default models with uniform and subjective prior distributions. Particle filters can be understood as sequential Monte Carlo (SMC) methods, as introduced by Handschin and Mayne (1969) and Handschin (1970). Particles are a set of points in the sample space, and particle filters approximate the posterior densities via these points. Each particle carries a weight, so the posterior distribution can be approximated by a discrete distribution. Several particle filter algorithms have been proposed in the literature; they differ mainly in how the set of particles evolves and adapts to the incoming data. Algorithm 1 presents the sequential Monte Carlo procedure applied in our method.
Algorithm 1: Sequential Monte Carlo algorithm
  • At time t = 1, for n = 1, …, N:
    (1) Sample $H_1^n \sim q_\theta(\cdot \mid (z_1, D_1))$.
    (2) Calculate and normalize the weights
    $$w_1(H_1^n) := \frac{p_\theta(H_1^n, (z_1, D_1))}{q_\theta(H_1^n \mid (z_1, D_1))} = \frac{\mu_\theta(H_1^n)\, g_\theta((z_1, D_1) \mid H_1^n)}{q_\theta(H_1^n \mid (z_1, D_1))}, \qquad W_1^n := \frac{w_1(H_1^n)}{\sum_{i=1}^{N} w_1(H_1^i)}.$$
  • At time t = 2, …, T, for n = 1, …, N:
    (1) Resample the particles, i.e., sample the ancestor indices $A_{t-1}^n \sim G(\cdot \mid W_{t-1})$.
    (2) Sample $H_t^n \sim q(\cdot \mid (z_t, D_t), H_{t-1}^{A_{t-1}^n})$ and set $H_{1:t}^n := (H_{1:t-1}^{A_{t-1}^n}, H_t^n)$.
    (3) Calculate and normalize the weights
    $$w_t(H_{1:t}^n) := \frac{p_\theta(H_{1:t}^n, (z_{1:t}, D_{1:t}))}{p_\theta(H_{1:t-1}^{A_{t-1}^n}, (z_{1:t-1}, D_{1:t-1}))\, q_\theta(H_t^n \mid (z_t, D_t), H_{t-1}^{A_{t-1}^n})} = \frac{f_\theta(H_t^n \mid H_{t-1}^{A_{t-1}^n})\, g_\theta((z_t, D_t) \mid H_t^n)}{q_\theta(H_t^n \mid (z_t, D_t), H_{t-1}^{A_{t-1}^n})}, \qquad W_t^n := \frac{w_t(H_{1:t}^n)}{\sum_{i=1}^{N} w_t(H_{1:t}^i)}.$$
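A bootstrap instance of Algorithm 1, where the proposal $q$ is the OU transition $f_\theta$ so that the weight reduces to the observation density $g_\theta$, can be sketched as follows. All names, shapes, and defaults are our assumptions; a production implementation would add adaptive resampling and missing-data handling.

```python
import numpy as np

def bootstrap_smc(eta, sigma, kappa, xi, Z, D, n_particles=500, dt=1/12, seed=0):
    """Bootstrap version of Algorithm 1: propose from the OU transition
    f_theta, so the weight is the observation density g_theta((z_t, D_t) | H_t).
    Z: (m, T, n) covariates, D: (m, T) default indicators. Returns the
    particles, final weights, and log of the marginal-likelihood estimate."""
    rng = np.random.default_rng(seed)
    m, T, _ = Z.shape
    decay = np.exp(-eta * dt)
    step_sd = sigma * np.sqrt((1 - decay**2) / (2 * eta))
    H = np.zeros((n_particles, T))
    W = np.full(n_particles, 1.0 / n_particles)
    log_evidence = 0.0
    for t in range(T):
        if t == 0:
            H[:, 0] = step_sd * rng.standard_normal(n_particles)
        else:
            idx = rng.choice(n_particles, size=n_particles, p=W)  # resample ancestors
            H[:, :t] = H[idx, :t]
            H[:, t] = decay * H[idx, t - 1] + step_sd * rng.standard_normal(n_particles)
        base = np.exp(Z[:, t, :] @ kappa)                         # (m,) covariate part
        lam = base[None, :] * np.exp(xi * H[:, t])[:, None]       # (N, m) intensities
        logw = np.sum(-lam * dt + np.log(D[None, :, t] * lam * dt
                                         + (1 - D[None, :, t])), axis=1)
        mx = logw.max()
        log_evidence += mx + np.log(np.mean(np.exp(logw - mx)))   # evidence increment
        w = np.exp(logw - mx)
        W = w / w.sum()
    return H, W, log_evidence
```

The running `log_evidence` is exactly the quantity the PIMH sampler below needs: the log of the unbiased SMC estimate of the marginal likelihood.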
One disadvantage of this approach is that the SMC approximation to $p_\theta(x_t \mid y_{1:T})$ deteriorates when $T - t$ is too large. Andrieu et al. (2010) proposed the particle independent Metropolis–Hastings (PIMH) method to overcome this difficulty. This is a class of MCMC algorithms that uses the SMC algorithm as a component to design multi-dimensional proposal distributions. The advantage of this method is that the PIMH sampler does not require the SMC algorithm to generate all samples approximating $p_\theta(x_{1:T} \mid y_{1:T})$, but only to choose one sample that approximates $p_\theta(x_{1:T} \mid y_{1:T})$ (see Andrieu et al. 2010). Algorithm 2 presents the PIMH method applied in our model.
Algorithm 2: PIMH algorithm
  • Set k = 0:
    Run SMC Algorithm 1 to obtain a particle approximation $S$ of $p_\theta(h_{1:T} \mid (z_{1:T}, D_{1:T}))$,
    Draw $H_{1:T}(0) \sim \hat p_\theta(\cdot \mid (z_{1:T}, D_{1:T}))$ from $S$,
    Set $\hat p_\theta(z_{1:T}, D_{1:T})(0)$ to the marginal-likelihood estimate from this SMC run.
  • For k = 1, …, N:
    (1) Run SMC Algorithm 1 to obtain a new particle approximation $S$, draw $H_{1:T}^* \sim \hat p_\theta(\cdot \mid (z_{1:T}, D_{1:T}))$ from $S$, and let $\hat p_\theta(z_{1:T}, D_{1:T})^*$ be the associated marginal-likelihood estimate.
    (2) Draw $U$ from the uniform distribution on (0, 1).
      If $U < \hat p_\theta(z_{1:T}, D_{1:T})^* / \hat p_\theta(z_{1:T}, D_{1:T})(k-1)$:
        Set $H_{1:T}(k) = H_{1:T}^*$ and $\hat p_\theta(z_{1:T}, D_{1:T})(k) = \hat p_\theta(z_{1:T}, D_{1:T})^*$.
      Else:
        Set $H_{1:T}(k) = H_{1:T}(k-1)$ and $\hat p_\theta(z_{1:T}, D_{1:T})(k) = \hat p_\theta(z_{1:T}, D_{1:T})(k-1)$.
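The accept/reject step of Algorithm 2 operates on the SMC marginal-likelihood estimates. A generic sketch (the `run_smc` callable and its return convention are our assumptions; any SMC routine that also draws one trajectory from its final particle approximation fits):

```python
import numpy as np

def pimh(run_smc, n_iters=200, seed=0):
    """Particle independent Metropolis-Hastings (Algorithm 2 sketch).
    `run_smc()` must run SMC Algorithm 1 and return (sampled_path, log_p_hat),
    where log_p_hat is the log of the SMC marginal-likelihood estimate.
    A proposal is accepted with probability min(1, p_hat* / p_hat^(k-1))."""
    rng = np.random.default_rng(seed)
    path, log_p = run_smc()                    # k = 0 initialization
    chain = []
    for _ in range(n_iters):
        path_star, log_p_star = run_smc()      # fresh, independent SMC proposal
        if np.log(rng.uniform()) < log_p_star - log_p:
            path, log_p = path_star, log_p_star
        chain.append(path)
    return chain
```

Because the SMC marginal-likelihood estimate is unbiased, this chain targets the exact smoothing distribution despite the finite number of particles (Andrieu et al. 2010).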
In our method, we combine Particle MCMC with the maximum likelihood method to estimate the intensity parameter vector θ for the frailty-correlated model. We present the implementation steps in Algorithm 3. See Nguyen (2023) for further discussions about the methods.
Algorithm 3: Particle MCMC Expectation-Maximization algorithm
  • Initialize:
    Set i := 0,
    Set $\theta(0) = (\hat\kappa, 0.05, 0.01, 1)$, where $\hat\kappa$ is an estimate of $\kappa$ in the model without the hidden factor.
  • Loop:
    Set i := i + 1,
    Sample $H^1, H^2, \ldots, H^N$ from $p_H(\cdot \mid Z, D, \theta(i-1))$ by PIMH Algorithm 2,
    Employ the maximum likelihood method to estimate the parameters $\theta(i)$ from Equation (17) using the generated samples $H^1, H^2, \ldots, H^N$,
    Exit upon reasonable numerical convergence of the likelihood $L$.
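The outer loop of Algorithm 3 alternates frailty sampling with likelihood maximization. A schematic sketch using a generic optimizer (the function names, the callable interfaces, and the convergence rule are our assumptions, not the paper's implementation):

```python
import numpy as np
from scipy.optimize import minimize

def pmcmc_em(neg_penalized_loglik, sample_frailty, theta0, max_iter=50, tol=1e-4):
    """Algorithm 3 sketch: alternate PIMH draws of frailty paths with a
    maximization of the Monte Carlo average of Eq. (17).
    `sample_frailty(theta)` returns a collection of H paths (the E-step);
    `neg_penalized_loglik(theta, paths)` averages the penalized
    log-likelihood over those paths, negated for the minimizer (M-step)."""
    theta, prev = np.asarray(theta0, float), -np.inf
    for _ in range(max_iter):
        paths = sample_frailty(theta)                        # E-step via PIMH
        res = minimize(neg_penalized_loglik, theta, args=(paths,),
                       method="Nelder-Mead")
        theta, cur = res.x, -res.fun
        if abs(cur - prev) < tol:                            # convergence check
            break
        prev = cur
    return theta
```

In practice the M-step could use a gradient-based optimizer when the penalized log-likelihood is smooth in $\theta$; Nelder–Mead is shown only because it needs no derivatives.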

3. Major Results

3.1. Data Sample

The dataset used to estimate the models is as follows: the short-term risk-free rate (3-month Treasury bill rate) is collected from the Board of Governors of the Federal Reserve System. We use the Compustat North America dataset and the Center for Research in Security Prices (CRSP) database from Wharton Research Data Services. We collected quarterly and annual accounting data for companies in the non-financial industry in the United States. Compustat quarterly and annual files contain information on both short- and long-term debt. For short-term debt, we select the greater of Debt in Current Liabilities and Total Current Liabilities. When quarterly debt values are missing, we substitute the annual debt values if they are available; otherwise, they are treated as missing. Additionally, we include these companies’ stock market data. Historical default rate data are collected from Moody’s database. Our default measure is determined in a similar way to Nguyen (2023). The final dataset covers 2432 firms in the U.S. industrial category, with 424,601 firm-month observations and a total of 412 defaults (272 bankruptcies and 140 other defaults) over the period from January 1980 to June 2019.

3.2. The Choice of Covariates

The observable firm-specific/macroeconomic covariate variables used to examine and predict the defaults for the U.S. firms are as follows:
  • The 3-Month Treasury bill rate (TREASURY RATE): The 3-Month Treasury bill is a short-term U.S. government security with a constant 3-month maturity. The Federal Reserve computes yields for constant maturities by interpolating points along a Treasury yield curve comprised of actively traded issues with term maturities. It is a risk-free rate and has a significant impact on monetary policy (see, for example, Duffie et al. 2007, 2009; Duan et al. 2012; Azizpour et al. 2018; Nguyen 2023).
  • The Trailing 1-year return on the S&P 500 (SP 500): This variable measures the market return, and its importance has been documented in previous studies (see, for example, Duffie et al. 2007, 2009; Duan et al. 2012; Azizpour et al. 2018; Nguyen 2023).
  • Distance to Default (D2D): This variable is the number of standard deviations of annual asset growth by which a firm’s assets exceed its liabilities (see Merton (1974) for further discussion). We construct this variable in a similar way to Vassalou and Xing (2004), Hillegeist et al. (2004), Bharath and Shumway (2008), and Nguyen (2023). Duffie et al. (2007, 2009) and Nguyen (2023) find a negative and significant relationship between distance to default and the default intensity of U.S. industrial firms. Duan et al. (2012) also show that the default risk of U.S. industrial and financial firms is significantly and negatively associated with the Distance-to-Default variable.
  • Firm size (FIRM SIZE): This variable measures the size of a company’s assets. Its importance was documented by Shumway (2001), Duan et al. (2012), and Nguyen (2023). Firm size is calculated as the logarithm of total assets.
  • Return on assets ratio (ROA): This financial ratio indicates a company’s ability to generate profit relative to the value of its assets. A higher ROA, expressed as a percentage, indicates that a company generates more profit from its assets, while a lower ROA points to weaker asset productivity and scope to improve balance sheet management. The return on assets ratio is computed as the ratio of net income to total assets. In the default literature, the profitability ratio is a traditional variable whose importance has been recognized since Altman (1968); it is widely used in the finance literature, for example by Shumway (2001), Duan et al. (2012), and Nguyen (2023).
  • Financial leverage ratio (LEVERAGE): This ratio, also known as the debt ratio, is used to assess a company’s ability to meet its long-term (one year or longer) debt obligations. These obligations consist of interest payments, the ultimate principal payment, and any other fixed obligations, such as lease payments. This ratio is calculated as the ratio of total liabilities to total assets (see, for example, Ohlson 1980; Zmijewski 1984; Nguyen 2023).
  • Trailing 1-year firm stock return (FIRM RETURN): This variable is suggested by Shumway (2001) and is widely used in the finance literature (see, for example, Bharath and Shumway 2008; Duffie et al. 2007, 2009; Nguyen 2023). We use a similar formula as Shumway (2001) and Nguyen (2023) to compute this variable.
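For illustration, the "naive" distance to default of Bharath and Shumway (2008) can be computed without solving the full Merton system. The sketch below follows that paper's naive construction; the function name and the debt-volatility proxy constants 0.05 and 0.25 come from Bharath and Shumway, not from this article, and the exact construction used here may differ.

```python
import numpy as np

def naive_distance_to_default(equity, debt, r_it, sigma_e, horizon=1.0):
    """'Naive' distance to default in the spirit of Bharath and Shumway (2008).
    equity: market value of equity; debt: face value of debt (commonly
    short-term debt + 0.5 * long-term debt); r_it: trailing 1-year stock
    return; sigma_e: annualized equity volatility."""
    sigma_d = 0.05 + 0.25 * sigma_e               # naive debt-volatility proxy
    v = equity + debt                             # naive total asset value
    sigma_v = (equity / v) * sigma_e + (debt / v) * sigma_d
    num = np.log(v / debt) + (r_it - 0.5 * sigma_v**2) * horizon
    return num / (sigma_v * np.sqrt(horizon))
```

Larger values indicate a firm further from its default boundary, which is why the estimated coefficient on D2D in the intensity model is negative.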
Table 1 and Table 2 provide definitions and summary statistics for all research covariates used in the sample to predict the frailty-correlated default models with different prior distributions.

3.3. Parameter Estimates

We estimate the default model under both the uniform and the subjective prior distribution, which enables a direct comparison of the two models. Table 3 shows the estimates for the parameters of the default intensities with a uniform prior distribution.
From Table 4, it can be seen that all these variables are statistically significant at traditional confidence levels. The estimate of Distance to Default of 0.6309 indicates that a negative shock to the distance to default by one standard deviation increases the default intensity by ≈87.91%. Among firm-specific variables, Distance to Default, which is the volatility-adjusted leverage measure, shows its dominant role in explaining a significant variation of the default intensity. The result of Firm size indicates that larger firms often have more financial flexibility than smaller firms, which can help them better overcome financial distress. The coefficient of Return on assets ratio confirms that firms with high-profits relative to assets are less likely to go bankrupt. The result of financial leverage ratio reports that the higher the debt ratios, the higher the default risk of firms. The 1-year trailing stock return covariate is statistically significant and negatively related to the default intensities of the firms. The observable macroeconomic variables chosen in this study are highly economically and statistically significantly negatively associated with the default intensities of the firms.
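The ≈87.91% figure follows directly from the proportional-hazards form: a one-standard-deviation negative shock to Distance to Default scales the intensity by $e^{0.6309}$. A quick arithmetic check (illustrative only; small rounding differences from the reported figure are expected):

```python
import numpy as np

# Proportional-hazards interpretation: a one-standard-deviation negative shock
# to Distance to Default multiplies the default intensity by exp(0.6309).
multiplier = np.exp(0.6309)
increase = multiplier - 1.0
print(round(100 * increase, 2))  # roughly 87.9 percent
```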
The mean vector and covariance matrix of the subjective prior are
$$\mu = (3.1,\ 0.6,\ 1.1,\ 0.1,\ 0.9,\ 0.18,\ 0.36,\ 0.53,\ 0.1),$$
$$\Sigma = \begin{pmatrix}
0.540000 & 0.004164 & 0.008542 & 0.006700 & 0.017213 & 0.024840 & 0.008855 & 0.005788 & 0.000056 \\
0.004164 & 0.000440 & 0.000327 & 0.000088 & 0.000446 & 0.000206 & 0.000054 & 0.000067 & 0.000084 \\
0.008542 & 0.000327 & 0.006385 & 0.000111 & 0.000912 & 0.000272 & 0.000554 & 0.000534 & 0.000018 \\
0.006700 & 0.000088 & 0.000111 & 0.000472 & 0.002533 & 0.000251 & 0.000129 & 0.000091 & 0.000023 \\
0.017213 & 0.000446 & 0.000912 & 0.002533 & 0.074100 & 0.000619 & 0.001904 & 0.000771 & 0.000015 \\
0.024840 & 0.000206 & 0.000272 & 0.000251 & 0.000619 & 0.001164 & 0.000457 & 0.000189 & 0.000041 \\
0.008855 & 0.000022 & 0.000554 & 0.000129 & 0.001904 & 0.000457 & 0.008680 & 0.002102 & 0.000045 \\
0.005788 & 0.000053 & 0.000534 & 0.000091 & 0.000771 & 0.000189 & 0.002102 & 0.002021 & 0.000010 \\
0.000045 & 0.000041 & 0.000019 & 0.000026 & 0.000043 & 0.000021 & 0.000057 & 0.000009 & 0.000023
\end{pmatrix}.$$
The frailty effect plays a relatively modest role in our dataset. Nevertheless, the volatility and the mean reversion of the hidden factor, which determine the dependence of the unobserved default intensities on the latent variable $H_t$, have a highly economically and statistically significant impact on the default intensities of the firms. The frailty volatility is the coefficient $\xi$ governing the dependence of the default intensity on the OU frailty process $H$. The coefficient of 0.1096 shows that a 1% increase in the latent factor volatility increases the unobserved default intensities by 10.96% monthly. This finding is consistent with Duffie et al. (2009) and Nguyen (2023). The estimated mean reversion $\eta$ of the frailty factor is approximately 43.60% monthly. The Brownian motion volatility is statistically significantly positive. In general, the signs of the coefficients in the frailty-correlated default models are unsurprising. Table 3 and Table 4 show that the signs and scales of the estimates under the uniform and subjective prior distributions are similar.

4. Out-of-Sample Performance and Robustness Check

To evaluate model performance, we use the cumulative accuracy profile (CAP) and the accuracy ratio (AR). The companies are divided into two equal groups: estimation and evaluation. We estimate the parameters on the estimation group and then evaluate prediction accuracy on the evaluation group. The implementation steps are as follows: Firstly, we estimate the parameters of the frailty-correlated default model with the subjective prior distribution using the historical default rates over the period from 1981 to 2011. Secondly, using the estimation results obtained in Step 1, we forecast the data for the period from 2012 to 2018 based on a time series model for the observable firm-specific and macroeconomic covariates. Thirdly, we forecast the frailty variable over the period 2012–2018 using the PIMH Algorithm 2. Fourthly, combining the estimates from Step 1 with the data obtained in Steps 2 and 3, we compute the default probability based on Equation (2). Lastly, we determine a CAP and its associated AR. The CAPs and ARs for the out-of-sample prediction horizons are displayed in Figure 1 and Figure 2.
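The CAP/AR computation in the last step can be sketched as follows (an illustrative implementation; the function name and conventions are ours, and ties in the predicted probabilities are ignored for simplicity):

```python
import numpy as np

def cap_accuracy_ratio(pd_scores, defaults):
    """Cumulative accuracy profile and accuracy ratio (AR).
    Firms are ranked from riskiest to safest by predicted default probability;
    the CAP plots the cumulative share of defaults captured against the share
    of firms considered. AR is the area between the model CAP and the random
    diagonal, divided by the same area for the perfect model."""
    order = np.argsort(-np.asarray(pd_scores, float))
    d = np.asarray(defaults, float)[order]
    x = np.concatenate([[0.0], np.arange(1, d.size + 1) / d.size])
    y = np.concatenate([[0.0], np.cumsum(d) / d.sum()])
    auc = np.sum((y[1:] + y[:-1]) / 2 * np.diff(x))   # trapezoid rule
    pi = d.mean()                                      # overall default rate
    auc_perfect = 1.0 - pi / 2.0                       # perfect ranking area
    return (auc - 0.5) / (auc_perfect - 0.5)
```

An AR of 1 corresponds to a perfect ranking, 0 to a random ranking, and negative values to a ranking worse than random.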
Table 5 reports the out-of-sample predictions of the frailty-correlated models with uniform and subjective prior distributions. Comparing the two default models, the prediction ratios of the frailty-correlated default model with the subjective prior distribution are higher than those of the model with the uniform prior distribution. The out-of-sample prediction accuracy for the 1-year horizon is good on average: 95.00 percent for the frailty-correlated model with a non-informative prior distribution and 96.05 percent for the model with a subjective prior distribution. As the prediction horizon lengthens, the AR of the models declines significantly, falling to 85.23 percent (uniform prior) and 86.32 percent (subjective prior) at two years, and to 83.18 percent and 84.71 percent, respectively, at three years. We also perform out-of-sample default predictions using the logistic regression method to compare its accuracy with that of our proposed method in Table 6. The results show that our method has better predictive power than the logistic regression method.
Overall, two notable conclusions can be drawn from these results: (i) the 1-year prediction for both models is good, and prediction accuracy decreases significantly as the prediction horizon lengthens; (ii) the prediction accuracy ratios for frailty-correlated default models with non-informative and subjective prior distributions differ little over the three out-of-sample prediction horizons: 2012–2018 for the 1-year default prediction, 2013–2018 for the 2-year default prediction, and 2014–2018 for the 3-year default prediction.
To check the robustness of the estimation results for the frailty-correlated default model with subjective prior distribution, we re-estimated the model on the subperiod from 1980 to 2011 as a sensitivity test. The outcomes are consistent in sign and magnitude with those for the entire sample. Moreover, the log-likelihood of the frailty-correlated default model with subjective prior distribution (−2202.45) is larger than that of the model with non-informative prior distribution (−2379.61), which supports incorporating expert opinion into the frailty-correlated default model.
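The frailty forecast in the third step above rests on particle filtering. The following is a minimal bootstrap particle filter for an OU-type latent factor, with purely illustrative parameter values and a Gaussian observation density standing in for the model's actual measurement equation; the log-likelihood estimate it returns is the quantity a PIMH/PMMH sampler would use in its accept-reject step:

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_filter(y, n_particles=500, kappa=0.44, sigma=0.11):
    """Bootstrap particle filter for a latent OU-type frailty factor.

    y: observed series, assumed here to be the frailty plus N(0, 1) noise
    kappa, sigma: illustrative mean-reversion and volatility values
    Returns the filtered mean path and an unbiased log-likelihood estimate.
    """
    x = np.zeros(n_particles)
    loglik = 0.0
    means = []
    for yt in y:
        # Propagate particles through the OU transition (unit time step).
        x = x + kappa * (0.0 - x) + sigma * rng.standard_normal(n_particles)
        # Weight by the (assumed Gaussian) observation density.
        logw = -0.5 * (yt - x) ** 2
        w = np.exp(logw - logw.max())
        loglik += logw.max() + np.log(w.mean())
        w /= w.sum()
        # Multinomial resampling to avoid weight degeneracy.
        x = x[rng.choice(n_particles, n_particles, p=w)]
        means.append(x.mean())
    return np.array(means), loglik
```

This is a sketch of the filtering idea only; the paper's default-intensity observation equation, frailty dynamics, and parameter values differ.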

5. Concluding Remarks and Limitations

Risk assessment is part of the decision-making process in many disciplines, including finance. In the financial distress literature, credit risk evaluation entails assessing the hazard of potential future exposure or probable loss to lenders in the context of lending activities. Effective credit risk management is a crucial aspect of risk management and essential to the long-term survival of any bank; its objective is to maximize the bank's risk-adjusted return by keeping credit risk exposure within acceptable limits. The ability to accurately forecast a company's financial distress is a major concern for many stakeholders, and this practical relevance has motivated numerous studies on predicting corporate financial distress. To improve the prediction accuracy of default models, expert judgement is commonly used in practice, as there may not be enough empirical evidence to reliably estimate the parameters of complicated models. This problem has stimulated a number of debates in the empirical literature on Bayesian inference.
This paper proposes a method to add expert judgement to the frailty-correlated default risk model of Duffie et al. (2009) by incorporating subjective prior distributions into the model. We then employ the Bayesian method coupled with the Particle MCMC approach of Nguyen (2023) to estimate the unknown parameters and predict default risk on a historical defaults dataset of 424,601 firm-month observations covering 2432 U.S. industrial firms from January 1980 to June 2019. We compare the prediction results of the frailty-correlated default risk model under uniform and subjective prior distributions. The findings show that the 1-year prediction for both models is reasonably good, and the prediction accuracy of the models decreases considerably as the prediction horizon increases. The results also indicate that the prediction accuracy ratios for frailty-correlated default models with non-informative and subjective prior distributions are not significantly different over the various prediction horizons. Specifically, the out-of-sample prediction accuracy of the frailty-correlated default model with subjective prior distribution is slightly higher than that of the model with uniform prior distribution over the three out-of-sample prediction horizons: 2012–2018 for the 1-year default prediction, 2013–2018 for the 2-year default prediction, and 2014–2018 for the 3-year default prediction.
The frailty-correlated default model with expert opinion has been designed to estimate and predict the default risk of corporations, and it can be adapted to other contexts. However, the model also has limitations. First, we could not access actual expert-opinion data; to some extent, our results therefore depend on the assumed prior values, and the prediction accuracy can differ slightly accordingly. Bayes factors are sensitive to the choice of informative prior distributions for the unknown parameters, which can strongly influence the posterior distribution; this sensitivity has generated debate over the selection of priors. According to Kass and Raftery (1995), non-informative priors may also contribute to instability of the posterior estimates and affect convergence of the sampling algorithm. Choosing informative priors and connecting them with expert opinion remain open questions in academic research. Future work should therefore use actual expert-opinion data, which may be feasible in the age of big data. Recently, there has been research on default prediction combined with expert opinion using machine learning, such as Lin and McClean (2001), Kim and Han (2003), Zhou et al. (2015), and Gepp et al. (2018). However, these studies adopt machine learning techniques with single classifiers and observable variables. Future work could adopt a meta-learning framework to examine and predict defaults with expert opinion at the firm level.
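The prior sensitivity of Bayes factors noted above is visible even in the simplest conjugate setting. The toy illustration below (not the paper's model) compares a Gaussian-prior model against a point null: inflating the prior variance toward a "non-informative" limit mechanically drives the Bayes factor toward the simpler model, the effect known as Bartlett's paradox:

```python
import math

def log_marginal_likelihood(y, sigma2, tau2):
    """Log evidence of y ~ N(theta, sigma2) with prior theta ~ N(0, tau2).

    Marginally y ~ N(0, sigma2 + tau2), so the prior variance tau2 enters
    the evidence directly -- the source of the prior sensitivity that
    Kass and Raftery (1995) discuss.
    """
    v = sigma2 + tau2
    return -0.5 * (math.log(2 * math.pi * v) + y * y / v)

def log_bayes_factor(y, sigma2, tau2):
    """Log Bayes factor of the N(0, tau2)-prior model against theta = 0."""
    log_m0 = -0.5 * (math.log(2 * math.pi * sigma2) + y * y / sigma2)
    return log_marginal_likelihood(y, sigma2, tau2) - log_m0
```

For an observation two standard deviations from the null, a unit-variance prior favours the alternative, while a very diffuse prior reverses the conclusion even though the data are unchanged.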

Funding

This research received no external funding.

Data Availability Statement

The firms’ default data that support the findings of this study are available from Moody’s. Restrictions apply to the availability of these data, which were used under license for this study. Data are available at https://www.moodys.com/ (accessed on 10 July 2023) with the permission of Moody’s. The firms’ data and the Trailing 1-year return on the S&P 500 that support the findings of this study are available from Wharton Research Data Services. Restrictions apply to the availability of these data, which were used under license for this study. Data are available at http://whartonwrds.com/ (accessed on 10 July 2023) with the permission of Wharton Research Data Services. The 3-month Treasury bill rate that supports the findings of this study is openly available in Board of Governors of the Federal Reserve System at https://fred.stlouisfed.org/series/TB3MS (accessed on 10 July 2023).

Acknowledgments

I would like to thank Tak Kuen Siu and Tom Smith for their insightful comments and suggestions. I would also like to thank the referees for their helpful comments and suggestions.

Conflicts of Interest

The author declares no conflict of interest.

Note

1. I would like to thank a referee for suggesting this.

References

1. Altman, Edward I. 1968. Financial ratios, discriminant analysis and the prediction of corporate bankruptcy. The Journal of Finance 23: 589–609.
2. Andrieu, Christophe, Arnaud Doucet, and Roman Holenstein. 2010. Particle Markov chain Monte Carlo methods. Journal of the Royal Statistical Society, Series B, Statistical Methodology 72: 269–342.
3. Azizpour, Shahriar, Kay Giesecke, and Gustavo Schwenkler. 2018. Exploring the sources of default clustering. Journal of Financial Economics 129: 154–83.
4. Beaver, William H. 1966. Financial ratios as predictors of failure. Journal of Accounting Research 4: 71–111.
5. Beaver, William H. 1968. Market prices, financial ratios, and the prediction of failure. Journal of Accounting Research 6: 179–92.
6. Bharath, Sreedhar T., and Tyler Shumway. 2008. Forecasting default with the Merton distance-to-default model. The Review of Financial Studies 21: 1339–69.
7. Chava, Sudheer, Catalina Stefanescu, and Stuart Turnbull. 2011. Modeling the loss distribution. Management Science 57: 1267–87.
8. Creal, Drew, Bernd Schwaab, Siem Jan Koopman, and Andre Lucas. 2014. Observation-driven mixed-measurement dynamic factor models with an application to credit risk. Review of Economics and Statistics 96: 898–915.
9. De Finetti, Bruno. 2017. Theory of Probability: A Critical Introductory Treatment. Hoboken: John Wiley & Sons, vol. 6.
10. Duan, Jin-Chuan, Jie Sun, and Tao Wang. 2012. Multiperiod corporate default prediction—A forward intensity approach. Journal of Econometrics 170: 191–209.
11. Duffie, Darrell, Andreas Eckner, Guillaume Horel, and Leandro Saita. 2009. Frailty correlated default. The Journal of Finance 64: 2089–123.
12. Duffie, Darrell, Leandro Saita, and Ke Wang. 2007. Multi-period corporate default prediction with stochastic covariates. Journal of Financial Economics 83: 635–65.
13. Gepp, Adrian, Martina K. Linnenluecke, Terrence J. O'Neill, and Tom Smith. 2018. Big data techniques in auditing research and practice: Current trends and future opportunities. Journal of Accounting Literature 40: 102–15.
14. Handschin, Johannes. 1970. Monte Carlo techniques for prediction and filtering of nonlinear stochastic processes. Automatica 6: 555–63.
15. Handschin, Johannes Edmund, and David Q. Mayne. 1969. Monte Carlo techniques to estimate the conditional expectation in multi-stage non-linear filtering. International Journal of Control 9: 547–59.
16. Hillegeist, Stephen A., Elizabeth K. Keating, Donald P. Cram, and Kyle G. Lundstedt. 2004. Assessing the probability of bankruptcy. Review of Accounting Studies 9: 5–34.
17. Jarrow, Robert A., and Stuart M. Turnbull. 1995. Pricing derivatives on financial securities subject to credit risk. The Journal of Finance 50: 53–85.
18. Jarrow, Robert A., David Lando, and Stuart M. Turnbull. 1997. A Markov model for the term structure of credit risk spreads. The Review of Financial Studies 10: 481–523.
19. Kass, Robert E., and Adrian E. Raftery. 1995. Bayes factors. Journal of the American Statistical Association 90: 773–95.
20. Kim, Myoung-Jong, and Ingoo Han. 2003. The discovery of experts' decision rules from qualitative bankruptcy data using genetic algorithms. Expert Systems with Applications 25: 637–46.
21. Koopman, Siem Jan, and André Lucas. 2008. A non-Gaussian panel time series model for estimating and decomposing default risk. Journal of Business & Economic Statistics 26: 510–25.
22. Koopman, Siem Jan, André Lucas, and Bernd Schwaab. 2011. Modeling frailty-correlated defaults using many macroeconomic covariates. Journal of Econometrics 162: 312–25.
23. Koopman, Siem Jan, André Lucas, and Bernd Schwaab. 2012. Dynamic factor models with macro, frailty, and industry effects for U.S. default counts: The credit crisis of 2008. Journal of Business & Economic Statistics 30: 521–32.
24. Lin, Feng Yu, and Sally McClean. 2001. A data mining approach to the prediction of corporate failure. Knowledge-Based Systems 14: 189–95.
25. Merton, Robert C. 1974. On the pricing of corporate debt: The risk structure of interest rates. The Journal of Finance 29: 449–70.
26. Nguyen, Ha. 2023. An empirical application of Particle Markov Chain Monte Carlo to frailty correlated default models. Journal of Empirical Finance 72: 103–21.
27. Nguyen, Ha, and Xian Zhou. 2023. Reduced-form models of correlated default timing: A systematic literature review. Journal of Accounting Literature 45: 190–205.
28. Ohlson, James A. 1980. Financial ratios and the probabilistic prediction of bankruptcy. Journal of Accounting Research 18: 109–31.
29. Savage, Leonard J. 1971. Elicitation of personal probabilities and expectations. Journal of the American Statistical Association 66: 783–801.
30. Savage, Leonard J. 1972. The Foundations of Statistics. Chelmsford: Courier Corporation.
31. Shumway, Tyler. 2001. Forecasting bankruptcy more accurately: A simple hazard model. The Journal of Business 74: 101–24.
32. Vassalou, Maria, and Yuhang Xing. 2004. Default risk in equity returns. The Journal of Finance 59: 831–68.
33. Zhou, Ligang, Dong Lu, and Hamido Fujita. 2015. The performance of corporate financial distress prediction models with features selection guided by domain knowledge and data mining approaches. Knowledge-Based Systems 85: 52–61.
34. Zmijewski, Mark E. 1984. Methodological issues related to the estimation of financial distress prediction models. Journal of Accounting Research 22: 59–82.
Figure 1. This graph illustrates the out-of-sample cumulative accuracy profiles (power curves) over the entire sample period (1980–2019) for various prediction horizons. The companies are divided into two equal groups: estimation and evaluation. We estimate the parameters based on the estimation group and then evaluate the prediction accuracy using the evaluation group. The power curve illustrates 20% of companies with the most capacity of default over the different horizons in the frailty-correlated default model with subjective prior.
Figure 2. This graph illustrates average accuracy ratios for out-of-sample prediction in three prediction accuracy horizons for frailty-correlated default models with expert opinion.
Table 1. Observable firm-specific attributes and macroeconomic factors.
| No | Covariate | Definition | References |
|----|-----------|------------|------------|
| 1 | TREASURY RATE | 3-month US Treasury bill rate | Duffie et al. (2007, 2009), Duan et al. (2012), Nguyen (2023) |
| 2 | SP500 | Trailing 1-year return on the S&P 500 index | Duffie et al. (2007, 2009), Duan et al. (2012), Azizpour et al. (2018), Nguyen (2023) |
| 3 | D2D | Distance to Default | Duffie et al. (2007, 2009), Duan et al. (2012), Nguyen (2023) |
| 4 | FIRM SIZE | Logarithm of the assets | Shumway (2001), Nguyen (2023) |
| 5 | ROA | Net income to total assets | Altman (1968), Shumway (2001), Nguyen (2023) |
| 6 | LEVERAGE | Total liabilities to total assets | Ohlson (1980), Zmijewski (1984), Nguyen (2023) |
| 7 | FIRM RETURN | Trailing 1-year stock return | Shumway (2001), Duffie et al. (2007, 2009), Bharath and Shumway (2008), Nguyen (2023) |
Notes: The details of observable covariates used to examine and predict the frailty-correlated default model with prior distributions.
Table 2. Summary statistics of observable firm-specific attributes and macroeconomic factors.
| Variable | Mean | SD | Minimum | Median | Maximum |
|----------|------|----|---------|--------|---------|
| Macroeconomic covariates | | | | | |
| TREASURY RATE | 4.6837 | 3.1343 | 0.0100 | 4.9500 | 16.300 |
| SP500 | 0.1048 | 0.1534 | −0.5542 | 0.1225 | 0.4452 |
| Firm-specific covariates | | | | | |
| D2D (defaults) | 0.0325 | 1.3529 | −14.1997 | 0.0332 | 4.9388 |
| D2D (non-defaults) | 1.9052 | 1.4412 | −5.4534 | 1.8025 | 48.6861 |
| FIRM SIZE (defaults) | 20.1936 | 1.5811 | 15.1688 | 20.1070 | 26.3362 |
| FIRM SIZE (non-defaults) | 21.4783 | 1.8422 | 13.5392 | 21.4586 | 27.9370 |
| ROA (defaults) | −0.0096 | 0.1026 | −5.3156 | 0.0036 | 4.1160 |
| ROA (non-defaults) | 0.0105 | 0.0449 | −3.5341 | 0.0128 | 2.9270 |
| LEVERAGE (defaults) | 0.7060 | 0.3792 | 0.0000 | 0.6662 | 7.6641 |
| LEVERAGE (non-defaults) | 0.5671 | 0.2438 | 0.0000 | 0.5588 | 8.2774 |
| FIRM RETURN (defaults) | −0.0339 | 1.3053 | −2.8998 | −0.0962 | 45.8583 |
| FIRM RETURN (non-defaults) | −0.0356 | 0.4603 | −2.9045 | −0.0444 | 5.4282 |
Notes: The historical default dataset comprises 424,601 firm-month observations between January 1980 and June 2019.
Table 3. Estimation results of default intensity with non-informative prior distribution.
| Predictor | Coefficient | Asymptotic Standard Error | t-Statistic | 95% CI Lower Bound | 95% CI Upper Bound |
|-----------|-------------|---------------------------|-------------|--------------------|--------------------|
| Macroeconomic covariates: | | | | | |
| Constant | −3.1263 | 0.7673 | −4.07 | −4.6303 | −1.6223 |
| TREASURY RATE | −0.1231 | 0.0231 | −5.33 | −0.1685 | −0.0777 |
| SP500 | −0.9093 | 0.2832 | −3.21 | −1.4645 | −0.3540 |
| Firm-specific covariates: | | | | | |
| D2D | −0.6099 | 0.0202 | −30.19 | −0.6496 | −0.5703 |
| FIRM SIZE | −0.1838 | 0.0355 | −5.18 | −0.2535 | −0.1142 |
| ROA | −0.3691 | 0.0941 | −3.92 | −0.5536 | −0.1846 |
| LEVERAGE | 0.5293 | 0.0462 | 11.46 | 0.4387 | 0.6200 |
| FIRM RETURN | −1.1282 | 0.0825 | −13.67 | −1.2900 | −0.9663 |
| Frailty effect: | | | | | |
| Hidden-factor volatility | 0.1096 | 0.0061 | 17.97 | 0.0975 | 0.1216 |
| Hidden-factor mean reversion | 0.4360 | 0.0546 | 7.98 | 0.3288 | 0.5432 |
| Brownian motion volatility | 8.4610 | 0.3394 | 24.93 | 7.7956 | 9.1264 |
| No. of firm-month observations | 424,601 | | | | |
| Log-likelihood | −2379.61 | | | | |
Notes: Asymptotic standard errors of the estimated parameters are computed using the Hessian matrix.
Table 4. Estimates of the frailty-correlated model with subjective prior distribution.
| Predictor | Coefficient | Asymptotic Standard Error | t-Statistic | 95% CI Lower Bound | 95% CI Upper Bound |
|-----------|-------------|---------------------------|-------------|--------------------|--------------------|
| Macroeconomic covariates: | | | | | |
| Constant | −3.4556 | 0.6259 | −5.52 | −4.6825 | −2.2288 |
| TREASURY | −0.1175 | 0.0184 | −6.37 | −0.1536 | −0.0813 |
| SP500 | −1.0620 | 0.2306 | −4.60 | −1.5142 | −0.6099 |
| Firm-specific covariates: | | | | | |
| D2D | −0.6309 | 0.0169 | −37.30 | −0.6641 | −0.5977 |
| FIRM SIZE | −0.1856 | 0.0289 | −6.42 | −0.2422 | −0.1290 |
| ROA | −0.3807 | 0.0780 | −4.8792 | −0.5336 | −0.2277 |
| LEVERAGE | 0.5570 | 0.0380 | 14.6277 | 0.4823 | 0.6316 |
| FIRM RETURN | −1.2134 | 0.0676 | −17.93 | −1.3461 | −1.0808 |
| Frailty effect: | | | | | |
| Hidden-factor volatility | 0.0897 | 0.0040 | 22.00 | 0.0817 | 0.0977 |
| Hidden-factor mean reversion | 0.6189 | 0.0590 | 10.47 | 0.5031 | 0.7347 |
| Brownian motion volatility | 12.5069 | 0.4354 | 28.7216 | 11.6534 | 13.3604 |
| No. of firm-month observations | 424,601 | | | | |
| Log-likelihood | −2202.45 | | | | |
Notes: Table reports the estimation result of the frailty-correlated model combined with subjective prior distribution. Asymptotic standard errors of the estimated parameters are calculated by the Hessian matrix, given μ and Σ below.
Table 5. Prediction accuracy for frailty-correlated default model with different prior distribution.
| Prior | T (Year) | 2012 | 2013 | 2014 | 2015 | 2016 | 2017 | 2018 | Average |
|-------|----------|------|------|------|------|------|------|------|---------|
| Uniform prior | 1 | 85.71 | 100.00 | 100.00 | 87.50 | 91.67 | 100.00 | 100.00 | 95.00 |
| Uniform prior | 2 | | 77.78 | 83.33 | 80.00 | 89.47 | 93.33 | 87.50 | 85.23 |
| Uniform prior | 3 | | | 72.73 | 78.57 | 78.68 | 90.91 | 95.00 | 83.18 |
| Subjective prior | 1 | 85.71 | 100.00 | 100.00 | 93.33 | 93.33 | 100.00 | 100.00 | 96.05 |
| Subjective prior | 2 | | 78.68 | 87.50 | 80.00 | 90.91 | 93.33 | 87.50 | 86.32 |
| Subjective prior | 3 | | | 75.00 | 78.57 | 83.33 | 91.67 | 95.00 | 84.71 |
Notes: The table reports the accuracy ratios for the out-of-sample prediction for different prediction horizons. In particular, individual accuracy ratios and average accuracy ratios for the model with uniform and subjective prior distributions over three default prediction horizons (2012–2018, 2013–2018, and 2014–2018) are presented.
Table 6. Comparison of default prediction accuracy between the Logistic Regression method and the Particle MCMC Expectation-Maximization method.
| Method | T (Year) | 2012 | 2013 | 2014 | 2015 | 2016 | 2017 | 2018 | Average |
|--------|----------|------|------|------|------|------|------|------|---------|
| Logistic Regression | 1 | 80.00 | 75.00 | 50.00 | 80.00 | 90.00 | 100.00 | 100.00 | 82.14 |
| Logistic Regression | 2 | | 66.67 | 66.67 | 57.14 | 78.57 | 90.91 | 80.00 | 73.33 |
| Logistic Regression | 3 | | | 62.50 | 54.55 | 64.28 | 80.00 | 93.33 | 70.93 |
| Particle MCMC Expectation-Maximization with subjective prior | 1 | 85.71 | 100.00 | 100.00 | 93.33 | 93.33 | 100.00 | 100.00 | 96.05 |
| Particle MCMC Expectation-Maximization with subjective prior | 2 | | 78.68 | 87.50 | 80.00 | 90.91 | 93.33 | 87.50 | 86.32 |
| Particle MCMC Expectation-Maximization with subjective prior | 3 | | | 75.00 | 78.57 | 83.33 | 91.67 | 95.00 | 84.71 |
Notes: The table reports the accuracy ratios for the out-of-sample default prediction at 1 year, 2 years, and 3 years using the Logistic Regression method and Particle MCMC Expectation-Maximization with subjective prior method.
