Article

Likelihood Inference for Factor Copula Models with Asymmetric Tail Dependence

Department of Statistics, University of British Columbia, Vancouver, BC V6T 1Z4, Canada
*
Author to whom correspondence should be addressed.
Entropy 2024, 26(7), 610; https://doi.org/10.3390/e26070610
Submission received: 12 June 2024 / Revised: 13 July 2024 / Accepted: 16 July 2024 / Published: 19 July 2024
(This article belongs to the Special Issue Bayesianism)

Abstract

For multivariate non-Gaussian models involving copulas, likelihood inference is dominated by the data in the middle of the distribution, and fitted models might not be very good for joint tail inference, such as assessing the strength of tail dependence. When preliminary data and likelihood analysis suggest asymmetric tail dependence, a method is proposed to improve extreme value inferences based on the joint lower and upper tails. A prior that uses previous information on tail dependence can be combined with the likelihood (which in practice has some degree of misspecification) to obtain a tilted log-likelihood. With suitably transformed parameters, inferences can then be based on Bayesian computing methods or on numerical optimization of the tilted log-likelihood to obtain the posterior mode and the Hessian at this mode.

1. Introduction

Dependence models with multivariate copulas have had many applications in the past two decades to handle non-Gaussian dependence; in particular, for applications such as risk analysis where variables can have more dependence in the joint tails than with Gaussian dependence with the same strength of central dependence.
When pairwise scatterplots of variables suggest lower and upper tail dependence, possibly with asymmetry between the strength of dependence in the joint lower tail (extremes of the lower quadrant) and in the joint upper tail (extremes of the upper quadrant), several different parametric copula families with tail dependence can be among the best based on information criteria such as the Akaike information criterion (AIC). However, model-based bivariate lower and upper tail dependence measures can be quite different for these different parametric copulas, and comparisons of lower and upper tail dependence measures might not match the visual comparisons on the pairwise scatterplots. This is because likelihood methods are influenced mostly by data in the middle (rather than the extremes), and all simple parametric models have some degree of misspecification.
For univariate distributions, it is well known that inferences involving large quantiles should not be based on a fitted parametric distribution because extrapolation is not reliable when the data values in the middle have the most influence in the parameter estimates. There are two approaches for univariate inferences involving extremes: (a) from univariate extreme value theory with the assumption of a well-behaved tail density, the peaks-over-threshold method based on generalized Pareto distribution [1] can be used, or (b) splicing models [2] can be used with different flexible densities for the body and tail, if inferences are also needed for non-extremes. For the joint tail region, there is a multivariate Pareto approach such as that in [3], but there is no convenient way to combine with a density for the body.
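As a minimal illustration of approach (a), a peaks-over-threshold fit can be sketched in Python; the simulated sample, threshold level, and target quantile below are placeholder choices for illustration, not taken from the cited references.

```python
import numpy as np
from scipy.stats import genpareto, t

# Heavy-tailed sample as a placeholder for univariate data (not the
# paper's data); t(4) has a regularly varying upper tail.
rng = np.random.default_rng(0)
y = t.rvs(df=4, size=5000, random_state=rng)

# Peaks-over-threshold: exceedances over a high threshold are modeled
# with a generalized Pareto distribution (GPD).
threshold = np.quantile(y, 0.95)
exceedances = y[y > threshold] - threshold
xi, _, sigma = genpareto.fit(exceedances, floc=0.0)  # location fixed at 0

# Extrapolate a high quantile (here the 99.9% level) from the fitted tail.
p_exceed = (y > threshold).mean()
q999 = threshold + genpareto.ppf(1 - 0.001 / p_exceed, xi, loc=0.0, scale=sigma)
```

The extrapolated quantile relies only on the fitted tail, not on a parametric model for the body of the distribution.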
The goal in this article is to propose a method that incorporates “prior” information on the relations of bivariate lower/upper tail dependence pairs, thereby placing more weight on joint extreme observations when estimating the dependence parameters of the multivariate copula; the splicing of densities for the body and the joint tails is avoided. This approach should lead to parameter estimates of copula dependence parameters with more reliable inference for tail dependence and other tail-based quantities.
How different parametric copula models lead to quite different tail inferences is illustrated with some financial returns data over a few consecutive years. Consider the financial returns for different market indexes or stocks in the same sector of a market; for dependence analysis, commonly, a copula-GARCH model (see [4]) is applied to GARCH-filtered returns. Pairwise normal scores plots after rank transform to N ( 0 , 1 ) show tail dependence with the clouds of points being sharper than the elliptical shape in the extreme lower and upper quadrants. Often, there appears to be a stronger dependence in the joint lower tail than in the joint upper tail.
When different flexible parametric multivariate copula families, such as vine and factor copula models, are fit to multivariate GARCH-filtered returns, the best-fitting models based on AIC imply lower and upper tail dependence for any pair of returns. This is based on the results of [5,6], which imply that if bivariate copulas, for pairs of variables in the first tree of the vine or with a variable linked to a latent variable, have lower and upper tail dependence, then the bivariate copulas of all pairs of variables have lower and upper tail dependence. Note that factor copulas are vine copulas that include latent variables.
However, if model-based tail dependence parameters are computed from the best few fitted models, they can be quite different among the models and sometimes may not match what is seen in the normal scores plots. For example, (a) a model-based lower tail dependence parameter may sometimes be closer to 0 than expected based on the plot, or (b) the model-based lower tail dependence parameter may be smaller than the model-based upper tail dependence parameter, in contrast to the visual inspection of the plot.
With the non-parametric method for empirical tail dependence measures in [7], it is possible to compare empirical and model-based lower and upper tail dependence to show quantitatively that model-based measures might not be reliable for all bivariate margins. This is because the fit of parametric multivariate models based on likelihood tends to be dominated by the data in the middle of the distribution. Inference concerning the middle (e.g., medians and non-extreme marginal orthant probabilities) can be reliable but not necessarily inference concerning the extremes (e.g., extreme marginal orthant probabilities or multivariate quantiles of the form defined in [8]).
This article shows the use of a tilted likelihood to estimate parameters of the 1-factor copula so that inferences in the joint tails are improved. The 1-factor copula for d variables has a vector parameter ϑ j for the bivariate copula linking the jth variable and the latent variable (the latter explains the joint dependence of the observed variables). The tilting depends on the nature of the variables. For d GARCH-filtered stock returns for stocks in the same sector, the dependence parameters { ϑ j : 1 ≤ j ≤ d } can be considered a sample from a super-population, so it is reasonable to assume a common prior distribution for the ϑ j . The tilted log-likelihood is the sum of the 1-factor copula log-likelihood and the logarithm of this prior density, the latter based on tail dependence summaries from some “previous data”.
There is a numerical data example in Section 2 for preliminaries to show explicitly why likelihood inference can be inadequate for tail inference; tail dependence parameters are defined, and examples of normal score plots are given in this section. Section 3 and Section 4 contain the theory and numerical methods to develop a “prior” to help with tail inference for the 1-factor copula model with asymmetric tail-dependent copulas linking to the latent variable. Section 5 illustrates the theory for a data example with GARCH-filtered stock returns from stocks in a S&P sector to show improved tail inference. Section 6 has some simulation results to compare with the data example. Section 7 concludes with a discussion on the generality of the approach proposed for the 1-factor model; the basis is a “super-population” assumption for some bivariate margins with lower and upper tail dependence. The background results for tail dependence, copulas, and factor models are given in Appendix A.

2. Numerical Data Example to Illustrate Discrepancy for Tail Inference

In this section, a numerical low-dimensional data example is used to clarify what is meant by the possible poor joint tail inference following maximum likelihood.
Definitions of bivariate tail dependence and the copula as summaries of dependence are presented to explain concepts of dependence in joint tails.
Let F 1 : d be an absolutely continuous d-variate distribution with univariate margins F 1 , , F d and copula C 1 : d such that F 1 : d = C 1 : d ( F 1 , , F d ) . Let Y = ( Y 1 , , Y d ) F 1 : d .
For the bivariate margin F j k = C j k ( F j , F k ) with j k , the probabilistic version of the lower and upper tail dependence parameters is:
$$
\lambda_{jk,L} = \lim_{u\to 0^{+}} \Pr\!\big(Y_j \le F_j^{-1}(u) \mid Y_k \le F_k^{-1}(u)\big)
= \lim_{u\to 0^{+}} u^{-1}\Pr\!\big(Y_j \le F_j^{-1}(u),\, Y_k \le F_k^{-1}(u)\big)
= \lim_{u\to 0^{+}} C_{jk}(u,u)/u,
$$
$$
\lambda_{jk,U} = \lim_{u\to 0^{+}} \Pr\!\big(Y_j > F_j^{-1}(1-u) \mid Y_k > F_k^{-1}(1-u)\big)
= \lim_{u\to 0^{+}} u^{-1}\Pr\!\big(Y_j > F_j^{-1}(1-u),\, Y_k > F_k^{-1}(1-u)\big)
= \lim_{u\to 0^{+}} \overline{C}_{jk}(1-u,1-u)/u,
$$
where $\overline{C}_{jk}(u_j,u_k) = 1 - u_j - u_k + C_{jk}(u_j,u_k)$ for $0 \le u_j, u_k \le 1$.
Consider a random sample from F 1 : d with y i = ( y i 1 , , y i d ) for i = 1 , , n . Because λ j k , L and λ j k , U are limiting quantities (as u 0 + ), there are no direct empirical (data) versions. For the numerical examples in this section and later sections, the sample version comes from a limit of tail-weighted dependence measures.
A general reference for concepts (in the above and in later sections) with copulas and dependence is [9], and the estimator of tail dependence from a limit of tail-weighted dependence measures is given in [7]. For the probabilistic version, the tail-weighted dependence measures are indexed by a parameter $\alpha > 1$, and the limit as $\alpha \to \infty$ is the tail dependence parameter. After computing the empirical tail-weighted dependence measure for a grid of $\alpha$ values, typically in the interval $[10, 20]$, a regression model is fit for the empirical measure versus a power of $\alpha^{-1}$, and then the tail dependence parameter is estimated by extrapolation to $\alpha^{-1} \to 0$.
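The regression-extrapolation idea can be sketched as follows. As a simplified stand-in for the tail-weighted dependence measures of [7], the finite-level tail concentration $\hat C(u,u)/u$ is computed on a grid and linearly extrapolated to the limit; the grid and the linear fit are illustrative choices, not those of the cited method.

```python
import numpy as np

def tail_concentration_lower(u_data, levels):
    """Empirical lower tail concentration C(u,u)/u for a bivariate
    sample on the uniform scale, evaluated on a grid of levels."""
    return np.array([
        np.mean((u_data[:, 0] <= u) & (u_data[:, 1] <= u)) / u
        for u in levels
    ])

def extrapolated_lambda_L(u_data, levels=None):
    """Estimate the lower tail dependence parameter by computing the
    tail concentration on a grid and linearly extrapolating to u -> 0.

    This mimics the regression-extrapolation idea described in the text,
    with the simple tail concentration function used in place of the
    tail-weighted dependence measures of [7].
    """
    if levels is None:
        levels = np.linspace(0.02, 0.10, 9)
    conc = tail_concentration_lower(u_data, levels)
    slope, intercept = np.polyfit(levels, conc, 1)  # linear fit in u
    return intercept  # extrapolated value at u = 0
```

For an independence copula the extrapolated value is near 0, while for a copula with lower tail dependence (e.g., Clayton) it approaches the true $\lambda_L$.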
The data example involves GARCH-filtered stock returns with all stocks in the same sector. Appendix A.4 has some background for GARCH time series and copula-GARCH models.
The S&P 500 data set of GARCH-filtered stock returns (January 2013 to December 2015, good economic conditions) used for illustration is analyzed in [10]. The sample consists of n = 754 days. For the finance sector, 10 stocks were chosen from 64 for some initial descriptive analyses; the ticker symbols of the 10 stocks are COF, RJF, SCHW, FRC, GL, FD, TROW, GS, BLK, and ICE. Normal scores plots of GARCH-filtered returns for a few pairs amongst these 10 stocks are given in Figure 1 (see Appendix A.3 for the mathematical definition of the transform). They show tail dependence, with the clouds of points being sharper than the elliptical shape and having a stronger correlation in the lower quadrant than in the upper quadrant.
These few stocks are used to demonstrate (in small tables) a typical situation of differences in empirical and model-based tail quantities. To check that a 1-factor dependence structure is reasonable, the non-parametric transform to normal scores is applied to GARCH-filtered returns, and factor analysis (see [11]) is applied to the resulting correlation matrix. The loadings are, respectively, 0.741, 0.802, 0.838, 0.475, 0.688, 0.821, 0.665, 0.609, 0.690, 0.830. The average absolute difference between the empirical and model-based correlation matrix is 0.03, and the maximum absolute difference between the empirical and model-based correlation matrix is 0.21 (with two discrepancies with an absolute difference > 0.10 ), so the 1-factor structure is reasonable as a first-order approximation when considering that a 10 × 10 matrix with 45 correlations is approximated by a simple correlation matrix with 10 parameters. With a larger dimension (more stocks in the same sector), a 1-factor model with some weak conditional dependence (see [12]) could be a better dependence model.
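To illustrate the adequacy check, the model-implied correlation matrix of a 1-factor structure can be formed from the reported loadings and compared entrywise with the empirical correlation matrix; the empirical matrix is not reproduced in the text, so the comparison function below takes it as an input.

```python
import numpy as np

# Loadings reported in the text for the 10 finance-sector stocks.
loadings = np.array([0.741, 0.802, 0.838, 0.475, 0.688,
                     0.821, 0.665, 0.609, 0.690, 0.830])

# Model-implied correlation matrix of a 1-factor Gaussian model:
# off-diagonal entries are products of the loadings; diagonal is 1.
R_model = np.outer(loadings, loadings)
np.fill_diagonal(R_model, 1.0)

def discrepancy_summaries(R_empirical, R_mod):
    """Average and maximum absolute difference over off-diagonal entries."""
    off_diag = ~np.eye(R_empirical.shape[0], dtype=bool)
    diff = np.abs(R_empirical - R_mod)[off_diag]
    return diff.mean(), diff.max()
```

Applied to the empirical correlation matrix of the normal scores, this yields the 0.03 average and 0.21 maximum absolute differences quoted above.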
Two parametric copula models are fitted to account for non-Gaussian dependence—1-factor with d = 10 linking copulas that are all BB1, or all reflected BB1 (abbreviated as BB1r). These are referred to briefly as 1-factor BB1 and 1-factor BB1r. The details of these models are summarized in Appendix A.1 and Appendix A.2 in Appendix A; in particular, Appendix A.1 has the definition of the 2-parameter bivariate BB1 copula and some of its dependence properties, and Appendix A.2 has the definition of the 1-factor copula for d variables based on conditional independence of observed variables given a latent variable.
Table 1 has empirical and model-based lower and upper tail dependence measures: $\hat\lambda_{jk,L}$, $\lambda_{jk,L}(\hat\vartheta)$, $\hat\lambda_{jk,U}$, $\lambda_{jk,U}(\hat\vartheta)$. Model-based values are based on maximum likelihood estimates (MLEs) with 1-factor BB1r and 1-factor BB1. Table 2 has the empirical Spearman rank correlation as a central measure of dependence: $\hat\rho_{jk,S}$. The values of $\rho_{jk,S}(\hat\vartheta)$ for the two 1-factor copula models are quite close to the empirical values, in contrast with some discrepancies for the tail dependence measures. Table 3 has summaries averaged over $d(d-1)/2 = 45$ bivariate margins. Table 1 and Table 3 show that tail inferences from different models with lower and upper tail dependence can be quite different, but the models can have similar inferences for central quantities.
The tail asymmetry of financial returns, with commonly more dependence in the joint lower tail than in the joint upper tail, is explained and discussed in [13,14]. The 1-factor BB1r model has a smaller AIC value than 1-factor BB1, and it better matches the empirical property of lower tail dependence, being often larger than upper tail dependence. However, the 1-factor BB1r model tends to overestimate the difference in the lower and upper tail dependence, and the 1-factor BB1 model tends to underestimate the difference in lower and upper tail dependence. This motivates the tilted likelihood in Section 3 with an appropriate “prior” so that model-based tail dependence measures are closer to empirical counterparts.
It has been observed in many data examples (see [15] and Chapter 7 of [9]) that model-based assessment of tail dependence may not be accurate. The more recent development of tail-weighted dependence measures in [16] allows for better assessment on the reliability of a parametric copula model for tail inferences, by comparing empirical and model-based directional tail-weighted measures.

3. Tilted Likelihood for 1-Factor Copula Model with Tail Dependence

This section has a modified log-likelihood using a prior based on previous data for tail dependence parameters in a 1-factor copula model (as given in Appendix A.2). The starting point is the copula-based log-likelihood after univariate margins have been estimated.
Assume that satisfactory univariate models F ^ j ( 1 j d ) have been fit to the random sample { ( y i 1 , , y i d ) : i = 1 , , n } and then transform to the uniform scale to { u i = ( u i 1 , , u i d ) : i = 1 , , n } with u i j = F ^ j ( y i j ) in the interval ( 0 , 1 ) .
We consider mainly inference on dependence parameters for the data in the transformed uniform scale, considered as a realization of a random sample { U i } for a copula cumulative distribution function (cdf) C U ( · ; ϑ ) , where ϑ = ( ϑ 1 , , ϑ d ) . The log-likelihood for a random sample of size n is:
$$
L(\vartheta) = L(\vartheta_j : j = 1,\ldots,d) = \sum_{i=1}^{n} \log c_U(\mathbf{u}_i; \vartheta_1,\ldots,\vartheta_d). \tag{1}
$$
For the 1-factor copula based on BB1r (and other) linking copulas, there are lower bounds on components of the 2-dimensional ϑ j .
For likelihood inference, there is invariance to 1-1 transforms of ϑ j to η j , with the latter being functions of lower and upper tail dependence parameters. Specifically, $\eta_j = (\eta_{1j}, \eta_{2j}) = \big(\log[\lambda_{jL}/(1-\lambda_{jL})],\, \log[\lambda_{jU}/(1-\lambda_{jU})]\big)$, with $\lambda_{jL}, \lambda_{jU}$ being the lower and upper tail dependence parameters for the bivariate copula linking variable j to the latent variable. Note that η j is unbounded. The tilted log-likelihood or log “posterior” is:
$$
\tilde L(\eta_1,\ldots,\eta_d) = \sum_{i=1}^{n} \log c_U(\mathbf{u}_i; \eta_1,\ldots,\eta_d) + \sum_{j=1}^{d} \log f_H(\eta_j), \tag{2}
$$
where the density f H does not depend on j. The above is called the tilted log-likelihood because the goal is to obtain parameter estimates that put less weight in the middle of the data space and more weight in the tails based on “prior” expected behavior of how the lower and upper tail dependence parameters are related.
With the appropriate transformation, the prior can be taken as multivariate normal. For bivariate BB1r or BB1, η j is 2-dimensional, and f H is assumed to be bivariate normal. The latter is reasonable if the form of η j is chosen so that (2) is closer to a quadratic in a neighborhood of its mode. Asymptotic likelihood theory (see [17]) implies that the log-likelihood is quadratic in a neighborhood of the mode, as n , but the adequacy of the approximation for moderate sample size n depends on the transform.
The justification of “independent” prior densities for different variables is based on some empirical checks for 1-factor copula construction with different bivariate linking copulas (with or without tail dependence). The inverse Hessian (roughly the covariance matrix of the sampling distribution of the MLE) of the negative log-likelihood in (1) for the 1-factor copula is close to the block diagonal, with a block for each η j . The product form of the “prior” is based on an assumption of a “super-population” for the variables linked to the latent variable (e.g., stocks in a market sector). The density f H can be considered a frequency density of η j values over a large “super-population”.
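The structure of the tilted log-likelihood (2), a copula log-likelihood plus a sum of log prior densities on the transformed parameters, can be sketched as follows; the 1-factor copula log-likelihood itself is model-specific (see Appendix A.2), so it is left as an abstract callable here.

```python
import numpy as np
from scipy.stats import multivariate_normal

def tilted_loglik(u, eta, copula_loglik, mu_H, Sigma_H):
    """Tilted log-likelihood of Equation (2): the 1-factor copula
    log-likelihood plus the sum of log prior densities log f_H(eta_j).

    u: (n, d) data on the uniform scale.
    eta: (d, 2) transformed dependence parameters (logits of the lower
         and upper tail dependence of each linking copula).
    copula_loglik: callable (u, eta) -> scalar log-likelihood; the
         1-factor BB1r density is model-specific, so an abstract
         callable stands in for it here.
    mu_H, Sigma_H: parameters of the common bivariate normal prior f_H.
    """
    prior = multivariate_normal(mean=mu_H, cov=Sigma_H)
    return copula_loglik(u, eta) + prior.logpdf(eta).sum()
```

Maximizing this objective in eta gives the posterior mode described in the next subsection.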
A method is described in Section 4 to decide on choices for f H .
Similar ideas to the tilted log-likelihood have been used to obtain an adjusted log-likelihood that corrects some undesirable behavior of the MLE, given in [18,19]. There are also some connections with variational Bayes inference such as when the posterior density is assumed to be approximated by a multivariate Gaussian density after a suitable transform so that parameters are unconstrained. However, with copula applications [20,21], parsimonious and possibly unrealistic assumptions are made for the covariance matrix (such as diagonal or factor structure) of the Gaussian density. The optimization involves a Kullback–Leibler divergence of the Gaussian approximation and the posterior. This differs from optimizing (2) with no constraints on the form of the Hessian matrix at the mode.

Numerical Optimization for Posterior Mode and Hessian at Mode

The tilted log-likelihood is analogous to a penalized log-likelihood, so standard numerical optimization methods can be used for estimating the mode and the Hessian at the mode.
The tilted log-likelihood in (2) and its log-likelihood counterpart in (1) are functions of 2 d parameters for the 1-factor BB1r copula with d variables. For the log-likelihood, Ref. [22] discusses an efficient numerical procedure where the log-likelihood, gradient, and Hessian are analytically derived and coded in Fortran90, and all integrals are evaluated via Gauss–Legendre quadrature (see [23]).
The code is modified to handle the transform from the BB1 parameters ( θ j , δ j ) to ( η 1 j , η 2 j ), and this requires care in using the chain rule for partial derivatives. The code for (2) and its gradient and Hessian is inputted into an efficient modified Newton–Raphson algorithm, as summarized in Section 6.2 of [9]. This leads to much faster computations than coding the negative of (2) in R and using a quasi-Newton method for numerical minimization based on numerical gradients and Hessians, because many more iterations are needed compared with the modified Newton–Raphson. With the use of Fortran90 (for loops), analytic derivatives, and modified Newton–Raphson iterations, the time to compute the posterior mode is decreased by a factor larger than 20 for 2 d = 40 parameters. Without the increased speed, the simulation study reported in Section 6 would take too much time. Also, numerical optimization with the quasi-Newton method performs much worse as the number of parameters increases beyond 40.
With the negative Hessian at the mode of the tilted log-likelihood, the inverse Hessian can be used to obtain interval estimates for functions of the parameters.
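As an illustration of this last point, an approximate interval for a smooth function of the parameters (such as a tail dependence parameter recovered via the inverse logit) can be obtained from the inverse Hessian by the delta method; this generic sketch uses a numerically differentiated gradient, whereas the paper's implementation uses analytic derivatives.

```python
import numpy as np

def delta_method_interval(g, eta_hat, cov, z=1.96, h=1e-5):
    """Approximate interval for a smooth scalar function g of the
    parameters, using the inverse Hessian `cov` of the negative tilted
    log-likelihood at the posterior mode `eta_hat`.

    The gradient of g is obtained by central differences; this is only
    a sketch of the delta-method step, not the paper's code.
    """
    eta_hat = np.asarray(eta_hat, dtype=float)
    grad = np.zeros_like(eta_hat)
    for k in range(eta_hat.size):
        step = np.zeros_like(eta_hat)
        step[k] = h
        grad[k] = (g(eta_hat + step) - g(eta_hat - step)) / (2.0 * h)
    se = float(np.sqrt(grad @ cov @ grad))
    est = g(eta_hat)
    return est - z * se, est + z * se

# Example target: a tail dependence parameter recovered from its logit.
expit = lambda x: 1.0 / (1.0 + np.exp(-x))
```

Because the interval is formed on the unbounded eta scale through a smooth map, the endpoints for expit(eta) stay inside (0, 1) when the standard error is moderate.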

4. Closer Match of Empirical and Model-Based Tail Dependence

Suppose diagnostic plots suggest tail dependence for all pairs of variables. Maximum likelihood estimation with a parametric copula might not provide good model-based estimates of tail dependence parameters or reliable inferences for tail-based quantities. In this section, a least squares method is used to obtain parameter estimates for the 1-factor copula that will make the empirical and model-based tail dependence parameters closer to each other. That is, there is an objective function to find copula parameters to better match model-based and empirical tail dependence parameters.
Let θ be the vector of all parameters ( ϑ 1 , , ϑ d ) . The jth component is ϑ j = ( θ j , δ j ) for the 1-factor BB1r or BB1 copula; see Appendix A.1 for the parametric BB1 family. The steps below assume that the 1-factor BB1r has lower AIC than 1-factor BB1 (empirical evidence from many applications of 1-factor copulas to GARCH-filtered stock returns).
  • Minimize negative of log-likelihood L ( ϑ ) in (1) to get MLE ϑ ^ .
  • Get empirical matrix of lower tail dependence λ ^ j k , L , upper tail dependence λ ^ j k , U , central dependence Spearman rho ρ ^ j k , S .
  • Minimize
    $$
    S(\vartheta) = \frac{1}{2d(d-1)} \Big\{ \sum_{j<k} \big[\hat\lambda_{jk,L} - \lambda_{jk,L}(\vartheta_j,\vartheta_k)\big]^2 + \sum_{j<k} \big[\hat\lambda_{jk,U} - \lambda_{jk,U}(\vartheta_j,\vartheta_k)\big]^2 + \sum_{j<k} \big[\hat\rho_{jk,S} - \rho_{jk,S}(\vartheta_j,\vartheta_k)\big]^2 \Big\} \tag{3}
    $$
    with ϑ ^ as starting point. Let the result be ϑ ˜ .
  • Convert ϑ ˜ j to λ ˜ j V , L = λ j V , L ( ϑ ˜ j ) , λ ˜ j V , U = λ j V , U ( ϑ ˜ j ) as defined in (A6), using the BB1r linking copula for variable j and the latent variable V.
  • Transform to values in $(-\infty, \infty)$: $\log[\tilde\lambda_{jV,L}/(1-\tilde\lambda_{jV,L})]$ and $\log[\tilde\lambda_{jV,U}/(1-\tilde\lambda_{jV,U})]$ for $j = 1,\ldots,d$.
  • Get the sample mean vector and covariance matrix for a sample of size d for the two transformed λ ˜ ’s. The mean vector and covariance matrix are used as parameters for the bivariate normal prior f H in (2). For the tilted likelihood, use the parametrization
    $$
    \eta_j = \Big(\log[\lambda_{jV,L}/(1-\lambda_{jV,L})],\; \log[\lambda_{jV,U}/(1-\lambda_{jV,U})]\Big)
    $$
    for j = 1 , , d .
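The last two steps, logit-transforming the matched tail dependence parameters and forming the sample mean vector and covariance matrix over the d variables, can be sketched as follows; the input λ̃ values are placeholders for the output of the least-squares match.

```python
import numpy as np

def build_prior(lam_L_tilde, lam_U_tilde):
    """From matched lower/upper tail dependence parameters of the d
    linking copulas, build the parameters of the bivariate normal prior
    f_H on the logit scale (the last two steps of the procedure above).

    lam_L_tilde, lam_U_tilde: length-d arrays with values in (0, 1);
    placeholders here, obtained from the least-squares match in the
    actual procedure.
    """
    logit = lambda p: np.log(p / (1.0 - p))
    eta = np.column_stack([logit(np.asarray(lam_L_tilde)),
                           logit(np.asarray(lam_U_tilde))])
    mu_H = eta.mean(axis=0)              # sample mean vector
    Sigma_H = np.cov(eta, rowvar=False)  # sample covariance matrix
    return mu_H, Sigma_H
```

The returned pair (mu_H, Sigma_H) parametrizes the bivariate normal prior f_H in (2).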
The data set mentioned in Section 2 as used in [10] has 64 stocks in the finance sector, 21 stocks in the energy sector and 60 stocks in the health sector of S&P 500 (years 2013–2015). The above procedure is applied to 20 random stocks from the finance sector, 10 random stocks from the energy sector, and 20 random stocks from the health sector. Below in (4) to (6) are the mean vector and covariance matrix for f H for three cases:
$$
\mu_1 = \begin{pmatrix} 0.20 \\ -0.38 \end{pmatrix}, \quad \Sigma_1 = \begin{pmatrix} 0.163 & 0.134 \\ 0.134 & 0.231 \end{pmatrix}, \quad \rho_1 = 0.691; \tag{4}
$$
$$
\mu_2 = \begin{pmatrix} 0.04 \\ -0.74 \end{pmatrix}, \quad \Sigma_2 = \begin{pmatrix} 0.277 & 0.390 \\ 0.390 & 0.830 \end{pmatrix}, \quad \rho_2 = 0.813; \tag{5}
$$
$$
\mu_3 = \begin{pmatrix} -0.30 \\ -1.05 \end{pmatrix}, \quad \Sigma_3 = \begin{pmatrix} 0.121 & 0.140 \\ 0.140 & 0.365 \end{pmatrix}, \quad \rho_3 = 0.666. \tag{6}
$$
They are used as the parameters of three bivariate normal distributions. The three cases are used in subsequent sections to allow a sensitivity analysis of the parameters in f H .
All three cases in (4) to (6) indicate stronger tail dependence in the joint lower tail than in the joint upper tail because the first component of μ is larger than the second in each case. Of the three cases, the first has the strongest expected lower tail dependence because it has the largest first component of μ. For the first two cases, the median lower tail dependence is larger than 0.5 because the first component is positive; the median upper tail dependence is less than 0.5 for all three cases.
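Since each component of η is the logit of a tail dependence parameter and the prior is normal, the prior median of λ is the inverse logit of the corresponding component of μ, so the statements above can be checked directly. The signs of the μ components below are reconstructed from the surrounding discussion (minus signs are easily lost in extraction), so they should be treated as inferred values.

```python
import numpy as np

expit = lambda x: 1.0 / (1.0 + np.exp(-x))  # inverse logit

# First (lower tail) and second (upper tail) components of mu for the
# three cases; signs reconstructed from the discussion in the text.
mu_lower = np.array([0.20, 0.04, -0.30])
mu_upper = np.array([-0.38, -0.74, -1.05])

# Prior medians of the lower and upper tail dependence parameters.
median_lambda_L = expit(mu_lower)
median_lambda_U = expit(mu_upper)
```

The computed medians confirm the pattern: lower tail medians above 0.5 for the first two cases only, and upper tail medians below 0.5 for all three.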

5. Data Example with Prior and Tilted Likelihood

This section summarizes the application of the tilted log-likelihood for GARCH-filtered stock returns. Initially, three 1-factor copula constructions, with BB1, BB1r, and BB7 bivariate copulas linking to the latent variable, were fitted with maximum likelihood for different subsets of stocks. Here, as is common in many empirical applications, the 1-factor copula based on BB1r is best, based on the AIC.
The tilted log-likelihood in (2) was then used for analysis of random subsets of stocks from the finance, energy and health sectors; these were different subsets from those used to determine the prior parameters (4)–(6). The qualitative conclusions are similar for different random subsets, so below we report details for one case of 20 randomly chosen finance stocks, considered one representative application of the theory in the preceding sections.
The numerical details below are based on 20 stocks with the tickers LNC, PGR, MMC, C, KEY, CBOE, BK, BEN, AXP, ALL, BAC, RF, AFL, ZION, DFS, CMA, MCO, GL, TRV, and BRK-B, used to determine the prior f H , and 20 other stocks with the tickers L, CME, MTB, MKTX, AIZ, MET, SCHW, FITB, STT, HBAN, PFG, BLK, SPGI, CB, COF, TFC, WRB, JPM, FRCB, and ICE for applying the procedure in Section 3 and Section 4.
Inferences for tail dependence are compared for five cases below with summaries in Table 4.
  • 1-factor BB1r, f H based on finance sector stocks.
  • 1-factor BB1r, f H based on energy sector stocks.
  • 1-factor BB1r, f H based on health sector stocks.
  • 1-factor BB1, f H based on finance sector stocks.
  • 1-factor BB7, f H based on finance sector stocks.
The “parameters” of $f_H$ are $\big(\log[\lambda_L/(1-\lambda_L)],\, \log[\lambda_U/(1-\lambda_U)]\big)$ and have different transformations to the parameters $(\theta, \delta)$ of BB1r, BB1, and BB7.
Table 4 shows that for BB1r, there is little sensitivity to the three priors (4)–(6). However, the worse fitting 1-factor BB1 and 1-factor BB7 models (based on last column of Table 4) do not lead to better matching with empirical dependence measures using the prior in (4). Overall, these latter two models fit worse in the middle of the data space, leading to smaller values for (2) at the mode.
For 1-factor BB1 with the tilted log-likelihood (2), we looked at the negative inverse Hessian (covariance matrix of the normal approximation) at the posterior mode for row 2 of Table 4. There is almost zero correlation between the parameters for different variable indices j (for different stocks). The inverse Hessian is too large to show in its entirety, but an extract of some entries is converted into standard deviations and correlations in Table 5 and Table 6.

Bayesian Computing with STAN

Results based on the prior $f_H$ in (2) were also obtained via Bayesian computing with STAN (Hamiltonian Monte Carlo). Estimation for a 1-factor copula model via Hamiltonian Monte Carlo is shown in [24], but their inferences do not include asymmetric tail dependence.
In Bayesian inference, the parameter vector Θ * consists of both the (transformed) copula dependence parameters η = ( η 1 , η 2 , , η d ) and the latent variables v = ( v 1 , v 2 , , v n ) in (A3). We assume a joint independent uniform prior distribution for the latent variables and a (product of) bivariate normal prior for the copula dependence parameters for the bivariate linking copulas. The prior density is given by
$$
\pi_{\Theta^*}(\boldsymbol\theta^*) = \pi_{\mathbf V}(\mathbf v)\, \pi_{H_1,\ldots,H_d}(\boldsymbol\eta) = \prod_{i=1}^{n} \mathbb{1}(-1 < v_i < 1)\, \prod_{j=1}^{d} f_H(\eta_j),
$$
where the mean and variance of the bivariate normal prior f H are given in (4)–(6). The “complete” likelihood function with the latent variables as parameters is
$$
p_{\mathbf U \mid \Theta^*}(\mathbf u_1,\ldots,\mathbf u_n \mid \boldsymbol\theta^*) = \prod_{i=1}^{n} p_{\mathbf U_i \mid V_i, H_1,\ldots,H_d}(\mathbf u_i \mid v_i, \boldsymbol\eta_{1:d}) = \prod_{i=1}^{n} \prod_{j=1}^{d} c_{jV}(u_{ij}, v_i; \vartheta_j(\eta_j)),
$$
where c j V is given in Appendix A.2. Since the Bayesian estimation treats the latent variables as additional parameters, the likelihood function consists of the conditional density function given the latent variables instead of the joint density function. The posterior density function of the parameters (up to a constant) is
$$
\pi_{\Theta^* \mid \mathbf U}(\boldsymbol\theta^* \mid \mathbf u_1,\ldots,\mathbf u_n) \propto p_{\mathbf U, \Theta^*}(\mathbf u, \boldsymbol\theta^*) = p_{\mathbf U \mid \Theta^*}(\mathbf u_1,\ldots,\mathbf u_n \mid \boldsymbol\theta^*)\, \pi_{\Theta^*}(\boldsymbol\theta^*). \tag{7}
$$
To perform Bayesian inference on the (transformed) copula dependence parameters of the 1-factor model, we use the No-U-Turn sampler (NUTS) proposed by [25]. NUTS is an extension of the Hamiltonian Monte Carlo algorithm, implemented within the STAN framework developed by [26]. The 1-factor copula models with BB1 and reflected BB1 copulas are fitted to the GARCH-filtered returns in STAN. For the data example with results summarized in Table 5 and Table 6, the posterior statistics of η (including posterior means, standard deviations, and correlation matrix) are similar to the results obtained from maximizing the tilted likelihood function in (2). In comparison with Table 5, the median and maximum absolute differences are, respectively, (a) 0.006 and 0.033 for the μ η ’s, (b) 0.002 and 0.014 for the σ η ’s, and (c) 0.023 and 0.059 for the ρ ’s.
From (7), it is seen that the log posterior is (up to a constant) equal to:
$$
\tilde L(\eta_1,\ldots,\eta_d, \mathbf v) = \sum_{i=1}^{n} \sum_{j=1}^{d} \log c_{jV}(u_{ij}, v_i; \vartheta_j(\eta_j)) + \sum_{j=1}^{d} \log f_H(\eta_j);
$$
this is equivalent to the tilted log-likelihood in (2) after marginalizing over the latent variables v . Therefore, the two approaches should yield essentially the same result.
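The complete-data log posterior above can be sketched in code; the bivariate linking copula density $c_{jV}$ is model-specific (BB1 or reflected BB1 in the paper, defined in Appendix A.2), so it is passed in as an abstract callable here.

```python
import numpy as np
from scipy.stats import multivariate_normal

def complete_log_posterior(eta, v, u, log_c_linking, mu_H, Sigma_H):
    """Log posterior with the latent variables treated as parameters:
    sum over observations i and variables j of the log linking-copula
    density, plus the log prior on the transformed dependence parameters.

    eta: (d, 2) transformed dependence parameters.
    v: (n,) latent variable values, one per observation.
    u: (n, d) data on the uniform scale.
    log_c_linking: callable (u_ij, v_i, eta_j) -> log c_{jV};
        model-specific, so an abstract stand-in here.
    """
    n, d = u.shape
    loglik = sum(log_c_linking(u[i, j], v[i], eta[j])
                 for i in range(n) for j in range(d))
    prior = multivariate_normal(mean=mu_H, cov=Sigma_H)
    return loglik + prior.logpdf(eta).sum()
```

Marginalizing this over the latent variables v recovers the tilted log-likelihood (2), which is why the two estimation approaches agree.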
With a flat prior on the η j , the posterior estimates should align with the maximum likelihood estimates. However, in the case of estimating BB1 or reflected BB1 copulas, identifiability issues arise when using a flat prior. The two parameters of BB1 or reflected BB1 are negatively dependent, which can result in different combinations of parameter values producing similar likelihood values. This issue might be overlooked in maximum likelihood estimation since it converges to one of the maxima, with an appropriate starting point for numerical optimization. However, it becomes evident in Bayesian estimation, where the model struggles to distinguish between different parameter values in the posterior distributions. We found that incorporating informative priors can effectively mitigate this problem. These priors leverage tail dependence measures to provide additional information about the relationship between the parameters, thereby improving the model’s ability to identify meaningful and interpretable parameter values.

6. Simulation Summary

This section has some simulation results for comparisons. Simulated data sets of size n = 754 and d = 20 are obtained to match the data example in Section 5; the algorithm for the simulation is in Algorithm 22 of [9]. For each simulated data set, ( η 1 j , η 2 j ) for j = 1 , , d are generated at random from (4) and then a random sample { ( u i 1 , , u i d ) : i = 1 , , n } is generated from 1-factor BB1r based on the tail dependence parameters. For each simulated data set, as a sensitivity analysis, the log posterior in (2) for all three choices of f H based on (4)–(6) is maximized to obtain the mode and the approximate covariance matrix of the posterior density; also the MLE based on (1) is obtained.
The MLEs of the η 1 j , η 2 j parameters are transformed to the estimated θ , δ parameters of BB1r. Similarly, three sets of posterior modes for η 1 j , η 2 j parameters are transformed to estimate the θ , δ parameters. Then, the following root mean squares (rms) are computed:
$$\mathrm{rms}_m = \Big\{ (2d)^{-1} \sum_{j=1}^{d} \big[ (\hat\theta_j^{(m)} - \theta_j)^2 + (\hat\delta_j^{(m)} - \delta_j)^2 \big] \Big\}^{1/2} \tag{8}$$
for the four sets of estimators. The superscripts m = 1 , 2 , 3 indicate the three priors, and the superscript m = 0 indicates maximum likelihood. Over 100 simulated data sets, the rms summaries are given in Table 7.
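The rms criterion in (8) is straightforward to compute; a sketch in Python with hypothetical parameter vectors (the function name is ours):

```python
import numpy as np

def rms(theta_hat, delta_hat, theta_true, delta_true):
    """Root mean square (8) over d variables between estimated and
    true BB1r parameters."""
    d = len(theta_true)
    sq = (theta_hat - theta_true) ** 2 + (delta_hat - delta_true) ** 2
    return float(np.sqrt(sq.sum() / (2 * d)))
```

Perfect estimates give rms 0, and a constant offset of 1 in both parameters gives rms 1, which is a quick sanity check of the scaling by $2d$.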
As expected, all three priors lead to estimates closer to the $(\theta, \delta)$ parameters used to generate the simulated data sets than the MLEs are. The three sets of $\{(\hat\theta_j^{(m)}, \hat\delta_j^{(m)})\}$ for $m = 1, 2, 3$ are much closer to one another than to the MLEs. For all simulated data sets, the value of the tilted log-likelihood (2) at the posterior mode is largest for prior (4) and smallest for prior (6).
Another summary, in Table 8, is the closeness to the empirical $\lambda_{jk,L}$ and $\lambda_{jk,U}$ over the $\binom{d}{2}$ pairs:
$$\Delta_{M,m} = [d(d-1)/2]^{-1} \sum_{j<k} \big| a_{jk,M}^{(m)} - a_{jk,M}^{\mathrm{empirical}} \big| \tag{9}$$
where $m \in \{0, 1, 2, 3\}$ as above, and $M \in \{L, U, C\}$ for lower tail dependence, upper tail dependence, and central dependence, with corresponding dependence measures $a \in \{\hat\lambda_{jk,L}, \hat\lambda_{jk,U}, \hat\rho_{jk,S}\}$, respectively.
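Likewise, (9) averages absolute differences over the distinct pairs; a sketch in Python, assuming the dependence measures are stored as symmetric $d \times d$ matrices:

```python
import numpy as np

def delta_metric(model_meas, empirical_meas):
    """Mean absolute difference (9) between model-based and empirical
    pairwise dependence measures, over the d(d-1)/2 distinct pairs.
    Both inputs are symmetric d x d matrices."""
    d = model_meas.shape[0]
    iu = np.triu_indices(d, k=1)          # indices with j < k
    return float(np.abs(model_meas[iu] - empirical_meas[iu]).mean())
```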
From Table 8, the tilted log-likelihood gives better matching for upper tail dependence but no improvement for lower tail dependence. The Spearman values $\rho_{jk,S}$ are much closer between the empirical and model-based estimates.
Compared with the results for the stock return data in Section 5, the improvements from using (2) are smaller in the simulations. This can be explained as follows. For financial stock return data with stocks from one sector, the 1-factor structure with lower and upper tail dependence is reasonable, and BB1r linking copulas can be considered good approximations; there might also be weak conditional dependence of some stock returns conditioned on the latent variable. That is, for real data the 1-factor BB1r copula model has a small degree of model misspecification, and this explains why tilting the model-based tail dependence parameters toward their empirical counterparts should lead to better tail inference.

7. Discussion

A method has been proposed for improved tail inference when preliminary data and likelihood analysis suggest asymmetric tail dependence. The tilted log-likelihood approach introduces a prior distribution involving lower and upper tail dependence parameters. Incorporating the prior places more weight on the behavior of the joint lower and upper tails compared with the center of the probability space, thereby improving the extreme value inference. This can account for a small degree of model misspecification in the parametric model. The prior is chosen so that model-based lower and upper tail dependence parameters can be a closer match to empirical counterparts for a previous data set that has features similar to the data set under consideration.
For simpler exposition, the theory is applied to a 1-factor copula model that can handle non-Gaussian dependence structures with asymmetric tail dependence. The tilted log-likelihood approach can be extended to other structured factor copula models (e.g., bi-factor and 1-factor with weak residual dependence) with asymmetric tail dependence, where a super-population assumption is reasonable for how observed variables are linked to latent variables.
Also, the approach can be applied to vine copula models with bivariate tail dependence for all pairs of variables by choosing bivariate copulas with lower and upper tail dependence in tree 1 of the vine. From [5], lower and upper tail dependence in the first vine tree leads to this property for all pairs of variables. By including a prior based on pairs of variables with stronger dependence and asymmetric tail dependence, there could be a better match of vine copula model-based tail dependence measures and empirical counterparts.
The skew-t copula (see [27]) can also be used for asymmetric tail dependence. However, the functional relation between the copula parameters and the tail dependence parameters is much more complicated than for the BB1 copula (the latter is given in Appendix A.1), so the tilted log-likelihood approach would have to be implemented with a different transform of the copula parameters.
Bayesian computing methods can be used if there are latent variables. Alternatively, a tilted log-likelihood similar to (2) can be optimized via (a) a quasi-Newton method if the total number of parameters is not large (say, fewer than 40), (b) a modified Newton–Raphson method if the analytical gradient and Hessian can be obtained, or (c) sequential estimation of parameters if possible (Section 5.5 of [9]). For methods (b) and (c), numerical optimization of the tilted log-likelihood is used to obtain the (approximate) posterior mode, and then the Hessian at this mode, in order to obtain interval estimates of functions of the parameters.

Author Contributions

Conceptualization, H.J.; methodology, H.J. and X.L.; software, H.J. and X.L.; writing—original draft preparation, H.J.; writing—review and editing, X.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research has been supported by NSERC Discovery Grant GR010293 and an NSERC Postgraduate Scholarship.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Acknowledgments

Thanks to the referees for their valuable comments and to the editors for their encouragement.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Background Results on Copulas and Dependence Concepts

This appendix collects known results in order to make the article more self-contained. Details of the individual topics are in [9] unless additional references are given.

Appendix A.1. BB1 Copula Family and Tail Dependence Properties

If $(U, V) \sim C$ for a bivariate copula $C$, the reflected copula $\hat C(u, v)$ is the distribution of the reflection $(1 - U, 1 - V)$. Simple probability calculations lead to:
$$\hat C(u, v) = u + v - 1 + C(1 - u, 1 - v), \quad 0 \le u, v \le 1.$$
The survival function of C is related; it is defined as
$$\bar C(u, v) = \Pr(U > u, V > v) = 1 - u - v + C(u, v), \quad 0 \le u, v \le 1, \tag{A1}$$
so that $\hat C(u, v) = \bar C(1 - u, 1 - v)$.
The bivariate 2-parameter BB1 copula family is useful for handling asymmetric lower and upper tail dependence. Asymmetric here refers to copulas where $C$ differs from $\hat C$. For $\theta > 0$, $\delta \ge 1$,
$$C_{\mathrm{BB1}}(u, v; \theta, \delta) = \Big( 1 + \big[ (u^{-\theta} - 1)^{\delta} + (v^{-\theta} - 1)^{\delta} \big]^{1/\delta} \Big)^{-1/\theta}, \quad 0 \le u, v \le 1.$$
The reflected copula is $C_{\mathrm{BB1r}}(u, v; \theta, \delta) = u + v - 1 + C_{\mathrm{BB1}}(1 - u, 1 - v; \theta, \delta)$.
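A minimal Python sketch of the BB1 cdf and its reflection (function names are ours):

```python
import numpy as np

def bb1_cdf(u, v, theta, delta):
    """BB1 copula cdf for theta > 0, delta >= 1, and u, v in (0, 1]."""
    x = (u ** (-theta) - 1.0) ** delta
    y = (v ** (-theta) - 1.0) ** delta
    return (1.0 + (x + y) ** (1.0 / delta)) ** (-1.0 / theta)

def bb1r_cdf(u, v, theta, delta):
    """Reflected BB1: u + v - 1 + C_BB1(1 - u, 1 - v)."""
    return u + v - 1.0 + bb1_cdf(1.0 - u, 1.0 - v, theta, delta)
```

Convenient sanity checks are the uniform margins, $C_{\mathrm{BB1}}(u, 1) = u$, and positive dependence, $C(u, v) \ge uv$.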
The BB1 copula and its reflection are the most useful for handling asymmetric tail dependence. Other choices with this property are BB7 and the skew-t (see [27]). The BB1 family has the nice property that concordance (positive dependence) increases as $\theta$ and $\delta$ increase, with independence as the lower bound of the parameter space is reached and comonotonicity (perfect positive dependence) as either $\theta$ or $\delta$ goes to infinity. The BB7 copula does not have this concordance property over the entire parameter space.
Tail dependence parameters (as functions of the copula family) are $\lambda_L(C_{\mathrm{BB1}}) = 2^{-1/(\delta\theta)}$ and $\lambda_U(C_{\mathrm{BB1}}) = 2 - 2^{1/\delta}$ as functions of $\theta$ and $\delta$. Then, $\lambda_U(C_{\mathrm{BB1r}}) = \lambda_L(C_{\mathrm{BB1}})$ and $\lambda_L(C_{\mathrm{BB1r}}) = \lambda_U(C_{\mathrm{BB1}})$, because the reflection swaps the joint upper and lower tails. The range of $(\lambda_L, \lambda_U)$ is $(0, 1)^2$. To go from tail dependence parameters to copula parameters, the transforms are:
$$\begin{aligned}
\delta &= \log 2 / \log(2 - \lambda_U), & \theta &= \log(2 - \lambda_U) / (-\log \lambda_L) && \text{for BB1}, \\
\delta &= \log 2 / \log(2 - \lambda_L), & \theta &= \log(2 - \lambda_L) / (-\log \lambda_U) && \text{for BB1r}, \\
\delta &= \log 2 / (-\log \lambda_L), & \theta &= \log 2 / \log(2 - \lambda_U) && \text{for BB7}.
\end{aligned}$$
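These transforms are easy to check numerically; a sketch for the BB1 case (our function names), which should round-trip between the two parameterizations:

```python
import numpy as np

def bb1_from_taildep(lam_l, lam_u):
    """(lambda_L, lambda_U) -> (theta, delta) for BB1, per the
    transforms above."""
    delta = np.log(2.0) / np.log(2.0 - lam_u)
    theta = np.log(2.0 - lam_u) / (-np.log(lam_l))
    return theta, delta

def taildep_from_bb1(theta, delta):
    """(theta, delta) -> (lambda_L, lambda_U) for BB1."""
    return 2.0 ** (-1.0 / (delta * theta)), 2.0 - 2.0 ** (1.0 / delta)
```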
For the data example in Section 5, the 1-factor BB7 copula fits much worse, but it is included to show that the "prior" in (2) is independent of the tail-dependent family used for the linking copulas to the latent variable.

Appendix A.2. 1-Factor Copula Construction

The 1-factor copula model assumes conditional independence of $\mathbf U = (U_1, \ldots, U_d)$ given a latent variable $V \sim U(0, 1)$. Let $C_{jV}(\cdot\,; \vartheta_j)$ (for $j = 1, \ldots, d$) be the bivariate copula cdf for the $j$th variable with the latent variable $V$. The $d$-variate copula cdf and density of $\mathbf U$ are:
$$C_{\mathbf U}(\mathbf u; \vartheta_1, \ldots, \vartheta_d) = \int_0^1 \prod_{j=1}^{d} C_{j|V}(u_j \mid v; \vartheta_j)\, dv,$$
$$c_{\mathbf U}(\mathbf u; \vartheta_1, \ldots, \vartheta_d) = \int_0^1 \prod_{j=1}^{d} c_{jV}(u_j, v; \vartheta_j)\, dv,$$
where $C_{j|V}(u_j \mid v; \vartheta_j) = \partial C_{jV}(u_j, v; \vartheta_j)/\partial v$ is the conditional distribution of $[U_j \mid V = v]$ and $c_{jV}(u_j, v; \vartheta_j) = \partial^2 C_{jV}(u_j, v; \vartheta_j)/\partial u_j\, \partial v$ is the corresponding copula density. The $(j, k)$ bivariate margin is
$$C_{jk}(u_j, u_k) = \int_0^1 C_{j|V}(u_j \mid v; \vartheta_j)\, C_{k|V}(u_k \mid v; \vartheta_k)\, dv. \tag{A5}$$
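The integrals above have no closed form in general but are 1-dimensional, so Gauss–Legendre quadrature is effective. A sketch of the density computation (our function names; each linking copula density is supplied as a callable):

```python
import numpy as np

def onefactor_density(u, link_densities, nq=35):
    """1-factor copula density: integral over v in (0, 1) of the product
    of linking copula densities c_{jV}(u_j, v), via Gauss-Legendre.
    link_densities[j](u_j, v) must accept a vector of v values."""
    x, w = np.polynomial.legendre.leggauss(nq)
    v = 0.5 * (x + 1.0)       # nodes mapped from (-1, 1) to (0, 1)
    wv = 0.5 * w              # weights rescaled accordingly
    prod = np.ones_like(v)
    for uj, cj in zip(u, link_densities):
        prod = prod * cj(uj, v)
    return float(np.sum(wv * prod))
```

With independence linking copulas (density identically 1), the result is 1 for any $\mathbf u$, which is a convenient check of the quadrature weights.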
Spearman's rho for $(U_j, U_k)$ is $\mathrm{Cor}(U_j, U_k)$, since the margins are uniform. It is numerically best to compute it via the 2-dimensional integral
$$12 \int_0^1 \int_0^1 C_{jk}(u_j, u_k)\, du_j\, du_k - 3$$
to avoid a possibly unbounded copula density $c_{jk}$. For (A5), this becomes
$$12 \int_0^1 \Big[ \int_0^1 C_{j|V}(u_j \mid v)\, du_j \Big] \Big[ \int_0^1 C_{k|V}(u_k \mid v)\, du_k \Big]\, dv - 3 =: 12 \int_0^1 a_j(v)\, a_k(v)\, dv - 3;$$
this can be computed via a 1-dimensional Gauss–Legendre quadrature over $n_q$ quadrature points $v_1, \ldots, v_{n_q}$ for $v$, and the inner integrals $a_j(v)$ and $a_k(v)$ (with respect to $u_j$ and $u_k$) can likewise be evaluated separately via Gauss–Legendre quadrature at each of $v_1, \ldots, v_{n_q}$. See [23] for Gaussian quadrature theory.
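The scheme just described can be sketched as follows (our function names; the conditional cdfs $C_{j|V}$ are supplied as callables):

```python
import numpy as np

def spearman_1factor(cond_cdf_j, cond_cdf_k, nq=35):
    """Model-based Spearman's rho for (U_j, U_k) in a 1-factor copula,
    via the 1-dimensional Gauss-Legendre scheme described above.
    cond_cdf_j(u, v) = C_{j|V}(u | v) for a vector u and scalar v."""
    x, w = np.polynomial.legendre.leggauss(nq)
    nodes = 0.5 * (x + 1.0)   # quadrature points mapped to (0, 1)
    wts = 0.5 * w
    # inner integrals a_j(v), a_k(v) at each outer node v
    a_j = np.array([np.sum(wts * cond_cdf_j(nodes, vi)) for vi in nodes])
    a_k = np.array([np.sum(wts * cond_cdf_k(nodes, vi)) for vi in nodes])
    return float(12.0 * np.sum(wts * a_j * a_k) - 3.0)
```

With $C_{j|V}(u \mid v) = u$ (independence links), the result is 0; with positively dependent links, such as Clayton conditionals, it is positive.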
Suppose $C_{jV}$ has lower and upper tail dependence for all $j$, as with BB1 or reflected BB1 copulas. Define the survival function $\bar C_{jV}$ as in (A1). Then, from Section 2.18 of [9], there are functions $b_{jV}$ and $b^*_{jV}$ for the lower and upper tails, respectively, such that for $w_j > 0$ and $w > 0$,
$$\begin{aligned}
C_{jV}(u w_j, u w)/u &\to b_{jV}(w_j, w), & u &\to 0^+, \\
\bar C_{jV}(1 - u w_j,\, 1 - u w)/u &\to b^*_{jV}(w_j, w), & u &\to 0^+, \\
C_{j|V}(u w_j \mid u w) &\to b_{j|V}(w_j, w) = \partial b_{jV}(w_j, w)/\partial w, & u &\to 0^+, \\
\bar C_{j|V}(1 - u w_j \mid 1 - u w) &\to b^*_{j|V}(w_j, w) = \partial b^*_{jV}(w_j, w)/\partial w, & u &\to 0^+.
\end{aligned}$$
Lower and upper tail dependence parameters are defined in Section 2. The conditions of Theorem 8.76 of [9] are satisfied by BB1 copulas, and it follows for variables $j$ and $k$ ($j \ne k$) that the lower and upper tail dependence parameters of $(U_j, U_k)$ are
$$\lambda_{jk,L} = \int_0^\infty b_{j|V}(1 \mid z)\, b_{k|V}(1 \mid z)\, dz, \qquad \lambda_{jk,U} = \int_0^\infty b^*_{j|V}(1 \mid z)\, b^*_{k|V}(1 \mid z)\, dz.$$
For BB1,
$$\begin{aligned}
b_{jV}(w_j, w) &= (w_j^{-\delta\theta} + w^{-\delta\theta})^{-1/(\delta\theta)}, & b^*_{jV}(w_j, w) &= w_j + w - (w_j^{\delta} + w^{\delta})^{1/\delta}, \\
b_{j|V}(1 \mid v) &= (1 + v^{\delta\theta})^{-1 - 1/(\delta\theta)}, & b^*_{j|V}(1 \mid v) &= 1 - (1 + v^{-\delta})^{1/\delta - 1}.
\end{aligned}$$
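As a numerical sketch (our function names), $\lambda_{jk,L}$ can be evaluated by mapping $(0, \infty)$ to $(0, 1)$ with $z = t/(1-t)$ and applying Gauss–Legendre quadrature to the product of the conditional tail functions, using $b_{j|V}(1 \mid v) = (1 + v^{\delta\theta})^{-1-1/(\delta\theta)}$ for BB1 links:

```python
import numpy as np

def bb1_pair_lower_taildep(theta_j, delta_j, theta_k, delta_k, nq=80):
    """lambda_{jk,L} for a 1-factor copula with BB1 linking copulas:
    integral over z in (0, inf) of b_{j|V}(1|z) * b_{k|V}(1|z),
    with the substitution z = t / (1 - t)."""
    x, w = np.polynomial.legendre.leggauss(nq)
    t = 0.5 * (x + 1.0)
    wt = 0.5 * w
    z = t / (1.0 - t)
    jac = 1.0 / (1.0 - t) ** 2      # dz/dt

    def b_cond(z, theta, delta):
        a = delta * theta
        return (1.0 + z ** a) ** (-1.0 - 1.0 / a)

    vals = b_cond(z, theta_j, delta_j) * b_cond(z, theta_k, delta_k)
    return float(np.sum(wt * vals * jac))
```

Stronger links (larger $\delta\theta$) should give larger pairwise lower tail dependence, and the result always lies in $(0, 1)$.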
For reflected BB1, the lower and upper quantities are reversed, as the reflection just changes the upper (lower) quadrant to the lower (upper) quadrant.
Reflected BB1 and BB1 copulas usually fit better than the BB7 copula for stock return data with factor or vine copulas. From the concordance property of the BB1 copula, there is more lower and upper tail dependence as either of the two parameters increases.

Appendix A.3. Normal Scores and Use for Diagnostics

Let $y_1, \ldots, y_n$ be a sample of size $n$ for a variable $y$. The rank transform to standard normal scores is
$$\hat z_i = \Phi^{-1}\big( (\mathrm{rank}(y_i) - 0.5)/n \big),$$
where a larger $y$ value receives a larger rank (rank 1 for the smallest and rank $n$ for the largest among $y_1, \ldots, y_n$). We refer to $\{\hat z_i\}$ as the normal scores of $y$.
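A sketch of this rank transform in Python (the function name is ours; ties are broken arbitrarily by the sort, which suffices for continuous data):

```python
import numpy as np
from statistics import NormalDist   # stdlib inverse normal cdf

def normal_scores(y):
    """Rank transform to normal scores: z_i = Phi^{-1}((rank(y_i) - 0.5)/n),
    with rank 1 for the smallest observation."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    ranks = y.argsort().argsort() + 1      # rank 1 = smallest
    phi_inv = NormalDist().inv_cdf
    return np.array([phi_inv((r - 0.5) / n) for r in ranks])
```

The scores are symmetric about 0 by construction, e.g., for three observations the middle one maps to $\Phi^{-1}(0.5) = 0$.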
If there are d variables and a sample { ( y i 1 , , y i d ) : i = 1 , , n } , then one can obtain vectors of normal scores { ( z ^ i 1 , , z ^ i d ) : i = 1 , , n } with ranking performed separately for each variable. Pairs of variables have strong monotone dependence if the correlation of their normal score transforms is large. Figure 1 has examples of bivariate normal scores plots for pairs of GARCH-filtered returns. The plots are used to check for departures from elliptically shaped clouds of points, which are expected if the Gaussian copula fits well.

Appendix A.4. GARCH Time Series

A summary of copula-GARCH models is as follows. Let $P_t$ ($t = 0, 1, \ldots, n$) be the price time series of a financial asset such as a market index or stock; the time index could be day, week, or month. The (log) return is $R_t = \log(P_t/P_{t-1})$. For $d$ assets, denote the returns at time $t$ as $R_{t1}, \ldots, R_{td}$. For each financial return variable, a common choice is the GARCH(1,1) time series filter, with the innovation distribution being the symmetric (or asymmetric) Student $t_\nu$ distribution with variance 1 and $\nu > 2$; see Section 4.3.6 of [28].
Let $F_j(\cdot\,; \nu_j)$ be the distribution of the innovations for the $j$th univariate marginal model:
$$R_{tj} = \mu_j + \sigma_{tj} Z_{tj}, \qquad \sigma_{tj}^2 = \omega_j + \alpha_j R_{t-1,j}^2 + \beta_j \sigma_{t-1,j}^2, \qquad j = 1, \ldots, d, \; t = 1, \ldots, n, \tag{A7}$$
where for each $j$, $\omega_j > 0$, $\alpha_j > 0$, $\beta_j > 0$; the $Z_{tj}$ are the innovations over $t$; and the random $\sigma_{tj}^2$ depends on $R_{t-1,j}^2$ and $\sigma_{t-1,j}^2$. For stationarity, $\alpha_j + \beta_j < 1$ for all $j$. The vectors $(Z_{t1}, \ldots, Z_{td})$ for $t = 1, \ldots, n$ are assumed to be independent and identically distributed with distribution:
$$F_{\mathbf Z}(\mathbf z; \nu_1, \ldots, \nu_d, \vartheta) = C\big( F_1(z_1; \nu_1), \ldots, F_d(z_d; \nu_d); \vartheta \big),$$
where $\vartheta$ is the dependence parameter of a $d$-dimensional copula $C$. Sometimes an autoregressive coefficient $\phi_j$ is also added, with $\mu_j$ in (A7) replaced by $\mu_j + \phi_j R_{t-1,j}$. Flexible choices for the copula family include vine and factor copula constructions.
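For one asset, the GARCH(1,1) recursion in (A7) can be sketched as follows (the function name is ours; an initial conditional variance must be supplied, and, following (A7), the lagged squared return enters without demeaning):

```python
import numpy as np

def garch11_filter(returns, mu, omega, alpha, beta, sigma0_sq):
    """GARCH(1,1) recursion (A7) for one asset: given returns R_t and
    parameters, return the conditional variances sigma_t^2 and the
    standardized innovations Z_t = (R_t - mu) / sigma_t."""
    n = len(returns)
    sig2 = np.empty(n)
    sig2[0] = sigma0_sq
    for t in range(1, n):
        sig2[t] = omega + alpha * returns[t - 1] ** 2 + beta * sig2[t - 1]
    z = (returns - mu) / np.sqrt(sig2)
    return sig2, z
```

With $\alpha + \beta < 1$ (stationarity), the variance recursion forgets its initial condition; e.g., with zero returns it converges geometrically to $\omega/(1-\beta)$.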

References

  1. McNeil, A.J. Estimating the tails of loss severity distributions using extreme value theory. ASTIN Bull. 1997, 27, 117–137. [Google Scholar] [CrossRef]
  2. Sun, M. Modeling cyber loss severity using a spliced regression distribution with mixture components. Open J. Stat. 2023, 13, 425–452. [Google Scholar] [CrossRef]
  3. Falk, M.; Padoan, S.; Wisheckel, F. Generalized Pareto copulas: A key to multivariate extremes. J. Multivar. Anal. 2019, 174, 104538. [Google Scholar] [CrossRef]
  4. Jondeau, E.; Rockinger, M. The copula-GARCH model of conditional dependencies: An international stock market application. J. Int. Money Financ. 2006, 25, 827–853. [Google Scholar] [CrossRef]
  5. Joe, H.; Li, H.; Nikoloulopoulos, A.K. Tail dependence functions and vine copulas. J. Multivar. Anal. 2010, 101, 252–270. [Google Scholar] [CrossRef]
  6. Krupskii, P.; Joe, H. Factor copula models for multivariate data. J. Multivar. Anal. 2013, 120, 85–101. [Google Scholar] [CrossRef]
  7. Lee, D.; Joe, H.; Krupskii, P. Tail-weighted dependence measures with limit being the tail dependence coefficient. J. Nonparametr. Stat. 2018, 30, 262–290. [Google Scholar] [CrossRef]
  8. Coblenz, M.; Dyckerhoff, R.; Grothe, O. Nonparametric estimation of multivariate quantiles. Environmetrics 2018, 29, e2488. [Google Scholar] [CrossRef]
  9. Joe, H. Dependence Modeling with Copulas; Chapman & Hall/CRC: Boca Raton, FL, USA, 2014. [Google Scholar]
  10. Fan, X. Dependence Modeling in High Dimensions with Latent Variables. Ph.D. Thesis, University of British Columbia, Vancouver, BC, Canada, 2024. [Google Scholar]
  11. Johnson, R.A.; Wichern, D.W. Applied Multivariate Statistical Analysis, 5th ed.; Prentice Hall: Englewood Cliffs, NJ, USA, 2002. [Google Scholar]
  12. Joe, H. Parsimonious graphical dependence models constructed from vines. Can. J. Stat. 2018, 46, 532–555. [Google Scholar] [CrossRef]
  13. Ang, A.; Chen, J. Asymmetric correlations of equity portfolios. J. Financ. Econ. 2002, 63, 443–494. [Google Scholar] [CrossRef]
  14. Longin, F.; Solnik, B. Extreme correlations in international equity markets. J. Financ. 2001, 56, 649–676. [Google Scholar] [CrossRef]
  15. Nikoloulopoulos, A.K.; Joe, H.; Li, H. Vine copulas with asymmetric tail dependence and applications to financial return data. Comput. Stat. Data Anal. 2012, 56, 3659–3673. [Google Scholar] [CrossRef]
  16. Li, X.; Joe, H. Multivariate directional tail-weighted dependence measures. J. Multivar. Anal. 2024, 203, 105319. [Google Scholar] [CrossRef]
  17. Serfling, R.J. Approximation Theorems of Mathematical Statistics; Wiley: New York, NY, USA, 1980. [Google Scholar]
  18. Liseo, B.; Loperfido, N. A note on reference priors for the scalar skew-normal distribution. J. Stat. Plan. Inference 2006, 136, 373–389. [Google Scholar] [CrossRef]
  19. Azzalini, A.; Arellano-Valle, R.B. Maximum penalized likelihood estimation for skew-normal and skew-t distributions. J. Stat. Plan. Inference 2013, 143, 419–433. [Google Scholar] [CrossRef]
  20. Loaiza-Maya, R.; Smith, M.S. Variational Bayes estimation of discrete-margined copula models with application to time series. J. Comput. Graph. Stat. 2019, 28, 523–539. [Google Scholar] [CrossRef]
  21. Nguyen, H.; Ausin, M.C.; Galeano, P. Variational inference for high dimensional structured factor copulas. Comput. Stat. Data Anal. 2020, 151, 107012. [Google Scholar] [CrossRef]
  22. Krupskii, P.; Joe, H. Structured factor copula models: Theory, inference and computation. J. Multivar. Anal. 2015, 138, 53–73. [Google Scholar] [CrossRef]
  23. Stroud, A.; Secrest, D. Gaussian Quadrature Formulas; Prentice-Hall: Englewood Cliffs, NJ, USA, 1966. [Google Scholar]
  24. Kreuzer, A.; Czado, C. Bayesian inference for a single factor copula stochastic volatility model using Hamiltonian Monte Carlo. Econom. Stat. 2021, 19, 130–150. [Google Scholar] [CrossRef]
  25. Hoffman, M.D.; Gelman, A. The No-U-Turn sampler: Adaptively setting path lengths in Hamiltonian Monte Carlo. J. Mach. Learn. Res. 2014, 15, 1593–1623. [Google Scholar]
  26. Stan Development Team. Stan Modeling Language Users Guide and Reference Manual, Version 2.30.1; 2023. Available online: https://mc-stan.org/users/citations/ (accessed on 10 June 2024).
  27. Yoshiba, T. Maximum likelihood estimation of skew-t copulas with its applications to stock returns. J. Stat. Comput. Simul. 2018, 88, 2489–2506. [Google Scholar] [CrossRef]
  28. Jondeau, E.; Poon, S.H.; Rockinger, M. Financial Modeling under Non-Gaussian Distributions; Springer: London, UK, 2007. [Google Scholar]
Figure 1. Normal score plots for some pairs of GARCH-filtered stock returns. Lower and upper semi-correlations, as used in Section 2.4 of [9], show more dependence in the lower quadrant than in the upper quadrant and suggest asymmetric tail dependence.
Table 1. Matrices of tail dependence measures for 10 stock GARCH-filtered returns: model-based 1-factor BB1r, empirical, model-based 1-factor BB1, respectively. Lower (upper) tail dependence below (above) diagonal. Bootstrap standard errors (SEs) for estimates of lower and upper tail dependence are mostly in the range 0.04 to 0.075.
Model-Based Tail-Dependence Based on MLE with 1-Factor BB1r
1.000 0.258 0.193 0.017 0.176 0.093 0.135 0.037 0.039 0.233
0.400 1.000 0.267 0.022 0.242 0.126 0.184 0.049 0.051 0.326
0.449 0.470 1.000 0.018 0.181 0.096 0.139 0.038 0.040 0.240
0.257 0.268 0.298 1.000 0.016 0.009 0.013 0.004 0.004 0.021
0.353 0.369 0.413 0.239 1.000 0.088 0.127 0.035 0.037 0.218
0.449 0.471 0.533 0.299 0.414 1.000 0.069 0.020 0.020 0.114
0.334 0.349 0.390 0.227 0.310 0.391 1.000 0.027 0.029 0.167
0.328 0.342 0.382 0.223 0.304 0.383 0.288 1.000 0.008 0.045
0.383 0.401 0.450 0.258 0.354 0.451 0.335 0.329 1.000 0.047
0.434 0.455 0.513 0.289 0.400 0.514 0.378 0.370 0.435 1.000
Empirical Tail-Dependence Measures
1.000 0.418 0.259 0.087 0.196 0.233 0.200 0.160 0.144 0.239
0.397 1.000 0.238 0.141 0.275 0.285 0.247 0.144 0.163 0.333
0.362 0.382 1.000 0.118 0.268 0.326 0.275 0.108 0.202 0.347
0.184 0.168 0.217 1.000 0.130 0.142 0.080 0.189 0.157 0.079
0.283 0.310 0.344 0.231 1.000 0.178 0.209 0.111 0.132 0.290
0.281 0.364 0.494 0.132 0.307 1.000 0.182 0.078 0.161 0.339
0.235 0.274 0.375 0.207 0.320 0.301 1.000 0.060 0.131 0.308
0.267 0.275 0.219 0.234 0.207 0.273 0.201 1.000 0.137 0.109
0.246 0.312 0.373 0.156 0.262 0.333 0.151 0.279 1.000 0.198
0.284 0.293 0.450 0.224 0.301 0.438 0.255 0.284 0.276 1.000
Model-Based Tail-Dependence Based on MLE with 1-Factor BB1
1.000 0.385 0.363 0.172 0.317 0.335 0.279 0.223 0.273 0.385
0.308 1.000 0.416 0.195 0.363 0.383 0.317 0.253 0.311 0.442
0.394 0.397 1.000 0.184 0.342 0.361 0.300 0.240 0.294 0.416
0.169 0.170 0.211 1.000 0.163 0.172 0.144 0.117 0.142 0.194
0.263 0.265 0.336 0.147 1.000 0.316 0.264 0.211 0.258 0.362
0.396 0.399 0.525 0.212 0.337 1.000 0.278 0.223 0.272 0.383
0.260 0.262 0.332 0.146 0.225 0.333 1.000 0.187 0.228 0.317
0.263 0.266 0.336 0.147 0.228 0.338 0.225 1.000 0.183 0.253
0.317 0.320 0.410 0.174 0.273 0.412 0.270 0.273 1.000 0.311
0.358 0.362 0.469 0.194 0.307 0.471 0.303 0.307 0.373 1.000
Table 2. Empirical Spearman rank correlation matrix for 10 GARCH-filtered stock returns. Bootstrap SEs for Spearman are in the range 0.017 to 0.036. Model-based Spearman rhos based on 1-factor BB1r and 1-factor BB1 are quite close to the respective empirical values.
Empirical Spearman Rho Central Dependence Measures
1.000 0.745 0.597 0.321 0.534 0.584 0.439 0.444 0.567 0.565
0.745 1.000 0.633 0.376 0.605 0.620 0.491 0.519 0.577 0.639
0.597 0.633 1.000 0.323 0.551 0.709 0.581 0.506 0.561 0.723
0.321 0.376 0.323 1.000 0.347 0.378 0.352 0.508 0.362 0.388
0.534 0.605 0.551 0.347 1.000 0.569 0.494 0.390 0.501 0.558
0.584 0.620 0.709 0.378 0.569 1.000 0.557 0.498 0.558 0.720
0.439 0.491 0.581 0.352 0.494 0.557 1.000 0.325 0.420 0.599
0.444 0.519 0.506 0.508 0.390 0.498 0.325 1.000 0.467 0.485
0.567 0.577 0.561 0.362 0.501 0.558 0.420 0.467 1.000 0.567
0.565 0.639 0.723 0.388 0.558 0.720 0.599 0.485 0.567 1.000
Table 3. Summaries to indicate how well model-based tail dependence and central dependence approximate the respective empirical values. The averages and fractions are over $\binom{10}{2} = 45$ bivariate margins.
Summary | Value
average $|\hat\lambda_{jk,L} - \lambda_{jk,L}(\hat\vartheta)|$ for 1-factor BB1r | 0.088
average $|\hat\lambda_{jk,U} - \lambda_{jk,U}(\hat\vartheta)|$ for 1-factor BB1r | 0.101
average $|\hat\rho_{jk,S} - \rho_{jk,S}(\hat\vartheta)|$ for 1-factor BB1r | 0.035
average $|\hat\lambda_{jk,L} - \lambda_{jk,L}(\hat\vartheta)|$ for 1-factor BB1 | 0.043
average $|\hat\lambda_{jk,U} - \lambda_{jk,U}(\hat\vartheta)|$ for 1-factor BB1 | 0.088
average $|\hat\rho_{jk,S} - \rho_{jk,S}(\hat\vartheta)|$ for 1-factor BB1 | 0.036
average $(\hat\lambda_{jk,L} - \hat\lambda_{jk,U})$ | 0.088
average $(\lambda_{jk,L}(\hat\vartheta) - \lambda_{jk,U}(\hat\vartheta))$ for 1-factor BB1r | 0.275
average $(\lambda_{jk,L}(\hat\vartheta) - \lambda_{jk,U}(\hat\vartheta))$ for 1-factor BB1 | 0.021
fraction $\hat\lambda_{jk,L} > \hat\lambda_{jk,U}$ | 40/45
fraction $\lambda_{jk,L}(\hat\vartheta) > \lambda_{jk,U}(\hat\vartheta)$ for 1-factor BB1r | 45/45
fraction $\lambda_{jk,L}(\hat\vartheta) > \lambda_{jk,U}(\hat\vartheta)$ for 1-factor BB1 | 29/45
Table 4. Closeness of model-based (ML or posterior modal $\eta_L, \eta_U$) values to corresponding empirical values, for the lower tail dependence $\lambda_{jk,L}$, upper tail dependence $\lambda_{jk,U}$, and central dependence $\rho_{jk,S}$ parameters of the $d(d-1)/2$ pairs; $d = 10$ GARCH-filtered stock returns. The quantities in columns 2 to 4 are average absolute differences over the $d(d-1)/2$ pairs.
Case | $\lambda_L$ | $\lambda_U$ | $\rho_S$ | Objective (2)
BB1r, MLE | 0.094 | 0.116 | 0.034 |
BB1r, prior (4) | 0.077 | 0.043 | 0.034 | 5766
BB1r, prior (5) | 0.081 | 0.054 | 0.034 | 5766
BB1r, prior (6) | 0.074 | 0.046 | 0.035 | 5743
BB1, prior (4) | 0.078 | 0.078 | 0.036 | 5737
BB7, prior (4) | 0.140 | 0.134 | 0.062 | 5549
Table 5. Posterior mode and standard deviation (SD) of the $\eta_{1j}, \eta_{2j}$ parameters. Note that $\mu_{\eta_{1j}} > \mu_{\eta_{2j}}$ implies stronger lower tail dependence than upper tail dependence for variable $j$ with the latent variable, and $\mu_{\eta_{1j}} > 0$ means that the estimated lower tail dependence with the latent variable exceeds 0.5. The SD values $\sigma$ come from the square roots of the diagonal of the negative inverse Hessian at the mode. The correlation values for each diagonal $2 \times 2$ block come from converting the covariance matrix to a correlation matrix.
Variable $j$ | $\mu_{\eta_{1j}}$ | $\mu_{\eta_{2j}}$ | $\sigma_{\eta_{1j}}$ | $\sigma_{\eta_{2j}}$ | $\rho_{\eta_{1j},\eta_{2j}}$
1 | 0.292 | −0.351 | 0.081 | 0.202 | −0.398
2 | −0.308 | −0.797 | 0.101 | 0.217 | −0.315
3 | 0.447 | −0.010 | 0.081 | 0.186 | −0.442
4 | −0.771 | −1.422 | 0.119 | 0.243 | −0.163
5 | 0.250 | −0.904 | 0.076 | 0.227 | −0.274
6 | 0.758 | −0.261 | 0.068 | 0.215 | −0.373
7 | 0.435 | −0.390 | 0.076 | 0.209 | −0.379
8 | 0.560 | −0.165 | 0.075 | 0.200 | −0.411
9 | 0.553 | −0.137 | 0.075 | 0.200 | −0.422
10 | 0.657 | −0.314 | 0.070 | 0.213 | −0.372
11 | 0.803 | −0.255 | 0.067 | 0.215 | −0.366
12 | 0.460 | −0.755 | 0.071 | 0.224 | −0.292
13 | −0.108 | −1.312 | 0.086 | 0.234 | −0.197
14 | 0.134 | −0.859 | 0.081 | 0.226 | −0.292
15 | 0.253 | −0.637 | 0.079 | 0.219 | −0.336
16 | 0.600 | 0.101 | 0.078 | 0.186 | −0.450
17 | −0.149 | −1.240 | 0.089 | 0.233 | −0.216
18 | 0.533 | −0.036 | 0.078 | 0.192 | −0.437
19 | 0.107 | −0.734 | 0.083 | 0.221 | −0.323
20 | −0.393 | −1.228 | 0.101 | 0.235 | −0.215
Table 6. Part of the negative inverse Hessian at the posterior mode, that is, the posterior covariance for $\mu_{\eta_{1j}}, \mu_{\eta_{2j}}$, $j \in \{1, 7, 13\}$. Note the near block-diagonal form: the matrix is dominated by the diagonal $2 \times 2$ blocks.
$j = 1$ | $j = 7$ | $j = 13$ (two columns each)
0.00662 −0.00655 −0.00002 −0.00001 0.00005 −0.00003
−0.00655 0.04093 −0.00002 −0.00025 0.00001 0.00052
−0.00002 −0.00002 0.00570 −0.00600 0.00005 0.00000
−0.00001 −0.00025 −0.00600 0.04388 0.00002 0.00029
0.00005 0.00001 0.00005 0.00002 0.00737 −0.00396
−0.00003 0.00052 0.00000 0.00029 −0.00396 0.05473
Table 7. Average of $\mathrm{rms}_0 - \mathrm{rms}_m$ in (8) over 100 simulated data sets. Also included are the fraction of times that the posterior mode from (2) is closer to the "true" vector than the MLE, and the lower/upper quartiles of (8).
$m$ | Average $\mathrm{rms}_0 - \mathrm{rms}_m$ | Fraction $\mathrm{rms}_0 > \mathrm{rms}_m$ | Q1 $\mathrm{rms}_m$ | Q3 $\mathrm{rms}_m$
0 | | | 0.085 | 0.101
1 | 0.023 | 0.95 | 0.066 | 0.079
2 | 0.019 | 0.92 | 0.071 | 0.082
3 | 0.013 | 0.80 | 0.074 | 0.091
Table 8. Average of $\Delta_{M,0} - \Delta_{M,m}$ in (9) over 100 simulated data sets. Also included are the fraction of times that the posterior mode from (2) improves on the MLE based on (9), and the lower/upper quartiles of (9).
$m$ | $M$ | avg $\Delta_{M,0} - \Delta_{M,m}$ | Fraction $\Delta_{M,m} < \Delta_{M,0}$ | Q1 $\Delta_{M,m}$ | Q3 $\Delta_{M,m}$
0 | L | | | 0.059 | 0.078
1 | L | 0.001 | 0.46 | 0.060 | 0.077
2 | L | 0.001 | 0.48 | 0.060 | 0.077
3 | L | 0.000 | 0.37 | 0.060 | 0.079
0 | U | | | 0.073 | 0.129
1 | U | 0.011 | 0.87 | 0.065 | 0.116
2 | U | 0.004 | 0.70 | 0.070 | 0.122
3 | U | 0.011 | 0.92 | 0.065 | 0.114