Article

A Maximum Entropy Approach to Loss Distribution Analysis

Department of Economics and Management, University of Trento, via Inama 5, Trento 38122, Italy
Entropy 2013, 15(3), 1100-1117; https://doi.org/10.3390/e15031100
Submission received: 16 February 2013 / Revised: 14 March 2013 / Accepted: 18 March 2013 / Published: 22 March 2013

Abstract

In this paper we propose an approach to the estimation and simulation of loss distributions based on Maximum Entropy (ME), a non-parametric technique that maximizes the Shannon entropy of the data under moment constraints. Special cases of the ME density correspond to standard distributions; therefore, this methodology is very general as it nests most classical parametric approaches. Sampling the ME distribution is essential in many contexts, such as loss models constructed via compound distributions. Given the difficulties in carrying out exact simulation, we propose an innovative algorithm, obtained by means of an extension of Adaptive Importance Sampling (AIS), for the approximate simulation of the ME distribution. Several numerical experiments confirm that the AIS-based simulation technique works well, and an application to insurance data gives further insight into the usefulness of the method for modelling, estimating and simulating loss distributions.

1. Introduction

The Maximum Entropy (ME) method is a powerful non-parametric technique for density approximation, first proposed by [1] in an information theory setup. By maximizing the Shannon entropy of the data subject to moment constraints, it provides the best-fitting distribution when the only available information is the first k empirical moments, where “best-fitting” is defined in terms of maximum entropy. An appealing feature is that, besides working well in setups where a parametric approach would fail, the ME density nests most commonly used parametric distributions. Thus, when the true data-generating process is a parametric model but the investigator is unable or unwilling to assume it a priori, the ME density identifies it a posteriori, i.e., according to the empirical properties of the data. It also provides a rough measure of the distance between a specific parametric model and the true data-generating process, as estimated by the ME distribution. In other words, the method gives a “data-driven” indication of whether a parametric or a non-parametric approach should be employed. This is a distinctive feature that makes it more interesting than other non-parametric techniques.
Estimation of the ME distribution may be performed in a relatively straightforward manner by means of a sequential updating algorithm proposed by [2]. On the other hand, exact simulation is not trivial. In general, the distribution function does not exist in closed form, so that the inverse transform method is computationally very expensive. An accept-reject scheme can be used for specific versions of the ME density (that is, for any given value of the parameters and of the number of constraints k), but is difficult to set up in general, because the density can have very different tail behaviors, and therefore an instrumental density that works when the tail is heavy would be inefficient for light-tailed versions of the distribution. Thus, the method would require different implementations for each member of the ME family, making it quite impractical.
In this paper we resort to Adaptive Importance Sampling (AIS) for approximate simulation of the ME density. The version of AIS based on mixture distributions was first introduced by [3]. [4] extend it to general mixture classes; recently, it has been employed for estimation of cosmological parameters [5] and for simulating copulas [6]. The method is a generalization of standard Importance Sampling (IS) because, at each iteration, a sample is simulated from the instrumental density and used to improve the IS density itself. So, “adaptivity” means that the instrumental density sequentially provides a progressively better approximation of the target density. It also rigorously formalizes the well-known capacity of mixtures to approximate most distributions.
The implementation of AIS proposed by [3] is based on mixtures of normal or Student-t densities. In this paper we develop a modification using mixtures of lognormal distributions, which is necessary for the simulation of densities supported on [0, +∞). The AIS-based simulation methodology fits the ME setup perfectly, because it only requires knowledge of the density function. Moreover, it is easily implemented, being based on simulation of lognormal random variables, and extremely accurate, as will be shown by means of numerical experiments.
The ME density is especially appropriate as a model for loss distributions in contexts where there are no theoretical reasons for choosing a specific parametric model. For example, although the ME approach can give a satisfactory approximation of the P&L distribution of the log-returns obtained from equity prices, in that case the distribution is expected to be normal according to standard financial theory, which is largely based on the hypothesis of Geometric Brownian Motion in continuous time. Thus, the use of the ME approach (similarly to the use of other ad hoc parametric distributions) is grounded on empirical reasons, but may be difficult to justify theoretically. On the other hand, the distribution of claims in non-life insurance or of losses in operational risk is usually not expected to belong to any specific parametric family; therefore, in the absence of theoretical a priori information about the model, a non-parametric methodology such as ME can be very useful, as the choice has to be motivated mostly by the empirical features of the data. Finally, from the point of view of risk management, being able to sample the ME distribution is crucial: although in simple cases risk measures can be computed by numerical integration, in a loss model obtained via a compound distribution, simulation is often the only way of measuring risk.
The main contribution of this paper is threefold. First, we extend the AIS methodology to the setup where the target distribution is supported on [0, +∞) instead of ℝ. Second, we use this technique for approximate simulation of the ME density. In this way, we overcome the difficulties encountered in exact simulation of the ME distribution. Finally, we show that the combination of ME and AIS provides an effective non-parametric approach for modelling different kinds of loss distributions and assessing the improvement with respect to classical parametric distributions.
The rest of this work is organized as follows. Section 2 reviews the ME method. Section 3 details the AIS approach to the simulation of the ME density. Section 4 shows selected results of the simulation experiments and applies the technique to the estimation and simulation of the distribution of indemnity payments. A thorough analysis of this example shall show the precision of the AIS simulation method and the merits of the ME approach in fitting the data and identifying the appropriateness or inappropriateness of standard parametric approaches. Finally, Section 5 concludes.

2. Background about the Maximum Entropy Method

The Maximum Entropy method is best described by Jaynes’ (1957) words: the ME distribution is “uniquely determined as the one which is maximally noncommittal with respect to missing information, and it agrees with what is known, but expresses maximum uncertainty with respect to all other matters”. The ME density takes the form
f(x) = \exp\left\{ -\sum_{i=0}^{k} \lambda_i g_i(x) \right\}     (1)
where k is the number of moment constraints and the g_i's are the functional forms of the so-called “characterizing moments”. In most cases, they are the arithmetic or logarithmic moments, corresponding respectively to g_i(x) = x^i and g_i(x) = (log x)^i, but other choices are possible [7]. The k + 1 parameters are customarily denoted by the Greek letter λ because they are the Lagrange multipliers obtained by solving the maximization problem
\max_f W(f) = -\int f(x) \log f(x)\, dx     (2)
under the constraints
\int g_i(x) f(x)\, dx = \hat{\mu}_i, \qquad i = 0, 1, \ldots, k
where f is a density, W is the Shannon entropy associated with f and μ̂_i is the sample counterpart of the i-th characterizing moment (with g_0(x) = 1 and μ̂_0 = 1 enforcing normalization). It can be shown [7] that the solution, called the ME density, takes the form of Equation (1).
Despite its interesting properties, only recently has the method been used more extensively in economics and finance ([8,9,10,11,12,13]). One reason is that Equation (2) cannot be solved analytically for k ≥ 2, and the preferred algorithm for finding a solution, namely the Newton–Raphson algorithm, requires a very precise initialization to reach convergence, even for moderately large k [2]. However, [2] has developed a generalization of the algorithm that makes it easier to find the optimal density. The idea is that there is no need to impose all the moment constraints simultaneously. Instead, one can impose the constraints one at a time, from the lowest to the highest moment, and update the ME density sequentially every time a new constraint is taken into account. The advantage of such a procedure is twofold: first, the maximization subject to few moment constraints can easily be carried out with standard Newton–Raphson. Second, the moments are not independent, so the estimates of the first k parameters will typically change little when considering the ME density of order k + 1, and can therefore be used as starting values for the (k + 1)-order problem.
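To make the sequential procedure concrete, here is a minimal Python sketch (not the author's code) that fits a logarithmic ME(k) density by minimizing the convex dual of the entropy problem, adding one moment constraint at a time and warm-starting from the previous solution, in the spirit of [2]. The function name, the choice of numerical integrator and the derivative-free optimizer are implementation assumptions, not prescriptions from the paper.

```python
import numpy as np
from scipy import integrate, optimize

def fit_me_density(data, k, lower=None, upper=None):
    # Fit a logarithmic ME(k) density f(x) = exp(-lambda_0 - sum_i lambda_i (log x)^i)
    # on (lower, upper) by matching the first k empirical log-moments.
    x = np.asarray(data, dtype=float)
    lower = x.min() if lower is None else lower
    upper = x.max() if upper is None else upper
    mu_hat = np.array([np.mean(np.log(x) ** i) for i in range(1, k + 1)])

    def log_z(lam):
        # log of the normalizing constant: int exp(-sum_i lam_i (log t)^i) dt
        val, _ = integrate.quad(
            lambda t: np.exp(-sum(l * np.log(t) ** (i + 1) for i, l in enumerate(lam))),
            lower, upper, limit=200)
        return np.log(val)

    def dual(lam):
        # convex dual of the ME problem: its minimizer matches the log-moments
        return log_z(lam) + float(np.dot(lam, mu_hat[:len(lam)]))

    lam = np.zeros(0)
    for s in range(1, k + 1):          # impose constraints one at a time (Wu, 2003)
        lam = np.append(lam, 0.0)      # warm start from the ME(s - 1) solution
        lam = optimize.minimize(dual, lam, method="Nelder-Mead").x
    lam0 = log_z(lam)                  # lambda_0 is the log normalizing constant
    return np.concatenate(([lam0], lam))
```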
The other relevant problem in practical applications is the choice of the “optimal” value of k. In general, a larger number of constraints results in a more precise approximation, but the estimation of more parameters introduces further noise and may lead to overfitting. There are two possible ways of proceeding.
Since the maximized log-likelihood is equal to −n ∑_{i=0}^{k} λ̂_i^{(k)} μ̂_i, where the λ̂_i^{(k)}'s are the parameters of the fitted ME(k) density and n is the sample size, a log-likelihood ratio (llr) test is easily computed. The test of the hypothesis k = s (s = 1, 2, …) against k = s + 1 is given by llr = −2n ( ∑_{i=0}^{s+1} λ̂_i^{(s+1)} μ̂_i − ∑_{i=0}^{s} λ̂_i^{(s)} μ̂_i ); from standard limiting theory, its asymptotic distribution is χ²_1. The model-selection procedure would thus be based on the following steps: (a) estimate sequentially the ME density with s = 1, 2, …; (b) perform the test for each value of s; (c) stop at the first value of s (s_0, say) such that the hypothesis s = s_0 cannot be rejected and conclude that the optimal value of k (from now on denoted by k*) is equal to s_0.
The preceding solution does not take into account the cost of estimating a model with a larger number of parameters. A possible remedy consists in computing an information criterion, such as the Akaike (AIC) or Bayesian (BIC) Information Criterion. To avoid overfitting, one can stop at the value k * such that at least one of the following two conditions holds: (1) the llr test cannot reject the hypothesis k = k * ; (2) the numerical value of AIC ( k * + 1 ) [or BIC ( k * + 1 ) ] is larger than the numerical value of AIC ( k * ) [or BIC ( k * ) ]. In the following we will mostly employ this approach.
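As an illustration only (helper and variable names are hypothetical), the llr test and the information criteria can be computed directly from the fitted parameters and the empirical characterizing moments, using the log-likelihood expression above:

```python
import numpy as np
from scipy.stats import chi2

def me_model_selection(lambdas_by_k, mu_hat, n):
    # lambdas_by_k: {k: array of length k + 1 with the fitted ME(k) parameters}
    # mu_hat: empirical characterizing moments with mu_hat[0] = 1 (the constant g_0)
    out = {}
    for k in sorted(lambdas_by_k):
        loglik = -n * float(np.dot(lambdas_by_k[k], mu_hat[:k + 1]))  # maximized log-lik
        out[k] = {"loglik": loglik,
                  "AIC": -2 * loglik + 2 * (k + 1),
                  "BIC": -2 * loglik + (k + 1) * np.log(n)}
    ks = sorted(out)
    for s, s1 in zip(ks, ks[1:]):            # llr test of H0: k = s against k = s + 1
        llr = -2.0 * (out[s]["loglik"] - out[s1]["loglik"])
        out[s1]["llr"], out[s1]["p-value"] = llr, chi2.sf(llr, df=1)
    return out
```

Whether AIC and BIC are reported as totals or divided by the sample size is a convention; the stopping rule described above is unaffected by this choice.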
The ME method is of great practical importance when the distribution to be approximated is non-standard, with features such as high skewness and/or kurtosis and heavy tails: in this case the ME distribution is able to capture them, whereas a standard parametric model would fail. In such cases, it may be necessary to consider many terms in the exponent of the ME density, with the consequence that the application of standard Newton–Raphson is unfeasible and Wu's (2003) sequential algorithm is the only way of estimating the parameters.
On the other hand, classical distributions, such as the normal, lognormal, exponential, Pareto and others are obtained (exactly) from Equation (1) for small values of k, so that if the empirical distribution is “similar” to one of them, usually one or two moments will suffice to find a very good approximation. In this case estimation results based on the ME distribution and on the correct parametric model are essentially the same.
For the purposes of risk analysis of loss models, the most important distributions encompassed by the ME distribution are the lognormal and the Pareto. The lognormal Logn(μ, σ²) is a logarithmic ME density with k = 2; the values of the parameters are
\lambda_0 = \frac{\mu^2}{2\sigma^2} + \log \sigma + \frac{1}{2} \log(2\pi), \qquad \lambda_1 = 1 - \frac{\mu}{\sigma^2}, \qquad \lambda_2 = \frac{1}{2\sigma^2}     (3)
The Pareto Par(c, α) is a logarithmic ME density with k = 1 and parameters
\lambda_0 = -\log(\alpha c^{\alpha}), \qquad \lambda_1 = \alpha + 1     (4)
See [7] for further details as well as for other examples.
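As a quick numerical sanity check (a sketch, not part of the paper), one can verify that plugging the parameter values of Equations (3) and (4) into Equation (1) reproduces the lognormal and Pareto densities; the parameter values below are arbitrary test values:

```python
import numpy as np
from scipy.stats import lognorm, pareto

x = np.linspace(6.0, 50.0, 5)   # evaluation points (above the Pareto scale chosen below)

# lognormal Logn(mu, sigma^2) as a logarithmic ME(2) density, Equation (3)
mu, sigma = 0.3, 0.8
lam = [mu**2 / (2 * sigma**2) + np.log(sigma) + 0.5 * np.log(2 * np.pi),
       1 - mu / sigma**2,
       1 / (2 * sigma**2)]
me2 = np.exp(-(lam[0] + lam[1] * np.log(x) + lam[2] * np.log(x) ** 2))
assert np.allclose(me2, lognorm.pdf(x, s=sigma, scale=np.exp(mu)))

# Pareto Par(c, alpha) as a logarithmic ME(1) density, Equation (4)
c, alpha = 5.0, 1.5
lam0, lam1 = -np.log(alpha * c**alpha), alpha + 1
me1 = np.exp(-(lam0 + lam1 * np.log(x)))
assert np.allclose(me1, pareto.pdf(x, b=alpha, scale=c))
```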
This discussion suggests using the ME approach as a procedure for verifying the appropriateness of a certain parametric model and/or assessing the distance of the data at hand from a given distribution: for example, when the investigator is uncertain between the Pareto and the lognormal distribution, a possible way of proceeding consists in fitting sequentially the ME(k) distribution with k = 1, 2, …, stopping at the smallest value of k such that the llr test cannot reject the null hypothesis. The data-generating process is Pareto if the test accepts k = 1, lognormal if it accepts k = 2, and neither Pareto nor lognormal if it accepts k > 2; moreover, a large value of k implies a large difference between the lognormal and the fitted ME distribution. See [14] for details.

3. Approximate Simulation of the ME Distribution

The ME distribution function F ( x ) does not exist in closed form, because the density (1) cannot be integrated analytically. Resorting to numerical integration may be reasonable for computing quantiles of X, but is rather cumbersome for the implementation of the inverse transform method. On the other hand, the flexibility of the ME distribution and, in particular, its widely varying tail-heaviness, make it difficult to find an envelope instrumental distribution general enough to set up an accept-reject scheme that works efficiently for any ME density.
Given these difficulties in simulating the ME distribution exactly, we propose a method based on Adaptive Importance Sampling (AIS; see [15,16,17]). The implementation of AIS used here was developed by [3,4] in a Bayesian setup and can also be used for approximate simulation of any absolutely continuous random variable [6]. Instead of a fixed IS density g, AIS finds a sequence of importance densities g^(t) (t = 1, …, T) aimed at approximating the target density f. Thus, even starting from an initial instrumental density that is quite different from the optimal one, the algorithm eventually provides the most efficient (in a sense that shall be clarified below) IS density. A pseudo-code description of the algorithm is as follows:
  • Simulate a first sample x_n^(1) = (x_1^(1), …, x_n^(1)) ∼ g^(1) by means of standard IS; compute the IS weights w_j^(1) = f(x_j^(1)) / g^(1)(x_j^(1)), j = 1, …, n.
  • Use this sample to approximate the moments of f and construct the updated importance function g^(2).
  • Measure the goodness of the approximation by means of the relative entropy (or Kullback–Leibler divergence; see [18], Section 3.3) from the target:

    K(f \,\|\, g^{(2)}) = \int \log\!\left( \frac{f(x)}{g^{(2)}(x)} \right) f(x)\, dx     (5)

  • “Adjust” the density g^(2) so that K(f || g^(2)) ≤ K(f || g^(1)). More formally, compute min_θ K(f || g^(2)), where θ is the vector of parameters of g^(2).
  • Repeat the preceding steps until some convergence criterion is met.
The functional form of g must satisfy the requirements of being positive for all values in the support of the target and making minimization of Equation (5) as straightforward as possible. [3,4] show that a convenient instrumental density is a finite mixture of normal distributions, namely
g^{(t)}(x) = g(x; \pi^{(t)}, \mu^{(t)}, \sigma^{2(t)}) = \sum_{d=1}^{D} \pi_d^{(t)} \, \phi_d(x; \mu_d^{(t)}, \sigma_d^{2(t)})     (6)
where π^(t) = (π_1^(t), …, π_D^(t)), μ^(t) = (μ_1^(t), …, μ_D^(t)) and σ^2(t) = (σ_1^2(t), …, σ_D^2(t)) are respectively the vectors of weights, expected values and variances of the D mixture components. If the target density is p-variate with p ≥ 2, Equation (6) is a p-variate normal mixture and the dimensions of the normal parameters change accordingly.
All parameters of Equation (6) are adaptable, that is, they are supposed to be updated at each iteration of the algorithm. Furthermore, it is possible to use mixtures of different densities: [3,4] extend the algorithm to mixtures of Student-t densities. The key issue is the minimization of the relative entropy at step 4. Clearly, this depends on the choice of the instrumental density g: a detailed analysis when g is a Gaussian mixture is presented below (see the description of Algorithm 2). We can now give a more precise formulation of the algorithm.
Algorithm 1 (Adaptive Importance Sampling)
  1. For t = 1:
     (a) Choose an importance function g^(1);
     (b) Simulate x_n^(1) = (x_1^(1), …, x_n^(1)) independently from g^(1);
     (c) Compute the importance weights w_1^(1), …, w_n^(1), where w_j^(1) = f(x_j^(1)) / g^(1)(x_j^(1)).
  2. For t > 1:
     (a) Update the importance function to g^(t+1) according to the minimum cross-entropy criterion, using the previous weighted sample (x_1^(t), w_1^(t)), …, (x_n^(t), w_n^(t));
     (b) Simulate x_n^(t+1) = (x_1^(t+1), …, x_n^(t+1)) independently from g^(t+1);
     (c) Compute the importance weights w_1^(t+1), …, w_n^(t+1).
An approximate sample from the target density is obtained by sampling with replacement from g^(t) and weighting the observations by means of the vector w^(t). However, this algorithm cannot be directly applied to the simulation of the ME density based on logarithmic moments. The reason is that the support of a normal (or Student-t) mixture is the real line, whereas the support of the logarithmic ME density is [0, +∞). It follows that the approximation is necessarily bad because, when simulating from the mixture, there is a non-zero probability of obtaining negative values. To overcome this difficulty, we need an instrumental distribution with support [a, b), with a ≥ 0 and b ≤ +∞. Fortunately, there is a convenient solution, whose great advantage is that it leaves unchanged the updating equations obtained for a normal mixture. Assume the instrumental density is a finite mixture of lognormal distributions: g(x) = ∑_{d=1}^{D} π_d g(x; μ_d, σ_d²), where g(x; μ_d, σ_d²) is the lognormal density with parameters μ_d and σ_d²; it is immediate to verify that, if X has density g, then Y = log X has the density in Equation (6), namely a normal mixture where both the mixing weights and the parameters of the component densities are the same as those of g. For the setup at hand, Algorithm 1 can thus be rewritten as Algorithm 2 below.
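Before stating the algorithm, note that the claim about Y = log X is just a change of variables (with φ denoting the normal density, as in Equation (6)):

f_Y(y) = f_X(e^y)\, e^y = \sum_{d=1}^{D} \pi_d \, \frac{1}{e^y \sigma_d \sqrt{2\pi}} \exp\!\left\{ -\frac{(y - \mu_d)^2}{2\sigma_d^2} \right\} e^y = \sum_{d=1}^{D} \pi_d \, \phi(y; \mu_d, \sigma_d^2)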
Algorithm 2 (Lognormal mixture Adaptive Importance Sampling)
  1. For t = 1:
     (a) Let g^(1)(x) = ∑_{d=1}^{D} π_d g(x; μ_d, σ_d²), where g(·; μ_d, σ_d²) is the Logn(μ_d, σ_d²) density;
     (b) Simulate x_n^(1) = (x_1^(1), …, x_n^(1)) independently from g^(1);
     (c) Compute the importance weights w_1^(1), …, w_n^(1), where w_j^(1) = f(x_j^(1)) / g^(1)(x_j^(1)).
  2. For t > 1:
     (a) Update the importance function to g^(t+1) according to the minimum cross-entropy criterion, using the logarithm of the previous weighted sample (x_1^(t), w_1^(t)), …, (x_n^(t), w_n^(t));
     (b) Simulate x_1^(t+1), …, x_n^(t+1) independently from g^(t+1);
     (c) Compute the importance weights w_1^(t+1), …, w_n^(t+1).
It is worth stressing that at step 2(a) we take the logarithm of the current sample, which follows a normal mixture distribution, so that we can update π, μ = (μ_1, …, μ_D) and σ² = (σ_1², …, σ_D²) using the updating equations of the normal mixture. Then, at step 2(b), we use the updated values of the parameters for simulating the lognormal mixture.
The main technical issue is the update of the importance function g at each iteration. When it is a p-variate normal mixture, the algorithm works as follows. At iteration t, the importance weights associated with the sample x 1 ( t ) , , x n ( t ) are given by
w_j^{(t)} = \frac{f(x_j^{(t)})}{\sum_{d=1}^{D} \pi_d^{(t)} \, g(x_j^{(t)}; \mu_d^{(t)}, \sigma_d^{2(t)})}, \qquad j = 1, \ldots, n
and the normalized weights are w̄_j^(t) = w_j^(t) / ∑_{j=1}^{n} w_j^(t). Now let y = log x. Similarly to [3], the update is performed by iterating, for each i = 1, …, D, the equations:
\pi_i^{(t+1)} = \sum_{j=1}^{n} \bar{w}_j^{(t)} \, \tau_i(y_j^{(t)}; \pi^{(t)}, \mu^{(t)}, \sigma^{2(t)})

\mu_i^{(t+1)} = \frac{\sum_{j=1}^{n} \bar{w}_j^{(t)} \, y_j^{(t)} \, \tau_i(y_j^{(t)}; \pi^{(t)}, \mu^{(t)}, \sigma^{2(t)})}{\pi_i^{(t+1)}}

\sigma_i^{2(t+1)} = \frac{\sum_{j=1}^{n} \bar{w}_j^{(t)} \, (y_j^{(t)} - \mu_i^{(t+1)})^2 \, \tau_i(y_j^{(t)}; \pi^{(t)}, \mu^{(t)}, \sigma^{2(t)})}{\pi_i^{(t+1)}}
where
\tau_i(y; \pi, \mu, \sigma^2) = \frac{\pi_i \, \phi(y; \mu_i, \sigma_i^2)}{\sum_{d=1}^{D} \pi_d \, \phi(y; \mu_d, \sigma_d^2)}
Convergence of the algorithm has been established by [4]. In practice, to determine a stopping criterion, the goodness of the approximation must be evaluated at each iteration. An approximate diagnostic directly related to the relative entropy in Equation (5) is the normalized perplexity perp_n = exp{H_n}/n, where H_n = −∑_{j=1}^{n} w̄_j log w̄_j is the Shannon entropy of the normalized weights w̄_j. ([4], Section 2.1) show that perp_n is an estimator of exp{−K(f || g)} and that 0 ≤ perp_n ≤ 1. On average, the normalized perplexity increases at each iteration, so that it is reasonable to stop the algorithm when it cannot be increased further (hence the entropy cannot be decreased further). Thus, the algorithm can be stopped when the normalized perplexity does not change significantly over some (say, five) successive iterations or a predefined “large” number of iterations is reached. Alternatively, it is possible to fix in advance a perplexity value x_perp such that the algorithm stops the first time the normalized perplexity reaches x_perp: this guarantees the desired accuracy level.
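The following Python sketch puts Algorithm 2 and the stopping rule together. It is a minimal illustration under stated assumptions, not the author's implementation: log_f is assumed to be the (vectorized) log of the possibly unnormalized target density on (0, ∞), the default starting values are arbitrary, and the stopping rule simply monitors changes in the normalized perplexity.

```python
import numpy as np

def lognormal_mixture_ais(log_f, n=10_000, D=7, pi=None, mu=None, sigma2=None,
                          max_iter=50, perp_tol=1e-3, rng=None):
    # Sketch of Algorithm 2: parameters are updated on y = log(x) with the
    # normal-mixture cross-entropy equations; sampling is on the lognormal scale.
    rng = np.random.default_rng() if rng is None else rng
    pi = np.full(D, 1.0 / D) if pi is None else np.asarray(pi, float)
    mu = np.linspace(1.0, 7.0, D) if mu is None else np.asarray(mu, float)
    sigma2 = np.full(D, 1.0) if sigma2 is None else np.asarray(sigma2, float)
    prev_perp = 0.0
    for t in range(max_iter):
        # step (b): simulate from the current lognormal mixture g^(t)
        comp = rng.choice(D, size=n, p=pi)
        y = rng.normal(mu[comp], np.sqrt(sigma2[comp]))          # y = log x
        x = np.exp(y)
        # step (c): importance weights w_j = f(x_j) / g^(t)(x_j)
        phi = np.exp(-0.5 * (y[:, None] - mu) ** 2 / sigma2) / np.sqrt(2 * np.pi * sigma2)
        g_x = (phi * pi).sum(axis=1) / x                         # lognormal mixture density
        w = np.exp(log_f(x)) / g_x
        wbar = w / w.sum()
        # stopping rule: monitor the normalized perplexity
        H = -np.sum(wbar * np.log(np.where(wbar > 0, wbar, 1.0)))
        perp = np.exp(H) / n
        if abs(perp - prev_perp) < perp_tol:
            break
        prev_perp = perp
        # step (a) of the next iteration: cross-entropy update on the log scale
        tau = phi * pi
        tau /= tau.sum(axis=1, keepdims=True)                    # responsibilities tau_i(y_j)
        pi = np.maximum((wbar[:, None] * tau).sum(axis=0), 1e-12)
        mu = (wbar[:, None] * tau * y[:, None]).sum(axis=0) / pi
        sigma2 = (wbar[:, None] * tau * (y[:, None] - mu) ** 2).sum(axis=0) / pi
        pi = pi / pi.sum()
    # approximate sample from the target: resample with the final weights
    idx = rng.choice(n, size=n, replace=True, p=wbar)
    return x[idx], (pi, mu, sigma2), perp
```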
In a simulation setup such as the present one, we are interested in assessing whether the simulated data can actually be treated as generated by the ME density. To this aim, we shall also test the discrepancy between the simulated data and the cdf of the ME density, computed numerically, by means of the Kolmogorov–Smirnov test.
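For instance, this check can be coded as follows (a sketch under stated assumptions: the ME cdf is obtained by numerical integration of the fitted density, scipy's kstest accepts any callable cdf, and the helper name is illustrative):

```python
import numpy as np
from scipy import integrate, stats

def me_cdf_factory(lambdas, lower, upper):
    # numerical cdf of a logarithmic ME density with parameters `lambdas` on (lower, upper)
    def pdf(t):
        return np.exp(-sum(l * np.log(t) ** i for i, l in enumerate(lambdas)))
    Z, _ = integrate.quad(pdf, lower, upper, limit=200)
    def cdf(x):
        x = np.clip(np.atleast_1d(x), lower, upper)
        return np.array([integrate.quad(pdf, lower, xi, limit=200)[0] / Z for xi in x])
    return cdf

# usage sketch: `sample` from the AIS routine, `lam` the fitted ME(k) parameters
# ks_stat, p_value = stats.kstest(sample, me_cdf_factory(lam, sample.min(), sample.max()))
```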

4. Simulation and Application

In this section we perform some simulation experiments and analyze a real dataset with four aims: (i) determining the goodness of fit of the ME distribution; (ii) studying the relationship of the goodness of fit to the number of moment constraints k; (iii) evaluating the precision of the AIS-based simulation methodology; (iv) assessing the comparative performance of the ME and of two commonly used parametric approaches.

4.1. Simulation

If the true distribution is a member of one of the parametric families encompassed by the ME density, the estimates should be close to the parameter values corresponding to that family. We check this for the versions of the ME distribution corresponding to the Lognormal and Pareto distributions.

4.1.1. Simulate Lognormal

For B = 10,000 replications, we simulate n = 1000 observations from the standard Lognormal distribution and fit the ME density. The Lognormal distribution is a logarithmic ME density with k = 2 (see Section 2). We estimate the ME density with k ∈ {2, 3, 4} and use the likelihood ratio criterion to determine the appropriate value of k (k*, say). Using Equation (3), we know that, when μ = 0 and σ² = 1, the true values of the parameters are λ_0 = (1/2) log(2π) = 0.9189, λ_1 = 1 and λ_2 = 1/2.
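For concreteness, a single replication could look as follows (a sketch reusing the fit_me_density helper sketched in Section 2; the seed is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.lognormal(mean=0.0, sigma=1.0, size=1000)   # one Logn(0, 1) sample
lam = fit_me_density(x, k=2)                        # helper sketched in Section 2
# lam should be roughly (0.919, 1, 0.5), up to sampling and truncation error
```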
Table 1. Averages of estimated parameters of the ME density across 10,000 replications of the simulation of the Logn(0, 1) distribution. Sample size is n = 1000 at each replication.
      | k = 1  | k = 2  | k = 3  | k = 4
λ_0   | 1.855  | 0.926  | 0.925  | 0.905
λ_1   | 1      | 1.001  | 1.002  | 1.002
λ_2   |        | 0.492  | 0.493  | 0.537
λ_3   |        |        | −0.000 | −0.000
λ_4   |        |        |        | −0.008
Table 1 and Table 2 respectively show the averages of the estimated parameters and the values of the tests; llr is the log-likelihood ratio test (p-values in parentheses) and ST is the indistinguishability test proposed by [19]. All the tests give a clear indication in favor of the ME(2) (lognormal) distribution. The values of the estimated parameters in Table 1 for k = 2 also confirm that the ME coincides with the lognormal distribution.
Table 2. Averages of the values of the tests (for llr, p-values are in parentheses) across 10,000 replications of the simulation of the Logn(0, 1) distribution. Sample size is n = 1000 at each replication.
      | llr           | ST    | AIC   | BIC
k = 1 |               |       | 5.058 | 5.068
k = 2 | 172.590 (0)   | 0.355 | 5.500 | 5.515
k = 3 | 0.250 (0.617) | 0.000 | 5.503 | 5.523
k = 4 | 0.422 (0.516) | 0.001 | 5.506 | 5.530
Table 3. Averages of the values of the tests (for llr, p-values are in parentheses) across 10,000 replications of the simulation of the Par(5, 1.5) distribution. Sample size is n = 1000 at each replication.
      | llr           | ST    | AIC      | BIC
k = 1 |               |       | 5740.623 | 5750.411
k = 2 | 0.015 (0.902) | 0.000 | 5742.394 | 5757.276
k = 3 | 0.058 (0.810) | 0.000 | 5743.412 | 5763.355
k = 4 | 0.060 (0.806) | 0.000 | 5744.478 | 5769.333
Table 4. Averages of estimated parameters of the ME density across 10,000 replications of the simulation of the Par(5, 1.5) distribution. Sample size is n = 1000 at each replication.
      | k = 1  | k = 2  | k = 3  | k = 4
λ_0   | −2.834 | −2.879 | −3.101 | −3.583
λ_1   | 2.505  | 2.538  | 2.764  | 3.362
λ_2   |        | −0.006 | −0.076 | −0.331
λ_3   |        |        | 0.007  | 0.051
λ_4   |        |        |        | −0.003

4.1.2. Simulate Pareto

For B = 10,000 replications, we simulate n = 10,000 observations from the Pareto distribution Par(c, α) with scale c = 5 and shape α = 1.5 and fit the ME density. The Pareto distribution is a logarithmic ME density with k = 1 (see Section 2): in particular, from Equation (4), λ_0 = −2.8196 and λ_1 = 2.5. We estimate the ME density with k ∈ {1, 2, 3, 4} and use the likelihood ratio criterion to determine k*. All the tests in Table 3 strongly suggest k* = 1, i.e., a Pareto distribution, and the average estimated parameter values in Table 4 obtained for k = 1 are in good agreement with the theoretical values corresponding to c = 5 and α = 1.5.

4.1.3. Simulate a Lognormal–Pareto Mixture

To assess how well the ME method fits a non-standard distribution, we simulate a mixture of a right-truncated lognormal and a Pareto distribution. The density is given by
f(x) = \frac{r}{\Phi\!\left(\frac{\log c - \mu}{\sigma}\right)}\, f_1(x)\, I_{\{x \le c\}} + (1 - r)\, f_2(x)\, I_{\{x > c\}}
where Φ is the cdf of the standard normal distribution, f_1 and f_2 are the Logn(μ, σ²) and the Par(c, α) densities and I_A is the indicator function of the set A. If we impose the condition that the density be continuous and differentiable at c, the lognormal parameter μ and the mixing weight r are no longer free parameters; in particular, they are given by [20]:
\mu = \log c - \alpha \sigma^2; \qquad r = \frac{\sqrt{2\pi}\,\alpha\sigma\,\Phi(\alpha\sigma)\, e^{\frac{1}{2}(\alpha\sigma)^2}}{\sqrt{2\pi}\,\alpha\sigma\,\Phi(\alpha\sigma)\, e^{\frac{1}{2}(\alpha\sigma)^2} + 1}
In this case, estimating the parameters of the true data-generating process in a parametric approach is rather difficult, and the ME method is an appealing alternative. The model-selection strategy fits sequentially the ME density with k = 1, 2, …, computes the llr test and the BIC criterion, stops at the first value of k (k_0, say) such that either the llr test cannot reject the hypothesis k = k_0 or BIC(k_0) < BIC(k_0 + 1), and concludes that k* is equal to k_0. As before, in the simulation experiment we use B = 10,000 replications and sample size n = 1000; moreover, we set α = 2, c = 20 and σ² = 0.25, resulting in μ = 2.5 and r = 0.777.
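A hedged sketch of how one replication of this experiment can be generated (the function name is illustrative; the truncated-lognormal and Pareto components are sampled by inverse transform, with μ and r fixed by the continuity and differentiability conditions above):

```python
import numpy as np
from scipy.stats import norm

def simulate_logn_pareto(n, alpha, c, sigma2, rng=None):
    # Composite model above: a right-truncated Logn(mu, sigma2) body (with prob. r)
    # and a Par(c, alpha) tail (with prob. 1 - r).
    rng = np.random.default_rng() if rng is None else rng
    sigma = np.sqrt(sigma2)
    mu = np.log(c) - alpha * sigma2
    a = np.sqrt(2 * np.pi) * alpha * sigma * norm.cdf(alpha * sigma) * np.exp(0.5 * (alpha * sigma) ** 2)
    r = a / (a + 1.0)
    body = rng.random(n) < r
    out = np.empty(n)
    # truncated lognormal on (0, c]: inverse transform on the log scale
    u = rng.random(body.sum()) * norm.cdf((np.log(c) - mu) / sigma)
    out[body] = np.exp(mu + sigma * norm.ppf(u))
    # Pareto Par(c, alpha): inverse transform of the survival function (c/x)^alpha
    out[~body] = c * (1.0 - rng.random((~body).sum())) ** (-1.0 / alpha)
    return out

# alpha = 2, c = 20, sigma2 = 0.25 gives mu ≈ 2.5 and r ≈ 0.777, as in the text
```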
Table 5. Averages of estimated parameters of the ME density across 10,000 replications of the simulation of the Lognormal–Pareto mixture distribution. Sample size is n = 1000 at each replication.
      | k = 1 | k = 2  | k = 3   | k = 4   | k = 5  | k = 6   | k = 7  | k = 8   | k = 9   | k = 10
λ_0   | 0.654 | 8.772  | 14.398  | 18.295  | 12.552 | 4.458   | 8.703  | 14.827  | 22.264  | −71.207
λ_1   | 1.380 | −5.372 | −12.024 | −17.483 | −6.895 | 11.823  | −0.512 | −20.404 | −45.028 | 347.685
λ_2   |       | 1.223  | 3.682   | 6.345   | −0.968 | −17.743 | −3.542 | 22.756  | 56.542  | −641.972
λ_3   |       |        | −0.286  | −0.821  | 1.553  | 9.054   | 0.580  | −18.050 | −43.331 | 652.963
λ_4   |       |        |         | 0.037   | −0.326 | −2.101  | 0.752  | 8.533   | 19.917  | −413.034
λ_5   |       |        |         |         | 0.021  | 0.233   | −0.312 | −2.280  | −5.482  | 170.761
λ_6   |       |        |         |         |        | −0.010  | 0.045  | 0.342   | 0.901   | −46.851
λ_7   |       |        |         |         |        |         | −0.002 | −0.027  | −0.085  | 8.446
λ_8   |       |        |         |         |        |         |        | 0.001   | 0.004   | −0.961
λ_9   |       |        |         |         |        |         |        |         | −0.000  | 0.062
λ_10  |       |        |         |         |        |         |        |         |         | −0.002
Table 6. Averages of the values of the tests (for llr, p-values are in parentheses) across 10,000 replications of the simulation of the Lognormal–Pareto mixture distribution. Sample size is n = 1000 at each replication.
       | llr                | ST     | AIC       | BIC
k = 1  |                    |        | 85031.171 | 85046.559
k = 2  | 13563.365 (<0.001) | 0.491  | 71470.822 | 71492.978
k = 3  | 842.316 (<0.001)   | 0.041  | 70630.336 | 70658.398
k = 4  | 68.841 (<0.001)    | 0.003  | 70563.734 | 70599.448
k = 5  | 28.944 (<0.001)    | 0.001  | 70536.392 | 70579.887
k = 6  | 29.659 (<0.001)    | 0.001  | 70508.056 | 70559.811
k = 7  | 3.692 (0.055)      | <0.001 | 70506.924 | 70564.493
k = 8  | 2.286 (0.131)      | <0.001 | 70506.664 | 70571.292
k = 9  | 3.049 (0.081)      | <0.001 | 70505.268 | 70577.853
k = 10 | 896.16 (<0.001)    | <0.001 | 70511.546 | 70585.093
The averages of the estimated parameters and the values of the tests are reported in Table 5 and Table 6. The llr test is marginally significant for k * = 6 and non-significant for k * = 7 ; the BIC also suggests k * = 7 . Figure 1 displays the histogram of the simulated data superimposed with the Lognormal–Pareto mixture and the ME(7) density; for readability, in the graph we only show the observations smaller than 200. The fit is extremely good, as the two densities are virtually indistinguishable. The ME(6) density, not shown in the graph, is essentially identical to the ME(7), so that choosing k * = 6 would be appropriate as well.
Figure 1. The simulated Lognormal–Pareto observations, the true Lognormal–Pareto mixture density and the fitted ME(7) density.

4.2. Application

In Section 4.1 we have studied numerically some properties of the ME estimation methodology. Now we turn to the AIS simulation method. Thus, after finding the ME density that best fits a loss dataset, we implement AIS simulation of the estimated ME distribution and evaluate its performance; finally, we perform a risk assessment and a comparison with standard parametric models.

4.2.1. The General Liability Claims Dataset: Estimation and Simulation

For the empirical analysis, we use a loss dataset from the insurance field. The data consist of n = 1500 indemnity payments (losses) in USD and are available in the R package evd [21]. We find it more convenient to divide the original data by 1000, so that in the following the observations are in thousands of USD. The analysis consists of four steps:
  • Estimate the ME density;
  • Simulate the estimated ME density;
  • Compute the Kolmogorov–Smirnov (KS) distance and the significance level;
  • Estimate tail probabilities using various models and compare the outcomes.
As concerns the implementation of these steps, some remarks are in order. The fitted ME density is non-zero only for x_(1) ≤ x ≤ x_(n), where x_(1) and x_(n) are respectively the smallest and the largest observation. On the other hand, the AIS procedure based on lognormal densities simulates random numbers on (0, ∞). As a result, the efficiency of the algorithm is reduced, because all the simulated observations not belonging to (x_(1), x_(n)) have zero weight. Thus, it would be better to define the ME density on (0, ∞). Such an extension can be constructed as follows. First, estimate the parameters λ_0, …, λ_k. The corresponding ME density is defined on (x_(1), x_(n)) and can be seen as a truncated version of the ME density f defined on (0, +∞). Thus, there must exist c_norm ∈ ℝ such that c_norm ∫_0^∞ f(x) dx = ∫_{x_(1)}^{x_(n)} f(x) dx = 1. The value of c_norm can be found by solving numerically for c_norm the equation c_norm ∫_0^∞ f(x) dx = 1, and the density c_norm · f is the extension of the estimated ME density from (x_(1), x_(n)) to (0, +∞).
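A sketch of this normalization step (names are illustrative; the numerical integration over (0, +∞) is assumed to be well behaved for the fitted parameters):

```python
import numpy as np
from scipy import integrate

def extend_me_density(lambdas):
    # The ME density fitted on (x_(1), x_(n)) integrates to one there; rescaling the
    # same functional form by c_norm makes it integrate to one on (0, +inf).
    def f(t):
        return np.exp(-sum(l * np.log(t) ** i for i, l in enumerate(lambdas)))
    total, _ = integrate.quad(f, 0.0, np.inf, limit=300)
    c_norm = 1.0 / total               # solves c_norm * int_0^inf f(x) dx = 1
    return c_norm, (lambda t: c_norm * f(t))
```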
We now proceed to the estimation of the ME density. The results shown in Table 7 are not clear-cut: according to the llr p-values, the test for H_0: k = 2 is marginally significant, k = 3 is rejected and k = 4 is accepted. On the other hand, the AIC and BIC criteria would both suggest k = 4. According to the model-selection criterion in Section 2, we conclude that k* = 4.
Figure 2 shows the estimated ME(2), ME(3) and ME(4) densities superimposed on the normalized histogram of the data smaller than 100. The inset focuses on the tail of the distribution, i.e., the observations larger than 100. The main differences between the distributions are concentrated in the left tail, which is not particularly important for risk analysis purposes. On the other hand, the differences in the right tail, shown in the inset, are very small (note the different scale in the main graph and in the inset): the ME(2), ME(3) and ME(4) densities are respectively equal to 3 × 10⁻⁵, 1 × 10⁻⁴ and 9 × 10⁻⁵ when x = 1000 (the largest claim in the sample).
Table 7. Values of the tests for the General Liability Claims dataset.
k | llr      | p-value | AIC       | BIC
1 |          |         | 13345.157 | 13355.781
2 | 1569.080 | <0.001  | 11778.075 | 11794.012
3 | 4.213    | 0.040   | 11775.878 | 11797.126
4 | 20.254   | <0.001  | 11757.628 | 11784.199
5 | 0.128    | 0.721   | 11759.495 | 11791.378
6 | 0.787    | 0.374   | 11760.726 | 11797.987
Figure 2. The histogram of the General Liability Claims data with the estimated ME(3), ME(4) and ME(5) densities.
Next we simulate the ME density using AIS. The algorithm is implemented with D = 7 and with the following starting values:
π = (1/7, 1/7, 1/7, 1/7, 1/7, 1/7, 1/7), μ = (1, 2, 4, 5, 6, 6.5, 7), σ² = (1, 1, 1, 0.2, 0.15, 0.15, 0.15)
With several different initializations the algorithm always converges to the same parameter values. It therefore seems that the starting values are not particularly important, a result not unexpected because the EM algorithm is quite insensitive to the initial values of the parameters [22].
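In code, this initialization would be passed to the AIS routine sketched in Section 3 roughly as follows (the routine and density names are assumptions carried over from that sketch; log_me4_density stands for the log of the fitted, extended ME(4) density):

```python
import numpy as np

pi0     = np.full(7, 1.0 / 7.0)
mu0     = np.array([1.0, 2.0, 4.0, 5.0, 6.0, 6.5, 7.0])
sigma20 = np.array([1.0, 1.0, 1.0, 0.2, 0.15, 0.15, 0.15])
# sample, (pi, mu, s2), perp = lognormal_mixture_ais(log_me4_density, n=10_000, D=7,
#                                                    pi=pi0, mu=mu0, sigma2=sigma20)
```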
We simulate n = 10,000 observations, obtaining a normalized perplexity equal to 0.9981 at the 17th iteration, a large value that guarantees a very high precision level. Figure 3 displays the estimated ME(4) density superimposed on the normalized histogram of the simulated observations. The graph confirms that the fit is excellent. As a final check, we compute the one-sample Kolmogorov–Smirnov test KS = max_x |F_n(x) − F(x)|, where F_n is the empirical cdf of the simulated data and F is the cdf of the estimated ME(4) distribution. We obtain KS = 0.0098, with a p-value of 0.289. Thus, we accept the hypothesis that the data-generating process of the simulated observations is the estimated ME(4) density.
Figure 3. The estimated ME(4) density superimposed on the normalized histogram of the simulated observations.
Turning now to risk analysis, we estimate tail probabilities using the ME density and compare them with those obtained by means of traditional parametric approaches. With loss data, typical choices are the Lognormal or the Pareto distribution, depending on tail heaviness.
Specifically, for various values of x between 60 and 900 we compute the tail probabilities p_x = P(L > x) for the lognormal (which coincides with ME(2)), Pareto, ME(3) and ME(4) distributions, with the aim of assessing the robustness of the results with respect to the ME densities that we were uncertain about in the analysis carried out above. We also show the empirical frequencies #{l_i > x}/n, where l_i (i = 1, …, n) are the observed losses. According to the discussion in Section 2, the Pareto distribution should be ruled out, as the llr test rejects the hypothesis k* = 1 at any significance level. Nonetheless, we include it both because it is a commonly used model for this kind of data and because by so doing we can check ex post whether the decision made on the basis of the ME test is correct. As often happens, the choice of the Pareto threshold u is not easy: from the mean excess function and the Hill plot we find that u = 14 seems to be a reasonable compromise between bias and variance. The outcomes are shown in Table 8.
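A sketch of how such tail probabilities can be computed (the function and argument names are illustrative; note that if the Pareto is fitted only to the exceedances over the threshold, its survival function should additionally be scaled by the empirical exceedance frequency, so this snippet does not claim to reproduce the exact convention behind Table 8):

```python
import numpy as np
from scipy import integrate
from scipy.stats import lognorm

def tail_probabilities(thresholds, me_pdf, mu, sigma, c, alpha, losses):
    # P(L > x) under the (extended) ME density, Logn(mu, sigma^2) and Par(c, alpha),
    # together with the empirical exceedance frequencies #{l_i > x} / n
    losses = np.asarray(losses, dtype=float)
    rows = []
    for x in thresholds:
        p_me, _ = integrate.quad(me_pdf, x, np.inf, limit=300)
        p_logn = lognorm.sf(x, s=sigma, scale=np.exp(mu))
        p_par = (c / x) ** alpha if x >= c else 1.0
        rows.append((x, p_logn, p_par, p_me, np.mean(losses > x)))
    return rows
```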
Among the distributions considered in the experiment, the ME(4) and the lognormal (i.e., ME(2)) show the minimum distances from the observed frequencies. Such a similarity is not surprising because the ME(4) is the best distribution according to the llr test and the information criteria, but the lognormal (ME(2)) is nearly accepted by the llr test. The Pareto tail is definitely too heavy, as expected from the results of the ME test. The ME(3) is heavier-tailed than the ME(4) distribution, but the difference is not dramatic.
Table 8. Tail probabilities obtained via various ME densities and standard parametric approaches for the General Liability Claims dataset.
Threshold | Logn(2.04, 2.54) | Par(14, 0.94) | ME(3)  | ME(4)  | Obs
60        | 0.0986           | 0.2066        | 0.0990 | 0.0965 | 0.0960
80        | 0.0708           | 0.1647        | 0.0736 | 0.0676 | 0.0673
100       | 0.0537           | 0.1372        | 0.0582 | 0.0501 | 0.0493
120       | 0.0423           | 0.1177        | 0.0480 | 0.0386 | 0.0427
140       | 0.0343           | 0.1032        | 0.0408 | 0.0306 | 0.0353
160       | 0.0284           | 0.0919        | 0.0354 | 0.0249 | 0.0293
180       | 0.0239           | 0.0829        | 0.0313 | 0.0205 | 0.0267
200       | 0.0204           | 0.0756        | 0.0281 | 0.0172 | 0.0220
220       | 0.0176           | 0.0694        | 0.0255 | 0.0146 | 0.0193
240       | 0.0154           | 0.0642        | 0.0233 | 0.0125 | 0.0180
260       | 0.0135           | 0.0598        | 0.0215 | 0.0109 | 0.0160
280       | 0.0120           | 0.0559        | 0.0200 | 0.0095 | 0.0140
300       | 0.0107           | 0.0526        | 0.0187 | 0.0083 | 0.0107
350       | 0.0083           | 0.0457        | 0.0161 | 0.0062 | 0.0093
400       | 0.0066           | 0.0404        | 0.0142 | 0.0047 | 0.0073
450       | 0.0053           | 0.0363        | 0.0127 | 0.0037 | 0.0060
500       | 0.0044           | 0.0329        | 0.0116 | 0.0030 | 0.0020
550       | 0.0037           | 0.0302        | 0.0106 | 0.0024 | 0.0020
600       | 0.0031           | 0.0278        | 0.0098 | 0.0020 | 0.0020
650       | 0.0027           | 0.0259        | 0.0092 | 0.0017 | 0.0020
700       | 0.0023           | 0.0241        | 0.0086 | 0.0014 | 0.0020
750       | 0.0020           | 0.0226        | 0.0081 | 0.0012 | 0.0013
800       | 0.0018           | 0.0213        | 0.0076 | 0.0010 | 0.0013
850       | 0.0016           | 0.0201        | 0.0072 | 0.0009 | 0.0013
900       | 0.0014           | 0.0191        | 0.0069 | 0.0008 | 0.0007
Finally, Figure 4 shows the ME(4) and the lognormal densities superimposed on the histogram of the data. The fit is very good, and the two densities are almost indistinguishable.

5. Conclusions

In this paper we have developed a non-parametric approach to loss distribution analysis based on the concept of Maximum Entropy. The main technical contribution is a flexible procedure for approximate simulation of the ME distribution, based on an extension of the Adaptive Importance Sampling algorithm introduced by [3,4]. More generally, the paper details the merits of the ME approach in the typical risk analysis process of loss distributions, from model building to estimation, simulation and risk measurement.
Figure 4. The estimated ME(4) and lognormal densities superimposed on the histogram of the General Liability Claims observations.
The method is appealing for various reasons. First, it allows fitting the best distribution (“best” being defined according to the Maximum Entropy principle) for the data at hand, which is particularly useful when there are no theoretical reasons for adopting a parametric distribution. Second, it may serve as a test of the choice between two (or more) parametric models. Finally, both estimation and simulation are computationally feasible.
The simulation method proposed in this paper has proved to be very precise and may be of crucial importance in models such as the compound Poisson distribution commonly used for modelling non-life insurance claims and operational losses.
A relevant topic that requires further research is the extension of the AIS instrumental density to mixtures that can approximate target distributions with support [c, +∞) for some c > 0, such as the Pareto or the truncated lognormal distribution.

Acknowledgements

The author would like to thank Massimo Riccaboni and Stefano Schiavo for useful discussions about the ME methodology and two anonymous referees whose valuable comments greatly helped to improve the contents of this paper.

References

  1. Jaynes, E.T. Information theory and statistical mechanics. Phys. Rev. 1957, 106, 620–630.
  2. Wu, X. Calculation of maximum entropy densities with application to income distribution. J. Econom. 2003, 115, 347–354.
  3. Cappé, O.; Guillin, A.; Marin, J.; Robert, C. Population Monte Carlo. J. Comput. Graph. Stat. 2004, 13, 907–929.
  4. Cappé, O.; Douc, R.; Guillin, A.; Robert, C. Adaptive importance sampling in general mixture classes. Stat. Comput. 2008, 18, 447–459.
  5. Wraith, D.; Kilbinger, M.; Benabed, K.; Cappé, O.; Cardoso, J.F.; Fort, G.; Prunet, S.; Robert, C. Estimation of cosmological parameters using adaptive importance sampling. Phys. Rev. D 2009, 80, 023502.
  6. Bee, M. Adaptive importance sampling for simulating copula-based distributions. Insur. Math. Econ. 2011, 48, 237–245.
  7. Kapur, J. Maximum Entropy Models in Science and Engineering; Wiley: New York, USA, 1989.
  8. Buchen, P.; Kelly, M. The maximum entropy distribution of an asset inferred from option prices. J. Financ. Quant. Anal. 1996, 31, 143–159.
  9. Hawkins, R. Maximum entropy and derivative securities. Adv. Econ. 1997, 12, 277–301.
  10. Stutzer, M. A simple nonparametric approach to derivative security valuation. J. Financ. 1996, 51, 1633–1652.
  11. Breuer, T.; Csiszár, I. Systematic stress tests with entropic plausibility constraints. J. Bank. Financ. 2013, in press.
  12. Neri, C.; Schneider, L. Maximum entropy distributions inferred from option portfolios on an asset. Financ. Stoch. 2012, 16, 293–318.
  13. Rodriguez, J.O.; Santosa, F. Estimation of asset distributions from option prices: Analysis and regularization. SIAM J. Financ. Math. 2012, 3, 374–401.
  14. Bee, M.; Riccaboni, M.; Schiavo, S. Pareto versus lognormal: A maximum entropy test. Phys. Rev. E 2011, 84, 026104.
  15. Zhang, P. Nonparametric importance sampling. J. Am. Stat. Assoc. 1996, 91, 1245–1253.
  16. Rubinstein, R. The Cross-Entropy method for combinatorial and continuous optimization. Methodol. Comput. Appl. Probab. 1999, 2, 127–190.
  17. Morio, J. Extreme quantile estimation with nonparametric adaptive importance sampling. Simul. Modell. Pract. Theory 2012, 27, 76–89.
  18. Rubinstein, R.; Kroese, D. The Cross-Entropy Method; Springer: Berlin/Heidelberg, Germany, 2004.
  19. Soofi, E.; Ebrahimi, N.; Habibullah, M. Information distinguishability with application to analysis of failure data. J. Am. Stat. Assoc. 1995, 90, 657–668.
  20. Scollnik, D. On composite lognormal-Pareto models. Scand. Actuar. J. 2007, 1, 20–33.
  21. Stephenson, A.G. evd: Extreme Value Distributions. R News 2002, 2, 31–32.
  22. McLachlan, G.; Krishnan, T. The EM Algorithm and Extensions, 2nd ed.; Wiley: New York, USA, 2008.
