Article

A Bayesian Internal Model for Reserve Risk: An Extension of the Correlated Chain Ladder

by Carnevale Giulio Ercole 1 and Clemente Gian Paolo 2,*

1 PartnerRe, Hardstrasse 301, 8005 Zürich, Switzerland
2 Department of Mathematics for Economic, Financial and Actuarial Sciences, Università Cattolica del Sacro Cuore, Largo Agostino Gemelli 1, 20123 Milan, Italy
* Author to whom correspondence should be addressed.
Risks 2020, 8(4), 125; https://doi.org/10.3390/risks8040125
Submission received: 28 September 2020 / Revised: 4 November 2020 / Accepted: 9 November 2020 / Published: 19 November 2020

Abstract

The goal of this paper is to exploit the Bayesian approach and MCMC procedures to structure an internal model to quantify the reserve risk of a non-life insurer under the Solvency II regulation. To this aim, we provide an extension of the Correlated Chain Ladder (CCL) model to the one-year time horizon. In this way, we obtain the predictive distribution of the next-year obligations and we are able to assess a capital requirement compliant with the Solvency II framework. Numerical results compare the one-year CCL with other traditional approaches, such as Re-Reserving and the Merz and Wüthrich formula. The one-year CCL proves to be a legitimate alternative, providing values comparable with those of the more traditional approaches, together with more robust and accurate risk estimates that embed external knowledge not present in the data and allow for a more precise and tailored representation of the insurer's risk profile.

1. Introduction

The most recent regulatory requirements for European insurers, known as Solvency II, introduced the possibility of tailored risk modeling for many of the sources of risk borne by each undertaking. As an alternative to the Standard Formula, according to its own technical capabilities and upon approval of the supervisory authority, each insurer can develop its own internal model. For a specific risk, or a particular set of risks, such a partial internal model is supposed to capture the insurer's risk profile more consistently than the Standard Formula, which is the same for all market participants. To accomplish this task, the actuarial literature has produced several models. In this paper, we provide a partial internal model for assessing the reserve risk capital requirement using Bayesian procedures. In the context of tailored risk modeling, a Bayesian approach has great potential. Bayesian techniques are able to include in statistical models external knowledge and subjective judgments that are not necessarily deduced from the data; moreover, they require the explicit specification of all the assumptions made. If properly used, these features enable a better description of the risks faced by an insurer. The use of techniques with a Bayesian flavor is not new to actuarial science (see Klugman (1992)) and, with respect to claims reserving, can be traced back to Bornhuetter and Ferguson (1972). Despite this, Bayesian models have often proved too complex to implement in real-world applications, as this kind of modeling often leads to mathematically intractable forms. However, this complexity can be overcome using Markov Chain Monte Carlo (MCMC) simulation techniques, which provide empirical approximations when closed formulas are not available.
With respect to claims reserving, the first appearance of MCMC procedures is credited to Ntzoufras and Dellaportas (2002), which has been followed by many other papers (see, e.g., Peters et al. (2017) and Wüthrich (2007) for recent applications in the field of claims reserves).
Concerning claims reserving, one of the most popular deterministic methods is the chain ladder (see, e.g., Friedland (2010) and Hindley (2017) for a review of the main deterministic methodologies). As is well known, the ultimate cost of each accident year is predicted using run-off triangles of paid and/or incurred losses, assuming that prior loss patterns persist in the future (e.g., in terms of settlement speed or behavior of the case reserve). Several stochastic methods have been proposed in the literature in order to measure the variability of the chain ladder methodology (see, e.g., Mack (1993) and Wüthrich and Merz (2007)). In this context, it is worth mentioning the Correlated Chain Ladder (see Meyers (2015); Frees et al. (2016)), which exploits the advantages of Bayesian models and allows modeling a correlation between accident years. In this framework, we start from the Correlated Chain Ladder (CCL) and extend this approach in order to model the claims development result distribution in a framework compliant with Solvency II. On a one-year time frame, this proposal represents an alternative to two classical approaches, Re-Reserving (Diers (2009)) and the Merz and Wüthrich formula (Wüthrich et al. (2008)), widely used in practice when the chain ladder is the underlying deterministic method. However, this approach is not a mere third option. Indeed, Bayesian techniques represent a more refined and sophisticated approach to obtaining claims reserve variability, especially in comparison to bootstrap-based algorithms (see, e.g., Hastie et al. (2009)). In the context of modeling regulatory capital, a fully specified probabilistic framework, able to integrate external information, provides a more advanced approach than methods that simply fit the variability semi-parametrically on a limited set of observations.
Typically, when speed and simplicity are critical, bootstrap estimates could be more appropriate, while when accuracy is important, Bayesian modeling could represent a more consistent choice. With respect to stochastic claim reserving, we think Bayesian techniques can have a role in improving current standards where the accuracy of estimates is critical, like internal modeling practice.
This paper is organized as follows. In Section 2, the original CCL model, introduced in Meyers (2015), is described. In Section 3, we provide the extension of the model to a one-year view. In particular, we adapt the idea of a Bayesian update originally found in Meyers (2017) in order to assess the distribution of the claims development result. In Section 4, we develop a case study based on motor third-party liability data. In particular, in Section 4.1, we focus on the assumptions and on the calibration procedure. In Section 4.2 and Section 4.3, we present the main results, comparing our proposal to classical approaches provided in the literature. We show how the one-year CCL can be a viable alternative to assess a capital requirement compliant with the Solvency II regulation. Conclusions follow.

2. The Correlated Chain Ladder

In Bayesian claims reserving, inference is performed by computing a posterior distribution of the parameters. This distribution is derived via Bayes theorem by combining a prior, representative of external knowledge, with the observations. To this extent, we need a data structure that eases the specification of the prior parameters. Concerning reserving models, we refer to a cross-classified structure. Given a run-off triangle:

$$\mathcal{D}_t = \{X_{ij} : i = 0, \dots, t;\ j = 0, \dots, t;\ i + j \le t\}$$

where $i$, $j$, and $t$ represent the accident year, the development year, and the evaluation period respectively, and $X_{ij}$ is any claims figure (typically either incremental or cumulative payments), a cross-classified structure has the following general form:

$$X_{ij} = f(\alpha_i, \beta_j)$$

Claims figures are thought of as a function of parameters, where $\alpha_i$ acts as a parameter linked to the accident year and $\beta_j$ as a development parameter. This is a potentially superior informative structure that allows us to link the data to phenomena separately related to either the accident or the development year. With respect to Bayesian inference, this structure is clearly well suited to receive prior information about the different factors that affect the reserving process.

2.1. Model Specification

The Correlated Chain Ladder, introduced by Meyers (2015), is essentially a re-parameterization of the traditional Chain Ladder model that exploits a cross-classified structure in order to enhance the possibilities of the traditional algorithm and re-interpret it in a Bayesian context.
Given a triangle of cumulative payments:
$$\mathcal{D}_t = \{C_{ij} : i = 0, \dots, t;\ j = 0, \dots, t;\ i + j \le t\}$$

data are assumed generated by a log-normal distribution:

$$C_{ij} \sim \mathrm{logNormal}(\mu_{ij}, \sigma_j)$$

for $i = 0, \dots, t$ and $j = 0, \dots, t$, where:
  • $\mu_{0j} = \alpha_0 + \beta_j$
  • $\mu_{ij} = \alpha_i + \beta_j + \rho \cdot \left[\log(C_{i-1,j}) - \mu_{i-1,j}\right]$ for $i > 0$
In this framework, the level parameter $\alpha_i$ embeds information about accident year $i$, $\beta_j$ acts as a new kind of development factor that captures information about the development of payments, and $\rho$ models the correlation between different accident years. This last parameter is one of the most remarkable innovations brought by this model: it relaxes the hypothesis of independence between accident years, one of the main assumptions of several stochastic models related to the Chain Ladder (see Mack (1993); Wüthrich et al. (2008)).
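As an illustration of the recursion above, the sketch below simulates cumulative payments from the CCL given a fixed parameter set. It is a minimal Python sketch, not the authors' implementation; all parameter values are made up for demonstration.

```python
import math
import random

random.seed(42)

def simulate_ccl(alpha, beta, sigma, rho):
    """Draw one square of cumulative payments C[i][j] from the CCL:
    C_ij ~ logNormal(mu_ij, sigma_j), with mu_0j = alpha_0 + beta_j and,
    for i > 0, mu_ij = alpha_i + beta_j + rho * (log(C_{i-1,j}) - mu_{i-1,j})."""
    t = len(alpha) - 1
    mu = [[0.0] * (t + 1) for _ in range(t + 1)]
    C = [[0.0] * (t + 1) for _ in range(t + 1)]
    for i in range(t + 1):
        for j in range(t + 1):
            mu[i][j] = alpha[i] + beta[j]
            if i > 0:  # correlation with the previous accident year
                mu[i][j] += rho * (math.log(C[i - 1][j]) - mu[i - 1][j])
            C[i][j] = random.lognormvariate(mu[i][j], sigma[j])
    return C

# Made-up parameters for a small 4 x 4 example
alpha = [10.0, 10.1, 10.2, 10.3]   # level (log-ultimate) parameters
beta = [-1.5, -0.5, -0.1, 0.0]     # development parameters, beta_t = 0
sigma = [0.40, 0.30, 0.20, 0.10]   # decreasing log-volatility
C = simulate_ccl(alpha, beta, sigma, rho=0.3)
```

Note how, with $\beta_t = 0$, the last column plays the role of the simulated ultimate cost for each accident year.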

2.2. Parameter Specification

At this point we can elicit a prior distribution for each parameter of the model. This step is critical (see Berger et al. (2009)), as we have to choose reasonable distributions and values that are consistent with our data. As a general case, we have the following:

$$\alpha_i \sim \mathrm{Normal}\left(\log(B_i) + [\log(elr_i)],\ \epsilon_i\right) \quad \text{for } i = 0, \dots, t$$

where $B_i$ represents the known gross premiums related to accident year $i$ and $[\log(elr_i)]$ is a further random variable representing the logarithm of the expected loss ratio for each accident year $i$, while $\epsilon_i$ is the precision, rather than the variance, in accordance with the parameterization adopted by most inference engines. It is necessary to further elicit a prior distribution for $\log(elr_i)$ and, as a general case, we specify a non-informative uniform prior with parameters $\gamma_i$ and $\delta_i$ (as defined in Meyers (2015)):

$$\log(elr_i) \sim \mathrm{Uniform}(\gamma_i, \delta_i).$$

Obviously, the model can be easily adapted by defining different parameterizations or selecting different distributions. By virtue of the parameterization in (1) and (2), the level parameter $\alpha_i$ describes the logarithm of the ultimate cost of accident year $i$. This hyperparameter is critical: since its mean is defined on the log-scale, even a slight variation of the parameters strongly affects the final result. As a consequence, we should state a reasonable domain of variation for the parameters, one that results in values of the claims reserve consistent with our data.
The development parameter $\beta_j$ is defined as a uniform random variable on a negative support:

$$\beta_j \sim \mathrm{Uniform}(-\eta, 0) \quad \text{for } j = 0, \dots, t-1$$

with $\eta \in \mathbb{R}^+$. Under this assumption, the parameter $\beta_j$ can be interpreted as the portion of the ultimate cost paid up to development year $j$; hence, it strictly depends on the settlement speed of the portfolio. For $j = t$, we set $\beta_j$ equal to 0, assuming that all claims are settled by development year $t$. The approach can be easily extended in case the estimation of a tail is needed. Regarding the log-variance, we have:

$$\sigma_j^2 = \sum_{h=j}^{t} \tau_h$$

where $\tau_h \sim \mathrm{Uniform}(0, 1)$.

As for the correlation parameter, the choice is left to the modeler. As a general case, it is possible to set a non-informative uniform prior:

$$\rho \sim \mathrm{Uniform}(-1, 1)$$
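Putting the pieces together, one joint draw from this prior structure can be sketched as follows. This is an illustrative Python sketch only: the premium amounts and loss-ratio bounds are made up, and for simplicity the second argument of the Normal is used as a standard deviation (in Stan or JAGS it would be a precision).

```python
import math
import random

random.seed(1)
t = 10

# Illustrative inputs (made up): earned premiums B_i and log loss-ratio bounds
B = [100.0] * (t + 1)
log_elr = [random.uniform(math.log(0.5), math.log(0.9)) for _ in range(t + 1)]

# One draw from the priors of Section 2.2 (0.1 used here as a std dev)
alpha = [random.gauss(math.log(B[i]) + log_elr[i], 0.1) for i in range(t + 1)]
eta = 3.0
beta = [random.uniform(-eta, 0.0) for _ in range(t)] + [0.0]  # beta_t = 0
tau = [random.uniform(0.0, 1.0) for _ in range(t + 1)]
sigma2 = [sum(tau[j:]) for j in range(t + 1)]  # sigma_j^2 = sum_{h=j}^t tau_h
rho = random.uniform(-1.0, 1.0)
```

By construction, $\sigma_j^2$ is non-increasing in $j$, so the log-volatility decreases along the development years, as intended.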

2.3. Posterior and Predictive Distributions

After model specification and prior elicitation, we use an inference engine, such as Stan or JAGS, to generate samples from the posterior distribution. This step is critical as well, as we need a sample that we can deem truly representative of the posterior distribution of the parameters.
To this extent, we suggest sampling from $n$ different chains that start from $n$ different points, allowing for a sufficiently long warm-up phase, and setting an appropriate thinning factor.
At this point we have $K$ parameter sets $\Theta^{(k)}$, each one representative of a reserving scenario:

$$\Theta^{(k)} = \left(\{\alpha_i\}_{i=0}^{t},\ \{\beta_j\}_{j=0}^{t-1},\ \{\sigma_j\}_{j=0}^{t},\ \rho\right)^{(k)} \quad \text{with } k = 1, \dots, K.$$
From the posterior distribution, we can obtain the predictive distribution for each cell of the lower triangle, and in particular for the last column. This will allow us to obtain the predictive distribution of the ultimate cost and thus of the claims reserve. For each k, we sample this ultimate cost from a log-normal:
$$C_{it}^{(k)} \sim \mathrm{logNormal}\left(\mu_{it}^{(k)}, \sigma_t^{(k)}\right) \quad \text{for } i = 1, \dots, t.$$

As usual, we get the $k$-th realization of the predictive distribution of the claims reserve by the following formula:

$$R^{(k)} = \sum_{i=1}^{t} C_{it}^{(k)} - \sum_{i=1}^{t} C_{i,t-i}.$$
Iterating the procedure K times, we have a Bayesian predictive distribution of the claims reserve in a total run-off framework.
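The mechanics of this last step can be sketched in a few lines of Python. This is an illustrative sketch: the latest diagonal and the posterior log-means below are made up, and a single $\sigma_t$ is used for brevity, whereas in the full model $\mu_{it}$ and $\sigma_t$ would vary with each posterior draw $k$.

```python
import math
import random
import statistics

random.seed(7)
t, K = 10, 5000

# Made-up inputs: latest diagonal C_{i,t-i} and posterior log-means mu_it
paid_to_date = [80.0 + 2.0 * i for i in range(1, t + 1)]
mu_it = [math.log(100.0 + 2.0 * i) for i in range(1, t + 1)]
sigma_t = 0.05

reserves = []
for _ in range(K):
    # k-th draw of the ultimates: C_it^(k) ~ logNormal(mu_it^(k), sigma_t^(k))
    ultimates = [random.lognormvariate(m, sigma_t) for m in mu_it]
    # R^(k) = sum_i C_it^(k) - sum_i C_{i,t-i}
    reserves.append(sum(ultimates) - sum(paid_to_date))

mean_reserve = statistics.fmean(reserves)
```

The list `reserves` is then an empirical approximation of the Bayesian predictive distribution of the claims reserve in the total run-off framework.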

3. An Extension in a One-Year Framework

We now propose a way to adapt the Correlated Chain Ladder to the one-year time horizon in order to describe the reserve risk according to the Solvency II guidelines. In this context, the solvency capital requirement (SCR) for reserve risk at the end of time $t$ can be derived as:

$$SCR_{0.995} = \mathrm{VaR}_{0.995}\left[\left(\sum_{i=1}^{t} P_{i,t-i+1} + R^{\mathcal{D}(t+1)}\right) v_1\right] - R^{\mathcal{D}(t)}$$

where $P_{i,t-i+1}$ denotes the incremental payments of calendar year $t+1$ for claims incurred in accident year $i$, $R^{\mathcal{D}(t)}$ is the best estimate at time $t$, and $R^{\mathcal{D}(t+1)}$ is the best estimate at time $t+1$ considering only claims existing at time $t$. Hence, the capital requirement at a confidence level of 99.5% is obtained as the difference between the Value at Risk of the distribution of the next-year insurer obligations, appropriately discounted with the risk-free discount factor $v_1$, and the best estimate at time $t$. Since $R^{\mathcal{D}(t)}$ is a known amount at the valuation date, we can directly focus on the sum of the next-year payments for claims already incurred and the residual reserve that will be posted after experiencing one more year of run-off.
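The SCR formula translates directly into code once a simulated distribution of next-year obligations is available. The sketch below is illustrative only: the obligations are drawn from a made-up log-normal, purely to show the mechanics of the quantile-minus-best-estimate computation.

```python
import random
import statistics

random.seed(0)

def scr(next_year_obligations, best_estimate, v1=1.0, level=0.995):
    """SCR = VaR_level of the discounted next-year obligations
    minus the best estimate at time t."""
    cuts = statistics.quantiles(next_year_obligations, n=1000)
    var_level = cuts[int(level * 1000) - 1]  # 99.5% empirical quantile
    return var_level * v1 - best_estimate

# Made-up distribution of the next-year obligations Y^1yr
sims = [random.lognormvariate(5.3, 0.10) for _ in range(100_000)]
requirement = scr(sims, best_estimate=statistics.fmean(sims))
```

Here the best estimate is taken as the mean of the same simulated distribution; in practice it would come from the reserving valuation at time $t$.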
Adapting a reserving model to a given time horizon means making an assumption about the payment patterns over that horizon and then computing a residual reserve in light of this new information. To this extent, the CCL grounds this informative update on a Bayesian approach, making the Bayes theorem the starting point for the determination of the residual reserve after one year of run-off. The main idea is to simulate a set of losses given the parameter sets generated with the MCMC procedure and then re-evaluate the probability of those same sets using the Bayes theorem. Each set can be thought of as a different scenario for the reserving process, so this effectively means re-weighting the probabilities of future scenarios in light of the latest realizations. The idea of a Bayesian update is originally found in Meyers (2017) and Frees et al. (2016). Here, we adapt it in order to provide an algorithm capable of producing a predictive distribution of the next-year obligations over the one-year time horizon. Such a distribution is compliant with the Solvency II guidelines and is suitable to represent an internal model for reserve risk. Our goal then is to model the following random variable:

$$Y^{1yr} = \sum_{i=1}^{t} P_{i,t-i+1} + R^{\mathcal{D}(t+1)}.$$
To do this, we perform the following steps:
1. The starting point is represented by the $K$ parameter sets $\Theta^{(k)}$ simulated with the MCMC procedure:
   $$\Theta^{(k)} = \left(\{\alpha_i\}_{i=0}^{t},\ \{\beta_j\}_{j=0}^{t-1},\ \{\sigma_j^2\}_{j=0}^{t},\ \rho\right)^{(k)} \quad \text{with } k = 1, \dots, K$$
   We rearrange them according to the original CCL parametrization in order to proceed with the simulations. For each $k$, we have:
   $$\mu_{0j} = \alpha_0 + \beta_j \quad \text{for } i = 0 \text{ and } j = 0, \dots, t$$
   $$\mu_{ij} = \alpha_i + \beta_j + \rho\left[\log(C_{i-1,j}) - \mu_{i-1,j}\right] \quad \text{for } i = 1, \dots, t \text{ and } j = 0, \dots, t$$
   where, since the cumulative payment $C_{i-1,j}$ is not available in the lower triangle (i.e., for $i + j > t$), we simulate it from a $\mathrm{logNormal}(\mu_{i-1,j}, \sigma_j)$.
   Hence, for each parameter set we have a $t \times t$ matrix $M^{(k)} = [\mu_{ij}^{(k)}]$ containing the posterior log-mean parameters and a vector $\sigma^{(k)} = [\sigma_j^{(k)}]$ containing the posterior log-variance parameters.
2. We simulate the sets of losses starting from the elements of $M$ and $\sigma$. Following a one-year approach, we only generate, for each $k$, $S$ different realizations of the next-year cumulative payments $C_{i,t-i+1}^{(s,k)}$ (with $i = 1, \dots, t$, $k = 1, \dots, K$, $s = 1, \dots, S$), hence performing a total of $K \cdot S$ simulations. We end up with an array of dimension $K \cdot S$ where each element is a trapezoid $T^{(s,k)}$ composed of the original triangle $\mathcal{D}(t)$ and a further simulated diagonal related to payments in calendar year $t+1$. In Table 1 we provide a visualization to clarify this object: each column stands for a different parameter set $k$ and each row represents the $s$-th batch of simulations over the parameter sets, i.e., a Bayesian predictive distribution.
3. The next-year incremental payments reported in Formula (9) can be easily obtained by transforming the array of simulated cumulative values into an array of incremental amounts:
   $$P_{i,t-i+1}^{(k,s)} = C_{i,t-i+1}^{(k,s)} - C_{i,t-i} \quad \text{for } i = 1, \dots, t.$$
   In this step, we obtain $K \cdot S$ diagonals of simulated next-year payments.
4. Now, we perform the learning update. We use the information generated by the $s$-th batch of simulations to update the probability of each parameter set. To this extent, we compute the likelihood of the observations for each element of the array. Let $\phi_L(x \mid \mu, \sigma)$ denote the density function of a log-normal random variable; for every element we have:
   $$L\left(T^{(s,k)} \mid \Theta^{(k)}\right) = \prod_{C_{ij}^{(s,k)} \in T^{(s,k)}} \phi_L\left(C_{ij}^{(s,k)} \mid \mu_{ij}^{(s,k)}, \sigma_j^{(k)}\right)$$
   Since the $K$ sets are draws from the MCMC posterior, each set is initially equally likely, i.e., $P(\Theta^{(k)}) = 1/K$. Then, by means of the Bayes theorem, the posterior probability of the $k$-th parameter set given the simulated losses is:
   $$P\left(\Theta^{(k)} \mid T^{(s,k)}\right) = \frac{L\left(T^{(s,k)} \mid \Theta^{(k)}\right) P(\Theta^{(k)})}{\sum_{h=1}^{K} L\left(T^{(s,h)} \mid \Theta^{(h)}\right) P(\Theta^{(h)})} = \frac{L\left(T^{(s,k)} \mid \Theta^{(k)}\right)}{\sum_{h=1}^{K} L\left(T^{(s,h)} \mid \Theta^{(h)}\right)}.$$
   In other words, for each simulation we re-evaluate the probability of each set, obtaining a posterior probability distribution. If each set represents a possible scenario of the reserving process, we effectively obtain the posterior distribution of all possible scenarios. By iterating the process we obtain $S$ different posterior distributions for the parameters.
5. By virtue of the log-normal assumption, for each parameter set the expected cumulative payments at times greater than $t$ can be computed as:
   $$E\left[C_{ij}^{(k)}\right] = \exp\left(\mu_{ij}^{(k)} + \frac{\sigma_j^{2\,(k)}}{2}\right) \quad \text{for } i = 0, \dots, t,\ j = 0, \dots, t \text{ and } i + j > t.$$
   The best estimate $R_k^{\mathcal{D}(t+1)}$ easily follows.
6. For each batch of simulations $s$, we use the posterior distribution computed at step 4 to obtain a post run-off, re-weighted next-year best estimate. For every $s$:
   $$\hat{R}_s^{\mathcal{D}(t+1)} = \sum_{k=1}^{K} R_k^{\mathcal{D}(t+1)} \cdot P\left(\Theta^{(k)} \mid T^{(s,k)}\right)$$
   Thus, we obtain a predictive distribution of the expected values of the residual reserve according to the Solvency II standards.
7. The predictive distribution of the claims development result can be derived by assessing the distribution of the obligations at the end of year $t+1$. For each simulation, we add the realized diagonal to each value of the residual reserve distribution. For each $s$, we have at our disposal $K$ batches of diagonals over all the parameter sets; we sample one diagonal and add it to the value calculated at step 6. In this way, we obtain the $s$-th realization of the next-year obligations:
   $$Y_s^{1yr} = \sum_{i=1}^{t} \tilde{P}_{i,t-i+1}^{(s)} + \hat{R}_s^{\mathcal{D}(t+1)}$$
   Again, by iterating this process over all the $S$ batches of simulations, we finally obtain a predictive distribution of future obligations.
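The learning update at the heart of the algorithm reduces to a likelihood-weighted average over the $K$ parameter sets. The following self-contained sketch illustrates it on a toy diagonal; all numbers are made up, and a log-sum-exp trick is used because, for realistic triangle sizes, products of densities underflow.

```python
import math

def lognorm_logpdf(x, mu, sigma):
    """Log-density of a logNormal(mu, sigma) evaluated at x > 0."""
    z = (math.log(x) - mu) / sigma
    return -math.log(x * sigma * math.sqrt(2.0 * math.pi)) - 0.5 * z * z

def posterior_weights(diagonals, mus, sigmas):
    """P(Theta^(k) | T^(s,k)): re-weight K equally likely parameter sets
    by the likelihood of the simulated next-year diagonal."""
    log_lik = [sum(lognorm_logpdf(c, m, s)
                   for c, m, s in zip(diagonals[k], mus[k], sigmas[k]))
               for k in range(len(diagonals))]
    top = max(log_lik)  # log-sum-exp trick for numerical stability
    w = [math.exp(ll - top) for ll in log_lik]
    total = sum(w)
    return [x / total for x in w]

# Toy example: K = 3 parameter sets, a 2-cell diagonal (all values made up)
mus = [[4.0, 4.5], [4.1, 4.6], [3.9, 4.4]]
sigmas = [[0.2, 0.2]] * 3
diagonals = [[math.exp(m) for m in mu] for mu in mus]
w = posterior_weights(diagonals, mus, sigmas)

# Re-weighted next-year best estimate: R_hat = sum_k R_k * w_k
R = [200.0, 210.0, 190.0]
best_estimate = sum(r_k * w_k for r_k, w_k in zip(R, w))
```

Because the weights form a convex combination, the re-weighted best estimate always lies within the range spanned by the per-set best estimates.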
We can then use this distribution to compute the reserve risk capital charge according to the Solvency II directive by applying (8), i.e., by subtracting the Best Estimate from the discounted 99.5% quantile.
The previous steps have been described in order to explain the framework of the algorithm we are providing. When coding it, many shortcuts can be taken. For instance, in Table 1 the array is composed of trapezoids; it is not strictly necessary to store the original triangle for each element, not only for computational speed, but also because the corresponding terms cancel out when computing the learning update. In general, the algorithm need not be implemented exactly following the outline we provided.
We also provide a generalization to an $n$-year time frame, which allows obtaining a predictive distribution for the next-$n$-years obligations. Mathematical details are reported in Appendix A.

4. An Application of the Model

4.1. Dataset and Model Calibration

In this section, we describe the results obtained by applying the Correlated Chain Ladder to an $11 \times 11$ triangle representative of a motor third-party liability portfolio. The triangle has been obtained by simulating cumulative payments from a log-normal distribution starting from an observed triangle, in order to preserve the confidentiality of the data. This also allows us to check whether the parameters of the posterior distributions are consistent with the data used.
As described later, the posterior distribution has been obtained using a Hamiltonian Monte Carlo (HMC) procedure. To implement the methodology, we maintained the log-normal hypothesis for the observed cumulative payments in the triangle and we elicited the following prior structure, based on our knowledge of the business.
In particular, the log-ultimate cost is approximated by the parameter $\alpha_i$, defined as follows:

$$\alpha_i = \log(B_i) + [\log(elr_i)] + u_i$$

where:
- $\log(B_i)$ is the natural logarithm of the premium earned for the $i$-th generation;
- $[\log(elr_i)]$ is the natural logarithm of the expected loss ratio for the $i$-th generation;
- $u_i$ is a random noise defined as follows:

$$u_i = 0 \ \text{ for } i = 0, \dots, 3, \qquad u_i \sim \mathrm{Uniform}(-0.6, 0.6) \ \text{ for } i = 4, \dots, 10$$
For the oldest accident years, which are almost fully settled, we deemed the variability of the loss ratio alone to be sufficient.
With respect to the $elr$ variable, we chose a log-normal distribution centered on a prior loss ratio differentiated by accident year, mainly based on the analyses made by the company for underwriting purposes. Values are specified in Table 2. Since a limited claims pattern is observed for recent accident years, a higher volatility is assumed for them.
For the development parameter $\beta_j$, we have:

$$\beta_j \sim \mathrm{Uniform}(-3, 0) \ \text{ for } j = 0, \dots, 9, \qquad \beta_{10} = 0$$
With respect to the variability parameter, we recall that $\sigma_j^2$ is defined as:

$$\sigma_j^2 = \sum_{h=j}^{t} \tau_h$$

where we set $\tau_h \sim \mathrm{Beta}(1, 7)$.

Finally, we kept the original assumption about the correlation parameter $\rho$:

$$\rho \sim \mathrm{Uniform}(-1, 1)$$
using a non-informative prior. Having formalized these assumptions, we generated a posterior distribution of 10,000 parameter sets with the help of the Stan inference engine. In this sampling, we stuck to the default sampling procedure adopted by Stan: the No-U-Turn Sampler (NUTS), an extension of the HMC procedure (for details, see Algorithm 2 in Hoffman and Gelman (2014)). As shown in Hoffman and Gelman (2014), NUTS empirically performs at least as efficiently as a well-tuned standard HMC method and, as shown later in the document, allowed us to obtain a strong sample.
In particular, we simulated 4 chains, each composed of 27,500 iterations, using the first 2500 as warm-up and thinning the chain by a factor of 10. With this parameterization we traded computational time for accuracy: for each parameter, the R-hat statistic is equal to 1, the traceplot shows well-centered random noise, and the effective sample size is well above 9000, implying that we have a strong sample, truly representative of the posterior. For each parameter of the posterior, we provide in Table 3 the effective sample size, a measure of equivalence between dependent and independent samples.
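The convergence check mentioned above can be reproduced in a few lines. The sketch below computes the split R-hat statistic on synthetic chains; the Gaussian draws are a made-up stand-in for real posterior samples, used only to show the mechanics of the diagnostic.

```python
import random
import statistics

random.seed(11)

def split_rhat(chains):
    """Gelman-Rubin split R-hat; values close to 1 indicate good mixing."""
    half = len(chains[0]) // 2
    splits = [c[:half] for c in chains] + [c[half:2 * half] for c in chains]
    n = half
    means = [statistics.fmean(s) for s in splits]
    between = n * statistics.variance(means)                       # B
    within = statistics.fmean(statistics.variance(s) for s in splits)  # W
    var_hat = (n - 1) / n * within + between / n
    return (var_hat / within) ** 0.5

# Four synthetic, well-mixed chains of 2500 draws each
chains = [[random.gauss(0.0, 1.0) for _ in range(2500)] for _ in range(4)]
rhat = split_rhat(chains)
```

For chains that have not converged (e.g., chains stuck around different modes), the between-chain variance dominates and R-hat moves visibly above 1.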
Additionally, we have verified that the parameters used to simulate the cumulative claims are in line with the parameters of the posterior distributions selected by the procedure. To give an indication of this behaviour, we report in Appendix B, Table A2 and Table A3, the comparison between these values. As expected, there is little difference between the parameter $\mu$ used to simulate and the mean of the posterior distribution. Indeed, the values are very close, with an absolute difference higher than 0.5% only for some payments of accident year 0. This is further confirmation of the successful sampling with the MCMC algorithm. Finally, we show in Table A4 how our prior structure in terms of loss ratio is reflected very well in the posterior loss ratios.
An important feature of the CCL, and more generally of Bayesian models, is that it embeds a learning procedure. In fact, the posterior distribution of the parameters is a mix between the prior opinions of the modeler and the experience represented by the data. By inspecting it, we can effectively see what the model learned from both the data and our prior. In Figure 1 we show the caterpillar plots for the level, development, log-variance, and expected loss ratio parameters, which provide an immediate overview of the learning update. In the top right, we see the log-variance parameters $\sigma^2$. Their behavior is exactly the one we imposed when eliciting the prior structure, as the variability decreases as the development year increases. In the top left, we see the level parameters, where we immediately notice that they are almost all centered on zero. On the bottom left, we have the development parameters, where we see that a posteriori they acquired a meaning consistent with the fact that the model is built on cumulative payments. These parameters represent the negative shift from the log-ultimate cost: for parameters corresponding to the first development years this shift is substantial, while for the last ones it is very small. Lastly, the plot at the bottom right represents the highest posterior density regions for the expected loss ratio. Again, we can immediately see a behavior consistent with our assumptions, where the domain of variation is wider for the youngest generations and very narrow, if not non-existent, for the older ones.
With respect to the correlation parameter $\rho$, it is worth looking at its posterior distribution. In Figure 2, we see how the learning process affected this parameter: the posterior is no longer uniform, showing a peak slightly above zero, thus implying that the posterior correlation is close to zero.

4.2. Main Results in a Run-Off View

After performing checks and studying the posterior distributions, we generate a predictive distribution for the claims reserve on a total run-off time frame and we analyze the results of the model. To this end, in Table 4, we compare, for different accident years, the main characteristics of the claims reserve computed by means of the CCL and the Bootstrap-ODP. Similarly, in Table 5, we provide the total claims reserve, compared with both the Bootstrap-ODP and the Mack formula. Finally, we plot the predictive distribution in Figure 3. With respect to the accident years, both methods show comparable results. The CCL provides a higher mean and variability for the youngest accident years because of a broader prior. The total predictive distribution shows a mean of 205 million euros, slightly lower than the results obtained with the Bootstrap-ODP and the Mack formula. The volatility returned by the CCL (for instance, a CV equal to 9.7%) is comparable to the one provided by the Bootstrap-ODP (CV equal to 9%). Both values are higher than the volatility provided by the formula proposed by Mack. The most interesting comparison, however, concerns the shape of the distribution, where the CCL shows higher skewness and kurtosis (equal to 0.46 and 4.43, respectively) and higher upper quantiles than the bootstrap. In general, we observe that the CCL seems to better describe the usual heaviness of the right tail.

4.3. One-Year View

We now show the results of the model on the one-year time frame. The starting point is given by the parameter sets generated in the context of the total run-off analysis. We proceeded by generating 10,000 samples of the one-year predictive distribution of the claims reserve according to the described algorithm. In Table 6, with regard to the whole next-year obligations, we show the results of the one-year CCL alongside the same results produced by the Re-Reserving method, the one-year frequentist simulative model for claims reserves. Similarly, we include the results produced by applying the Merz–Wüthrich formula to our triangle. The distribution produced by the CCL has a mean of 206 million euros and a coefficient of variation of 8.4%. It captures about 87% of the total run-off variability and still shows the classical features of claims reserve distributions, with a non-negligible skewness and some excess kurtosis. We applied the CCL as an internal model by computing the solvency capital requirement (SCR) with a $\mathrm{VaR}_{99.5\%}$ risk measure obtained by subtracting the Best Estimate. The model turned out to provide an SCR equal to 52 million euros, equivalent to 25.5% of the CCL Best Estimate. We then followed the same procedure for Re-Reserving. This model shows lower variability and skewness, thus implying a lower SCR (equal to 18.1% of the CL Best Estimate), mainly due to a weaker estimate of the right tail. This is also confirmed by a very different behavior in terms of Expected Shortfall. Finally, we compared the results of the simulative models with the Merz–Wüthrich closed formula fitted on a log-normal distribution. In this case, the parameters of the log-normal distribution have been obtained using the CL Best Estimate and the $\sigma_{usp}^{MW}$ that the regulation allows to be used as an undertaking-specific parameter.
As provided by the Solvency II regulation, the relative volatility has been derived by weighting the CV (equal to 6.37%), obtained as the ratio between the MW standard error and the Best Estimate, with the market-wide parameter. The weighting has been made using a credibility factor that takes into consideration the time depth of the triangle. The resulting SCR ratio is equal to 19.6%, a bit higher than the value obtained with the Re-Reserving methodology because of the higher skewness implied by the log-normal assumption.
To conclude, we notice that the one-year CCL is not only in line with the more traditional approaches, but can also qualify as a model able to produce more robust estimates of the tail of interest, and as a third alternative when the other two do not provide conclusive evidence. Finally, we provide in Figure 4 a plot of the one-year distribution generated by this adapted CCL.

4.4. Important Remarks

The one-year CCL proved to be computationally intensive. In fact, for all the 10,000 sets of parameters, we generated 10,000 batches of simulations of the first diagonal, each composed of 10 further simulations. The algorithm resulted in approximately $10^9$ simulations to get 10,000 realizations for an $11 \times 11$ triangle. The method required substantial computation time to produce these results, and it is advisable both to implement it in a fast language such as C and to exploit the possibilities of parallel computing, to split the work over many different cores, or of distributed computing, to split the work over different machines. In this study, we implemented the model in R, exploiting the package parallel, which allowed us to split the calculations over three different cores of a laptop. The code to reproduce these results is available on the GitHub profile 'GiulioErcole'.

5. Conclusions

At the end of this study, the one-year CCL performed well and proved to be a legitimate alternative to the other well-known models to quantify reserve risk. We believe that this model has more to offer than Re-Reserving, the Merz–Wüthrich formula and the market-wide parameter, as it captures far more information about the variability of loss liabilities and provides a more truthful representation of the risks connected to the claims reserve. Despite the computational issues, the Bayesian nature of this model allows for embedding prior and external knowledge, which can potentially lead to more accurate and tailored representations of the risk profile of the insurer, in line with the principles behind the possibility of structuring internal models. We are thus convinced that, more generally, a Bayesian approach still has a lot of potential in light of the Solvency II framework. However, it should be kept in mind that this feature can be a double-edged sword, as the approach includes a degree of subjectivity that poses the risk that a company could purposefully distort its estimates in order to relieve its capital charges. We therefore stress that it is extremely important that priors are elicited truthfully in order to obtain sound and plausible estimates.

Author Contributions

All authors contributed equally to this manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

Legal Disclaimer

The opinions expressed in the paper are solely those of the author. This paper is for general information, education and discussion purposes only. It does not constitute legal or professional advice and does not necessarily reflect, in whole or in part, any corporate position, opinion or view of PartnerRe or its affiliates.

Appendix A. An N-Years Generalization

The algorithm shown above can easily be generalized to an n-year time frame, allowing us to obtain a predictive distribution of the next-n-years obligations:
$$\tilde{Y}^{\,n\,\mathrm{yrs}} = \sum_{h=1}^{n}\sum_{i=h}^{t} P_{i,t-i+h} + R^{D(t+n)} \qquad \text{(A1)}$$
From this distribution it is possible to obtain a hypothetical capital charge on such a time frame by applying a risk measure. A longer time period can be useful, for instance, for own risk assessment purposes as provided by the second pillar of Solvency II. What follows is the extension of the algorithm to an n-year time horizon:
  • Starting from point 2 of the outline of the algorithm structured in the previous section, we simulate a $K \cdot S$ array in which every element is composed of a number n of diagonals, where again K is the number of parameter sets and S is the number of simulations for each parameter set. The (s, k)-th set of diagonals represents a hypothetical development of the triangle on which we will build a recursive update. In the one-year model, we had for the first diagonal:
    $$P\left(\Theta^{(k)} \mid T_1^{(s,k)}\right) = \frac{L\left(T_1^{(s,k)} \mid \Theta^{(k)}\right) \cdot P\left(\Theta^{(k)}\right)}{\sum_{h=1}^{K} L\left(T_1^{(s,h)} \mid \Theta^{(h)}\right) \cdot P\left(\Theta^{(h)}\right)} = \frac{L\left(T_1^{(s,k)} \mid \Theta^{(k)}\right)}{\sum_{h=1}^{K} L\left(T_1^{(s,h)} \mid \Theta^{(h)}\right)} \qquad \text{(A2)}$$
    where $T_1$ stands for the trapezoid obtained by adding one future diagonal to the triangle. We can then use the probabilities obtained in this way to update our knowledge after a second year of run-off:
    $$P\left(\Theta^{(k)} \mid T_2^{(s,k)}\right) = \frac{L\left(T_2^{(s,k)} \mid \Theta^{(k)}\right) \cdot P\left(\Theta^{(k)} \mid T_1^{(s,k)}\right)}{\sum_{h=1}^{K} L\left(T_2^{(s,h)} \mid \Theta^{(h)}\right) \cdot P\left(\Theta^{(h)} \mid T_1^{(s,h)}\right)} \qquad \text{(A3)}$$
    Then, we recursively obtain the probabilities of each scenario after n years:
    $$P\left(\Theta^{(k)} \mid T_n^{(s,k)}\right) = \frac{L\left(T_n^{(s,k)} \mid \Theta^{(k)}\right) \cdot P\left(\Theta^{(k)} \mid T_{n-1}^{(s,k)}\right)}{\sum_{h=1}^{K} L\left(T_n^{(s,h)} \mid \Theta^{(h)}\right) \cdot P\left(\Theta^{(h)} \mid T_{n-1}^{(s,h)}\right)} \qquad \text{(A4)}$$
    Iterating over S, we obtain S probability distributions that we will use, as before, to compute S determinations of the residual reserve after n years.
  • At this point, for each parameter set we calculate the Best Estimate at time t + n for claims incurred up to time t. As in the one-year version, we compute the expected values of the lower triangle:
    $$E\left[C_{ij}^{(k)}\right] = \exp\left(\mu_{ij}^{(k)} + \frac{\sigma_j^{2\,(k)}}{2}\right) \qquad \text{(A5)}$$
    for i + j > t + n. Again, from these values we can obtain the expected values of the incremental amounts and the new Best Estimate $R_k^{D(t+n)}$ for a given parameter set.
  • By iterating step 2 over all parameter sets, we obtain K values of the next-n-years Best Estimate. For each batch of simulations s, we use the posterior distribution of parameter sets computed at step 1 to reweight the Best Estimates in light of the realized simulations. For every s we obtain one value:
    $$\hat{R}_s^{D(t+n)} = \sum_{k=1}^{K} R_k^{D(t+n)} \cdot P\left(\Theta^{(k)} \mid T_n^{(s,k)}\right) \qquad \text{(A6)}$$
  • As before, we use the simulated cumulative developments stored in the array to obtain a new array of simulated incremental developments:
    $$P_{i,t-i+h}^{(s,k)} = C_{i,t-i+h}^{(s,k)} - C_{i,t-i+h-1}^{(s,k)} \quad \text{for } h = 2, \ldots, n; \qquad P_{i,t-i+1}^{(s,k)} = C_{i,t-i+1}^{(s,k)} - C_{i,t-i} \qquad \text{(A7)}$$
    for $i = 0, \ldots, t$.
  • Having at our disposal the simulated diagonals and the n-years Best Estimates, it is possible to obtain the predictive distribution of the next-n-years obligations. For every value of the Best Estimate we add a realized development of payments for the first n years. For the s-th value of the Best Estimate we have K simulated scenarios of developments. Hence, we sample a scenario and add it to the s-th realization of the n-years Best Estimate:
    $$Y_s^{\,n\,\mathrm{yrs}} = \sum_{h=1}^{n}\sum_{i=h}^{t} P_{i,t-i+h}^{(s,w)} + \hat{R}_s^{D(t+n)} \qquad \text{(A8)}$$
    where w is a random number between 1 and K. Again, by iterating this process over all the S batches of simulations, we finally obtain a predictive distribution of the next-n-years obligations.
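The recursive scenario reweighting and the reweighted Best Estimate described in the steps above can be sketched as follows. This is a simplified Python illustration for a single batch s, where each row of `loglik` holds the log-likelihood contribution of the newly simulated diagonal of year m under each candidate parameter set; the function names and toy inputs are ours, not the paper's.

```python
import numpy as np

def recursive_posteriors(loglik, prior):
    """Return the posterior weights of the K parameter sets after n
    recursive Bayesian updates, for one batch s.

    loglik: (n, K) array, log-likelihood of the year-m simulated diagonal
            under each of the K parameter sets.
    prior:  (K,) array of initial weights (uniform in the simplest case)."""
    w = np.asarray(prior, dtype=float).copy()
    for m in range(loglik.shape[0]):
        logw = np.log(w) + loglik[m]  # multiply prior weight by likelihood, in log space
        logw -= logw.max()            # stabilize before exponentiating
        w = np.exp(logw)
        w /= w.sum()                  # the normalizing sum over the K scenarios
    return w

def reweighted_best_estimate(reserves, weights):
    """Mix the K per-parameter-set Best Estimates with the posterior weights."""
    return float(np.dot(reserves, weights))
```

A quick sanity check: if the simulated diagonals consistently favor one parameter set, its weight should approach 1 and the reweighted Best Estimate should approach that set's reserve.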
Clearly, the methodology described in the n-years time frame is a generalization of the one-year approach reported in the previous section. This extension can be useful for risk margin purposes, because Equation (A5) can be slightly modified in order to capture the pattern of the reserve risk capital requirement until total run-off. Additionally, analyses on a longer time frame are useful in the context of capital management and decision making. For instance, the main purpose of the forward-looking assessment of the undertaking's own risks is to ensure that the undertaking engages in the process of assessing all the risks inherent to its business and determines the corresponding capital needs. As specified in the Solvency II regulation, the results of the own risk and solvency assessment process need to be developed including medium-term capital management.

Appendix B. Data

Here, we provide the triangle and the premiums that have been used as a starting point for the analyses contained in this paper. The triangle covers 11 years and is representative of a motor third-party liability portfolio. To preserve the confidentiality of the data, claims have been generated from a log-normal distribution.
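For readers who wish to replicate the setup with their own synthetic data, a generation of this kind might be sketched as follows. The log-parameters mimic the order of magnitude of those in Table A2; the monotonicity adjustment is our own assumption (cumulative payments must be non-decreasing), not a detail stated in the paper.

```python
import numpy as np

def simulate_cumulative_triangle(mu, sigma, seed=0):
    """Simulate an upper triangle of cumulative payments, cell by cell,
    from log-normal distributions with log-means mu[i, j] and log-sd sigma.

    Future cells (i + j > t) are left as NaN."""
    rng = np.random.default_rng(seed)
    t = mu.shape[0] - 1
    tri = np.full((t + 1, t + 1), np.nan)
    for i in range(t + 1):
        row = rng.lognormal(mu[i, : t + 1 - i], sigma)
        # keep cumulative payments non-decreasing along development years
        tri[i, : t + 1 - i] = np.maximum.accumulate(row)
    return tri

# toy log-means increasing along development years, as in Table A2
mu = 11.0 + 0.08 * np.arange(11)[None, :] + 0.01 * np.arange(11)[:, None]
triangle = simulate_cumulative_triangle(mu, sigma=0.05)
```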
Table A1. Triangle of cumulative payments and earned premiums. Amounts in thousands of euros.

| Accident Years | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | Earned Premium |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | 50,145.22 | 108,869.70 | 118,157.58 | 123,434.78 | 128,075.39 | 128,620.06 | 133,727.32 | 137,249.55 | 139,652.12 | 140,224.86 | 140,668.36 | 187,498.55 |
| 1 | 66,529.63 | 120,628.35 | 135,607.54 | 138,325.18 | 141,986.84 | 143,254.48 | 148,625.10 | 151,619.72 | 153,318.71 | 154,132.17 | | 209,638.07 |
| 2 | 67,249.55 | 120,410.05 | 132,236.67 | 139,283.38 | 143,759.42 | 146,514.73 | 148,870.33 | 153,126.08 | 155,180.52 | | | 217,899.50 |
| 3 | 71,335.57 | 127,456.02 | 140,645.27 | 147,157.83 | 147,993.28 | 150,819.76 | 152,306.83 | 155,879.16 | | | | 218,391.25 |
| 4 | 76,200.45 | 146,032.65 | 160,291.64 | 168,785.03 | 171,834.53 | 172,940.93 | 176,259.91 | | | | | 234,357.93 |
| 5 | 75,407.41 | 155,886.72 | 174,502.23 | 181,683.61 | 189,903.49 | 192,026.62 | | | | | | 243,614.50 |
| 6 | 60,923.30 | 115,047.56 | 122,880.04 | 131,293.91 | 136,295.32 | | | | | | | 216,966.57 |
| 7 | 60,214.31 | 120,050.52 | 132,031.54 | 137,061.78 | | | | | | | | 197,976.27 |
| 8 | 51,171.99 | 100,917.33 | 113,701.60 | | | | | | | | | 192,253.76 |
| 9 | 61,167.95 | 112,561.89 | | | | | | | | | | 211,541.73 |
| 10 | 70,564.48 | | | | | | | | | | | 247,891.00 |
Table A2. Parameter μ of the log-normal distributions used to simulate values, by accident year and development year.

| Accident Years | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | 10.88 | 11.60 | 11.68 | 11.72 | 11.76 | 11.76 | 11.80 | 11.83 | 11.85 | 11.85 | 11.85 |
| 1 | 11.07 | 11.70 | 11.81 | 11.84 | 11.86 | 11.87 | 11.91 | 11.93 | 11.94 | 11.95 | |
| 2 | 11.13 | 11.70 | 11.79 | 11.84 | 11.88 | 11.90 | 11.91 | 11.94 | 11.95 | | |
| 3 | 11.15 | 11.76 | 11.85 | 11.90 | 11.90 | 11.92 | 11.93 | 11.96 | | | |
| 4 | 11.25 | 11.89 | 11.99 | 12.04 | 12.05 | 12.06 | 12.08 | | | | |
| 5 | 11.23 | 11.96 | 12.07 | 12.11 | 12.15 | 12.17 | | | | | |
| 6 | 11.01 | 11.65 | 11.72 | 11.79 | 11.82 | | | | | | |
| 7 | 10.98 | 11.70 | 11.79 | 11.83 | | | | | | | |
| 8 | 10.84 | 11.52 | 11.64 | | | | | | | | |
| 9 | 11.01 | 11.63 | | | | | | | | | |
| 10 | 11.12 | | | | | | | | | | |
Table A3. Expected value of the parameter μ of the a-posteriori distribution, by accident year and development year.

| Accident Years | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | 10.95 | 11.60 | 11.70 | 11.74 | 11.77 | 11.78 | 11.81 | 11.83 | 11.85 | 11.85 | 11.85 |
| 1 | 11.04 | 11.70 | 11.80 | 11.84 | 11.87 | 11.88 | 11.90 | 11.93 | 11.94 | 11.94 | |
| 2 | 11.05 | 11.71 | 11.81 | 11.85 | 11.88 | 11.89 | 11.91 | 11.94 | 11.95 | | |
| 3 | 11.07 | 11.73 | 11.83 | 11.87 | 11.90 | 11.91 | 11.93 | 11.96 | | | |
| 4 | 11.23 | 11.88 | 11.98 | 12.02 | 12.05 | 12.06 | 12.09 | | | | |
| 5 | 11.32 | 11.98 | 12.08 | 12.12 | 12.15 | 12.16 | | | | | |
| 6 | 10.99 | 11.64 | 11.74 | 11.79 | 11.81 | | | | | | |
| 7 | 11.03 | 11.69 | 11.79 | 11.83 | | | | | | | |
| 8 | 10.88 | 11.53 | 11.63 | | | | | | | | |
| 9 | 10.98 | 11.64 | | | | | | | | | |
| 10 | 11.17 | | | | | | | | | | |
Table A4. Comparison between the a-priori and posterior loss ratio values.

| Years | Prior Loss Ratio | Posterior Loss Ratio |
|---|---|---|
| 1 | 0.73756 | 0.7359 |
| 2 | 0.7178 | 0.7163 |
| 3 | 0.731 | 0.7282 |
| 4 | 0.806 | 0.8045 |
| 5 | 0.85 | 0.8487 |
| 6 | 0.685 | 0.6838 |
| 7 | 0.787 | 0.7863 |
| 8 | 0.705 | 0.7053 |
| 9 | 0.72 | 0.7209 |
| 10 | 0.74 | 0.7407 |

References

  1. Berger, James O., José M. Bernardo, and Dongchu Sun. 2009. The formal definition of reference priors. The Annals of Statistics 37: 905–38. [Google Scholar] [CrossRef]
  2. Bornhuetter, Ronald L., and Ronald E. Ferguson. 1972. The actuary and IBNR. In Proceedings of the Casualty Actuarial Society. Arlington: Casualty Actuarial Society, pp. 181–95. [Google Scholar]
  3. Dacorogna, Michel, Alessandro Ferriero, and David Krief. 2018. One-Year Change Methodologies for Fixed-Sum Insurance Contracts. Risks 6: 75. [Google Scholar] [CrossRef] [Green Version]
  4. Diers, Dorothea. 2009. Stochastic re-reserving in multi-year internal models—An approach based on simulations. Presented at Astin Colloquium, Helsinki, Finland, June 1–4. [Google Scholar]
  5. Efron, Bradley. 2011. The Bootstrap and Markov Chain Monte Carlo. The Journal of Biopharmaceutical Statistics 21: 1052–62. [Google Scholar] [CrossRef] [PubMed]
  6. England, Peter D., and Richard Verrall. 1999. Analytic and bootstrap estimates of prediction errors in claims reserving. Insurance Mathematics and Economics 25: 281–93. [Google Scholar] [CrossRef]
  7. European Commission. 2009. Directive 2009/138/EC of the European Parliament and of the Council of 25 November 2009 on the Taking-Up and Pursuit of the Business of Insurance and Reinsurance (Solvency II). Luxembourg: European Commission. [Google Scholar]
  8. Frees, Edward W., Richard A. Derrig, and Glenn Meyers, eds. 2016. Predictive Analytics Applications in Actuarial Science. Cambridge: Cambridge University Press. [Google Scholar]
  9. Friedland, Jacqueline. 2010. Estimating Unpaid Claims Using Basic Techniques. Casualty Actuarial Society. Available online: https://www.casact.org/press/index.cfm?fa=viewArticle&articleID=816 (accessed on 19 November 2020).
  10. Hastie, Trevor, Robert Tibshirani, and Jerome Friedman. 2009. The Elements of Statistical Learning, Data Mining Inference and Prediction, 2nd ed. Stanford: Stanford University. [Google Scholar]
  11. Hindley, David. 2017. Deterministic Reserving Methods. In Claims Reserving in General Insurance (International Series on Actuarial Science). Cambridge: Cambridge University Press, pp. 40–145. [Google Scholar]
  12. Hoffman, Matthew D., and Andrew Gelman. 2014. The No-U-Turn Sampler: Adaptively Setting Path Lengths in Hamiltonian Monte Carlo. Journal of Machine Learning Research 15: 1351–81. [Google Scholar]
  13. Klugman, Stuart A. 1992. Bayesian Statistics in Actuarial Science. Boston: Kluwer. [Google Scholar]
  14. Mack, Thomas. 1993. Distribution Free calculation of the standard error of Chain Ladder reserve estimates. ASTIN Bulletin The Journal of the IAA 23: 213–25. [Google Scholar] [CrossRef] [Green Version]
  15. Meyers, Glenn. 2017. A Cost of Capital Risk Margin Formula For Non-Life Insurance Liabilities. Variance. [Google Scholar]
  16. Meyers, Glenn. 2015. Stochastic Loss Reserving Using Bayesian MCMC Models. CAS monograph series; Arlington: Casualty Actuarial Society. [Google Scholar]
  17. Ntzoufras, Ioannis, and Petros Dellaportas. 2002. Bayesian Modelling of Outstanding Liabilities Incorporating Claim Count Uncertainty. North American Actuarial Journal 6: 113–28. [Google Scholar] [CrossRef] [Green Version]
  18. Peters, Gareth W., Rodrigo S. Targino, and Mario V. Wüthrich. 2017. Full bayesian analysis of claims reserving uncertainty. Insurance Mathematics and Economics 73: 41–53. [Google Scholar] [CrossRef]
  19. Wüthrich, Mario V., and Michael Merz. 2007. Stochastic Claims Reserving Methods in Insurance. Hoboken: Wiley. [Google Scholar]
  20. Wüthrich, Mario V., Michael Merz, Hans Bühlmann, Massimo De Felice, Alois Gisler, and Franco Moriconi. 2008. Modelling the Claims Development Result for Solvency Purpose. CAS E-Forum. [Google Scholar]
  21. Wüthrich, Mario V. 2007. Using a Bayesian Approach for Claims Reserving. Variance 1: 292–301. [Google Scholar]
1. In this framework, the dynamics of the one-year change in P&C insurance reserve estimation has also been studied in Dacorogna et al. (2018), by analyzing the process that leads to the ultimate risk in the case of "fixed-sum" insurance contracts.
2. Statistical literature has drawn many parallels between Bayesian MCMC and Bootstrap procedures. In statistical inference, these algorithms represent two different ways, offered respectively by the Bayesian and frequentist paradigms, to assess parameter uncertainty (see, e.g., Efron (2011); Hastie et al. (2009)). The Bootstrap algorithm, and especially its non-parametric version, generates many samples from the data, from which it is possible to obtain the distribution of a statistic or a parameter, mimicking the Bayesian effect of a posterior distribution. This distribution is produced in a quick and simple way, requiring no probabilistic assumptions, prior elicitation nor MCMC procedures, and can effectively be considered an "approximation of a non-parametric, non-informative posterior distribution" (Hastie et al. 2009).
3. In general, there could be more than two parameters, left up to the needs of the modeler.
4. This structure is not exclusive to Bayesian models, as it is also applied in frequentist models, for instance the Over-Dispersed Poisson Bootstrap model of England and Verrall (1999).
5. For instance, different distributions may be chosen for different accident years i.
6. According to Article 76 of the Solvency II Directive (European Commission 2009), the claims reserve must be equal to the current amount that insurance and reinsurance undertakings would have to pay if they were to transfer their insurance and reinsurance obligations immediately to another insurance or reinsurance undertaking. This definition leads to a claims reserve evaluated as the sum of the best estimate and the risk margin. As prescribed by Solvency II, the risk margin is not considered in (8) to avoid problems of circularity.
7. Cumulative payments are reported in Table A1.
8. As a general rule, we advise checking the order of magnitude of the parameters implied by the observations in the triangle, in order to feed the model with consistent inputs and speed up convergence.
9. With adapt_delta = 0.98 and max_treedepth = 13.
10. Computed by the package rstan.
11. For each parameter set we calculated the expected value of the lower triangle and discounted it according to the December 2018 EIOPA term structure. Subsequently, we obtained a CCL BE by calculating the mean of these discounted expected values.
Figure 1. Caterpillar plots for the parameters of the Correlated Chain Ladder (CCL). The highest posterior density (HPD) regions give an immediate overview of the learning update for each parameter. The thick line represents the 90% HPD, while the thinner one extends it to a 95% HPD (indexes start from 1).
Figure 2. Posterior distribution of the correlation parameter ρ. We see that the learning update implied a distribution that is no longer uniform and is concentrated slightly above 0.
Figure 3. Predictive distribution (10,000 realizations) of the claims reserve (CCL methodology).
Figure 4. Distribution of the claims reserve over a one-year time horizon, obtained from 10,000 simulations. The lines represent the 99.5% quantile and the Best Estimate, respectively. By subtracting the Best Estimate from the quantile, we obtain the Solvency II capital requirement, which represents the hypothetical SCR for reserve risk returned by this internal model. As reported in Table 6, this value is equal to 52,766.51. Amounts are in thousands of euros.
Table 1. We provide an immediate visualization of what was described in step 2. T stands for the trapezoid obtained by adding to the original triangle the new diagonal containing the simulated payments of the first calendar year after the evaluation date. The superscript (s, k) locates the simulated diagonals in terms of parameter set k and simulation number s.

| Simulation Array | $\Theta^{(1)}$ | ⋯ | $\Theta^{(k)}$ | ⋯ | $\Theta^{(K)}$ |
|---|---|---|---|---|---|
| 1 | $T^{(1,1)}$ | ⋯ | $T^{(1,k)}$ | ⋯ | $T^{(1,K)}$ |
| ⋮ | ⋮ | | ⋮ | | ⋮ |
| s | $T^{(s,1)}$ | ⋯ | $T^{(s,k)}$ | ⋯ | $T^{(s,K)}$ |
| ⋮ | ⋮ | | ⋮ | | ⋮ |
| S | $T^{(S,1)}$ | ⋯ | $T^{(S,k)}$ | ⋯ | $T^{(S,K)}$ |
Table 2. We report the priors elicited for each accident year. The older the accident year, the more settled the final loss ratio and hence the smaller the variability parameter. With respect to years 1 and 2, the variability is negligible, as the final development years will have a non-significant effect on the final loss ratio. On the other hand, for the three youngest accident years we opted for a quite wide distribution.

| Years | Prior Loss Ratio | Log-Mean | Log-Std |
|---|---|---|---|
| 1 | 0.73756 | −0.30645 | 0.000005 |
| 2 | 0.7178 | −0.3336 | 0.000005 |
| 3 | 0.731 | −0.31531 | 0.001 |
| 4 | 0.806 | −0.2177 | 0.008 |
| 5 | 0.85 | −0.16455 | 0.025 |
| 6 | 0.685 | −0.38037 | 0.035 |
| 7 | 0.787 | −0.24156 | 0.05 |
| 8 | 0.705 | −0.35159 | 0.08 |
| 9 | 0.72 | −0.33054 | 0.08 |
| 10 | 0.74 | −0.30314 | 0.085 |
Table 3. Effective sample size for each parameter. The thinning factor of 10 and the tight parameterization of the HMC procedure ensured a strong sample.

| Parameter | ESS | Parameter | ESS | Parameter | ESS | Parameter | ESS |
|---|---|---|---|---|---|---|---|
| $\alpha_0$–$\beta_0$ | 9740 | | | $\sigma_0^2$ | 9687 | $elr_0$ | 10,152 |
| $\alpha_1$–$\beta_1$ | 10,214 | | | $\sigma_1^2$ | 9719 | $elr_1$ | 9586 |
| $\alpha_2$–$\beta_2$ | 10,130 | | | $\sigma_2^2$ | 9564 | $elr_2$ | 10,361 |
| $\alpha_3$–$\beta_3$ | 10,010 | | | $\sigma_3^2$ | 9137 | $elr_3$ | 9244 |
| $\alpha_4$ | 10,024 | $\beta_4$ | 10,290 | $\sigma_4^2$ | 9413 | $elr_4$ | 10,089 |
| $\alpha_5$ | 9902 | $\beta_5$ | 9940 | $\sigma_5^2$ | 9343 | $elr_5$ | 10,048 |
| $\alpha_6$ | 10,362 | $\beta_6$ | 9595 | $\sigma_6^2$ | 10,044 | $elr_6$ | 10,387 |
| $\alpha_7$ | 9736 | $\beta_7$ | 9875 | $\sigma_7^2$ | 9626 | $elr_7$ | 9767 |
| $\alpha_8$ | 9874 | $\beta_8$ | 9743 | $\sigma_8^2$ | 9773 | $elr_8$ | 9595 |
| $\alpha_9$ | 9936 | $\beta_9$ | 9605 | $\sigma_9^2$ | 9863 | $elr_9$ | 9935 |
| $\alpha_{10}$ | 10,166 | $\rho$ | 9747 | $\sigma_{10}^2$ | 9872 | $elr_{10}$ | 10,064 |
Table 4. We provide a comparison between the CCL and the Bootstrap-ODP method. For both methods, we report the main characteristics (mean, coefficient of variation (CV), and 99% quantile). All the amounts are expressed in thousands of euros and differentiated by accident year.

| Years | Reserve CCL | Reserve Bootstrap-ODP | CV CCL | CV Bootstrap-ODP | 99% Quantile CCL | 99% Quantile Bootstrap-ODP |
|---|---|---|---|---|---|---|
| 1 | 144.21 | 502.86 | 4.27 | 1.61 | 1914.01 | 3148.01 |
| 2 | 910.77 | 1259.90 | 0.67 | 0.91 | 2681.07 | 4706.00 |
| 3 | 3138.16 | 3448.71 | 0.21 | 0.50 | 4901.30 | 8262.38 |
| 4 | 9010.50 | 8305.66 | 0.18 | 0.32 | 13,046.03 | 15,440.29 |
| 5 | 11,923.17 | 13,865.66 | 0.18 | 0.25 | 16,994.27 | 22,596.10 |
| 6 | 10,111.27 | 11,535.85 | 0.18 | 0.26 | 14,627.64 | 19,249.20 |
| 7 | 15,857.02 | 15,843.07 | 0.15 | 0.22 | 21,780.24 | 24,550.01 |
| 8 | 17,128.40 | 18,939.31 | 0.17 | 0.20 | 24,191.44 | 28,935.08 |
| 9 | 32,759.73 | 32,282.18 | 0.15 | 0.16 | 45,654.94 | 45,031.20 |
| 10 | 104,906.96 | 103,560.53 | 0.17 | 0.12 | 153,970.68 | 133,452.21 |
Table 5. Mean, standard deviation (SD), and CV of the claims reserve obtained by the CCL, Bootstrap-ODP, and Mack formula, respectively. For the CCL and Bootstrap-ODP methods, skewness, kurtosis, and quantiles of the distribution are also reported. All the amounts are expressed in thousands of euros.

| | CCL | Bootstrap-ODP | Mack |
|---|---|---|---|
| Mean | 205,890.19 | 209,543.74 | 209,255.94 |
| SD | 19,912.03 | 18,872.71 | 16,335.99 |
| CV | 0.097 | 0.090 | 0.078 |
| Skewness | 0.46 | 0.06 | - |
| Kurtosis | 4.43 | 3.05 | - |
| 1st quartile | 192,706.64 | 196,582.00 | - |
| Median | 204,958.17 | 209,066.00 | - |
| 3rd quartile | 217,790.79 | 222,285.75 | - |
| 99% quantile | 257,935.78 | 254,001.40 | - |
| 99.5% quantile | 268,426.73 | 259,138.41 | - |
| Max | 357,274.68 | 282,362.20 | - |
Table 6. We provide all the main results obtained by applying three alternative one-year models to our triangle. The figures regarding the Merz–Wüthrich formula have been produced by assuming a log-normal distribution for the claims reserve, fitted on the BE and the $\sigma_{usp}^{MW}$ that the regulation would have allowed to be used as an undertaking-specific parameter. Amounts in thousands of euros.

| | One-Year CCL | Re-Reserving | Merz–Wüthrich |
|---|---|---|---|
| Best Estimate | 206,902.20 | 210,651.33 | 210,651.33 |
| Mean of one-year obligations | 205,756.09 | 209,184.98 | 209,255.94 |
| SD | 17,325.32 | 14,748.63 | 13,421.28 |
| CV | 0.084 | 0.071 | 0.071 |
| One Year on Total | 0.87 | 0.78 | 0.90 |
| Skewness | 0.36 | 0.18 | 0.21 |
| Kurtosis | 3.86 | 2.98 | 3.08 |
| Quantile 75% | 216,180.39 | 219,181.26 | 220,356.51 |
| Quantile 95% | 251,118.22 | 245,427.65 | 247,556.06 |
| Quantile 99.5% | 259,668.71 | 248,781.01 | 251,946.20 |
| Max | 313,101.79 | 267,166.99 | - |
| Expected Shortfall 99% | 262,529.18 | 250,566.40 | 253,601.99 |
| Reserve Risk SCR | 52,766.51 | 38,129.68 | 41,294.86 |
| SCR over BE | 25.50% | 18.10% | 19.60% |