Article

Model Uncertainty in Operational Risk Modeling Due to Data Truncation: A Single Risk Case

Daoping Yu 1 and Vytaras Brazauskas 2,*
1 School of Computer Science and Mathematics, University of Central Missouri, Warrensburg, MO 64093, USA
2 Department of Mathematical Sciences, University of Wisconsin-Milwaukee, P.O. Box 413, Milwaukee, WI 53201, USA
* Author to whom correspondence should be addressed.
Risks 2017, 5(3), 49; https://doi.org/10.3390/risks5030049
Submission received: 27 April 2017 / Revised: 15 August 2017 / Accepted: 1 September 2017 / Published: 13 September 2017

Abstract
Over the last decade, researchers, practitioners, and regulators have had intense debates about how to treat the data collection threshold in operational risk modeling. Several approaches have been employed to fit the loss severity distribution: the empirical approach, the “naive” approach, the shifted approach, and the truncated approach. Since each approach is based on a different set of assumptions, different probability models emerge. Thus, model uncertainty arises. The main objective of this paper is to understand the impact of model uncertainty on the value-at-risk (VaR) estimators. To accomplish that, we take the bank’s perspective and study a single risk. Under this simplified scenario, we can solve the problem analytically (when the underlying distribution is exponential) and show that it uncovers similar patterns among VaR estimates to those based on the simulation approach (when data follow a Lomax distribution). We demonstrate that for a fixed probability distribution, the choice of the truncated approach yields the lowest VaR estimates, which may be viewed as beneficial to the bank, whilst the “naive” and shifted approaches lead to higher estimates of VaR. The advantages and disadvantages of each approach and the probability distributions under study are further investigated using a real data set for legal losses in a business unit (Cruz 2002).

1. Introduction

Basel II/III and Solvency II are the leading international regulatory frameworks for banking and insurance industries, and mandate that financial institutions build separate capital reserves for operational risk. Within the advanced measurement approach (AMA) framework, the loss distribution approach (LDA) is the most sophisticated tool for estimating the operational risk capital. According to LDA, the risk-based capital is an extreme quantile of the annual aggregate loss distribution (e.g., the 99.9th percentile), which is called value-at-risk or VaR. Some recent discussions between the industry and the regulatory community in the United States reveal that the LDA implementation still has a number of “thorny” issues (AMA Group 2013). One such issue is the treatment of data collection threshold. Here is what is stated on page 3 of the same document: “Although the industry generally accepts the existence of operational losses below the data collection threshold, the appropriate treatment of such losses in the context of capital estimation is still widely debated.”
Various assumptions about the data collection threshold have been considered in the existing literature: known threshold (Baud et al. 2002; Shevchenko and Temnov 2009), threshold as unknown parameter (Baud et al. 2002), stochastic threshold whose distribution has to be modeled (Baud et al. 2002; de Fontnouvelle et al. 2006), and time varying threshold that may scale according to inflation and business factors (Shevchenko and Temnov 2009). In this paper, we will assume that the threshold is known. Given (external) operational risk databases (which often collect losses exceeding, for example, $1 million), such an assumption is appropriate.
Further, the annual aggregate loss variable is a combination of two variables—loss frequency and loss severity—and there are different ways to estimate risk-based capital. One way is to estimate the untruncated severity and truncation-adjusted frequency and then compute VaR. This approach follows directly from the results described by Brazauskas, Jones, and Zitikis (Brazauskas et al. 2015). Another way is to estimate the truncated severity and unadjusted frequency to compute VaR. For a comprehensive review of analytic techniques for truncated data in the context of operational risk modeling, see Cruz, Peters, Shevchenko (Cruz et al. 2015, sct. 7.9). Furthermore, as is known in practice, the severity distribution is a key driver of the capital estimate (Opdyke 2014). This is the part of the aggregate model where initial assumptions about the data collection threshold are most influential. A number of authors have examined some aspects of this topic in the past (e.g., Cavallo et al. 2012; Chernobai et al. 2007; Ergashev et al. 2016; Luo et al. 2007; Moscadelli et al. 2005). The modeling approaches they (collectively) considered include: the empirical approach, the “naive” approach, the shifted approach, and the truncated approach. Since each approach is based on a different set of assumptions, different probability models emerge. Thus, model uncertainty arises.
The main objective of this paper is to understand the impact of model uncertainty on risk measurements, and (hopefully) help settle the debate about the treatment of data collection threshold in the context of capital estimation. Solving such a problem under a general setup (i.e., by considering many interdependent risks and multiple stakeholders) is only possible through extensive simulations, but that would not produce much insight. Therefore, we simplify the problem by taking the bank’s perspective and by studying a single risk. Under this simplified scenario, we can solve the problem analytically (when the underlying distribution is exponential), and show that it uncovers similar patterns among VaR estimates to those based on the simulation approach (when data follow a Lomax distribution). We demonstrate that for a fixed probability distribution, the choice of the truncated approach yields the lowest VaR estimates, which may be viewed as beneficial to the bank, whilst the “naive” and shifted approaches lead to higher estimates of VaR. As for the choice of severity distributions, besides the Lomax distribution (which is heavy tailed and hence appropriate in operational risk modeling), we intentionally select the light-tailed exponential distribution to show what happens to VaR estimates when incorrect assumptions are made. Moreover, our step-by-step analysis not only shows “what happens” to VaR estimates, but it helps understand the questions of “how” and “why” it happens. Additionally, perhaps surprisingly, our numerical illustrations reveal why the shifted approach is still popular. That is because it is flexible enough to pass standard model validation tests and thus cannot be discarded from practical use based on such tools alone. In summary, this paper contributes to the existing literature by performing an extensive investigation of the impact that model uncertainty has on the VaR estimators, justifies the soundness of the regulatory recommendation (i.e., use the truncated approach), and paves the way for a number of research problems in this important area.
It is worth noting here that the model uncertainty considered in this paper is an epistemic one, not a random uncertainty. It can be reduced—but not completely eliminated—by employing sound model validation tools, and in some cases (e.g., when the shifted approach is used) may require out-of-model knowledge. In a more general context, model uncertainty is an important topic within the model risk governance framework as regulated by the OCC and the Federal Reserve Bank in the U.S. and the Basel Committee on Banking Supervision for the G20 countries (e.g., Basel Coordination Committee 2014; Office of the Comptroller of the Currency 2011).
The rest of the paper is structured as follows. In Section 2, we describe how model uncertainty emerges and study its effects on VaR estimates. This is done by employing theoretical results (presented in Appendix A) and via Monte Carlo simulations. Next, in Section 3, these explorations are further illustrated using a real data set for legal losses in a business unit. Finally, concluding remarks are offered in Section 4. Additionally, in Appendix A we provide some technical tools that are essential for analytic treatment of the problem. In particular, key probabilistic features of the generalized Pareto distribution are presented, and several asymptotic theorems of mathematical statistics are specified.

2. Model Uncertainty

We start this section by introducing the problem and describing how model uncertainty arises. Then, in Section 2.2, we review several typical models used for estimating VaR. Finally, using the theoretical results of Appendix A and Monte Carlo simulations, we finish with two parametric examples, where we evaluate the probability of overestimating true VaR for exponential and Lomax distributions.

2.1. Motivation

In order to fully understand the problem, in this paper we will walk the reader through the entire modeling process and demonstrate how our assumptions affect the end product, which is the estimate of severity VaR. Since the problem involves collected data, initial assumptions, and statistical inference (in this case, point estimation and assessment of estimates’ variability), it will be tackled with statistical tools: asymptotic theory, Monte Carlo simulations, and real-data case studies. Let us briefly discuss data, assumptions, and inference. As noted in Section 1, it is generally agreed that operational losses exist above and below the data collection threshold. This implies that choosing a modeling approach is equivalent to deciding how much probability mass lies below the threshold.
In Figure 1, we provide graphs of truncated, naive, and shifted probability density functions of two distributions (studied formally in Section 2.3): Exponential, which is a light-tailed model; and Lomax, with the tail parameter α = 3.5, which is a moderately-tailed model (it has three finite moments). We clearly see that those models are quite different below the threshold t = 195,000, but in practice that would be unobserved. On the other hand, in the observable range (i.e., above t = 195,000), the plotted density functions are similar (note that the vertical axes are in very small units, $10^{-6}$) and converge to each other as losses get larger (note how little differentiation there is among the curves when losses exceed 1,000,000). Moreover, it is even difficult to spot a difference between the corresponding exponential and Lomax models, though the two distributions possess distinct theoretical properties (e.g., for one all moments are finite, whereas for the other only three are). Additionally, since probability mass below the threshold is one of the “known unknowns,” it will have to be estimated from the observed data (above t). As will be shown in the case study of Section 3, this task may look straightforward, but its outcomes vary and are heavily influenced by the initial assumptions.
To formalize this discussion, suppose that $Y_1, \ldots, Y_N$ represent (positive and i.i.d.) loss severities resulting from operational risk, and let us denote their probability density function (pdf), cumulative distribution function (cdf), and quantile function (qf) as f, F, and $F^{-1}$, respectively. Then, the problem of estimating VaR-based capital is equivalent to finding an estimate of the qf at some probability level (e.g., $F^{-1}(\beta)$). The difficulty here is that we observe only those $Y_i$’s that exceed some known data collection threshold $t \geq 0$. That is, the actually observed variables are $X_i$’s with
$$X_1 \stackrel{d}{=} Y_{i_1} \,|\, Y_{i_1} > t, \quad \ldots, \quad X_n \stackrel{d}{=} Y_{i_n} \,|\, Y_{i_n} > t, \qquad (1)$$
where $\stackrel{d}{=}$ denotes “equal in probability” and $n = \sum_{j=1}^{N} \mathbf{1}\{Y_j > t\}$. Their cdf $F_*$, pdf $f_*$, and qf $F_*^{-1}$ are related to F, f, $F^{-1}$, and given by
$$F_*(x) = \frac{F(x) - F(t)}{1 - F(t)}, \qquad f_*(x) = \frac{f(x)}{1 - F(t)}, \qquad F_*^{-1}(u) = F^{-1}\big(u + (1-u)F(t)\big) \qquad (2)$$
for $x \geq t$ and $0 < u < 1$, and for $x < t$, $f_*(x) = F_*(x) = 0$.
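To make the relations in Equation (2) concrete, the following minimal sketch implements the truncated cdf, pdf, and qf for an assumed exponential parent distribution; the scale value and the use of scipy are illustrative assumptions, not part of the model description above.

```python
import numpy as np
from scipy import stats

# Minimal sketch of Equations (1)-(2): the observed variable X is the ground-up
# severity Y conditioned on exceeding the threshold t. The exponential parent
# and its scale are illustrative assumptions (chosen so that F(t) is about 0.5).
t = 195_000
parent = stats.expon(scale=281_326)     # ground-up model with cdf F, pdf f, qf F^{-1}
F_t = parent.cdf(t)

def cdf_star(x):
    """F_*(x) = (F(x) - F(t)) / (1 - F(t)) for x >= t, and 0 for x < t."""
    x = np.asarray(x, dtype=float)
    return np.where(x >= t, (parent.cdf(x) - F_t) / (1 - F_t), 0.0)

def pdf_star(x):
    """f_*(x) = f(x) / (1 - F(t)) for x >= t, and 0 for x < t."""
    x = np.asarray(x, dtype=float)
    return np.where(x >= t, parent.pdf(x) / (1 - F_t), 0.0)

def qf_star(u):
    """F_*^{-1}(u) = F^{-1}(u + (1 - u) F(t)), 0 < u < 1."""
    u = np.asarray(u, dtype=float)
    return parent.ppf(u + (1 - u) * F_t)

# Any quantile of X is at least the corresponding quantile of Y (the true VaR):
print(qf_star(0.995), parent.ppf(0.995))
```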
Further, let us investigate the behavior of $F_*^{-1}(u)$ from a purely mathematical point of view. Since the qf of continuous random variables (which is the case for loss severities) is a strictly increasing function and $(1-u)F(t) \geq 0$, it follows that
$$F_*^{-1}(u) = F^{-1}\big(u + (1-u)F(t)\big) \geq F^{-1}(u), \qquad 0 < u < 1,$$
with the inequality being strict unless F ( t ) = 0 . This implies that any quantile of the observable variable X is never below the corresponding quantile of the unobservable variable Y, which is true VaR. This fact is certainly not new (for example, see the extensive analysis by Opdyke (2014), about the effect of Jensen’s inequality in VaR estimation). However, if we now change our perspective from mathematical to statistical and take into account the method of how VaR is estimated, we could augment the above discussion with new insights and improve our understanding.
A review of existing methods shows that, besides estimation of VaR using Equations (1) and (2) under the truncated distribution framework, there are other parametric methods that employ different strategies, such as the naive and shifted approaches (described in Section 2.2.2). In particular, those two approaches use the data X 1 , , X n and either ignore t or recognize it in some other way than Equation (2). Thus, model uncertainty emerges.

2.2. Typical Models

2.2.1. Empirical Model

As mentioned earlier, the empirical model is restricted to the range of observed data. So, it uses data from Equation (1), but since the empirical estimator $\widehat{F}(t) = 0$, Formulas (2) simplify to $\widehat{F}_*(x) = \widehat{F}(x)$, $\widehat{f}_*(x) = \widehat{f}(x)$, for $x \geq t$, and $\widehat{F}_*^{-1}(u) = \widehat{F}^{-1}(u)$. Thus, the model cannot take full advantage of Equation (2). In this case, the VaR($\beta$) estimator is simply $\widehat{F}^{-1}(\beta) = X_{(\lceil n\beta \rceil)}$, and as follows from Theorem A1,
$$X_{(\lceil n\beta \rceil)} \ \text{is AN}\left( F_*^{-1}(\beta), \ \frac{1}{n}\,\frac{\beta(1-\beta)}{f_*^2\big(F_*^{-1}(\beta)\big)} \right).$$
We now can evaluate the probability of overestimating the true VaR by a certain percentage; i.e., we want to study the function $H(c) := \mathbf{P}\big[ X_{(\lceil n\beta \rceil)} > c\,F^{-1}(\beta) \big]$ for $c \geq 1$. Using Z to denote the standard normal random variable and $\Phi$ for its cdf, and taking into account Equation (2), we proceed as follows:
$$H(c) = \mathbf{P}\big[ X_{(\lceil n\beta \rceil)} > c\,F^{-1}(\beta) \big] \approx \mathbf{P}\left[ Z > \big( c\,F^{-1}(\beta) - F_*^{-1}(\beta) \big) \left( \frac{1}{n}\,\frac{\beta(1-\beta)}{f_*^2\big(F_*^{-1}(\beta)\big)} \right)^{-1/2} \right] = 1 - \Phi\left( \sqrt{\frac{n}{\beta(1-\beta)}}\, \Big[ c\,F^{-1}(\beta) - F^{-1}\big(\beta + (1-\beta)F(t)\big) \Big] \times \frac{f\big(F^{-1}(\beta + (1-\beta)F(t))\big)}{1 - F(t)} \right).$$
From this formula, we clearly see that $0.50 \leq H(1) < 1$, with the lower bound being achieved when F(t) = 0. Additionally, at the other extreme, when $c \to \infty$, we observe $H(c) \to 0$. Additional numerical illustrations are provided in Table 1.
Several conclusions emerge from the table. First, the case F(t) = 0 is a benchmark case that illustrates the behavior of the empirical estimator when data is completely observed (and in that case $X_{(\lceil n\beta \rceil)}$ would be a consistent method for estimating VaR($\beta$)). We see that H(1) = 0.5, and then it quickly decreases to 0 as c increases. The decrease is quickest for the light-tailed distribution, exponential($\sigma = 1$), and slowest for the heavy-tailed Lomax($\alpha = 1$, $\theta_2 = 1$), which has no finite moments. Second, as less data is observed (i.e., as F(t) increases to 0.5 and 0.9), the probability of overestimating true VaR increases for all types of distribution. For example, while the probability of overestimating VaR(0.995) by 20% (c = 1.2) for the light-tailed distribution is only 0.226 for F(t) = 0, it increases to 0.398 and 0.811 for F(t) = 0.5 and 0.9, respectively. If severity follows the heavy-tailed distribution, then H(1.2) is 0.444, 0.612, 0.734 for F(t) = 0, 0.5, 0.9, respectively. Finally, in practice, typical scenarios would be near F(t) = 0.9 with moderate- or heavy-tailed severity distributions, which corresponds to quite unfavorable patterns in the table. Indeed, function H(c) declines very slowly, and the probability of overestimating VaR(0.995) by 100% seems like a norm (0.577 and 0.715).
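The entries of Table 1 can be checked directly from the formula for H(c). Below is a minimal sketch for the light-tailed (exponential) case; the parameter values are taken from the table footnote, and the implementation details are our own assumptions.

```python
import numpy as np
from scipy import stats

# Minimal sketch of H(c) for the empirical VaR estimator under an exponential
# (light-tailed) severity; sigma and t follow the footnote of Table 1, n = 100.
def H_empirical(c, beta, sigma, t, n=100):
    F = stats.expon(scale=sigma)                    # ground-up model
    q_true = F.ppf(beta)                            # F^{-1}(beta), the true VaR
    q_star = F.ppf(beta + (1 - beta) * F.cdf(t))    # F_*^{-1}(beta)
    dens_star = F.pdf(q_star) / (1 - F.cdf(t))      # f_*(F_*^{-1}(beta))
    z = np.sqrt(n / (beta * (1 - beta))) * (c * q_true - q_star) * dens_star
    return 1 - stats.norm.cdf(z)

# Compare with the "Light" entries 0.226 (F(t) = 0) and 0.398 (F(t) = 0.5)
# for c = 1.2 and beta = 0.995 in Table 1:
print(round(H_empirical(1.2, 0.995, sigma=1, t=0), 3))
print(round(H_empirical(1.2, 0.995, sigma=281_326, t=195_000), 3))
```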

2.2.2. Parametric Models

We discuss three parametric approaches: truncated, naive, and shifted.
Truncated Approach: The truncated approach uses the observed data $X_1, \ldots, X_n$ and fully recognizes its distributional properties. That is, it takes into account Equation (2) and derives maximum likelihood estimator (MLE) values by maximizing the following log-likelihood function:
$$\log L_T\big(\theta_1, \ldots, \theta_k \,|\, X_1, \ldots, X_n\big) = \sum_{i=1}^{n} \log f_*(X_i) = \sum_{i=1}^{n} \log \frac{f(X_i)}{1 - F(t)}, \qquad (3)$$
where $\theta_1, \ldots, \theta_k$ are the parameters of pdf f. Once parameter MLEs are available, the VaR($\beta$) estimate is found by plugging those MLE values into $F^{-1}(\beta)$. □
Naive Approach: The naive approach uses the observed data $X_1, \ldots, X_n$, but ignores the presence of threshold t. That is, it bypasses Equation (2) and derives MLE values by maximizing the following log-likelihood function:
$$\log L_N\big(\theta_1, \ldots, \theta_k \,|\, X_1, \ldots, X_n\big) = \sum_{i=1}^{n} \log f(X_i). \qquad (4)$$
Notice that since $f(X_i) \leq f(X_i)/[1 - F(t)] = f_*(X_i)$, with the inequality being strict for F(t) > 0, the log-likelihood of the naive approach will always be less than that of the truncated approach. This in turn implies that parameter MLEs of pdf f derived using the naive approach will always be suboptimal, unless F(t) = 0. Finally, the VaR($\beta$) estimate is computed by inserting parameter MLEs (the ones found using the naive approach) into $F^{-1}(\beta)$. □
Shifted Approach: The shifted approach uses the observed data $X_1, \ldots, X_n$ and recognizes threshold t by first shifting the observations by t. Then, it derives parameter MLEs by maximizing the following log-likelihood function:
$$\log L_S\big(\theta_1, \ldots, \theta_k \,|\, X_1, \ldots, X_n\big) = \sum_{i=1}^{n} \log f(X_i - t). \qquad (5)$$
By comparing Equations (4) and (5), we can easily see that the naive approach is a special case of the shifted approach (with t = 0). Moreover, although this may only be of interest to theoreticians, one could introduce a class of shifted models by considering $f(X_i - s)$, with $0 \leq s \leq t$, and create infinitely many versions of the shifted model. Finally, VaR($\beta$) is estimated by applying parameter MLEs (the ones found using the shifted approach) to $F^{-1}(\beta) + t$. □
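For readers who prefer code to formulas, a compact sketch of the three log-likelihoods (3)–(5) is given below. Here `pdf` and `cdf` are placeholders for whichever parametric severity family is being fitted; they are assumptions of the illustration rather than a prescribed interface.

```python
import numpy as np

# Minimal sketch of log-likelihoods (3)-(5) for a generic parametric family;
# `pdf` and `cdf` are placeholders for the chosen severity model (e.g., a
# distribution from scipy.stats), and `params` is the tuple of its parameters.
def loglik_truncated(params, x, t, pdf, cdf):
    # Equation (3): each density is renormalized by 1 - F(t)
    return np.sum(np.log(pdf(x, *params))) - x.size * np.log(1.0 - cdf(t, *params))

def loglik_naive(params, x, pdf):
    # Equation (4): the threshold t is simply ignored
    return np.sum(np.log(pdf(x, *params)))

def loglik_shifted(params, x, t, pdf):
    # Equation (5): observations are shifted down by t before fitting
    return np.sum(np.log(pdf(x - t, *params)))
```

Maximizing any of these (e.g., by applying a numerical optimizer to the negative log-likelihood) and plugging the resulting MLEs into $F^{-1}(\beta)$ (plus t for the shifted approach) gives the corresponding VaR($\beta$) estimate.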

2.3. Parametric VaR Estimation

2.3.1. Example 1: Exponential Distribution

Suppose $Y_1, \ldots, Y_N$ are i.i.d. and follow an exponential distribution, with pdf, cdf, and qf given by Equations (A1), (A2), and (A4), respectively, with $\gamma = 0$ and $\mu = 0$. However, we observe only variable X, whose relation to Y is governed by Equations (1) and (2). Now, by plugging the exponential pdf and/or cdf into the log-likelihoods (3)–(5), we obtain
$$\log L_T\big(\sigma \,|\, X_1, \ldots, X_n\big) = \sum_{i=1}^{n} \log \frac{f(X_i)}{1 - F(t)} = \sum_{i=1}^{n} \log \frac{\sigma^{-1} e^{-X_i/\sigma}}{e^{-t/\sigma}} = -\left[ n \log \sigma + \sum_{i=1}^{n} \frac{X_i - t}{\sigma} \right], \qquad (6)$$
$$\log L_N\big(\sigma \,|\, X_1, \ldots, X_n\big) = \sum_{i=1}^{n} \log f(X_i) = \sum_{i=1}^{n} \log\big( \sigma^{-1} e^{-X_i/\sigma} \big) = -\left[ n \log \sigma + \sum_{i=1}^{n} \frac{X_i}{\sigma} \right], \qquad (7)$$
$$\log L_S\big(\sigma \,|\, X_1, \ldots, X_n\big) = \sum_{i=1}^{n} \log f(X_i - t) = \sum_{i=1}^{n} \log\big( \sigma^{-1} e^{-(X_i - t)/\sigma} \big) = -\left[ n \log \sigma + \sum_{i=1}^{n} \frac{X_i - t}{\sigma} \right], \qquad (8)$$
where the subscripts T, N, S (for L) denote “truncated”, “naive”, and “shifted”, respectively. Then, by maximizing the log-likelihoods (6)–(8) with respect to $\sigma$, we get the following MLE formulas for parameter $\sigma$ under the truncated, naive, and shifted approaches:
$$\widehat{\sigma}_T = \bar{X} - t, \qquad \widehat{\sigma}_N = \bar{X}, \qquad \widehat{\sigma}_S = \bar{X} - t,$$
where $\bar{X} = n^{-1} \sum_{i=1}^{n} X_i$.
Next, by inserting $\widehat{\sigma}_T$, $\widehat{\sigma}_N$, and $\widehat{\sigma}_S$ into the corresponding qf’s as described in Section 2.2.2, we get the following VaR($\beta$) estimators:
$$\widehat{\mathrm{VaR}}_T(\beta) = -\widehat{\sigma}_T \log(1-\beta), \qquad \widehat{\mathrm{VaR}}_N(\beta) = -\widehat{\sigma}_N \log(1-\beta), \qquad \widehat{\mathrm{VaR}}_S(\beta) = -\widehat{\sigma}_S \log(1-\beta) + t.$$
Further, a direct application of Theorem A2 for $\widehat{\sigma}_T$ (with obvious adjustment for $\widehat{\sigma}_N$) yields that
$$\widehat{\sigma}_T \ \text{is AN}\left( \sigma, \frac{\sigma^2}{n} \right), \qquad \widehat{\sigma}_N \ \text{is AN}\left( \sigma + t, \frac{\sigma^2}{n} \right), \qquad \widehat{\sigma}_S \ \text{is AN}\left( \sigma, \frac{\sigma^2}{n} \right).$$
Furthermore, having established AN for parameter MLEs, we can apply Theorem A3 and specify asymptotic distributions for VaR estimators. They are as follows:
$$\widehat{\mathrm{VaR}}_T(\beta) \ \text{is AN}\left( -\sigma \log(1-\beta), \ \frac{\sigma^2 \log^2(1-\beta)}{n} \right),$$
$$\widehat{\mathrm{VaR}}_N(\beta) \ \text{is AN}\left( -(\sigma + t) \log(1-\beta), \ \frac{\sigma^2 \log^2(1-\beta)}{n} \right),$$
$$\widehat{\mathrm{VaR}}_S(\beta) \ \text{is AN}\left( -\sigma \log(1-\beta) + t, \ \frac{\sigma^2 \log^2(1-\beta)}{n} \right).$$
Note that while all three estimators are equivalent in terms of the asymptotic variance, they are centered around different targets. The mean of the truncated estimator is the true quantile of the underlying exponential model (estimating which is the objective of this exercise) and the mean of the other two methods is shifted upwards; in both cases, the shift is a function of threshold t.
Finally, as was done for the empirical VaR estimator in Section 2.2.1, we now define function $H(c) = \mathbf{P}\big[ \widehat{\mathrm{VaR}}(\beta) > c\,F^{-1}(\beta) \big]$ for $c \geq 1$, the probability of overestimating the target by $(c-1)100\%$ for each parametric VaR estimator, and study its behavior:
$$H_T(c) \approx 1 - \Phi\big( (c-1)\sqrt{n} \big), \qquad H_N(c) \approx 1 - \Phi\big( (c-1)\sqrt{n} - \sqrt{n}\,(t/\sigma) \big),$$
$$H_S(c) \approx 1 - \Phi\big( (c-1)\sqrt{n} + \sqrt{n}\,(t/\sigma) \log^{-1}(1-\beta) \big).$$
Table 2 provides numerical illustrations of the functions $H_T(c)$, $H_N(c)$, $H_S(c)$. We select the same parameter values as in the light-tailed cases of Table 1. From Table 2, we see that the case F(t) = 0 is special in the sense that all three methods become identical and perform well. For example, the probability of overestimating true VaR by 20% is only 0.023 for all three methods, and it is essentially 0 for $c \geq 1.5$. In this case, parametric estimators outperform the empirical estimator (see Table 1) because they are designed for the correct underlying model. However, as the proportion of unobserved data increases (i.e., as F(t) increases to 0.5 and 0.9), only the truncated approach maintains its excellent performance. Additionally, while the shifted estimator is better than the naive, both methods perform poorly and only rarely improve on the empirical estimator. For example, in the extreme case of F(t) = 0.9, the naive and shifted methods overestimate true VaR(0.95) by 50% with probability 1.000 and 0.996, respectively, whereas the corresponding probability for the empirical estimator is 0.968.
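The closed-form approximations above are easy to evaluate numerically. The sketch below does so for the F(t) = 0.9 scenario of Table 2; the function name and coding choices are ours, while the parameter values come from the table note.

```python
import numpy as np
from scipy.stats import norm

# Minimal sketch of the closed-form approximations H_T, H_N, H_S for the
# exponential case; the values used in the example call mirror the F(t) = 0.9
# column of Table 2 (sigma = 84,687, t = 195,000, n = 100).
def H_exponential(c, beta, sigma, t, n=100):
    base = (c - 1) * np.sqrt(n)
    H_T = 1 - norm.cdf(base)
    H_N = 1 - norm.cdf(base - np.sqrt(n) * t / sigma)
    H_S = 1 - norm.cdf(base + np.sqrt(n) * (t / sigma) / np.log(1 - beta))
    return H_T, H_N, H_S

# For c = 1.5 and beta = 0.95 this gives roughly (0.000, 1.000, 0.996),
# in line with the corresponding Table 2 entries:
print([round(h, 3) for h in H_exponential(1.5, 0.95, sigma=84_687, t=195_000)])
```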

2.3.2. Example 2: Lomax Distribution

Suppose that $Y_1, \ldots, Y_N$ are i.i.d. and follow a Lomax distribution, with pdf, cdf, and qf given by Equations (A1), (A2), and (A4), respectively, with $\alpha = 1/\gamma$, $\theta = \sigma/\gamma$, and $\mu = 0$. However, we observe only variable X whose relation to Y is governed by Equations (1) and (2). Now, unlike the exponential case, maximization of the log-likelihoods (3)–(5) does not yield explicit formulas for MLEs of a Lomax model. So, in order to evaluate functions $H_T(c)$, $H_N(c)$, $H_S(c)$, we use Monte Carlo simulations to implement the following procedure: (i) generate a Lomax-distributed data set according to pre-specified parameters; (ii) numerically evaluate parameters $\alpha$ and $\theta$ for each approach; (iii) compute the corresponding estimates of VaR; (iv) check whether the inequality in function H(c) is true for each approach and record the outcomes; and (v) repeat steps (i)–(iv) a large number of times and report the proportion of “true” outcomes in step (iv). To facilitate comparisons with the moderate-tailed scenarios in Table 1, we select simulation parameters as follows:
  • Severity distribution Lomax($\alpha = 3.5$, $\theta_1$): $\theta_1 = 1$ (for F(t) = 0), $\theta_1 = 890{,}355$ (for F(t) = 0.5), $\theta_1 = 209{,}520$ (for F(t) = 0.9).
  • Threshold: t = 0 (for F(t) = 0) and t = 195,000 (for F(t) = 0.5, 0.9).
  • Complete sample size: N = 100 (for F(t) = 0); N = 200 (for F(t) = 0.5); N = 1000 (for F(t) = 0.9). The average observed sample size is n = 100.
  • Number of simulation runs: 10,000.
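A minimal sketch of this simulation procedure is given below. The log-parametrization, the Nelder-Mead optimizer, and the reduced number of runs are implementation choices made for illustration and are not prescribed by the procedure itself.

```python
import numpy as np
from scipy import optimize, stats

# Minimal sketch of steps (i)-(v) for the F(t) = 0.9 scenario. The number of
# runs is reduced for speed (the paper uses 10,000); the log-parametrization
# and the Nelder-Mead optimizer are our own implementation choices.
rng = np.random.default_rng(2017)
alpha0, theta0, t, N, beta, c = 3.5, 209_520, 195_000, 1000, 0.995, 1.2
true_var = theta0 * ((1 - beta) ** (-1 / alpha0) - 1)        # Lomax F^{-1}(beta)

def neg_loglik(log_params, x, threshold=None):
    alpha, theta = np.exp(log_params)                        # keep both parameters positive
    ll = np.sum(stats.lomax.logpdf(x, c=alpha, scale=theta))
    if threshold is not None:                                # truncated likelihood, Equation (3)
        ll -= x.size * stats.lomax.logsf(threshold, c=alpha, scale=theta)
    return -ll

runs, hits = 1000, {"T": 0, "N": 0, "S": 0}
for _ in range(runs):
    y = stats.lomax.rvs(c=alpha0, scale=theta0, size=N, random_state=rng)   # step (i)
    x = y[y > t]                                             # observed losses only
    start = np.log([2.0, np.median(x)])
    fits = {
        "T": optimize.minimize(neg_loglik, start, args=(x, t), method="Nelder-Mead"),
        "N": optimize.minimize(neg_loglik, start, args=(x,), method="Nelder-Mead"),
        "S": optimize.minimize(neg_loglik, start, args=(x - t,), method="Nelder-Mead"),
    }
    for label, res in fits.items():                          # steps (ii)-(iii)
        a_hat, th_hat = np.exp(res.x)
        var_hat = th_hat * ((1 - beta) ** (-1 / a_hat) - 1) + (t if label == "S" else 0)
        hits[label] += var_hat > c * true_var                # step (iv)

print({k: round(v / runs, 3) for k, v in hits.items()})      # step (v): H_T, H_N, H_S estimates
```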
Simulation results are summarized in Table 3, where we again observe similar patterns to those of Table 1 and Table 2. This time, however, the entries are more volatile, which is mostly due to the randomness of the simulation experiment (e.g., all entries for the T and c = 1 cases theoretically should be equal to 0.5, because those cases correspond to the probability of a normal random variable exceeding its mean, but they are slightly off). The F(t) = 0 case is where all parametric models perform well, as they should. However, once they leave that comfort zone (F(t) = 0.5 and 0.9), only the truncated approach works well, with the naive and shifted estimators performing similarly to the empirical estimator. Since Lomax distributions have heavier tails than exponential, function H(c) under the truncated approach is also affected by that and converges to 0 (as $c \to \infty$) more slowly. In other words, for a given choice of model parameters, the coefficient of variation of VaR is larger for the Lomax model than for the exponential model, thus resulting in larger overestimating probabilities than those in Table 2. The difference between the T entries in Table 2 and Table 3 is also influenced by the fact that numerical maximization often does not produce very stable or trustworthy parameter estimates for the truncated approach, which is a common technical issue. Nonetheless, the overall message here does not change: we observe certain patterns among the functions $H_T(c)$, $H_N(c)$, and $H_S(c)$, which are no different from those of Section 2.3.1, which were found using the theoretical tools.

3. Real-Data Example

In this section we illustrate how all the modeling approaches considered in this paper (empirical and three parametric) perform on real data. We go step-by-step through the entire modeling process, starting with model fitting and validation, continuing with VaR estimation, and completing the example with model-based predictions for quantities below the data collection threshold. Note that for the parametric approaches we employ both exponential and Lomax models, although exponential is clearly not a viable model for operational risk data (because its tail is too light for such data). However, the exponential distribution is a model for which all relevant formulas are explicit and can be easily verified by the reader. Moreover, the data analysis exercise also serves as an example of how to identify inappropriate models (e.g., exponential) and, if the model validation step is ignored, of how wrong the predictions based on such models can be.

3.1. Data

We will use the data set from Cruz (2002, p. 57), which has 75 observations and represents the cost of legal events for a business unit. The cost is measured in U.S. dollars. To illustrate the impact of data collection threshold on the selected models, we split the data set into two parts: losses that are at least $195,000, which will be treated as observed and used for model building and VaR estimation, and losses that are below $195,000, which will be used at the end of the exercise to assess the quality of model-based predictions. This data-splitting scenario implies that there are 54 observed losses. A quick exploratory analysis of the observed data shows that it is right-skewed and potentially heavy-tailed, with the first quartile 248,342, median 355,000, and the third quartile 630,200; its mean is 546,021, standard deviation 602,912, and skewness 3.8.

3.2. Model Fitting

We fit exponential and Lomax models to the observed data and use three parametric approaches: truncated, naive, and shifted. The truncation threshold is t = 195,000. For the exponential model, MLE formulas for $\sigma$ are available in Section 2.3.1. For the Lomax distribution, we perform numerical maximization of the log-likelihoods (3)–(5) to compute parameter values. For the data set under consideration, the resulting MLE values are reported in Table 4. Additionally, the corresponding estimates for parameter variances and covariances were computed using Theorem A3.

3.3. Model Validation

To validate the fitted models, we employ quantile–quantile plots (QQ plots) and two goodness-of-fit statistics: Kolmogorov–Smirnov (KS) and Anderson–Darling (AD).
In Figure 2, we present plots of the fitted-versus-observed quantiles for the six models of Section 3.2. In order to avoid visual distortions due to large spacings between the most extreme observations, both axes in all the plots are measured on a logarithmic scale. That is, the points plotted in those graphs are the following pairs:
$$\Big( \log \widehat{G}^{-1}(u_i), \ \log X_{(i)} \Big), \qquad i = 1, \ldots, 54,$$
where $\widehat{G}^{-1}$ is the estimated parametric qf, $X_{(1)} \leq \cdots \leq X_{(54)}$ denote the ordered losses, and $u_i = (i - 0.5)/54$ is the quantile level. For the truncated approach, $\widehat{G}^{-1}(u_i) = \widehat{F}^{-1}\big(u_i + \widehat{F}(195{,}000)(1 - u_i)\big)$; for the naive approach, $\widehat{G}^{-1}(u_i) = \widehat{F}^{-1}(u_i)$; for the shifted approach, $\widehat{G}^{-1}(u_i) = \widehat{F}^{-1}(u_i) + 195{,}000$. Additionally, the corresponding cdf and qf functions were evaluated using the MLE values from Table 4.
We can see from Figure 2 that Lomax models show a better overall fit than exponential models, and especially in the extreme right tail. That is, most of the points in those plots do not deviate from the 45° line. The naive approach seems off, but the truncated and shifted approaches do a reasonably good job for both distributions, with Lomax models exhibiting slightly better fits.
The KS and AD goodness-of-fit statistics measure, respectively, the maximum absolute distance and the cumulative weighted quadratic distance (with more weight on the tails) between the empirical cdf $\widehat{F}_n(x) = n^{-1} \sum_{i=1}^{n} \mathbf{1}\{X_i \leq x\}$ and the parametrically estimated cdf $\widehat{G}(x)$. Their respective computational formulas are given by
$$KS_n = \max_{1 \leq i \leq n} \max\left\{ \Big| \widehat{G}(X_{(i)}) - \frac{i-1}{n} \Big|, \ \Big| \widehat{G}(X_{(i)}) - \frac{i}{n} \Big| \right\}$$
and
$$AD_n = -n + n \sum_{i=1}^{n} \left( \frac{i}{n} \right)^{2} \log \frac{\widehat{G}(X_{(i+1)})}{\widehat{G}(X_{(i)})} - n \sum_{i=0}^{n-1} \left( 1 - \frac{i}{n} \right)^{2} \log \frac{1 - \widehat{G}(X_{(i+1)})}{1 - \widehat{G}(X_{(i)})},$$
where $195{,}000 = X_{(0)} \leq X_{(1)} \leq \cdots \leq X_{(n)} \leq X_{(n+1)} = \infty$ denote the ordered claim severities. Additionally, $\widehat{G}(X_{(i)}) = \widehat{F}_*(X_{(i)})$ for the truncated approach, $\widehat{G}(X_{(i)}) = \widehat{F}(X_{(i)})$ for the naive approach, and $\widehat{G}(X_{(i)}) = \widehat{F}(X_{(i)} - 195{,}000)$ for the shifted approach. Note that n = 54 and the corresponding cdf’s were evaluated using the MLE values from Table 4. The p-values of the KS and AD tests were computed using parametric bootstrap with 10,000 simulation runs. For a brief description of the parametric bootstrap procedure, see, for example, Klugman, Panjer, and Willmot (2012, sct. 20.4.5).
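As an illustration of how these quantities can be computed, the following sketch evaluates the KS statistic and its parametric-bootstrap p-value for the truncated exponential model; the data file name is a hypothetical placeholder for the 54 observed losses, and the AD statistic would be handled analogously.

```python
import numpy as np

# Minimal sketch of the KS statistic and its parametric-bootstrap p-value for
# the truncated exponential model. The file name is a hypothetical placeholder
# for the 54 observed legal losses (all >= 195,000); AD is handled analogously.
t = 195_000
losses = np.loadtxt("legal_losses.txt")          # placeholder for the observed data

def ks_statistic(x, sigma, threshold):
    x = np.sort(x)
    n = x.size
    # fitted truncated exponential cdf: G(x) = 1 - exp(-(x - t)/sigma), x >= t
    G = 1.0 - np.exp(-(x - threshold) / sigma)
    i = np.arange(1, n + 1)
    return np.max(np.maximum(np.abs(G - (i - 1) / n), np.abs(G - i / n)))

sigma_hat = losses.mean() - t                    # truncated-approach MLE (Section 2.3.1)
ks_obs = ks_statistic(losses, sigma_hat, t)

rng = np.random.default_rng(1)
boot = []
for _ in range(10_000):
    # simulate from the fitted truncated model, refit, and recompute the statistic
    x_star = t + rng.exponential(sigma_hat, size=losses.size)
    boot.append(ks_statistic(x_star, x_star.mean() - t, t))
p_value = np.mean(np.array(boot) >= ks_obs)
print(ks_obs, p_value)
```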
As the results of Table 5 suggest, both naive models are strongly rejected by the KS and AD tests, which is consistent with the conclusions based on the QQ-plots. The truncated and shifted exponential models are also rejected, which strengthens our “weak” decisions based on the QQ-plots. Unfortunately, for this data set, neither the KS nor the AD test can help us differentiate between the truncated and shifted Lomax models, as both of them fit the data very well.

3.4. VaR Estimates

Having fitted and validated the models, we now compute several point and interval estimates of VaR($\beta$) for all six models. The purpose of calculating VaR($\beta$) estimates for all—“good” and “bad”—models is to see the impact that model fit (which is driven by the initial assumptions) has on the capital estimates. The results are summarized in Table 6, where empirical estimates of VaR($\beta$) are also reported for completeness. The confidence intervals for the exponential models are derived using Theorem A3 and based on the variance estimates from Table 4. For the Lomax models, the confidence intervals are obtained using parametric bootstrap with 10,000 simulation runs.
We see from the table that the VaR($\beta$) estimates based on the naive approach differ significantly from the rest. The difference between the truncated and shifted estimates for the exponential model is t = 195,000. For the Lomax model, these two approaches—which exhibited nearly perfect fits to the data—produce substantially different estimates, especially at the very extreme tail. Finally, in view of such large differences between parametric estimates (which resulted from models with excellent fits), the empirical estimates do not seem completely off.

3.5. Model Predictions

As the final test of our models, we check their out-of-sample predictive power. Table 7 provides the “unobserved” legal losses, which will be used to verify how accurate our model-based predictions are. To start with, we note that the empirical and shifted models are not able to produce meaningful predictions because they treat losses below the data collection threshold as impossible (i.e., $\widehat{F}(195{,}000) = 0$ for these two approaches). So, we now work only with the truncated and naive models.
Firstly, we report the estimated probabilities of losses below the data collection threshold, $\widehat{F}(195{,}000)$. For the exponential models, it is 0.300 (naive) and 0.426 (truncated). For the Lomax models, it is 0.310 (naive) and 0.794 (truncated). Secondly, using these probabilities we can estimate the total, observed, and unobserved number of losses; the total is estimated as $\widehat{N} = n / \big(1 - \widehat{F}(195{,}000)\big)$ with n = 54. For the exponential models, $\widehat{N} = 77.2 \approx 77$ (naive) and $\widehat{N} = 94.1 \approx 94$ (truncated). For the Lomax models, $\widehat{N} = 78.3 \approx 78$ (naive) and $\widehat{N} = 262.1 \approx 262$ (truncated). Note how different from the rest the estimate of the truncated Lomax model is. (Recall that this model exhibited the best statistical fit for the observed data.)
For predictions that are verifiable, in Table 8 we report model-based estimates of the number of losses, the average loss, and the total loss in the interval [150,000;175,000]. We also provide the corresponding 95% confidence intervals for the predictions. The intervals were constructed by using the variance and covariance estimates of Table 4 in conjunction with Theorem A3. Notice that by using the data points from Table 7 it is straightforward to verify that the actual number of losses is eight, the average loss is 156,627, and the total loss is 1,253,017. We see from Table 8 that, with the exception of the average loss measure, there are large disparities in predictions between different approaches. This mostly has to do with the quality of model fit for the given data set, which is good for the truncated Lomax model but bad for the other models and/or approaches. As a consequence, 95% confidence intervals based on the truncated Lomax model cover the actual values of two important measures—number of losses (eight) and total loss (1,253,017)—but those based on the truncated exponential model do not. Moreover, both naive models fit the data poorly and produce point and interval predictions that are even further from their respective targets than those of the truncated exponential model. In addition, if one chose to ignore the model validation step and proceeded directly to predictions based on the naive models, they would be (falsely) reassured by the consistency of such predictions (number of losses: 2.6 and 2.7; total loss: 426,197 and 441,155).
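The point predictions in Table 8 follow directly from the fitted severity model. The sketch below reproduces the truncated-exponential calculations for the interval [150,000; 175,000]; the closed-form conditional-mean integral is specific to the exponential model, and extending the sketch to the other fitted models only requires swapping in their cdf's.

```python
import numpy as np

# Minimal sketch of the model-based predictions for losses in [150,000; 175,000]
# under the truncated exponential fit (sigma_hat = 351,021 from Table 4, n = 54).
sigma_hat, t, n = 351_021.0, 195_000.0, 54
a, b = 150_000.0, 175_000.0

F = lambda x: 1.0 - np.exp(-x / sigma_hat)       # fitted ground-up exponential cdf

prob_interval = F(b) - F(a)
N_hat = n / (1.0 - F(t))                         # estimated total number of losses
num_losses = N_hat * prob_interval               # expected count in [a, b]

# E[Y | a <= Y <= b] via the closed-form exponential integral
avg_loss = ((a + sigma_hat) * np.exp(-a / sigma_hat)
            - (b + sigma_hat) * np.exp(-b / sigma_hat)) / prob_interval
total_loss = num_losses * avg_loss

# compare with the truncated-exponential row of Table 8 (4.2, 162,352, 685,108)
print(round(num_losses, 1), round(avg_loss), round(total_loss))
```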

4. Concluding Remarks

In this paper, we have studied the problem of model uncertainty in operational risk modeling, which arises due to different (seemingly plausible) model assumptions. We have focused on the statistical aspects of the problem by utilizing asymptotic theorems of mathematical statistics, Monte Carlo simulations, and real-data examples. Similar to other authors who have studied some aspects of this topic before, we conclude that:
  • The naive and empirical approaches are inappropriate for determining VaR estimates.
  • The shifted approach—although fundamentally flawed (simply because it assumes that operational losses below the data collection threshold are impossible)—has the flexibility to adapt to data well and successfully pass standard model validation tests.
  • The truncated approach is theoretically sound, fits data well when an appropriate severity distribution is chosen, and (in our examples) produces lower VaR-based capital estimates than those of the shifted approach.
The research presented in this paper invites follow-up studies in several directions. For example, as the first and most obvious direction, one may choose to explore these issues for other—perhaps more popular in practice—distributions such as lognormal or loggamma. If the chosen model lends itself to analytic investigations, then our Example 1 (in Section 2.3) is a blueprint for analysis. Otherwise, one may follow our Example 2 for a simulations-based approach. Second, VaR can be replaced by a different risk measure. For instance, the Expected Shortfall (also known as Tail-VaR or Conditional Tail Expectation) has some theoretical advantages over VaR (e.g., it is a coherent risk measure), and is a recommended measure in the Swiss Solvency Test. Third, due to the theoretical soundness of the truncated approach, one may try to develop model-selection strategies for truncated (but not necessarily nested) models. However, this line of work may be quite challenging due to the “flatness” of the truncated likelihoods—a phenomenon frequently encountered in practice (see Cope 2011). The fourth avenue of research that may also help with the latter problem is robust model fitting. There are several excellent contributions to this topic in the operational risk literature (e.g., Chau 2013; Horbenko et al. 2011; Opdyke and Cavallo 2012), but more work can be done.

Acknowledgments

The authors are very appreciative of valuable insights and useful comments provided by two anonymous referees, which helped to substantially improve the paper.

Author Contributions

The two authors contributed equally to this paper.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

In this appendix, we provide some theoretical results that are key to the analytic derivations in the paper. Specifically, in Appendix A.1, the generalized Pareto distribution (GPD) is introduced, and a few of its special and limiting cases are discussed. In Appendix A.2, the asymptotic normality theorems for sample quantiles (equivalently, value-at-risk or VaR) and the maximum likelihood estimators (MLEs) of model parameters are presented. The well-known delta method is also provided in this section.

Appendix A.1 Generalized Pareto Distribution

The cumulative distribution function (cdf) of the three-parameter GPD is given by
$$F_{GPD(\mu,\sigma,\gamma)}(x) = \begin{cases} 1 - \big[ 1 + \gamma(x-\mu)/\sigma \big]^{-1/\gamma}, & \gamma \neq 0, \\ 1 - \exp\big( -(x-\mu)/\sigma \big), & \gamma = 0, \end{cases} \qquad (A1)$$
and the probability density function (pdf) by
$$f_{GPD(\mu,\sigma,\gamma)}(x) = \begin{cases} \sigma^{-1} \big[ 1 + \gamma(x-\mu)/\sigma \big]^{-1/\gamma - 1}, & \gamma \neq 0, \\ \sigma^{-1} \exp\big( -(x-\mu)/\sigma \big), & \gamma = 0, \end{cases} \qquad (A2)$$
where the pdf is positive for $x \geq \mu$ when $\gamma \geq 0$, or for $\mu \leq x \leq \mu - \sigma/\gamma$ when $\gamma < 0$. The parameters $-\infty < \mu < \infty$, $\sigma > 0$, and $-\infty < \gamma < \infty$ control the location, scale, and shape of the distribution, respectively. Note that when $\gamma = 0$ and $\gamma = -1$, the GPD reduces to the shifted exponential distribution (with location $\mu$ and scale $\sigma$) and the uniform distribution on $[\mu;\ \mu + \sigma]$, respectively. If $\gamma > 0$, then the Pareto-type distributions are obtained. In particular:
  • Choosing $1/\gamma = \alpha$, $\sigma/\gamma = \theta$, and $\mu = \theta$ leads to what actuaries call a single-parameter Pareto distribution, with the scale parameter $\theta > 0$ (usually treated as known deductible) and shape $\alpha > 0$.
  • Choosing $1/\gamma = \alpha$, $\sigma/\gamma = \theta$, and $\mu = 0$ yields the Lomax distribution with the scale parameter $\theta > 0$ and shape $\alpha > 0$. This is also known as a Pareto II distribution.
For a comprehensive treatment of Pareto distributions, the reader may be referred to Arnold (2015), and for their applications to loss modeling in insurance, see Klugman, Panjer, and Willmot (2012).
A useful property for modeling operational risk with the GPD is that the truncated cdf of excess values remains a GPD (with the same shape parameter γ ), and it is given by
$$\mathbf{P}\{X \leq x \,|\, X > t\} = \frac{\mathbf{P}\{t < X \leq x\}}{\mathbf{P}\{X > t\}} = 1 - \left[ 1 + \gamma\,\frac{x - t}{\sigma + \gamma(t - \mu)} \right]^{-1/\gamma}, \qquad x > t, \qquad (A3)$$
where the second equality follows by applying Equation (A1) to the numerator and denominator of the ratio.
In addition, besides the functional simplicity of its cdf and pdf, another attractive feature of the GPD is that its quantile function (qf) has an explicit formula. This is especially useful for model diagnostics (e.g., quantile–quantile plots) and for risk evaluations based on VaR measures. Specifically, for 0 < u < 1 , the qf is found by inverting Equation (A1) and given by
$$F^{-1}_{GPD(\mu,\sigma,\gamma)}(u) = \begin{cases} \mu + (\sigma/\gamma)\big[ (1-u)^{-\gamma} - 1 \big], & \gamma \neq 0, \\ \mu - \sigma \log(1-u), & \gamma = 0. \end{cases} \qquad (A4)$$
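For convenience, a minimal sketch of the GPD cdf, pdf, and qf coded directly from Equations (A1), (A2), and (A4) is given below; writing them by hand (rather than relying on a library implementation) keeps the $\gamma = 0$ limit explicit. The Lomax parameter values in the example call are the ones used in Section 2.3.

```python
import numpy as np

# Minimal sketch of the GPD cdf, pdf, and qf of Equations (A1), (A2), and (A4),
# coded directly from the formulas so the gamma = 0 (exponential) limit is explicit.
def gpd_cdf(x, mu, sigma, gamma):
    z = (np.asarray(x, dtype=float) - mu) / sigma
    if gamma == 0.0:
        return 1.0 - np.exp(-z)
    return 1.0 - (1.0 + gamma * z) ** (-1.0 / gamma)

def gpd_pdf(x, mu, sigma, gamma):
    z = (np.asarray(x, dtype=float) - mu) / sigma
    if gamma == 0.0:
        return np.exp(-z) / sigma
    return (1.0 + gamma * z) ** (-1.0 / gamma - 1.0) / sigma

def gpd_qf(u, mu, sigma, gamma):
    u = np.asarray(u, dtype=float)
    if gamma == 0.0:
        return mu - sigma * np.log(1.0 - u)
    return mu + (sigma / gamma) * ((1.0 - u) ** (-gamma) - 1.0)

# Lomax(alpha, theta) is the special case gamma = 1/alpha, sigma = theta/alpha, mu = 0:
print(gpd_qf(0.995, mu=0.0, sigma=209_520 / 3.5, gamma=1 / 3.5))
```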

Appendix A.2 Asymptotic Theorems

Suppose $X_1, \ldots, X_n$ represent a sample of independent and identically distributed (i.i.d.) continuous random variables with cdf G, pdf g, and qf $G^{-1}$, and let $X_{(1)} \leq \cdots \leq X_{(n)}$ denote the ordered sample values. We will assume that g satisfies all the regularity conditions that usually accompany theorems such as the ones formulated below (for more details on this topic, see, e.g., Serfling 1980, Sections 2.3.3 and 4.2.2). Note that a review of modeling practices in the U.S. financial service industry (see AMA Group 2013) suggests that practically all the severity distributions in current use would satisfy the regularity assumptions mentioned above. In view of this, we will formulate “user-friendly” versions of the most general theorems, making them easier to work with. Additionally, throughout the paper, the notation AN is used to denote “asymptotically normal.”
Since VaR measure is defined as a population quantile, say $G^{-1}(\beta)$, its empirical estimator is the corresponding sample quantile $X_{(\lceil n\beta \rceil)}$, where $\lceil \cdot \rceil$ denotes the “rounding up” operation. We start with the asymptotic normality result for sample quantiles. Proofs and complete technical details are available in Section 2.3.3 of Serfling (1980).
Theorem A1
(Asymptotic Normality of Sample Quantiles). Let $0 < \beta_1 < \cdots < \beta_k < 1$, with $k \geq 1$, and suppose that pdf g is continuous, as discussed above. Then, the k-variate vector of sample quantiles $\big( X_{(\lceil n\beta_1 \rceil)}, \ldots, X_{(\lceil n\beta_k \rceil)} \big)$ is AN with the mean vector $\big( G^{-1}(\beta_1), \ldots, G^{-1}(\beta_k) \big)$ and the covariance–variance matrix $\big[ \sigma_{ij}^2 \big]_{i,j=1}^{k}$ with the entries
$$\sigma_{ij}^2 = \frac{1}{n}\, \frac{\beta_i (1 - \beta_j)}{g\big(G^{-1}(\beta_i)\big)\, g\big(G^{-1}(\beta_j)\big)}.$$
In the univariate case (k = 1), the sample quantile
$$X_{(\lceil n\beta \rceil)} \ \text{is AN}\left( G^{-1}(\beta), \ \frac{1}{n}\, \frac{\beta(1-\beta)}{g^2\big(G^{-1}(\beta)\big)} \right).$$
Clearly, in many practical situations the univariate result will suffice, but Theorem A1 is more general and may be used, for example, to analyze business decisions that combine a set of VaR estimates.
The main drawback of statistical inference based on the empirical model is that it is restricted to the range of observed data. For the problems encountered in operational risk modeling, this is a major limitation. Therefore, a more appropriate alternative is to estimate VaR parametrically, which first requires estimates of the distribution parameters and then those values are applied to the formula of $G^{-1}(\beta)$ to find an estimate of VaR. The most common technique for parameter estimation is MLE. The following theorem summarizes its asymptotic distribution. Description of the method, proofs, and complete technical details are available in Section 4.2 of Serfling (1980).
Theorem A2
(Asymptotic Normality of MLEs). Suppose pdf g is indexed by k unknown parameters, $(\theta_1, \ldots, \theta_k)$, and let $\big( \widehat{\theta}_1, \ldots, \widehat{\theta}_k \big)$ denote the MLE of those parameters. Then, under the regularity conditions mentioned above,
$$\big( \widehat{\theta}_1, \ldots, \widehat{\theta}_k \big) \ \text{is AN}\left( \big( \theta_1, \ldots, \theta_k \big), \ \frac{1}{n}\, \mathbf{I}^{-1} \right),$$
where $\mathbf{I} = \big[ I_{ij} \big]_{i,j=1}^{k}$ is the Fisher information matrix, with the entries given by
$$I_{ij} = \mathbf{E}\left[ \frac{\partial \log g(X)}{\partial \theta_i} \cdot \frac{\partial \log g(X)}{\partial \theta_j} \right].$$
In the univariate case (k = 1),
$$\widehat{\theta} \ \text{is AN}\left( \theta, \ \frac{1}{n} \left[ \mathbf{E}\left( \frac{\partial \log g(X)}{\partial \theta} \right)^{2} \right]^{-1} \right).$$
Having parameter MLEs, $\widehat{\theta}_1, \ldots, \widehat{\theta}_k$, and knowing their asymptotic distribution is useful. Our ultimate goal, however, is to estimate VaR—a function of $\widehat{\theta}_1, \ldots, \widehat{\theta}_k$—and to evaluate its properties. For this we need a theorem that would specify the asymptotic distribution of functions of asymptotically normal vectors. The delta method is a technical tool for establishing asymptotic normality of smoothly transformed asymptotically normal random variables. Here we will present it as a direct application to Theorem A2. For the general theorem and complete technical details, see Serfling (1980, Section 3.3).
Theorem A3
(The Delta Method). Suppose that $\big( \widehat{\theta}_1, \ldots, \widehat{\theta}_k \big)$ is AN with the parameters specified in Theorem A2. Let the real-valued functions $h_1(\theta_1, \ldots, \theta_k), \ldots, h_m(\theta_1, \ldots, \theta_k)$ represent m different risk measures, tail probabilities, or other functions of model parameters. Then, under some smoothness conditions on functions $h_1, \ldots, h_m$, the vector of MLE-based estimators
$$\Big( h_1\big( \widehat{\theta}_1, \ldots, \widehat{\theta}_k \big), \ldots, h_m\big( \widehat{\theta}_1, \ldots, \widehat{\theta}_k \big) \Big) \ \text{is AN}\left( \Big( h_1\big( \theta_1, \ldots, \theta_k \big), \ldots, h_m\big( \theta_1, \ldots, \theta_k \big) \Big), \ \frac{1}{n}\, \mathbf{D} \mathbf{I}^{-1} \mathbf{D}^{\top} \right),$$
where $\mathbf{D} = [d_{ij}]_{m \times k}$ is the Jacobian of the transformations $h_1, \ldots, h_m$ evaluated at $(\theta_1, \ldots, \theta_k)$; that is, $d_{ij} = \partial h_i / \partial \theta_j \big|_{(\theta_1, \ldots, \theta_k)}$. In the univariate case (m = 1), the parametric estimator
$$h\big( \widehat{\theta}_1, \ldots, \widehat{\theta}_k \big) \ \text{is AN}\left( h\big( \theta_1, \ldots, \theta_k \big), \ \frac{1}{n}\, \mathbf{d} \mathbf{I}^{-1} \mathbf{d}^{\top} \right),$$
where $\mathbf{d} = \big( \partial h / \partial \theta_1, \ldots, \partial h / \partial \theta_k \big) \big|_{(\theta_1, \ldots, \theta_k)}$.
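As a worked illustration of Theorem A3, the following sketch symbolically verifies the asymptotic variance of the truncated-approach VaR estimator for the exponential model, $\sigma^2 \log^2(1-\beta)/n$, stated in Section 2.3.1; the use of sympy is an assumption of this illustration.

```python
import sympy as sp

# Minimal symbolic check (sympy is an assumption of this illustration) that the
# delta method reproduces the asymptotic variance of the truncated-approach VaR
# estimator for the exponential model, sigma^2 * log^2(1 - beta) / n.
x, sigma, beta, n = sp.symbols("x sigma beta n", positive=True)

log_g = sp.log(sp.exp(-x / sigma) / sigma)                  # log-density of Exp(sigma)
score = sp.simplify(sp.diff(log_g, sigma))                  # (x - sigma) / sigma**2
fisher = sp.integrate(score**2 * sp.exp(-x / sigma) / sigma, (x, 0, sp.oo))
fisher = sp.simplify(fisher)                                # Fisher information: 1 / sigma**2

h = -sigma * sp.log(1 - beta)                               # h(sigma) = F^{-1}(beta) = VaR(beta)
d = sp.diff(h, sigma)                                       # Jacobian (here a scalar)
asym_var = sp.simplify(d * (1 / fisher) * d / n)
print(fisher, asym_var)                                     # 1/sigma**2, sigma**2*log(1 - beta)**2/n
```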

References

  1. AMA Group. 2013. AMA Quantification Challenges: AMAG Range of Practice and Observations on “The Thorny LDA Topics”. Munich: Risk Management Association. [Google Scholar]
  2. Arnold, Barry C. 2015. Pareto Distributions, 2nd ed. London: Chapman & Hall. [Google Scholar]
  3. Basel Coordination Committee. 2014. Supervisory guidance for data, modeling, and model risk management under the operational risk advanced measurement approaches. Basel Coordination Committee Bulletin 14: 1–17. [Google Scholar]
  4. Baud, Nicolas, Antoine Frachot, and Thierry Roncalli. 2002. Internal Data, External Data and Consortium Data for Operational Risk Measurement: How to Pool Data Properly? Working Paper, Groupe de Recherche Opérationnelle, Crédit Lyonnais, France. [Google Scholar]
  5. Brazauskas, Vytaras, Bruce L. Jones, and Ričardas Zitikis. 2015. Trends in disguise. Annals of Actuarial Science 9: 58–71. [Google Scholar] [CrossRef]
  6. Cavallo, Alexander, Benjamin Rosenthal, Xiao Wang, and Jun Yan. 2012. Treatment of the data collection threshold in operational risk: A case study with the lognormal distribution. Journal of Operational Risk 7: 3–38. [Google Scholar] [CrossRef]
  7. Chau, Joris. 2013. Robust Estimation in Operational Risk Modeling. Master’s thesis, Department of Mathematics, Utrecht University, Utrecht, The Netherlands. [Google Scholar]
  8. Chernobai, Anna S., Svetlozar T. Račev, and Frank J. Fabozzi. 2007. Operational Risk: A Guide to Basel II Capital Requirements, Models, and Analysis. Hoboken: Wiley. [Google Scholar]
  9. Cope, Eric. 2011. Penalized likelihood estimators for truncated data. Journal of Statistical Planning and Inference 141: 345–58. [Google Scholar] [CrossRef]
  10. Cruz, Marcelo G. 2002. Modeling, Measuring and Hedging Operational Risk. Hoboken: Wiley. [Google Scholar]
  11. Cruz, Marcelo G., Gareth W. Peters, and Pavel V. Shevchenko. 2015. Fundamental Aspects of Operational Risk and Insurance Analytics: A Handbook of Operational Risk. Hoboken: Wiley. [Google Scholar]
  12. De Fontnouvelle, Patrick, Virginia Dejesus-Rueff, John S. Jordan, and Eric S. Rosengren. 2006. Capital and risk: New evidence on implications of large operational losses. Journal of Money, Credit, and Banking 38: 1819–46. [Google Scholar] [CrossRef]
  13. Ergashev, Bakhodir, Konstantin Pavlikov, Stan Uryasev, and Evangelos Sekeris. 2016. Estimation of truncated data samples in operational risk modeling. Journal of Risk and Insurance 83: 613–40. [Google Scholar] [CrossRef]
  14. Horbenko, Nataliya, Peter Ruckdeschel, and Taehan Bae. 2011. Robust estimation of operational risk. Journal of Operational Risk 6: 3–30. [Google Scholar] [CrossRef]
  15. Klugman, Stuart A., Harry Panjer, and Gordon E. Willmot. 2012. Loss Models: From Data to Decisions, 4th ed. Hoboken: Wiley. [Google Scholar]
  16. Luo, Xiaolin, Pavel V. Shevchenko, and John B. Donnelly. 2007. Addressing the impact of data truncation and parameter uncertainty on operational risk estimates. Journal of Operational Risk 2: 3–26. [Google Scholar] [CrossRef]
  17. Moscadelli, Marco, Anna Chernobai, and Svetlozar T. Rachev. 2005. Treatment of missing data in the field of operational risk: The impacts on parameter estimates, EL and UL figures. Operational Risk 6: 28–34. [Google Scholar]
  18. Office of the Comptroller of the Currency. 2011. Supervisory guidance on model risk management. SR Letter 11: 1–21. [Google Scholar]
  19. Opdyke, John Douglas. 2014. Estimating operational risk capital with greater accuracy, precision, and robustness. Journal of Operational Risk 9: 3–79. [Google Scholar] [CrossRef]
  20. Opdyke, John Douglas, and Alexander Cavallo. 2012. Estimating operational risk capital: The challenges of truncation, the hazards of maximum likelihood estimation, and the promise of robust statistics. Journal of Operational Risk 7: 3–90. [Google Scholar] [CrossRef]
  21. Serfling, Robert J. 1980. Approximation Theorems of Mathematical Statistics. Hoboken: Wiley. [Google Scholar]
  22. Shevchenko, Pavel V., and Grigory Temnov. 2009. Modeling operational risk data reported above a time-varying threshold. Journal of Operational Risk 4: 19–42. [Google Scholar] [CrossRef]
Figure 1. Truncated, naive, and shifted Exponential($\sigma$) and Lomax($\alpha = 3.5$, $\theta_1$) probability density functions. Data collection threshold t = 195,000, with 50% of data unobserved. Parameters $\sigma$ and $\theta_1$ are chosen to match those in Tables 2 and 3 (see Section 2.3).
Figure 2. Fitted-versus-observed log-losses for exponential (top row) and Lomax (bottom row) distributions, using truncated (left), naive (middle), and shifted (right) approaches.
Table 1. Function H(c) evaluated for various combinations of c, confidence level β, proportion of unobserved data F(t), and severity distributions with varying degrees of tail heaviness ranging from light- and moderate-tailed to heavy-tailed. The sample size is n = 100.
  c    β       F(t) = 0                     F(t) = 0.5                   F(t) = 0.9
               Light   Moderate  Heavy      Light   Moderate  Heavy      Light   Moderate  Heavy
  1    0.95    0.500   0.500     0.500      0.944   0.925     0.874      1.000   1.000     0.981
       0.995   0.500   0.500     0.500      0.688   0.672     0.638      0.949   0.884     0.738
       0.999   0.500   0.500     0.500      0.587   0.579     0.563      0.767   0.703     0.612
  1.2  0.95    0.085   0.178     0.331      0.585   0.753     0.824      1.000   1.000     0.978
       0.995   0.226   0.349     0.444      0.398   0.551     0.612      0.811   0.840     0.734
       0.999   0.331   0.424     0.475      0.414   0.517     0.550      0.615   0.668     0.610
  1.5  0.95    0.000   0.010     0.138      0.032   0.326     0.726      0.968   0.996     0.975
       0.995   0.030   0.167     0.362      0.083   0.364     0.571      0.403   0.756     0.727
       0.999   0.137   0.317     0.437      0.191   0.424     0.532      0.358   0.613     0.606
  2    0.95    0.000   0.000     0.015      0.000   0.009     0.523      0.056   0.930     0.968
       0.995   0.000   0.026     0.240      0.001   0.127     0.501      0.017   0.577     0.715
       0.999   0.014   0.170     0.376      0.025   0.280     0.500      0.073   0.516     0.600
Threshold t is 0 for F(t) = 0 and 195,000 for F(t) = 0.5, 0.9. Distributions: Light = exponential(σ), Moderate = Lomax(α = 3.5, θ₁), Heavy = Lomax(α = 1, θ₂). For F(t) = 0: σ = θ₁ = θ₂ = 1. For F(t) = 0.5: σ = 281,326, θ₁ = 890,355, θ₂ = 195,000. For F(t) = 0.9: σ = 84,687, θ₁ = 209,520, θ₂ = 21,667.
Table 2. Functions H_T(c), H_N(c), H_S(c) evaluated for various combinations of c, confidence level β, and proportion of unobserved data F(t). (The sample size is n = 100.)
  c    β       F(t) = 0                F(t) = 0.5              F(t) = 0.9
               T       N       S       T       N       S       T       N       S
  1    0.95    0.500   0.500   0.500   0.500   1.000   0.990   0.500   1.000   1.000
       0.995   0.500   0.500   0.500   0.500   1.000   0.905   0.500   1.000   1.000
       0.999   0.500   0.500   0.500   0.500   1.000   0.842   0.500   1.000   1.000
  1.2  0.95    0.023   0.023   0.023   0.023   1.000   0.623   0.023   1.000   1.000
       0.995   0.023   0.023   0.023   0.023   1.000   0.245   0.023   1.000   0.991
       0.999   0.023   0.023   0.023   0.023   1.000   0.159   0.023   1.000   0.909
  1.5  0.95    0.000   0.000   0.000   0.000   0.973   0.004   0.000   1.000   0.996
       0.995   0.000   0.000   0.000   0.000   0.973   0.000   0.000   1.000   0.257
       0.999   0.000   0.000   0.000   0.000   0.973   0.000   0.000   1.000   0.048
  2    0.95    0.000   0.000   0.000   0.000   0.001   0.000   0.000   1.000   0.010
       0.995   0.000   0.000   0.000   0.000   0.001   0.000   0.000   1.000   0.000
       0.999   0.000   0.000   0.000   0.000   0.001   0.000   0.000   1.000   0.000
Note: Threshold t is 0 for F(t) = 0 and 195,000 for F(t) = 0.5, 0.9. Exponential(σ), with σ = 1 (for F(t) = 0), σ = 281,326 (for F(t) = 0.5), σ = 84,687 (for F(t) = 0.9).
Table 3. Functions H_T(c), H_N(c), H_S(c) evaluated for various combinations of c, confidence level β, and proportion of unobserved data F(t). The average sample size is n = 100.
  c    β       F(t) = 0                F(t) = 0.5              F(t) = 0.9
               T       N       S       T       N       S       T       N       S
  1    0.95    0.453   0.453   0.453   0.459   0.951   0.982   0.547   0.908   1.000
       0.995   0.433   0.433   0.433   0.435   0.692   0.734   0.444   0.891   0.998
       0.999   0.426   0.426   0.426   0.437   0.149   0.624   0.331   0.867   0.944
  1.2  0.95    0.131   0.131   0.131   0.095   0.945   0.791   0.356   0.904   0.999
       0.995   0.247   0.247   0.247   0.184   0.208   0.518   0.170   0.889   0.993
       0.999   0.297   0.297   0.297   0.272   0.059   0.484   0.121   0.845   0.864
  1.5  0.95    0.009   0.009   0.009   0.002   0.626   0.270   0.112   0.879   0.998
       0.995   0.097   0.097   0.097   0.044   0.044   0.278   0.021   0.875   0.872
       0.999   0.178   0.178   0.178   0.123   0.016   0.313   0.019   0.843   0.708
  2    0.95    0.000   0.000   0.000   0.000   0.032   0.010   0.002   0.865   0.984
       0.995   0.025   0.025   0.025   0.004   0.004   0.090   0.000   0.851   0.563
       0.999   0.075   0.075   0.075   0.032   0.002   0.147   0.001   0.224   0.459
Note: Threshold t is 0 for F(t) = 0 and 195,000 for F(t) = 0.5, 0.9. Lomax(α = 3.5, θ₁), with θ₁ = 1 (for F(t) = 0), θ₁ = 890,355 (for F(t) = 0.5), θ₁ = 209,520 (for F(t) = 0.9).
Table 4. Parameter maximum likelihood estimators (MLEs, with variance and covariance estimates in parentheses) of the exponential and Lomax models, using truncated, naive, and shifted approaches.
Model         Truncated                       Naive                             Shifted
Exponential   σ̂ = 351,021 (2.28 × 10^9)       σ̂ = 546,021 (5.52 × 10^9)         σ̂ = 351,021 (2.28 × 10^9)
Lomax         α̂ = 1.91 (0.569)                α̂ = 22.51 (5,189.86)              α̂ = 1.91 (0.569)
              θ̂ = 151,234 (3.84 × 10^10)      θ̂ = 11,735,899 (1.54 × 10^15)     θ̂ = 346,234 (3.84 × 10^10)
              cov̂(α̂, θ̂) = 138,934             cov̂(α̂, θ̂) = 2.82 × 10^9           cov̂(α̂, θ̂) = 138,934
Table 5. Values of KS and AD statistics (with p-values in parentheses) for the fitted models, using truncated, naive, and shifted approaches.
Model         Kolmogorov–Smirnov                              Anderson–Darling
              Truncated       Naive           Shifted         Truncated       Naive           Shifted
Exponential   0.186 (0.004)   0.307 (0.000)   0.186 (0.004)   3.398 (0.000)   4.509 (0.000)   3.398 (0.000)
Lomax         0.072 (0.632)   0.316 (0.000)   0.072 (0.631)   0.272 (0.671)   4.696 (0.000)   0.272 (0.678)
Table 6. Value-at-risk (VaR(β)) estimates (with 95% confidence intervals in parentheses), measured in millions and based on the fitted models, using truncated, naive, and shifted approaches.
Model         β       Truncated               Naive                   Shifted
Exponential   0.95    1.052 (0.771; 1.332)    1.636 (1.199; 2.072)    1.247 (0.966; 1.527)
              0.995   1.860 (1.364; 2.356)    2.893 (2.121; 3.665)    2.055 (1.559; 2.551)
              0.999   2.425 (1.778; 3.071)    3.772 (2.766; 4.778)    2.620 (1.973; 3.266)
Lomax         0.95    0.576 (0.071; 1.160)    1.670 (1.134; 2.206)    1.514 (0.978; 2.755)
              0.995   2.281 (0.413; 4.758)    3.114 (2.257; 5.023)    5.417 (2.213; 20.604)
              0.999   5.504 (1.100; 13.627)   4.214 (3.019; 8.586)    12.797 (3.649; 89.992)
Empirical estimates of VaR(β): 1.416 (for β = 0.95) and 3.822 (for β = 0.995 and 0.999).
Table 7. Unobserved costs of legal events (below $195,000).
142,774.19   146,875.00   151,000.00   160,000.00   176,000.00   182,435.12   191,070.31
143,000.00   150,411.29   153,592.54   165,000.00   176,000.00   185,000.00   192,806.74
145,500.50   150,930.39   157,083.00   165,000.00   180,000.00   186,330.00   193,500.00
Source: Cruz (2002, p. 57).
Table 8. Model-based predictions (with 95% confidence intervals in parentheses) of several statistics for the unobserved losses between $150,000 and $175,000.
Model         Truncated                                                      Naive
              Number of Losses   Average Loss         Total Loss             Number of Losses   Average Loss         Total Loss
Exponential   4.2                162,352              685,108                2.6                162,405              426,197
              (3.0; 5.5)         (162,312; 162,391)   (452,840; 917,376)     (1.9; 3.4)         (162,379; 162,430)   (141,592; 710,802)
Lomax         9.9                162,017              1,609,649              2.7                162,397              441,155
              (3.3; 16.5)        (161,647; 162,388)   (543,017; 2,676,281)   (1.8; 3.7)         (162,343; 162,451)   (288,324; 593,985)
