Article

Determination of the Optimal Retention Level Based on Different Measures

by Başak Bulut Karageyik * and Şule Şahin
Department of Actuarial Sciences, Hacettepe University, 06800 Ankara, Turkey
* Author to whom correspondence should be addressed.
J. Risk Financial Manag. 2017, 10(1), 4; https://doi.org/10.3390/jrfm10010004
Submission received: 6 December 2016 / Revised: 17 January 2017 / Accepted: 18 January 2017 / Published: 25 January 2017
(This article belongs to the Section Risk)

Abstract

This paper deals with the optimal retention level under four competing criteria: the survival probability, expected profit, variance and expected shortfall of the insurer's risk. The aggregate claim amounts are assumed to follow a compound Poisson distribution, and the individual claim amounts are exponentially distributed. We present an approach to determine the optimal retention level that maximizes the expected profit and the survival probability while minimizing the variance and the expected shortfall of the insurer's risk. In the decision making process, we concentrate on multi-attribute decision making methods: the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) and the VlseKriterijumska Optimizacija I Kompromisno Resenje (VIKOR) methods with their extended versions. We also provide a comprehensive analysis of the determination of the optimal retention level under both the expected value and standard deviation premium principles.

1. Introduction

There has been growing interest in ruin probability, and considerable attention has been paid to determining the optimal reinsurance level under a ruin probability constraint. In one of the earliest studies, De Finetti discusses how optimal levels should be calculated for both excess of loss and proportional reinsurance under the minimum variance criterion for the insurer's expected profit [1]. Dickson and Waters [2] develop De Finetti's approach, focusing on minimizing the ruin probability instead of the variance criterion of [1]. Kaluszka presents the optimal reinsurance that minimizes the ruin probability for truncated stop loss reinsurance [3]. Dickson and Waters consider minimizing the ruin probability under a dynamic reinsurance strategy [4]. Kaishev and Dimitrova suggest a joint survival optimal reinsurance model for excess of loss reinsurance [5]. Nie et al. propose an approach to calculate the optimal reinsurance for a reinsurance arrangement in the lower barrier model with capital injection [6]. Centeno and Simoes survey the state of the art of optimal reinsurance [7].
In addition, Value at Risk (VaR) and Conditional Value at Risk (CVaR) are commonly-used risk measures in the determination of optimal reinsurance. Borch proposes reinsurance as an effective risk management tool for managing an insurer’s risk exposure [8]. Cai and Tan study the optimal retention level according to the VaR and CVaR risk measures for a stop loss reinsurance [9]. Chi and Tan suggest the optimal reinsurance model [10], which aims to minimize VaR and CVaR assuming that the reinsurance premium principle satisfies three basic axioms: distribution invariance, risk loading and stop-loss ordering preserving. Trufin et al. describe a VaR-type risk measure as the value at risk of the maximal deficit of the ruin process in infinite time [11].
Previous studies indicate that researchers usually consider only a single constraint, such as ruin probability, VaR, CVaR, expected profit or expected utility. Very few publications are available in the literature that discuss the issue of optimal reinsurance under more than one constraint. For example, Karageyik and Dickson [12] suggest optimal reinsurance criteria as the released capital, expected profit and expected utility of resulting wealth under the minimum finite time ruin probability. They aim to find the pair of initial surplus and reinsurance level that maximizes the output of these three quantities under the minimum finite time ruin probability by using the translated gamma process to approximate the compound Poisson process. They determine the optimal initial surplus and retention level in a set of alternatives so that each pair satisfies the minimum ruin probability constraint. In the decision making process, they use the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) method with the Mahalanobis distance.
Different from Karageyik and Dickson [12], in this study, we investigate the optimal retention level that makes the survival probability and expected profit maximum and the variance and expected shortfall minimum. We explore the survival probability in the determination of the optimal reinsurance as a new criterion rather than a constraint. We also concentrate on the determination of the optimal retention level in an alternative set that contains a constant initial surplus and the corresponding possible reinsurance levels. The premium is calculated by using the expected value and standard deviation premium principles. In the decision making process, we use two multi-attribute decision making methods, TOPSIS and VlseKriterijumska Optimizacija I Kompromisno Resenje (VIKOR), as well as their modified versions.
The paper is organized as follows. In Section 2, we introduce the optimal reinsurance criteria used to determine the optimal retention level. We define the exact finite time ruin probability, expected profit, variance and expected shortfall for a compound Poisson risk model from the insurer's point of view under the excess of loss reinsurance. The calculation of the expected profit is based on two fundamental premium principles: the expected value and the standard deviation principles. Since the main purpose of the paper is to draw attention to determining the retention level under excess of loss reinsurance, we show how these criteria change according to the retention level, M. In Section 3, we present decision theory and Multi-Attribute Decision Making (MADM). In this context, we use two frequently-used MADM methods, TOPSIS and VIKOR, with their modified versions. In Section 4, we provide an analysis of the determination of the optimal retention level by considering the four criteria under the excess of loss reinsurance. We show the effect of the premium principle assumption on the optimal reinsurance level. Furthermore, we examine the sensitivity of the model to the choice of initial surplus and time horizon. In Section 5, we conclude the paper.

2. Main Factors on the Determination of Optimal Reinsurance

The aim of this paper is to calculate the optimal retention levels by considering the survival probability, expected profit, variance and expected shortfall criteria from the insurer’s point of view. We show how these criteria are calculated under the excess of loss reinsurance arrangement.

2.1. The Exact Finite Time Ruin Probability

The surplus process (insurer's risk process) comprises three main components: the initial capital u, the premium income per unit of time and the aggregate claim amount up to time t, denoted by S(t). Premium income is assumed to be payable continuously at a constant rate c per unit of time. The insurer's surplus (or risk) process, $\{U(t)\}_{t \ge 0}$, is defined by:
$$U(t) = u + ct - S(t).$$
The aggregate claim amount up to time t, S(t), is:
$$S(t) = \sum_{i=1}^{N(t)} X_i,$$
where N(t) denotes the number of claims that occur in the fixed time interval [0, t]. The individual claim amounts are modeled as independent and identically distributed (i.i.d.) random variables $\{X_i\}_{i=1}^{\infty}$ with distribution function $F(x) = \Pr(X_1 \le x)$, such that F(0) = 0, where $X_i$ is the amount of the i-th claim. We use the notation f and $m_k$ to represent the density function and the k-th moment of $X_1$, respectively, and it is assumed that $c > \mathrm{E}[N]\, m_1$.
The ruin probability in finite time, ψ(u, t), is given by:
$$\psi(u,t) = \Pr\left(U(s) < 0 \ \text{for some } s,\ 0 < s \le t\right),$$
where ψ(u, t) is the probability that the insurer's surplus falls below zero in the finite time interval (0, t].
In the classical risk model, the number of claims has a Poisson distribution with rate λ, so that the aggregate claims have a compound Poisson process with Poisson parameter λ, and the individual claim amounts are exponentially distributed with distribution function F ( x ) . For a fixed value of t > 0 , the random variable S ( t ) has a compound Poisson distribution with Poisson parameter λ t .
In practice, ψ(u, t), where t is the planning horizon of the company, is of more interest than the infinite time ruin probability. The finite time ruin probability enables the insurance company to develop the risk business or to increase the premium if the risk business behaves badly. Especially in non-life insurance, a finite planning horizon of four or five years is reasonable [13]. In particular, the finite time ruin probability may become useful and significant for the operational risk of the insurance company when real data are available.
The finite time ruin probability can be calculated analytically for a few special types of the individual claim amount distribution. Prabhu proposes a finite time ruin probability formula [14], when u 0 , as a function of the distribution of the total claim amount in a specified time interval. Seal develops Prabhu’s formula considering the exponential individual claims [15]. De Vylder suggests a simple method that approximates a classical risk process { U ( t ) } t 0 by another classical risk process { U ^ ( t ) } t 0 [16]. Segerdahl proposes a formula that extends the Cramer–Lundberg approximation by adding a time factor to obtain the finite time ruin probability [17]. Iglehart [18], Grandell [13] and Asmussen and Albrecher [19] study the finite time ruin probability by using diffusion approximation techniques. Dufresne et al. investigate the infinite time ruin probability when the aggregate claims process is the standardized gamma process [20]. Then, Dickson and Waters suggest gamma and the translated gamma process approximations in the classical risk model to calculate the finite time ruin probability [21]. The finite time ruin probability can also be approximated by using Monte Carlo simulations even though it is a time-consuming procedure.
Asmussen and Albrecher present an exact finite time ruin probability formula for exponentially distributed individual claim amounts [19]. In this formula, it is assumed that the individual claim amounts are exponentially distributed with parameter β = 1, the number of claims has a Poisson distribution with parameter λ and the premium rate per unit of time is equal to one (c = 1). Then, the finite time ruin probability is calculated as:
$$\psi(u,t) = \lambda \exp\{-(1-\lambda)u\} - \frac{1}{\pi} \int_0^{\pi} \frac{f_1(x)\, f_2(x)}{f_3(x)}\, dx,$$
where:
$$f_1(x) = \lambda \exp\left\{2\sqrt{\lambda}\, t \cos(x) - (1+\lambda)t + u\left(\sqrt{\lambda}\cos(x) - 1\right)\right\},$$
$$f_2(x) = \cos\left(u\sqrt{\lambda}\sin(x)\right) - \cos\left(u\sqrt{\lambda}\sin(x) + 2x\right),$$
and:
$$f_3(x) = 1 + \lambda - 2\sqrt{\lambda}\cos(x).$$
The major drawback of this approach is the restriction on the parameter of the individual claim amount distribution (β = 1) and on the premium rate (c = 1). When β ≠ 1, the following scaling relation is applied [19]:
$$\psi_{\lambda,\beta}(u,t) = \psi_{\lambda/\beta,\,1}(\beta u,\, \beta t),$$
and the following relation is applicable when c ≠ 1 [22]:
$$\psi_{\lambda,c}(u,t) = \psi_{\lambda/c,\,1}(u,\, c t).$$
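As an illustration, the formula above can be evaluated numerically. The following Python sketch is our own (the function name, default parameters and the midpoint-rule integration are illustrative choices, not from the paper); it combines the integral formula with the two scaling relations:

```python
import math

def ruin_prob(u, t, lam=0.5, beta=1.0, c=1.0, steps=2000):
    """Finite-time ruin probability psi(u, t) for exponential claims via the
    Asmussen-Albrecher integral formula (beta = c = 1 case), extended to
    general beta and c through the two scaling relations."""
    # psi_{lam,beta}(u,t) = psi_{lam/beta,1}(beta*u, beta*t)
    u, t, lam = beta * u, beta * t, lam / beta
    # psi_{lam,c}(u,t) = psi_{lam/c,1}(u, c*t)
    t, lam = c * t, lam / c

    def integrand(x):
        f1 = lam * math.exp(2.0 * math.sqrt(lam) * t * math.cos(x)
                            - (1.0 + lam) * t
                            + u * (math.sqrt(lam) * math.cos(x) - 1.0))
        f2 = (math.cos(u * math.sqrt(lam) * math.sin(x))
              - math.cos(u * math.sqrt(lam) * math.sin(x) + 2.0 * x))
        f3 = 1.0 + lam - 2.0 * math.sqrt(lam) * math.cos(x)
        return f1 * f2 / f3

    # Midpoint rule over (0, pi); the integrand is smooth, since
    # f3 > 0 whenever lam != 1.
    h = math.pi / steps
    integral = sum(integrand((i + 0.5) * h) for i in range(steps)) * h
    return lam * math.exp(-(1.0 - lam) * u) - integral / math.pi
```

As a sanity check, for λ/(βc) < 1 and a large horizon t, the integral term vanishes and the result approaches the well-known infinite time value λe^{−(1−λ)u} (in the β = c = 1 case).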

The Exact Finite Time Ruin Probability on the Excess of Loss Reinsurance

Under the excess of loss reinsurance, a claim is shared between the insurer and the reinsurer according to a fixed amount called the retention level, M. When a claim X occurs, the insurer pays $X^I = \min(X, M)$, and the reinsurer pays $X^R = \max(0, X - M)$, with $X = X^I + X^R$. Hence, the distribution function of $X^I$, $F_{X^I}(x)$, is:
$$F_{X^I}(x) = \begin{cases} F_X(x) & \text{for } x < M, \\ 1 & \text{for } x \ge M, \end{cases}$$
and the moments of $X^I$ are:
$$\mathrm{E}\left[(X^I)^n\right] = \int_0^M x^n f(x)\, dx + M^n \left(1 - F(M)\right).$$
Similarly, the moments of $X^R$ are:
$$\mathrm{E}\left[(X^R)^n\right] = \int_M^{\infty} (x - M)^n f(x)\, dx.$$
Under the excess of loss reinsurance, the aggregate claims for the reinsurer have a compound Poisson distribution with Poisson parameter $\lambda e^{-\beta M}$, and the individual claim amounts are exponentially distributed with parameter β. In a similar manner, the aggregate claims for the insurer have a compound Poisson distribution with Poisson parameter $\lambda(1 - e^{-\beta M})$, and the individual claim amounts are exponentially distributed with parameter β [23].
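For exponential claims, the two moment formulas above can be checked numerically. The following small Python sketch is our own illustration; the closed form for the reinsurer's moments uses the memoryless property of the exponential distribution:

```python
import math

def moment_insurer(n, M, beta=1.0, steps=10000):
    """n-th moment of X_I = min(X, M) for X ~ Exp(beta):
    E[(X_I)^n] = int_0^M x^n f(x) dx + M^n (1 - F(M)), via midpoint rule."""
    h = M / steps
    integral = sum(((i + 0.5) * h) ** n * beta * math.exp(-beta * (i + 0.5) * h)
                   for i in range(steps)) * h
    return integral + M ** n * math.exp(-beta * M)

def moment_reinsurer(n, M, beta=1.0):
    """n-th moment of X_R = max(X - M, 0): by the memoryless property,
    E[(X_R)^n] = e^{-beta*M} * E[X^n] = e^{-beta*M} * n! / beta^n."""
    return math.exp(-beta * M) * math.factorial(n) / beta ** n

# The first moments add up to the mean of X: E[X_I] + E[X_R] = 1/beta.
```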
The survival probability is defined as the probability that ruin does not occur in the finite time horizon (0, t] and is denoted by $\bar{\psi}(u,t) = 1 - \psi(u,t)$. In this study, we aim to determine the optimal retention level that makes the insurer's survival probability maximum.

2.2. Variance of the Insurer’s Risk

When the individual claim amounts are exponentially distributed with parameter β, the moments of the insurer's individual claim amount under the excess of loss reinsurance can be obtained as [24]:
$$m_k = \mathrm{E}\left[(X^I)^k\right] = \frac{k}{\beta^k}\, \gamma(k, \beta M) \quad \text{for } k = 1, 2, \ldots,$$
where $\gamma(k, x)$ is the lower incomplete gamma function, defined as:
$$\gamma(k, x) = \int_0^{x} t^{k-1}\, e^{-t}\, dt.$$
The variance of the insurer's individual claim amount is:
$$V(X^I) = \mathrm{E}\left[(X^I)^2\right] - \left(\mathrm{E}[X^I]\right)^2,$$
and then, under the assumptions of the classical risk model, the variance of the insurer's aggregate claim amount, denoted by $V(S^I)$, is calculated as [23]:
$$V(S^I) = \left(\mathrm{E}[X^I]\right)^2 V(N) + \mathrm{E}[N]\, V(X^I),$$
where E[N] is the expected number of claims and V(N) is the variance of the number of claims. In this study, we aim to determine the optimal retention level that makes the variance of the insurer's risk minimum.
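Since the Poisson mean equals its variance, the expression for $V(S^I)$ collapses to $\lambda t\, \mathrm{E}[(X^I)^2]$ over a horizon of length t. A Python sketch of the calculation (the function names and the midpoint-rule incomplete gamma are our own illustrative choices):

```python
import math

def lower_incomplete_gamma(k, x, steps=20000):
    """gamma(k, x) = int_0^x t^{k-1} e^{-t} dt, by the midpoint rule."""
    h = x / steps
    return sum(((i + 0.5) * h) ** (k - 1) * math.exp(-(i + 0.5) * h)
               for i in range(steps)) * h

def insurer_claim_moment(k, M, beta=1.0):
    """m_k = E[(X_I)^k] = (k / beta^k) * gamma(k, beta*M) for X ~ Exp(beta)."""
    return k / beta ** k * lower_incomplete_gamma(k, beta * M)

def insurer_aggregate_variance(M, lam=1.0, beta=1.0, t=1.0):
    """V(S_I) = E[X_I]^2 V(N) + E[N] V(X_I); for N ~ Poisson(lam*t) the
    mean and variance of N coincide, so this equals lam*t*E[(X_I)^2]."""
    m1 = insurer_claim_moment(1, M, beta)
    m2 = insurer_claim_moment(2, M, beta)
    var_x = m2 - m1 ** 2
    mean_n = var_n = lam * t          # Poisson: mean = variance
    return m1 ** 2 * var_n + mean_n * var_x
```

As M grows, the variance approaches that of the unreinsured aggregate claims (2λt/β² for exponential claims), and it shrinks toward zero as M decreases.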

2.3. Expected Profit

In general, the expected profit of the insurance company is determined as the difference between the insurer's income and its liabilities to the policyholders. The insurer's income is the total premium income, whereas the liabilities are the benefit payments. The insurer's profit is influenced by many factors, such as the pure risk premium, total claim amount, reinsurance level, insurance and reinsurance loading factors, investment income, taxes, capital gains and dividends. In this study, we assume that premiums and claims are the main components of the insurance profit; thus, we ignore the other factors, such as investment income, dividend payments and taxes.
Premium principles have a significant influence on the calculation of the expected profit. In this study, we calculate the expected profit according to two basic premium principles. First, we use the expected value premium principle, which is commonly used in the literature. This premium principle depends on the expected aggregate claims, insurance and reinsurance loading factors. The second method is the standard deviation premium principle, which considers the expected value, as well as the standard deviation of the aggregate claims.
In this study, we aim to calculate the optimal retention level that makes the insurer’s expected profit maximum by using the expected and the standard deviation premium principles.

2.3.1. Expected Value Premium Principle

In the classical risk model, it is assumed that the number of claims has a Poisson distribution with parameter λ. According to the expected value premium principle with insurance loading factor θ and reinsurance loading factor ξ, the insurer's premium income per unit of time after the reinsurance premium (i.e., net of reinsurance) is defined as:
$$c = \text{Total Premium Income} - \text{Reinsurance Premium} = (1+\theta)\,\mathrm{E}[S] - (1+\xi)\,\mathrm{E}[S^R] = (1+\theta)\,\mathrm{E}[N]\,\mathrm{E}[X] - (1+\xi)\,\mathrm{E}[N]\,\mathrm{E}[X^R],$$
where E[S] is the expected aggregate claim amount and $\mathrm{E}[S^R]$ is the expected aggregate claim amount paid by the reinsurer. It is also assumed that $\xi > \theta > 0$ and that $c > \lambda\, \mathrm{E}[X^I]$.
The net profit of the insurance company after the reinsurance arrangement is obtained by subtracting the expected total claim amount paid by the insurer, $\mathrm{E}[S^I]$, from the net insurance premium income, c:
$$\text{Net Insurance Profit} = c - \mathrm{E}[S^I],$$
where $\mathrm{E}[S^I] = \mathrm{E}[N]\,\mathrm{E}[X^I]$.
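A small Python sketch of the two formulas above, using the paper's later numerical choices (λ = β = 1, θ = 0.1, ξ = 0.15) as illustrative defaults and the closed-form exponential moments E[X] = 1/β and E[X^R] = e^{−βM}/β:

```python
import math

def net_premium_expected_value(M, lam=1.0, beta=1.0, theta=0.1, xi=0.15):
    """Insurer's premium income per unit time, net of reinsurance, under
    the expected value principle: c = (1+theta) lam E[X] - (1+xi) lam E[X_R]."""
    ex = 1.0 / beta                       # E[X] for X ~ Exp(beta)
    ex_r = math.exp(-beta * M) / beta     # E[X_R] = e^{-beta*M} / beta
    return (1 + theta) * lam * ex - (1 + xi) * lam * ex_r

def expected_net_profit(M, lam=1.0, beta=1.0, theta=0.1, xi=0.15):
    """Expected profit per unit time: c - E[S_I] = c - lam * E[X_I]."""
    ex_i = (1.0 - math.exp(-beta * M)) / beta
    return net_premium_expected_value(M, lam, beta, theta, xi) - lam * ex_i
```

Under these assumptions, the expected profit reduces to λ(θ − ξe^{−βM})/β per unit time, so it is positive exactly when M > log(ξ/θ)/β and approaches the no-reinsurance profit λθ/β as M grows.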

2.3.2. Standard Deviation Premium Principle

According to the standard deviation premium principle with loading factor α, the insurer's premium income per unit of time after the reinsurance premium (i.e., net of reinsurance) is defined as:
$$c = \mathrm{E}[S] + \alpha\sqrt{V(S)} - \left(\mathrm{E}[S^R] + \alpha\sqrt{V(S^R)}\right),$$
where $V(S^R)$ denotes the variance of the aggregate claim amount paid by the reinsurer. This principle is preferred when the fluctuation of the aggregate claims is important; hence, it enables the insurer to calculate a more accurate premium than the expected value premium principle provides.
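Under the compound Poisson assumptions, E[S] = λ/β and V(S) = λE[X²] = 2λ/β² per unit time, and the reinsurer's aggregate claims are compound Poisson with rate λe^{−βM} and Exp(β) amounts. A sketch of the net premium (the function name and the α = 0.01 default follow the paper's later numerical choices; this is our own illustration):

```python
import math

def net_premium_std_dev(M, lam=1.0, beta=1.0, alpha=0.01):
    """Net premium per unit time under the standard deviation principle:
    c = E[S] + alpha*sqrt(V(S)) - (E[S_R] + alpha*sqrt(V(S_R))).
    For compound Poisson claims, V(S) = lam * E[X^2]; the reinsurer's
    claims are compound Poisson with rate lam*e^{-beta*M}, Exp(beta)."""
    e_s = lam / beta
    v_s = lam * 2.0 / beta ** 2           # lam * E[X^2] for X ~ Exp(beta)
    lam_r = lam * math.exp(-beta * M)
    e_sr = lam_r / beta
    v_sr = lam_r * 2.0 / beta ** 2
    return e_s + alpha * math.sqrt(v_s) - (e_sr + alpha * math.sqrt(v_sr))
```

At M = 0, the whole risk is ceded and the net premium is zero; as M grows, it increases toward the full premium E[S] + α√V(S).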

2.4. Expected Shortfall

Value at Risk (VaR) measures the potential loss on a portfolio over a given time horizon at a given confidence level. The VaR of a portfolio at confidence level p ∈ (0, 1) is given by the smallest number l such that the probability that the loss L does not exceed l is at least p [25]. Here, L is the loss of the portfolio, and in insurance contexts, it is usually appropriate to assume that L is non-negative. $VaR_p(L)$ is the level-p quantile, i.e.:
$$VaR_p(L) = \min\{l \in \mathbb{R} : \Pr[L \le l] \ge p\}.$$
Expected Shortfall (ES) is a financial risk measure used to investigate the market risk or credit risk of a portfolio. It is defined as the average of $VaR_u$ over the levels u ≥ p. Expected Shortfall is preferred to VaR since it is more sensitive to the shape of the loss distribution in the tail. Expected Shortfall is also called CVaR, Average Value at Risk (AVaR) or Expected Tail Loss (ETL). The ES at confidence level p ∈ (0, 1) is given by:
$$ES_p(X) = \frac{1}{1-p} \int_p^1 VaR_u(X)\, du.$$
Under the excess of loss reinsurance, the aggregate claims for the insurer have a compound Poisson distribution with Poisson parameter $\lambda(1 - e^{-\beta M})$, and the individual claim amounts are exponentially distributed with parameter β. We calculate the ES for this compound Poisson distribution according to the retention level, M. The ES of the compound Poisson model is calculated by using the R programming language [26].
An increase in the retention level increases the insurer's liability and, thus, the ES. In this study, we aim to determine the optimal retention level that makes the insurer's ES minimum.
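The paper computes the ES in R [26]; a rough Monte Carlo sketch of the same quantity in Python (the function name, simulation size and the confidence level p = 0.95 are our own illustrative choices, and the tail average is a standard approximation of the integral definition above):

```python
import math
import random

def expected_shortfall(M, lam=1.0, beta=1.0, p=0.95, n_sims=200000, seed=1):
    """Monte Carlo ES_p of the insurer's aggregate claims under excess of
    loss: per the compound Poisson representation above, the claim count is
    Poisson(lam*(1 - e^{-beta*M})) with Exp(beta) amounts (unit horizon).
    ES_p is approximated by the average of the losses beyond VaR_p."""
    rng = random.Random(seed)
    lam_i = lam * (1.0 - math.exp(-beta * M))
    losses = []
    for _ in range(n_sims):
        # Poisson claim count via inversion of the CDF
        n, acc, u = 0, math.exp(-lam_i), rng.random()
        cum = acc
        while u > cum:
            n += 1
            acc *= lam_i / n
            cum += acc
        losses.append(sum(rng.expovariate(beta) for _ in range(n)))
    losses.sort()
    tail = losses[int(p * n_sims):]       # losses at or beyond the p-quantile
    return sum(tail) / len(tail)
```

Consistent with the remark above, the estimate increases with the retention level M, since a higher M raises the insurer's claim frequency parameter.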

3. Multi-Attribute Decision Making

Multiple-Criteria Decision Making (MCDM) is used to make decisions in the presence of multiple, usually conflicting, criteria. MCDM problems are broadly classified into two categories: MADM and Multiple Objective Decision Making (MODM). MODM is concerned with designing a problem, whereas MADM is based on solving a problem by selecting among a finite number of alternatives [27]. Hence, we focus on MADM to determine the optimal retention levels.
Multi-Attribute Decision Making methods mainly comprise four components: alternatives, attributes, weights describing the relative importance of each attribute and measures of the performance of the alternatives with respect to the attributes. The decision matrix D is an m × n matrix that shows m alternative options assessed on n attributes (criteria). Each element, $X_{ij}$, is either a single numerical value or a single grade representing the performance of alternative i on criterion j. The decision table is shown in Table 1. In our analysis, we take the variance of the insurer's risk, the expected shortfall, the expected profit and the survival probability as the attributes (criteria), whereas the alternatives are the retention levels. Hence, $X_{ij}$ shows the value of each criterion for the corresponding retention level.
In MADM, the importance of each attribute is described by the weights of the attributes. A set of weights for n attributes is written as:
$$w^T = (w_1, w_2, \ldots, w_n),$$
where $\sum_{j=1}^{n} w_j = 1$.
Hwang and Yoon suggest four techniques in MCDM to calculate the weights of criteria [28]: the eigenvector method, the weighted least squares method, the entropy method and the linear programming technique for multidimensional analysis of the preference method.
In this study, we choose to focus on the entropy method for the determination of the weights of the criteria due to its practicality. In the entropy method, it is assumed that the amount of uncertainty in a criterion is represented by a discrete probability distribution, $P_i$. The project outcome $P_{ij}$ of alternative i on attribute j is defined as:
$$P_{ij} = \frac{X_{ij}}{\sum_{i=1}^{m} X_{ij}}, \quad \forall i, j.$$
The entropy $E_j$ of the set of project outcomes of attribute j is:
$$E_j = -k \sum_{i=1}^{m} P_{ij} \ln P_{ij}, \quad \forall j,$$
where $k = 1/\ln(m)$ and $0 \le E_j \le 1$. The degree of diversification, $d_j$, is calculated as $d_j = 1 - E_j$ for all j, and then, the weights are calculated as:
$$w_j = \frac{d_j}{\sum_{j=1}^{n} d_j}, \quad \forall j.$$
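The entropy weighting steps above can be sketched as follows (a minimal illustration; the function name and the list-of-lists layout of the decision matrix are our own choices):

```python
import math

def entropy_weights(X):
    """Entropy weights for a decision matrix X (m alternatives x n criteria):
    P_ij = X_ij / sum_i X_ij;  E_j = -k sum_i P_ij ln P_ij with k = 1/ln(m);
    d_j = 1 - E_j;  w_j = d_j / sum_j d_j."""
    m, n = len(X), len(X[0])
    k = 1.0 / math.log(m)
    divers = []
    for j in range(n):
        col_sum = sum(X[i][j] for i in range(m))
        p = [X[i][j] / col_sum for i in range(m)]
        e_j = -k * sum(pi * math.log(pi) for pi in p if pi > 0)
        divers.append(1.0 - e_j)          # degree of diversification d_j
    total = sum(divers)
    return [d / total for d in divers]
```

A criterion whose column is constant across the alternatives carries no information: its entropy is 1, its degree of diversification is 0 and it receives zero weight.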
For expressing inter-attribute preference information, methods based on the cardinal preference of attributes are commonly preferred. These methods can be categorized into seven groups: the Linear Assignment Method (LAM), the Simple Additive Weighting Method (SAW), the Hierarchical Additive Weighting Method (HAWM), the Analytical Hierarchy Process (AHP), the Elimination and Choice Translating Reality (ELECTRE) method, TOPSIS and VIKOR.

3.1. Technique for Order of Preference by Similarity to Ideal Solution

3.1.1. Technique for Order of Preference by Similarity to Ideal Solution Method with Euclidean Distance

Hwang and Yoon propose the TOPSIS method to determine the best alternative based on the concept of a compromise solution [28]. The method chooses the solution with the shortest Euclidean distance from the positive ideal solution and the farthest Euclidean distance from the negative ideal solution. One of the main assumptions of the TOPSIS method is that the criteria are monotonically increasing or decreasing. The ideal solution is defined as the one that maximizes the benefit criteria and minimizes the cost criteria, whereas the negative ideal solution maximizes the cost criteria and minimizes the benefit criteria. The alternatives are ranked according to their relative proximity to the ideal solution.
References to the TOPSIS method for real data can be found in a number of studies, including Wang and Hsu [29], Wu and Olson [30], Shih et al. [31], Jahanshahloo et al. [32], Bulgurcu [33], Zhu et al. [34] and Hosseini et al. [35].
The procedure of the TOPSIS method is as follows:
Step 1: The decision matrix is normalized by using the vector-normalization technique:
$$r_{ij} = \frac{x_{ij}}{\sqrt{\sum_{i=1}^{m} x_{ij}^2}},$$
where $r_{ij}$ is the normalized value for $i = 1, 2, \ldots, m$ and $j = 1, 2, \ldots, n$.
Step 2: Weighted-normalized values are calculated by using the weight vector $w = (w_1, w_2, \ldots, w_n)$:
$$V_{ij} = w_j\, r_{ij}, \quad i = 1, \ldots, m \ \text{and}\ j = 1, \ldots, n.$$
Step 3: The positive ideal points $S^+$ are determined as:
$$S^+ = \left(S_1^+, S_2^+, \ldots, S_j^+, \ldots, S_n^+\right) = \left\{\left(\max_i V_{ij} \mid j \in J\right), \left(\min_i V_{ij} \mid j \in J'\right)\right\},$$
and the negative ideal points $S^-$ are determined as:
$$S^- = \left(S_1^-, S_2^-, \ldots, S_j^-, \ldots, S_n^-\right) = \left\{\left(\min_i V_{ij} \mid j \in J\right), \left(\max_i V_{ij} \mid j \in J'\right)\right\},$$
where $J = \{j = 1, 2, \ldots, n \mid j\ \text{associated with the benefit criteria}\}$ and $J' = \{j = 1, 2, \ldots, n \mid j\ \text{associated with the cost criteria}\}$.
Step 4: The distance between each alternative and the positive ideal solution is calculated by using the n-dimensional Euclidean distance:
$$d_i^+ = \sqrt{\sum_{j=1}^{n} \left(V_{ij} - S_j^+\right)^2}, \quad i = 1, 2, \ldots, m,$$
and, likewise, the distance between each alternative and the negative ideal solution:
$$d_i^- = \sqrt{\sum_{j=1}^{n} \left(V_{ij} - S_j^-\right)^2}, \quad i = 1, 2, \ldots, m.$$
Step 5: The relative closeness of each alternative to the ideal solution is calculated as:
$$C_i = \frac{d_i^-}{d_i^+ + d_i^-},$$
where $C_i \in [0, 1]$ for $i = 1, \ldots, m$. The alternatives are ranked by the value of $C_i$; a higher $C_i$ means that alternative $A_i$ is a better solution.
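Steps 1 to 5 above can be sketched in a few lines of Python (our own illustrative implementation; `benefit[j]` flags criterion j as a benefit criterion, otherwise it is treated as a cost criterion):

```python
import math

def topsis(X, weights, benefit):
    """Classical TOPSIS. X: m x n decision matrix, weights: length-n list,
    benefit[j]: True if criterion j should be maximized."""
    m, n = len(X), len(X[0])
    # Steps 1-2: vector normalization, then weighting
    norms = [math.sqrt(sum(X[i][j] ** 2 for i in range(m))) for j in range(n)]
    V = [[weights[j] * X[i][j] / norms[j] for j in range(n)] for i in range(m)]
    # Step 3: positive / negative ideal points
    s_pos = [max(V[i][j] for i in range(m)) if benefit[j]
             else min(V[i][j] for i in range(m)) for j in range(n)]
    s_neg = [min(V[i][j] for i in range(m)) if benefit[j]
             else max(V[i][j] for i in range(m)) for j in range(n)]
    # Steps 4-5: Euclidean distances and relative closeness
    scores = []
    for i in range(m):
        d_pos = math.sqrt(sum((V[i][j] - s_pos[j]) ** 2 for j in range(n)))
        d_neg = math.sqrt(sum((V[i][j] - s_neg[j]) ** 2 for j in range(n)))
        scores.append(d_neg / (d_pos + d_neg))
    return scores  # a higher C_i marks a better alternative
```

An alternative that is best on every criterion coincides with the positive ideal point and obtains the maximal closeness C_i = 1.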

3.1.2. Technique for Order of Preference by Similarity to Ideal Solution Method with Mahalanobis Distance

The Euclidean distance measure is based on the ordinary (i.e., straight-line) distance between two points in Euclidean space. Hence, the TOPSIS method with Euclidean distance assumes that there is no relationship between the attributes. This approach suffers from information overlap and either overestimates or underestimates attributes that carry redundant information [36]. When attributes are dependent and influence each other, applying TOPSIS with Euclidean distances can lead to inaccurate estimates of the relative significance of the alternatives and to improper ranking results [37]. Therefore, the Mahalanobis distance has been suggested as an alternative to the Euclidean distance in the TOPSIS method.
The Mahalanobis distance, also called the quadratic distance, was introduced by Mahalanobis [38]. For a multivariate vector $x = (x_1, x_2, \ldots, x_N)^T$ from a group of observations with mean $\mu = (\mu_1, \mu_2, \ldots, \mu_N)^T$ and covariance matrix Σ, the Mahalanobis distance is defined as:
$$D_M(x) = \sqrt{(x - \mu)^T\, \Sigma^{-1}\, (x - \mu)}.$$
The Mahalanobis distance standardizes the data through the inverse of the covariance matrix, $\Sigma^{-1}$. When the attributes are not related to one another, the weighted Mahalanobis distance and the weighted Euclidean distance are equivalent [36].
References to the TOPSIS method with Mahalanobis distance can be found in a number of studies, such as Wang and Wang [36], Garca and Ibarra [39], Chang et al. [40] and Lahby et al. [41].
The TOPSIS method with Mahalanobis distance has the following steps. The decision matrix, with elements $X_{ij}$, is standardized as follows:
$$r_{ij} = \frac{x_{ij}}{\sum_{i=1}^{m} x_{ij}}.$$
Let the ideal solution and the negative ideal solution (anti-ideal solution) be $S^+$ and $S^-$, respectively, as in the TOPSIS case, and let $A_i$ denote the i-th alternative. The Mahalanobis distance from $A_i$ to the positive/negative ideal solution is calculated as:
$$d(r_i, S^{+/-}) = \sqrt{\left(S^{+/-} - r_i\right)^T \Omega^T\, \Sigma^{-1}\, \Omega \left(S^{+/-} - r_i\right)}, \quad i = 1, 2, \ldots, m,$$
where $w = (w_1, w_2, \ldots, w_n)$ is the weight vector and $\Omega = \mathrm{diag}(w_1, w_2, \ldots, w_n)$. The closeness of each alternative is calculated as:
$$c_i = \frac{d(r_i, S^-)}{d(r_i, S^-) + d(r_i, S^+)}, \quad i = 1, 2, \ldots, m.$$
The results for the alternatives are sorted according to the value of c i . Higher c i suggests that A i is a better solution.
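A sketch of this variant (our own illustration: the covariance matrix is estimated from the columns of the normalized decision matrix, and a small Gauss-Jordan routine stands in for a linear algebra library; Σ must be non-singular, so perfectly correlated criteria are excluded):

```python
import math

def mat_inverse(A):
    """Gauss-Jordan inverse of a small square matrix (list of lists)."""
    n = len(A)
    M = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(A)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        p = M[col][col]
        M[col] = [v / p for v in M[col]]
        for r in range(n):
            if r != col and M[r][col] != 0.0:
                f = M[r][col]
                M[r] = [v - f * w for v, w in zip(M[r], M[col])]
    return [row[n:] for row in M]

def topsis_mahalanobis(X, weights, benefit):
    """TOPSIS with the weighted Mahalanobis distance
    d(r_i, S) = sqrt((S - r_i)^T Omega^T Sigma^{-1} Omega (S - r_i)),
    Omega = diag(weights), Sigma = covariance of the normalized columns."""
    m, n = len(X), len(X[0])
    col = [sum(X[i][j] for i in range(m)) for j in range(n)]
    R = [[X[i][j] / col[j] for j in range(n)] for i in range(m)]  # r_ij
    mu = [sum(R[i][j] for i in range(m)) / m for j in range(n)]
    Sigma = [[sum((R[i][a] - mu[a]) * (R[i][b] - mu[b])
                  for i in range(m)) / (m - 1)
              for b in range(n)] for a in range(n)]
    Sinv = mat_inverse(Sigma)
    s_pos = [max(R[i][j] for i in range(m)) if benefit[j]
             else min(R[i][j] for i in range(m)) for j in range(n)]
    s_neg = [min(R[i][j] for i in range(m)) if benefit[j]
             else max(R[i][j] for i in range(m)) for j in range(n)]

    def dist(r, s):
        diff = [weights[j] * (s[j] - r[j]) for j in range(n)]
        return math.sqrt(max(0.0, sum(diff[a] * Sinv[a][b] * diff[b]
                                      for a in range(n) for b in range(n))))

    return [dist(R[i], s_neg) / (dist(R[i], s_neg) + dist(R[i], s_pos))
            for i in range(m)]
```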

3.1.3. Modified TOPSIS Method

The traditional TOPSIS method uses the Euclidean distance to calculate the overall performance of each alternative and compares all alternatives with the ideal and negative ideal points of each criterion. Deng et al. propose using the weighted Euclidean distance instead of creating a weighted decision matrix [42]. Thus, the positive and negative ideal solutions do not depend on the weighted decision matrix [27]. According to the modified TOPSIS method, the positive ideal solution $R^+$ is defined as:
$$R^+ = \left\{\left(\max_i r_{ij} \mid j \in J\right), \left(\min_i r_{ij} \mid j \in J'\right)\right\} = \left(R_1^+, R_2^+, R_3^+, \ldots, R_n^+\right),$$
and the negative ideal solution $R^-$ is defined as:
$$R^- = \left\{\left(\min_i r_{ij} \mid j \in J\right), \left(\max_i r_{ij} \mid j \in J'\right)\right\} = \left(R_1^-, R_2^-, R_3^-, \ldots, R_n^-\right),$$
where $J = \{j = 1, 2, \ldots, n \mid j\ \text{associated with the beneficial attributes}\}$ and $J' = \{j = 1, 2, \ldots, n \mid j\ \text{associated with the non-beneficial attributes}\}$. The weighted Euclidean distance is calculated as:
$$D_i^{+/-} = \left\{\sum_{j=1}^{n} w_j \left(r_{ij} - R_j^{+/-}\right)^2\right\}^{0.5}, \quad i = 1, 2, \ldots, m.$$
The relative closeness of a particular alternative to the ideal solution is expressed as:
$$C_i = \frac{D_i^-}{D_i^+ + D_i^-},$$
where $C_i \in [0, 1]$ for $i = 1, \ldots, m$. The alternatives are ranked by $C_i$; the alternative with the maximum value of $C_i$ is the most preferable and feasible solution.
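The modified method differs from classical TOPSIS only in where the weights enter; a short sketch (our own illustration, reusing the same input conventions as above) makes the difference explicit:

```python
import math

def modified_topsis(X, weights, benefit):
    """Modified TOPSIS (after Deng et al.): the ideal points are taken from
    the normalized, UNWEIGHTED matrix; the weights enter only through the
    weighted Euclidean distance D_i = sqrt(sum_j w_j (r_ij - R_j)^2)."""
    m, n = len(X), len(X[0])
    norms = [math.sqrt(sum(X[i][j] ** 2 for i in range(m))) for j in range(n)]
    R = [[X[i][j] / norms[j] for j in range(n)] for i in range(m)]
    r_pos = [max(R[i][j] for i in range(m)) if benefit[j]
             else min(R[i][j] for i in range(m)) for j in range(n)]
    r_neg = [min(R[i][j] for i in range(m)) if benefit[j]
             else max(R[i][j] for i in range(m)) for j in range(n)]
    scores = []
    for i in range(m):
        d_pos = math.sqrt(sum(weights[j] * (R[i][j] - r_pos[j]) ** 2
                              for j in range(n)))
        d_neg = math.sqrt(sum(weights[j] * (R[i][j] - r_neg[j]) ** 2
                              for j in range(n)))
        scores.append(d_neg / (d_pos + d_neg))
    return scores
```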

3.2. VIseKriterijumska Optimizacija I Kompromisno Resenje

The VIKOR method, or multi-criteria optimization and compromise solution, is developed for the optimization of complex MCDM systems. It focuses on ranking and selecting from a set of alternatives in the presence of conflicting and incommensurable criteria. This method provides a multiple-criteria ranking index based on the particular measure of “closeness to the ideal” solution [43]. The idea of the compromise solution has been introduced in MCDM by Po-Lung Yu [44] and Milan Zeleny [45] and developed by Opricovic and Tzeng [46,47,48,49] and Tzeng et al. [50,51,52].
The VIKOR method is based on the $L_p$-metric:
$$L_{p,i} = \left\{\sum_{j=1}^{n} \left(w_j \left[X_j^* - x_{ij}\right] / \left[X_j^* - X_j^-\right]\right)^p\right\}^{1/p},$$
with $1 \le p \le \infty$ and $i = 1, 2, \ldots, m$. The compromise ranking algorithm VIKOR has the following steps:
Step 1: Determine the best $X_j^*$ and the worst $X_j^-$ values of all criterion functions, $j = 1, 2, \ldots, n$. If the j-th function represents a benefit criterion, $X_j^* = \max_i X_{ij}$ (the aspired/desired level) and $X_j^- = \min_i X_{ij}$; if it represents a cost criterion, $X_j^* = \min_i X_{ij}$ and $X_j^- = \max_i X_{ij}$ (the non-aspired/undesired level).
Step 2: Compute the values $S_i$ (the maximum group utility) and $R_i$ (the minimum individual regret of the opponent) for $i = 1, 2, \ldots, m$ by the relations:
$$S_i = L_{1,i} = \sum_{j=1}^{n} w_j \left[X_j^* - x_{ij}\right] / \left[X_j^* - X_j^-\right],$$
and:
$$R_i = \max_j \left(w_j \left[X_j^* - x_{ij}\right] / \left[X_j^* - X_j^-\right]\right),$$
where $w_j$ is the weight of the j-th criterion, which expresses the relative importance of the criteria.
Step 3: Compute the synthesized index $Q_i$ for $i = 1, 2, \ldots, m$:
$$Q_i = \nu \left(S_i - S^*\right) / \left(S^- - S^*\right) + (1 - \nu) \left(R_i - R^*\right) / \left(R^- - R^*\right),$$
where $S^* = \min_i S_i$, $S^- = \max_i S_i$, $R^* = \min_i R_i$, $R^- = \max_i R_i$ and ν is introduced as the weight of the strategy of $S_i$ and $R_i$.
Step 4: Rank the alternatives, sorting by the values of S, R and Q in decreasing order. The results are three ranking lists.
Step 5: Propose as a compromise solution the alternative $a'$ that is ranked best by the measure Q (minimum Q) if the following two conditions are satisfied:
C1. "Acceptable advantage": $Q(a'') - Q(a') \ge DQ$, where $a''$ is the alternative in the second position of the ranking list by Q, $DQ = 1/(J-1)$ and J is the number of alternatives.
C2. "Acceptable stability in decision making": Alternative $a'$ must also be the best ranked by S or/and R. This compromise solution is stable within a decision making process, which could be "voting by majority rule" (when ν > 0.5 is needed), "by consensus" (ν ≈ 0.5) or "with veto" (ν < 0.5). Here, ν is the weight of the decision making strategy "the majority of criteria" (or "the maximum group utility").
If one of the conditions is not satisfied, a set of compromise solutions is proposed, which consists of:
  • Alternatives $a'$ and $a''$ if only condition C2 is not satisfied, or
  • Alternatives $a', a'', \ldots, a^{(n)}$ if condition C1 is not satisfied, where $a^{(n)}$ is determined by the relation $Q(a^{(n)}) - Q(a') < DQ$ for maximum n (the positions of these alternatives are "in closeness").
The best alternative, ranked by Q, is the one with the minimum value of Q [53].
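Steps 1 to 3 above can be sketched as follows (our own illustration; the ranking in Step 4 and the compromise conditions C1 and C2, with DQ = 1/(J−1), are then applied to the returned lists):

```python
def vikor(X, weights, benefit, v=0.5):
    """VIKOR: returns the (S, R, Q) lists; a smaller Q marks a better
    alternative. S_i = sum_j w_j (X_j* - x_ij)/(X_j* - X_j^-) and
    R_i = max_j of the same terms."""
    m, n = len(X), len(X[0])
    best = [max(X[i][j] for i in range(m)) if benefit[j]
            else min(X[i][j] for i in range(m)) for j in range(n)]
    worst = [min(X[i][j] for i in range(m)) if benefit[j]
             else max(X[i][j] for i in range(m)) for j in range(n)]
    S, R = [], []
    for i in range(m):
        terms = [weights[j] * (best[j] - X[i][j]) / (best[j] - worst[j])
                 for j in range(n)]
        S.append(sum(terms))      # maximum group utility
        R.append(max(terms))      # individual regret
    s_star, s_minus = min(S), max(S)
    r_star, r_minus = min(R), max(R)
    Q = [v * (S[i] - s_star) / (s_minus - s_star)
         + (1 - v) * (R[i] - r_star) / (r_minus - r_star) for i in range(m)]
    return S, R, Q
```

Note that for a cost criterion, the term (X_j* − x_ij)/(X_j* − X_j^−) still lies in [0, 1], since both the numerator and denominator change sign together.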
Huang et al. propose a revised VIKOR method that depends on the normalization of the levels of regret [53]. The revised VIKOR method enables one to choose the best alternative by using the best and the worst values of each criterion instead of only the best value of each criterion.

3.3. A Revised VIseKriterijumska Optimizacija I Kompromisno Resenje Method

Huang et al. revise the VIKOR method by using regret theory [53]. The VIKOR method describes regret as the difference between each alternative and the best value of each criterion, whereas regret theory defines regret as the choiceless utility. In the revised VIKOR method, S_i (the choiceless utilities) and R_i (the discontent utilities) can be defined as:
S_i = \sum_{j=1}^{n} w_j \left\| \frac{1}{m-1} \sum_{k=1}^{m} g(X_{kj}, X_{ij}) \right\|_p, \qquad g(X_{kj}, X_{ij}) = \begin{cases} X_{kj} - X_{ij}, & \text{if } X_{ij} < X_{kj}, \\ 0, & \text{otherwise,} \end{cases}
and:
R_i = \sum_{j=1}^{n} w_j \left\| X_j^{*} - X_{ij} \right\|_p,
where X_j^{*} is the best value of the j-th criterion and \| \cdot \|_p denotes the L_p-norm. In this paper, we use the L_2-norm to obtain the values of S and R. Then, the synthesized index Q is calculated as:
Q_i = \nu \, \frac{S_i - S^{*}}{S^{-} - S^{*}} + (1 - \nu) \, \frac{R_i - R^{*}}{R^{-} - R^{*}},
where S^{*} = \min_i S_i, S^{-} = \max_i S_i, R^{*} = \min_i R_i, R^{-} = \max_i R_i and ν is introduced as the weight of the strategy of S_i and R_i.
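The indices above might be computed as in the sketch below. Two modelling choices are ours, not statements from [53]: all criteria are treated as benefit-type (larger is better), and the L_p-norm is taken over the weighted per-criterion regrets; degenerate ties in S or R are not handled.

```python
import numpy as np

def revised_vikor(X, w, p=2, nu=0.5):
    """Sketch of the revised VIKOR indices S, R, Q for a decision matrix
    X of shape (m alternatives, n criteria) with weights w."""
    X = np.asarray(X, float)
    w = np.asarray(w, float)
    m, n = X.shape
    # choiceless utility: average shortfall against better alternatives,
    # i.e. max(X_kj - X_ij, 0) summed over k and averaged over m - 1
    short = np.maximum(X[None, :, :] - X[:, None, :], 0.0)   # (i, k, j)
    S = np.linalg.norm(w * short.sum(axis=1) / (m - 1), ord=p, axis=1)
    # discontent utility: distance to the best value of each criterion
    best = X.max(axis=0)
    R = np.linalg.norm(w * (best - X), ord=p, axis=1)
    # synthesized index, normalized between the extremes of S and R
    Q = (nu * (S - S.min()) / (S.max() - S.min())
         + (1 - nu) * (R - R.min()) / (R.max() - R.min()))
    return S, R, Q
```

An alternative that attains the best value on every criterion gets S = R = Q = 0 and is therefore ranked first, which matches the intent of the choiceless- and discontent-utility definitions.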

4. Analysis of the Optimal Retention Level

In this section, we aim to calculate the optimal retention level M for a constant initial surplus under excess of loss reinsurance. We determine the optimal retention level that maximizes the survival probability and the expected profit while minimizing the variance and the expected shortfall of the insurer’s risk.
We assume that the individual claim amounts have an exponential distribution with parameter β = 1 and that the number of claims has a Poisson distribution with parameter λ = 1. We use two different premium principles: the expected value and the standard deviation principles. The insurance loading factor is assumed to be 0.1 and the reinsurance loading factor 0.15 under the expected value premium principle, whereas the loading is assumed to be 0.01 under the standard deviation premium principle. In the decision process, we use the TOPSIS and VIKOR methods together with their extended versions, and we investigate and compare the optimal retention levels obtained by these decision making techniques.
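For this setup, the insurer's net premium income (premium received minus reinsurance premium paid) can be sketched as follows. The formulas are the standard expected value and standard deviation premium principles applied to the retained claim min(X, M) and the ceded claim (X − M)⁺ of an Exp(1) risk; applying the same 0.01 loading to both the insurer's and the reinsurer's premium is our assumption, not a statement from the paper.

```python
import math

LAM = 1.0  # Poisson parameter for the number of claims

def net_premium_expected_value(M, theta=0.1, xi=0.15):
    """Net premium income under the expected value principle."""
    mean_total = LAM * 1.0             # E[X] = 1 for Exp(1)
    mean_ceded = LAM * math.exp(-M)    # E[(X - M)^+] = e^{-M}
    return (1 + theta) * mean_total - (1 + xi) * mean_ceded

def net_premium_std_dev(M, alpha=0.01):
    """Net premium income under the standard deviation principle."""
    mean_total, var_total = LAM * 1.0, LAM * 2.0     # E[X^2] = 2
    mean_ceded = LAM * math.exp(-M)
    var_ceded = LAM * 2.0 * math.exp(-M)             # E[((X-M)^+)^2] = 2e^{-M}
    return (mean_total + alpha * math.sqrt(var_total)
            - mean_ceded - alpha * math.sqrt(var_ceded))
```

Under this sketch, at M = log(ξ/θ) the expected-value net premium exactly equals the retained expected aggregate claim λE[X_I] = 1 − e^{−M}, which recovers the minimum-retention condition quoted in Section 4.1.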

4.1. The Effect of Criteria on the Optimal Retention Level under the Expected Value Premium Principle

We design a set of alternatives that comprises pairs of initial surplus and retention level. We assume that the initial surplus is constant, taking the values 1, 5 and 10. Dickson shows that the minimum retention level must satisfy M > log(ξ/θ) [23]. This condition is obtained by applying the net profit condition c > λE[X_I] to exponentially distributed individual claim amounts with parameter β = 1.
In particular, we consider values of the retention level M starting from 0.4055 (= log(0.15/0.1)) and increasing in steps of 0.1 up to three times the initial surplus. The results do not change even if smaller increments, such as 0.05, are used. Thus, for each fixed initial surplus, all alternative retention levels are obtained. Then, we calculate the variance of the insurer’s risk, the expected shortfall, the expected profit and the survival probability for the fixed initial surplus and each corresponding retention level. Therefore, we obtain an outcome set that presents all possible combinations of the retention level and the corresponding value of each criterion.
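The construction of the alternative set described above might be reproduced as follows (`retention_grid` is a hypothetical name); the last grid point for u = 1, 5 and 10 coincides with the largest retention levels appearing in Tables 3 and 6:

```python
import math

def retention_grid(u, theta=0.1, xi=0.15, step=0.1):
    """Retention levels from the minimum admissible level
    M = log(xi/theta) up to three times the initial surplus u."""
    M0 = math.log(xi / theta)
    n = int((3 * u - M0) / step) + 1
    return [round(M0 + i * step, 4) for i in range(n)]
```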

4.1.1. Determination of the Normalized Weights

We measure the importance of the criteria for different initial surpluses and time horizons using the entropy method. Table 2 presents the weights of the variance (Var(X_I)), Expected Shortfall (ES_0.95), Expected Profit (EP) and survival probability (ψ̄(u, t)) of the insurer’s risk obtained by the entropy method under the expected value premium principle. The results indicate that the variance has a dominant impact among the criteria. If these weights were used in the TOPSIS or VIKOR methods, the optimal values would be driven by the weight of the variance criterion: the optimal levels would be determined at the point where the variance criterion has the maximum value, and the effects of the other criteria would not be observed. Since we aim to introduce minimum distortion through the weights, the criteria are assigned equal weights. Hence, we make the following assumption:
w_1 = w_2 = w_3 = w_4 = 0.25.
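For reference, the entropy weighting method used in Table 2 can be sketched as follows, assuming a strictly positive decision matrix (the formula is the standard entropy method; the variable names are ours):

```python
import numpy as np

def entropy_weights(X):
    """Entropy weighting: criteria whose values vary more across
    alternatives carry less entropy and hence more weight.
    X: strictly positive (m alternatives) x (n criteria) matrix."""
    X = np.asarray(X, float)
    m = X.shape[0]
    P = X / X.sum(axis=0)                         # column-wise proportions
    E = -(P * np.log(P)).sum(axis=0) / np.log(m)  # entropy per criterion
    d = 1.0 - E                                   # degree of diversification
    return d / d.sum()
```

A criterion that is constant across alternatives has maximal entropy and receives zero weight, while a strongly varying criterion, such as the variance column in Table 2, dominates; this is exactly the distortion that motivates the equal-weight assumption above.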

4.1.2. Evaluation of Criteria under Multi-Attribute Decision Making for the Expected Value Premium Principle

The optimal retention level is obtained by using five different MADM methods: TOPSIS, Modified TOPSIS (M-TOPSIS), TOPSIS with Mahalanobis distance (TOPSIS-Mahalanobis), VIKOR and Revised VIKOR (R-VIKOR) methods. The rankings, when initial surplus is equal to five and the time horizon is one, are obtained as shown in Figure 1.
It is observed that for the TOPSIS-Euclidean (TOPSIS-E), M-TOPSIS and TOPSIS-Mahalanobis methods, the best alternative is determined by the point at which the closeness index is maximum. However, in the VIKOR and R-VIKOR methods, the best alternative is determined by the point at which the synthesized index is minimum.
Since the revised VIKOR method takes into account the variation between each alternative and the best value of each criterion, this method is insufficient for determining the optimal retention level. We compare the optimal retention levels for different initial surpluses and time horizons to observe the changes in the rankings for the five methods. Table 3 presents the optimal retention levels for different initial surpluses and time horizons under the expected value premium principle. The results show that the optimal retention levels do not differ significantly across the methods, except for the revised VIKOR method. The TOPSIS method with Euclidean distance gives the same results as the modified TOPSIS method for each initial surplus and time horizon. The findings are quite surprising and suggest that the VIKOR method gives results closer to TOPSIS-Mahalanobis than to the TOPSIS-E and M-TOPSIS methods. An important implication of these findings is that the optimal retention levels are not influenced by changes in the initial surplus and time horizon. In the revised VIKOR method, the optimal levels are determined as the highest retention level in each alternative set.
The covariance matrices of the normalized attributes for t = 1, 5, 10 are given in Table 4. When the covariance matrix is equal to the identity matrix, the Mahalanobis distance turns into the Euclidean distance. In addition, when the covariance matrix is diagonal, the Mahalanobis distance can be shown as the normalized Euclidean distance [36]. The small covariances between each pair of criteria indicate that there is no relationship between the criteria. Thus, the optimal levels obtained by the Mahalanobis distance measure are very close to the values obtained by the Euclidean distance measure.
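The relationship between the two distance measures can be illustrated directly: with an identity covariance matrix the Mahalanobis distance coincides with the Euclidean distance, which is why the near-zero covariances in Table 4 make the two TOPSIS variants nearly agree.

```python
import numpy as np

def mahalanobis(x, y, cov):
    """Mahalanobis distance between points x and y under covariance cov.
    With cov = I it reduces to the ordinary Euclidean distance."""
    d = np.asarray(x, float) - np.asarray(y, float)
    return float(np.sqrt(d @ np.linalg.inv(cov) @ d))
```

For example, `mahalanobis([1, 2], [4, 6], np.eye(2))` gives the Euclidean distance of the 3-4-5 triangle, while a covariance matrix with strong off-diagonal terms changes the distance and hence, potentially, the TOPSIS ranking.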

4.2. The Effect of Criteria on the Optimal Retention Level under the Standard Deviation Premium Principle

4.2.1. Determination of the Normalized Weights

Table 5 shows the weights of the criteria according to the entropy method under the standard deviation premium principle.
As seen in Table 5, the variance criterion has the highest weight and thus the biggest effect on determining the optimal retention level. This weighting method might cause the optimal values to be obtained at the points where the variance is maximum, as in the expected value premium case. Thus, we assume that the weights of the criteria are equal.

4.2.2. Evaluation of Criteria under Multi-Attribute Decision Making for the Standard Deviation Premium Principle

When initial surplus is equal to five and the time horizon is one, the rankings are obtained under the standard deviation premium principle as shown in Figure 2. The same alternative is obtained as the optimal solution for all methods except the R-VIKOR method. The M-TOPSIS and TOPSIS-Mahalanobis depend on the closeness index, which gives the best solution as the highest index. However, the VIKOR method uses the smallest value in the synthesized index for the best solution. The optimal retention level for different initial surpluses and time horizons under the standard deviation premium principle is given in Table 6.
These results are consistent with the expected value premium principle case shown in Section 4.1.2. TOPSIS-E and M-TOPSIS give the same optimal retention level for each pair of initial surplus and time horizon. Since the variance of the insurer’s risk is involved in the standard deviation premium principle, the need for reinsurance is greater, so the retention level is smaller than in the expected value premium principle case. In TOPSIS-Mahalanobis, the effect of the relationship between the criteria is clearly seen under the standard deviation premium principle: higher optimal levels are obtained due to the higher covariances. The revised VIKOR method produces the same optimal retention levels under both premium principles.
The covariance matrices of normalized attributes for t = 1, 5, 10 are given in Table 7.

5. Conclusions

In this study, we have calculated the optimal retention levels by using four competitive criteria under the assumption that the aggregate claims have a compound Poisson process. We have obtained sets of alternatives that comprise the pair of initial surplus and retention level. Then, we have calculated survival probability, expected profit, variance and expected shortfall with regard to these pairs.
We have determined the optimal retention level that maximizes the survival probability and the expected profit while minimizing the variance and the expected shortfall. In the decision making process, we have used two major MADM methods, TOPSIS and VIKOR, as well as their extended versions. We have compared the findings for different initial surpluses and time horizons under both the expected value and standard deviation premium principles.
Based on the results, it can be concluded that the TOPSIS method with Euclidean distance and the modified TOPSIS method give the same results, and the VIKOR method produces optimal levels very close to those of TOPSIS-Mahalanobis. Moreover, since the relationship between the criteria is very weak, the distance measure does not have any significant effect on determining the optimal retention level; hence, the TOPSIS method with either Euclidean or Mahalanobis distance produces very close optimal retention levels. On the other hand, the premium principle plays a vital role in optimal reinsurance. Since the variance enters the standard deviation premium principle, lower optimal values are observed for TOPSIS-E and M-TOPSIS, whereas higher optimal levels are obtained when the relationship between the criteria is taken into account.
The proposed method is applicable to different claim amounts and claim number distributions, as well as to other MADM methods.

Acknowledgments

The authors thank two anonymous referees for their helpful comments and suggestions.

Author Contributions

The authors contributed equally to this work.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. B. De Finetti. Il Problema Dei Pieni. Roma, Italy: Giorn Istituto Italiano Degli Attuari, 1940, Volume 11, pp. 1–88. [Google Scholar]
  2. D.C.M. Dickson, and H.R. Waters. “Relative reinsurance retention levels.” ASTIN Bull. 27 (1997): 207–227. [Google Scholar] [CrossRef]
  3. M. Kaluszka. “Truncated stop loss as optimal reinsurance agreement in one-period models.” ASTIN Bull. 35 (2005): 337–349. [Google Scholar] [CrossRef]
  4. D.C.M. Dickson, and H.R. Waters. “Optimal dynamic reinsurance.” ASTIN Bull. 36 (2006): 415–432. [Google Scholar] [CrossRef]
  5. V.K. Kaishev, and S.D. Dimitrova. “Excess of loss reinsurance under joint survival optimality.” Insur. Math. Econ. 39 (2006): 376–389. [Google Scholar] [CrossRef]
  6. C. Nie, D.C.M. Dickson, and S. Li. “Minimizing the ruin probability through capital injections.” Ann. Actuar. Sci. 5 (2011): 195–209. [Google Scholar] [CrossRef]
  7. M.L. Centeno, and O. Simoes. “Optimal reinsurance. Revista de la Real Academia de Ciencias Exactas Fsicas y Naturales Serie A-Matematicas.” RACSAM 103 (2009): 387–405. [Google Scholar]
  8. K. Borch. “An Attempt to Determine the Optimum Amount of Stop Loss Reinsurance.” In Proceedings of the Transactions of the 16th International Congress of Actuaries I, Brussels, Belgium, 15–22 June 1960; pp. 597–610.
  9. J. Cai, and K.S. Tan. “Optimal retention for a stop loss reinsurance under the VaR and CTE risk measures.” ASTIN Bull. 37 (2007): 93–112. [Google Scholar] [CrossRef]
  10. Y. Chi, and K.S. Tan. “Optimal reinsurance with general premium principles.” Insur. Math. Econ. 52 (2013): 180–189. [Google Scholar] [CrossRef]
  11. J. Trufin, H. Albrecher, and M. Denuit. “Properties of a Risk Measure Derived from Ruin Theory.” Geneva Risk Insur. Rev. 36 (2011): 174–188. [Google Scholar] [CrossRef]
  12. B.B. Karageyik, and D.C.M. Dickson. “Optimal reinsurance under multiple attribute decision making.” Ann. Actuar. Sci. 10 (2016): 65–86. [Google Scholar] [CrossRef]
  13. J. Grandell. Aspects of Risk Theory. Springer Series in Statistics: Probability and Its Applications; New York, NY, USA: Springer, 1991. [Google Scholar]
  14. N.U. Prabhu. “On the ruin problem of collective risk theory.” Ann. Math. Stat. 32 (1961): 757–764. [Google Scholar] [CrossRef]
  15. H.L. Seal. “Numerical Calculation of the Probability of Ruin in the Poisson Exponential Case.” Mitt. Verein. Schweiz. Versich. Math. 72 (1972): 77–100. [Google Scholar]
  16. F. De Vylder. “A Practical Solution to the Problem of Ultimate Ruin Probability.” Scand. Actuar. J. 2 (1978): 114–119. [Google Scholar] [CrossRef]
  17. C.O. Segerdahl. “When Does Ruin Occur in the Collective Theory of Risk? ” Scand. Actuar. J. 1955 (1955): 22–36. [Google Scholar] [CrossRef]
  18. D.L. Iglehart. “Diffusion Approximations in Collective Risk Theory.” J. Appl. Prob. 6 (1969): 285–292. [Google Scholar] [CrossRef]
  19. S. Asmussen, and H. Albrecher. Ruin Probabilities. Advanced Series on Statistical Science & Applied Probability. Singapore: World Scientific Publishing Company, 2010. [Google Scholar]
  20. F.S. Dufresne, H.U. Gerber, and E.S.W. Shiu. “Risk Theory with the Gamma Process.” ASTIN Bull. 21 (1991): 177–192. [Google Scholar] [CrossRef]
  21. D.C.M. Dickson, and H.R. Waters. “Gamma Processes and Finite Time Survival Probabilities.” ASTIN Bull. 23 (1993): 259–272. [Google Scholar] [CrossRef]
  22. K. Burnecki, and M. Teuerle. Statistical Tools for Finance and Insurance, 2nd ed. chap. Ruin Probability in Finite Time; Berlin, Germany: Springer, 2005, pp. 341–379. [Google Scholar]
  23. D.C.M. Dickson. Insurance Risk and Ruin. Cambridge, UK: Cambridge University Press, 2005. [Google Scholar]
  24. B.B. Karageyik. “Optimal Reinsurance under Competing Benefit Criteria.” Ph.D. Thesis, Department of Actuarial Sciences, Hacettepe University, Ankara, Turkey, 2015. [Google Scholar]
  25. P. Jorion. Value at Risk, 3rd ed. The New Benchmark for Managing Financial Risk; New York, NY, USA: McGraw-Hill Education, 2006. [Google Scholar]
  26. A. Castaner, M.M. Claramunt, and M. Marmol. “Tail Value at Risk. An Analysis with the Normal-Power Approximation.” In Statistical and Soft Computing Approaches in Insurance Problems. Hauppauge, NY, USA: Nova Science Publishers, 2013, pp. 87–112. [Google Scholar]
  27. R.V. Rao. Introduction to Multiple Attribute Decision-making (MADM) Methods, Decision Making in the Manufacturing Environment. Springer Series in Advanced Manufacturing; London, UK: Springer, 2007, pp. 27–41. [Google Scholar]
  28. C.L. Hwang, and K. Yoon. Multiple Attribute Decision Making: Methods and Applications: A State- of-the-Art Survey. Lecture Notes in Economics and Mathematical Systems; New York, NY, USA: Springer, 1981. [Google Scholar]
  29. T.C. Wang, and J.C. Hsu. “Evaluation of the Business Operation Performance of the Listing Companies by Applying TOPSIS Method.” In Proceedings of the 2004 IEEE International Conference on Systems, Man and Cybernetics, The Hague, The Netherlands, 10–13 October 2004; pp. 1286–1291.
  30. D. Wu, and D.L. Olson. “A TOPSIS Data Mining Demonstration and Application to Credit Scoring.” IJDWM 2 (2006): 16–26. [Google Scholar] [CrossRef]
  31. H.S. Shih, H.J. Shyur, and E.S. Lee. “An Extension of TOPSIS for Group Decision Making.” Math. Comput. Model. 45 (2007): 801–813. [Google Scholar] [CrossRef]
  32. G.R. Jahanshahloo, F.H. Lotfi, and A.R. Davoodi. “Extension of TOPSIS for Decision-Making Problems with Interval Data: Interval Efficiency.” Math. Comput. Model. 49 (2009): 1137–1142. [Google Scholar] [CrossRef]
  33. B. Bulgurcu. “Application of TOPSIS Technique for Financial Performance Evaluation of Technology Firms in Istanbul Stock Exchange Market.” Procedia Soc. Behav. Sci. 62 (2012): 1033–1040. [Google Scholar] [CrossRef]
  34. X. Zhu, F. Wang, C. Liang, J. Li, and X. Sun. “Quality Credit Evaluation Based on TOPSIS: Evidence from Air-Conditioning Market in China.” Procedia Comput. Sci. 9 (2012): 1256–1262. [Google Scholar] [CrossRef]
  35. S.H.S. Hosseini, M.E. Ezazi, and M.R. Heshmati. “Top Companies Ranking Based on Financial Ratio with AHP-TOPSIS Combined Approach and Indices of Tehran Stock Exchange—A Comparative Study.” Int. J. Econ. Financ. 5 (2013): 126–133. [Google Scholar] [CrossRef]
  36. Z.X. Wang, and Y.Y. Wang. “Evaluation of the provincial competitiveness of the Chinese High-Tech Industry using an improved TOPSIS method.” Expert Syst. Appl. 41 (2014): 2824–2831. [Google Scholar] [CrossRef]
  37. J. Antucheviciene, E.K. Zavadskas, and A. Zakarevicius. “Multiple criteria construction management decisions considering relations between criteria.” Technol. Econ. Dev. Econ. 16 (2010): 109–125. [Google Scholar] [CrossRef]
  38. P.C. Mahalanobis. “On the Generalised Distance in Statistics.” Proc. Natl. Inst. Sci. India 2 (1936): 49–55. [Google Scholar]
  39. A.J. García, M.G. Ibarra, and P.L. Rico. “Improvement of TOPSIS Technique Through Integration of Malahanobis Distance: A Case Study.” In Proceedings of the 14th Annual International Conference on Industrial Engineering Theory, Applications and Practice, Anaheim, CA, USA, 18–21 October 2009.
  40. C. Ching-Hui, L. Jyh-Jiuan, L. Jyh-Horng, and C. Miao-Chen. “Domestic open-end equity mutual fund performance evaluation using extended TOPSIS method with different distance approaches.” Expert Syst. Appl. 37 (2010): 4642–4649. [Google Scholar] [CrossRef]
  41. M. Lahby, L. Cherkaoui, and A. Adib. “New multi access selection method based on mahalanobis distance.” Appl. Math. Sci. 6 (2012): 2745–2760. [Google Scholar]
  42. H. Deng, C.H. Yeh, and R.J. Willis. “Inter-company comparison using modified TOPSIS with objective weights.” Comput. Oper. Res. 27 (2000): 963–973. [Google Scholar] [CrossRef]
  43. S. Opricovic. Multi-criteria Optimization in Civil Engineering. Belgrade, Serbia: Faculty of Civil Engineering, 1998. [Google Scholar]
  44. P.L. Yu. “A Class of Solutions for Group Decision Problems.” Manag. Sci. 19 (1973): 936–946. [Google Scholar] [CrossRef]
  45. J.L. Cochrane, and M. Zeleny. Multiple Criteria Decision Making. Columbia, SC, USA: University of South Carolina Press, 1973. [Google Scholar]
  46. S. Opricovic, and G.H. Tzeng. “Multi-criteria planning of post-earthquake sustainable reconstruction.” Comput. Aided Civ. Infrastruct. Eng. 17 (2002): 211–220. [Google Scholar] [CrossRef]
  47. S. Opricovic, and G.H. Tzeng. “Fuzzy multi-criteria model for post-earthquake landuse planning.” Nat. Hazards Rev. 4 (2003): 59–64. [Google Scholar] [CrossRef]
  48. S. Opricovic, and G.H. Tzeng. “Compromise solution by MCDM methods: A comparative analysis of VIKOR and TOPSIS.” Eur. J. Oper. Res. 156 (2004): 445–455. [Google Scholar] [CrossRef]
  49. S. Opricovic, and G.H. Tzeng. “Extended VIKOR method in comparison with outranking methods.” Eur. J. Oper. Res. 178 (2007): 514–529. [Google Scholar] [CrossRef]
  50. G.H. Tzeng, S.H. Tsaur, Y.D. Laiw, and S. Opricovic. “Multi-criteria analysis of environmental quality in Taipei: Public preferences and improvement strategies.” J. Environ. Manag. 65 (2002): 109–120. [Google Scholar] [CrossRef]
  51. G.H. Tzeng, M.H. Teng, J.J. Chen, and S. Opricovic. “Multi-criteria selection for a restaurant location in Taipei.” Int. J. Hosp. Manag. 21 (2002): 171–187. [Google Scholar] [CrossRef]
  52. G.H. Tzeng, C.W. Lin, and S. Opricovic. “Multi-criteria analysis of alternative-fuel buses for public transportation.” Energy Policy 33 (2005): 1373–1383. [Google Scholar] [CrossRef]
  53. J.J. Huang, G.H. Tzeng, and H.H. Liu. “A Revised VIKOR Model for Multiple Criteria Decision Making—The Perspective of Regret Theory, Cutting-Edge Research Topics on Multiple Criteria Decision Making.” In Proceedings of the 20th International Conference, MCDM 2009, Chengdu/Jiuzhaigou, China, 21–26 June 2009; Berlin/Heidelberg, Germany: Springer, 2009, pp. 761–768. [Google Scholar]
Figure 1. Comparison of rankings under the expected value premium principle when u = 5 and t = 1. TOPSIS: Technique for Order of Preference by Similarity to Ideal Solution; M-TOPSIS: Modified TOPSIS; VIKOR: VIseKriterijumska Optimizacija I Kompromisno Resenje; R-VIKOR: Revised VIKOR.
Figure 2. Comparison of rankings under the standard deviation premium principle when u = 5 and t = 1.
Table 1. Decision matrix for Multi-Attribute Decision Making (MADM) methods.
Alternatives (A_i)   Attributes (Criteria) (C_j)
                     C_1    C_2    C_3    ⋯    C_n
A_1                  X_11   X_12   X_13   ⋯    X_1n
A_2                  X_21   X_22   X_23   ⋯    X_2n
A_3                  X_31   X_32   X_33   ⋯    X_3n
⋮
A_m                  X_m1   X_m2   X_m3   ⋯    X_mn
Table 2. The weights of criteria for the entropy method under the expected value premium principle.
The Weights of Criteria under the Expected Value Premium Principle
                Var(X_I)  ES_0.95  EP          ψ̄(u, t)
u = 1   t = 1   0.7028    0.0202   0.0002      0.2768
        t = 2   0.6999    0.0241   0.0002      0.2757
        t = 3   0.6983    0.0264   0.0002      0.2751
        t = 4   0.6962    0.0294   0.0001      0.2742
        t = 5   0.6943    0.0321   0.0001      0.2735
        t = 10  0.6891    0.0395   0.00001     0.2714
        t = 15  0.6857    0.0439   0.00009     0.2701
        t = 20  0.6833    0.0473   0.00025     0.2691
u = 5   t = 1   0.7407    0.0173   0.00004     0.2420
        t = 2   0.7339    0.0262   0.0002      0.2398
        t = 3   0.7241    0.03886  0.0005      0.2366
        t = 4   0.7274    0.03412  0.0008      0.2376
        t = 5   0.7276    0.0336   0.0011      0.2377
        t = 10  0.7174    0.0456   0.0026      0.2344
        t = 15  0.7131    0.0505   0.0035      0.2330
        t = 20  0.7088    0.0557   0.0039      0.2316
u = 10  t = 1   0.7394    0.0175   0.00000003  0.2431
        t = 2   0.7324    0.0268   0.0000005   0.2408
        t = 3   0.7222    0.0404   0.000003    0.2374
        t = 4   0.7263    0.0349   0.000009    0.2387
        t = 5   0.7268    0.0343   0.00002     0.2389
        t = 10  0.7172    0.0469   0.0002      0.2357
        t = 15  0.7132    0.0519   0.0005      0.2344
        t = 20  0.7088    0.0574   0.0009      0.2330
ES: Expected Shortfall; EP: Expected Profit.
Table 3. Optimal retention level under the expected value premium principle.
Optimal Retention Levels
                TOPSIS-E  M-TOPSIS  TOPSIS-Mahalanobis  VIKOR   R-VIKOR
u = 1   t = 1   1.1055    1.1055    1.1055              1.1055  2.8055
        t = 5   1.0055    1.0055    1.2055              1.0055  2.9055
        t = 10  1.0055    1.0055    1.2055              1.0055  2.9055
        t = 15  1.1055    1.1055    1.2055              1.0055  2.9055
        t = 20  1.1055    1.1055    1.1055              1.0055  2.9055
        t = 25  1.0055    1.0055    1.1055              1.0055  2.9055
        t = 50  1.1055    1.1055    1.1055              1.0055  2.9055
u = 5   t = 1   1.4055    1.4055    1.2055              1.2055  14.906
        t = 5   1.3055    1.3055    1.2055              1.1055  14.906
        t = 10  1.4055    1.4055    1.2055              1.1055  14.906
        t = 15  1.3055    1.3055    1.2055              1.1055  14.906
        t = 20  1.3055    1.3055    1.2055              1.1055  14.906
        t = 25  1.2055    1.2055    1.2055              1.1055  14.906
        t = 50  1.2055    1.2055    1.2055              1.0055  14.906
u = 10  t = 1   1.4055    1.4055    1.2055              1.2055  29.806
        t = 5   1.3055    1.3055    1.2055              1.1055  29.806
        t = 10  1.4055    1.4055    1.2055              1.1055  29.806
        t = 15  1.3055    1.3055    1.2055              1.2055  29.906
        t = 20  1.3055    1.3055    1.2055              1.2055  29.906
        t = 25  1.2055    1.2055    1.2055              1.1055  29.906
        t = 50  1.3055    1.3055    1.2055              1.1055  29.906
E: Euclidean; M: Modified; R: Revised.
Table 4. Covariance matrices of normalized attributes for t = 1, 5, 10.
Covariance Matrices
t = 1      Var(X_I)       ES_0.95        EP             ψ̄(u, t)
Var(X_I)   1.41 × 10^−4   2.01 × 10^−5   −2.02 × 10^−8  7.93 × 10^−5
ES_0.95    2.01 × 10^−5   4.64 × 10^−6   3.45 × 10^−9   1.44 × 10^−5
EP         −2.02 × 10^−8  3.45 × 10^−9   3.12 × 10^−12  1.26 × 10^−8
ψ̄(u, t)   7.93 × 10^−5   1.44 × 10^−5   1.26 × 10^−8   5.11 × 10^−5
t = 5      Var(X_I)       ES_0.95        EP             ψ̄(u, t)
Var(X_I)   1.41 × 10^−4   3.15 × 10^−5   −6.15 × 10^−7  7.93 × 10^−5
ES_0.95    3.15 × 10^−5   9.02 × 10^−6   −1.59 × 10^−7  2.12 × 10^−5
EP         −6.15 × 10^−7  −1.59 × 10^−7  2.93 × 10^−9   −3.86 × 10^−7
ψ̄(u, t)   7.93 × 10^−5   2.12 × 10^−5   −3.86 × 10^−7  5.11 × 10^−5
t = 10     Var(X_I)       ES_0.95        EP             ψ̄(u, t)
Var(X_I)   1.41 × 10^−4   3.82 × 10^−5   −1.97 × 10^−6  7.93 × 10^−5
ES_0.95    3.82 × 10^−5   1.23 × 10^−5   −6.13 × 10^−7  2.50 × 10^−5
EP         −1.97 × 10^−6  −6.13 × 10^−7  3.08 × 10^−8   −1.25 × 10^−6
ψ̄(u, t)   7.93 × 10^−5   2.50 × 10^−5   −1.25 × 10^−6  5.11 × 10^−5
Table 5. The weights of the criteria for the entropy method under the standard deviation premium principle.
The Weights of Criteria under the Standard Deviation Premium Principle
                Var(X_I)  ES_0.95  EP      ψ̄(u, t)
u = 1   t = 1   0.7936    0.0228   0.0030  0.1805
        t = 2   0.7869    0.0271   0.0069  0.1790
        t = 3   0.7825    0.0296   0.0099  0.1780
        t = 4   0.7780    0.0329   0.0121  0.1770
        t = 5   0.7742    0.0358   0.0138  0.1761
        t = 10  0.7641    0.0438   0.0183  0.1738
        t = 15  0.7588    0.0486   0.0200  0.1726
        t = 20  0.7551    0.0523   0.0209  0.1718
u = 5   t = 1   0.7102    0.0166   0.0000  0.2731
        t = 2   0.7040    0.0251   0.0001  0.2707
        t = 3   0.6950    0.0373   0.0004  0.2673
        t = 4   0.6980    0.0327   0.0008  0.2684
        t = 5   0.6980    0.0322   0.0013  0.2684
        t = 10  0.6875    0.0437   0.0043  0.2644
        t = 15  0.6822    0.0483   0.0072  0.2623
        t = 20  0.6770    0.0532   0.0094  0.2604
u = 10  t = 1   0.6976    0.0165   0.0000  0.2859
        t = 2   0.6914    0.0253   0.0000  0.2833
        t = 3   0.6822    0.0382   0.0000  0.2796
        t = 4   0.6859    0.0330   0.0000  0.2811
        t = 5   0.6863    0.0324   0.0000  0.2812
        t = 10  0.6777    0.0443   0.0003  0.2777
        t = 15  0.6740    0.0490   0.0008  0.2762
        t = 20  0.6697    0.0542   0.0016  0.2744
Table 6. Optimal retention level under the standard deviation premium principle.
Optimal Retention Levels
                TOPSIS-E  M-TOPSIS  TOPSIS-Mahalanobis  VIKOR   R-VIKOR
u = 1   t = 1   1.0055    1.0055    2.9055              1.2055  2.8055
        t = 5   0.8055    0.8055    2.9055              1.1055  2.9055
        t = 10  0.8055    0.8055    2.9055              1.1055  2.9055
        t = 15  0.7055    0.7055    2.7055              1.1055  2.9055
        t = 20  0.8055    0.8055    2.7055              1.1055  2.9055
        t = 25  0.8055    0.8055    2.7055              1.1055  2.9055
        t = 50  0.7055    0.7055    2.3055              1.1055  2.9055
u = 5   t = 1   1.2055    1.2055    1.5055              1.4055  14.9055
        t = 5   1.0055    1.0055    1.8055              1.3055  14.9055
        t = 10  1.0055    1.0055    1.7055              1.4055  14.9055
        t = 15  0.9055    0.9055    1.6055              1.3055  14.9055
        t = 20  0.8055    0.8055    1.3055              1.3055  14.9055
        t = 25  0.8055    0.8055    1.6055              1.3055  14.9055
        t = 50  0.6055    0.6055    1.2055              1.2055  14.9055
u = 10  t = 1   1.2055    1.2055    1.5055              1.4055  29.9055
        t = 5   1.0055    1.0055    1.1055              1.3055  29.9055
        t = 10  1.0055    1.0055    1.3055              1.4055  29.9055
        t = 15  0.9055    0.9055    1.5055              1.4055  29.9055
        t = 20  0.8055    0.8055    1.5055              1.4055  29.9055
        t = 25  0.9055    0.9055    2.2055              1.4055  29.9055
        t = 50  0.7055    0.7055    1.1055              1.3055  29.9055
Table 7. Covariance matrices of normalized attributes for t = 1, 5, 10 when u = 5.
Covariance Matrices
t = 1      Var(X_I)       ES_0.95        EP             ψ̄(u, t)
Var(X_I)   5.69 × 10^−4   8.12 × 10^−5   −3.64 × 10^−6  3.88 × 10^−4
ES_0.95    8.12 × 10^−5   1.88 × 10^−5   −6.47 × 10^−7  5.75 × 10^−5
EP         −3.64 × 10^−6  −6.47 × 10^−7  2.61 × 10^−8   −2.52 × 10^−6
ψ̄(u, t)   3.88 × 10^−4   5.75 × 10^−5   −2.52 × 10^−6  2.67 × 10^−4
t = 5      Var(X_I)       ES_0.95        EP             ψ̄(u, t)
Var(X_I)   5.69 × 10^−4   1.27 × 10^−4   −2.85 × 10^−5  3.88 × 10^−4
ES_0.95    1.27 × 10^−4   3.62 × 10^−5   −7.71 × 10^−6  8.84 × 10^−5
EP         −2.85 × 10^−5  −7.71 × 10^−6  1.67 × 10^−6   −1.98 × 10^−5
ψ̄(u, t)   3.88 × 10^−4   8.84 × 10^−5   −1.98 × 10^−5  2.67 × 10^−4
t = 10     Var(X_I)       ES_0.95        EP             ψ̄(u, t)
Var(X_I)   5.69 × 10^−4   1.53 × 10^−4   −5.15 × 10^−5  3.88 × 10^−4
ES_0.95    1.53 × 10^−4   4.92 × 10^−5   −1.67 × 10^−5  1.06 × 10^−4
EP         −5.15 × 10^−5  −1.67 × 10^−5  5.67 × 10^−6   −3.59 × 10^−5
ψ̄(u, t)   3.88 × 10^−4   1.06 × 10^−4   −3.59 × 10^−5  2.67 × 10^−4
