Article

Tsallis Entropy for Loss Models and Survival Models Involving Truncated and Censored Random Variables

1 “Gheorghe Mihoc-Caius Iacob” Institute of Mathematical Statistics and Applied Mathematics, 050711 Bucharest, Romania
2 “Costin C. Kiriţescu” National Institute of Economic Research, 050711 Bucharest, Romania
3 Faculty of Mathematics and Computer Science, University of Bucharest, Academiei 14, 010014 Bucharest, Romania
4 Department of Applied Mathematics, Bucharest University of Economic Studies, 010734 Bucharest, Romania
5 Department of Mathematics and Computer Science, Technical University of Civil Engineering, 020396 Bucharest, Romania
6 Department of Mathematical Sciences, School of Mathematics and Computer Science, Institute of Business Administration Karachi, Karachi 75270, Pakistan
7 Department of Financial Mathematics, Fraunhofer ITWM, Fraunhofer-Platz, 67663 Kaiserslautern, Germany
* Author to whom correspondence should be addressed.
Entropy 2022, 24(11), 1654; https://doi.org/10.3390/e24111654
Submission received: 25 August 2022 / Revised: 11 November 2022 / Accepted: 11 November 2022 / Published: 14 November 2022
(This article belongs to the Special Issue Measures of Information II)

Abstract:
The aim of this paper is to develop an entropy-based approach to risk assessment for actuarial models involving truncated and censored random variables, using the Tsallis entropy measure. The effect of some partial insurance models, such as inflation, truncation and censoring from above, and truncation and censoring from below, upon the entropy of losses is investigated in this framework. Analytic expressions for the per-payment and per-loss entropies are obtained, and the relationships between these entropies are studied. The Tsallis entropy of losses of the right-truncated loss random variable corresponding to the per-loss risk model with a deductible d and a policy limit u is computed for the exponential, Weibull, $\chi^2$ or Gamma distribution. In this context, the properties of the resulting entropies, such as the residual loss entropy and the past loss entropy, are studied as a result of using a deductible and a policy limit, respectively. Relationships between these entropy measures are derived, and the combined effect of a deductible and a policy limit is also analyzed. By investigating residual and past entropies for survival models, the entropies of losses corresponding to the proportional hazard and proportional reversed hazard models are derived. The Tsallis entropy approach for actuarial models involving truncated and censored random variables is new and more realistic, since it allows a greater degree of flexibility and improves the modeling accuracy.

1. Introduction

Risk assessment represents an important topic in various fields, since it allows designing the optimal strategy in many real-world problems. The fundamental concept of entropy can be used to evaluate the degree of uncertainty corresponding to the result of an experiment, phenomenon or random variable. Recent research results in statistics show an increased interest in using different entropy measures. Many authors have dealt with this matter; among them are Koukoumis and Karagrigoriou [1], Iatan et al. [2], Li et al. [3], Miśkiewicz [4], Toma et al. [5], Moretto et al. [6], Remuzgo et al. [7], Sheraz et al. [8] and Toma and Leoni-Aubin [9]. One of the most important information measures, the Tsallis entropy, has attracted considerable interest in statistical physics and many other fields as well. We can mention here the contributions of Nayak et al. [10], Pavlos et al. [11] and Singh and Cui [12]. Recently, Balakrishnan et al. [13] proposed a general formulation of a class of entropy measures depending on two parameters, which includes the Shannon, Tsallis and fractional entropies as special cases.
As entropy can be regarded as a measure of variability for absolutely continuous random variables, or as a measure of variation or diversity of the possible values of a discrete random variable, it can be used for risk assessment in various domains. In actuarial science, one of the main objectives which defines the optimal strategy of an insurance company is directed towards minimizing the risk of the claims. Ebrahimi [14] and Ebrahimi and Pellerey [15] studied the problem of measuring uncertainty in life distributions. The uncertainty corresponding to loss random variables in actuarial models can also be evaluated by the entropy of the loss distribution. Frequently in actuarial practice, as a consequence of using deductibles and policy limits, practitioners have to deal with transformed data, generated by truncation and censoring. Baxter [16] and Zografos [17] developed information measure methods for mixed and censored random variables, respectively. The entropic approach enables the assessment of the degree of uncertainty for loss models involving truncated and censored random variables. Sachlas and Papaioannou [18] investigated the effect of inflation, truncation or censoring from below or above on the Shannon entropy of losses of insurance policies. In this context of per-payment and per-loss models, they derived analytic formulas for the Shannon entropy of actuarial models involving several types of partial insurance coverage and studied the properties of the resulting entropies. Recent results in this field have also been obtained by Gupta and Gupta [19], Di Crescenzo and Longobardi [20], and Meselidis and Karagrigoriou [21].
This paper aims to develop several entropy-based risk models involving truncated and censored loss random variables. In this framework, the effect of some partial insurance schemes, such as truncation and censoring from above, truncation and censoring from below, and inflation, is investigated using the Tsallis entropy. The paper is organized as follows. In Section 2, some preliminary results are presented. In Section 3, representation formulas for the Tsallis entropy corresponding to the truncated and censored loss random variables in the per-payment and per-loss approach are derived, and the relationships between these entropies are obtained. Moreover, the combined effect of a deductible and a policy limit is investigated. In Section 4, closed formulas for the Tsallis entropy corresponding to some survival models are derived, including the proportional hazard and the proportional reversed hazard models. Some concluding remarks are provided in the last section.

2. Preliminaries

2.1. The Exponential Distribution

An exponentially distributed random variable $X \sim Exp(\lambda)$ is defined by the probability density function:
$$f(x) = \begin{cases} \lambda e^{-\lambda x}, & \text{if } x \ge 0 \\ 0, & \text{if } x < 0, \end{cases}$$
with $\lambda \in \mathbb{R}$, $\lambda > 0$, and the cumulative distribution function:
$$F_X(x) = 1 - e^{-\lambda x}, \quad x \ge 0.$$

2.2. The Weibull Distribution

A Weibull distributed random variable $X \sim W(\alpha, \lambda, \gamma)$ is closely related to an exponentially distributed random variable and has the probability density function:
$$f(x) = \begin{cases} \gamma\lambda\,(x-\alpha)^{\gamma-1}\, e^{-\lambda(x-\alpha)^{\gamma}}, & \text{if } x \ge \alpha \\ 0, & \text{if } x < \alpha, \end{cases}$$
with $\alpha, \lambda, \gamma \in \mathbb{R}$, $\lambda, \gamma > 0$.
If $X \sim Exp(1)$, then the Weibull distribution can be generated using the formula:
$$W = \alpha + \left(\frac{X}{\lambda}\right)^{\frac{1}{\gamma}}.$$
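As a quick sanity check of this transformation (a sketch, not from the paper; parameter values are arbitrary), the $Exp(1)$ quantile $x_p = -\ln(1-p)$ should map onto the corresponding Weibull quantile, i.e., $F_W\!\left(\alpha + (x_p/\lambda)^{1/\gamma}\right) = p$ with $F_W(w) = 1 - e^{-\lambda(w-\alpha)^\gamma}$:

```python
import math

# Sketch (not from the paper): check that W = alpha + (X/lam)**(1/gam)
# maps Exp(1) quantiles onto W(alpha, lam, gam) quantiles.
def exp1_quantile(p):
    # Inverse CDF of Exp(1): solves 1 - exp(-x) = p for x
    return -math.log(1.0 - p)

def weibull_cdf(w, alpha, lam, gam):
    # F_W(w) = 1 - exp(-lam * (w - alpha)**gam) for w >= alpha
    return 1.0 - math.exp(-lam * (w - alpha) ** gam)

alpha, lam, gam = 2.0, 0.5, 3.0  # arbitrary illustrative parameters
for p in (0.1, 0.5, 0.9):
    w = alpha + (exp1_quantile(p) / lam) ** (1.0 / gam)
    assert abs(weibull_cdf(w, alpha, lam, gam) - p) < 1e-12
```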

2.3. The $\chi^2$ Distribution

Let $Z_i$, $1 \le i \le \gamma$, be independent random variables, each Gaussian $N(0,1)$ distributed. A random variable $\chi^2$ with $\gamma$ degrees of freedom can be represented as:
$$\chi^2 = \sum_{i=1}^{\gamma} Z_i^2, \quad \gamma \in \mathbb{N}^*.$$
A $\chi^2$ distributed random variable with $\gamma$ degrees of freedom is represented by the probability density function:
$$f(x) = \frac{1}{2^{\gamma/2}\,\Gamma\!\left(\frac{\gamma}{2}\right)}\, x^{\frac{\gamma}{2}-1}\, e^{-\frac{x}{2}}, \quad x \ge 0,$$
where Γ denotes the Euler Gamma function.

2.4. The Gamma Distribution

A Gamma distributed random variable $X \sim G(\alpha, \lambda, \gamma)$ is defined by the probability density function [22]:
$$f(x) = \begin{cases} \frac{\lambda^{\gamma}}{\Gamma(\gamma)}\,(x-\alpha)^{\gamma-1}\, e^{-\lambda(x-\alpha)}, & \text{if } x \ge \alpha \\ 0, & \text{if } x < \alpha, \end{cases}$$
where $\alpha \in \mathbb{R}$ is the location parameter, $\lambda > 0$ the scale parameter and $\gamma > 0$ the shape parameter of the variable X.
We can notice that an exponentially distributed random variable is a Gamma random variable $G(0, \lambda, 1)$ and a $\chi^2$ distributed random variable is a Gamma distributed random variable $G\!\left(0, \frac{1}{2}, \frac{\gamma}{2}\right)$.
If $Y \sim G\!\left(\alpha, \lambda, \frac{\gamma}{2}\right)$ and $Z \sim G\!\left(0, \frac{1}{2}, \frac{\gamma}{2}\right)$, then we have:
$$Y = \alpha + \frac{Z}{2\lambda}.$$
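This relation can be checked on the densities: if $Z \sim G\!\left(0, \frac{1}{2}, \frac{\gamma}{2}\right)$, then $Y = \alpha + \frac{Z}{2\lambda}$ should have the $G\!\left(\alpha, \lambda, \frac{\gamma}{2}\right)$ density, i.e., $f_Y(y) = 2\lambda\, f_Z(2\lambda(y-\alpha))$. A minimal numerical sketch (parameter values are arbitrary):

```python
import math

# Sketch: confirm the change-of-variables identity for the three-parameter
# Gamma density relating G(alpha, lam, gam/2) and G(0, 1/2, gam/2).
def gamma_pdf(x, alpha, lam, gam):
    # Three-parameter Gamma density; zero below the location parameter alpha
    if x < alpha:
        return 0.0
    return (lam ** gam / math.gamma(gam)
            * (x - alpha) ** (gam - 1) * math.exp(-lam * (x - alpha)))

alpha, lam, gam = 1.0, 2.0, 6.0  # gam degrees of freedom -> shape gam/2
for y in (1.5, 2.0, 3.0):
    lhs = gamma_pdf(y, alpha, lam, gam / 2)
    rhs = 2 * lam * gamma_pdf(2 * lam * (y - alpha), 0.0, 0.5, gam / 2)
    assert abs(lhs - rhs) < 1e-12
```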

2.5. The Tsallis Entropy

Entropy represents a fundamental concept which can be used to evaluate the uncertainty associated with a random variable or with the result of an experiment. It provides information regarding the predictability of the results of a random variable X. The Shannon entropy, along with other measures of information, such as the Rényi entropy, may be interpreted as a descriptive quantity of the corresponding probability density function.
Entropy can be regarded as a measure of variability for absolutely continuous random variables or as a measure of variation or diversity of the possible values of discrete random variables. Due to the widespread applicability and use of information measures, the derivation of explicit expressions for various entropy and divergence measures corresponding to univariate and multivariate distributions has been a subject of interest; see, for example, Pardo [23], Toma [24], Belzunce et al. [25], Vonta and Karagrigoriou [26]. Various measures of entropy and generalizations thereof have been proposed in the literature.
The Tsallis entropy was introduced by Constantino Tsallis in 1988 [27,28,29,30] with the aim of generalizing the standard Boltzmann–Gibbs entropy and, since then, it has attracted considerable interest in the physics community, as well as outside it. Recently, Furuichi [31,32] investigated information theoretical properties of the Tsallis entropy and obtained a uniqueness theorem for it. The use of the Tsallis entropy enhances the analysis and solution of some important problems regarding financial data and phenomena modeling, such as the distribution of asset returns, derivative pricing or risk aversion. Recent research in statistics has increased the interest in using the Tsallis entropy. Trivellato [33,34] used the minimization of the divergence corresponding to the Tsallis entropy as a criterion to select a pricing measure in the valuation problems of incomplete markets and gave conditions on the existence and on the equivalence to the basic measure of the minimal k-entropy martingale measure. Preda et al. [35,36] used the Tsallis and Kaniadakis entropies to construct the minimal entropy martingale measure for semi-Markov regime switching interest rate models and to derive new Lorenz curves for modeling income distribution. Miranskyy et al. [37] investigated the application of some extended entropies, such as the Landsberg–Vedral, Rényi and Tsallis entropies, to the classification of traces related to various software defects.
Let X be a real-valued discrete random variable defined on the probability space $(\Omega, \mathcal{F}, P)$, with the probability mass function $p_X$. Let $\alpha \in \mathbb{R}\setminus\{1\}$. We introduce the definition of the Tsallis entropy [27] for discrete and absolutely continuous random variables in terms of the expected value operator with respect to a probability measure.
Definition 1. 
The Tsallis entropy corresponding to the discrete random variable X is defined by:
$$H_\alpha^T(X) = E_{p_X}\!\left[\frac{p_X(x)^{\alpha-1} - 1}{1-\alpha}\right],$$
where $E_{p_X}[\cdot]$ represents the expected value operator with respect to the probability mass function $p_X$.
Let X be a real-valued continuous random variable defined on the probability space $(\Omega, \mathcal{F}, P)$, with the probability density function $f_X$. Let $\alpha \in \mathbb{R}\setminus\{1\}$.
Definition 2. 
The Tsallis entropy corresponding to the continuous random variable X is defined by:
$$H_\alpha^T(f_X) = E_{f_X}\!\left[\frac{f_X(x)^{\alpha-1} - 1}{1-\alpha}\right],$$
provided that the integral exists, where $E_{f_X}[\cdot]$ represents the expected value operator with respect to the probability density function $f_X$.
In the sequel, we take as known the standard properties of the expected value operator, such as additivity and homogeneity.
Note that for $\alpha = 2$, the Tsallis entropy reduces to the second-order entropy [38], and for $\alpha \to 1$, we obtain the Shannon entropy [39]. The real parameter α was introduced in the definition of the Tsallis entropy for evaluating the degree of uncertainty more accurately. In this regard, the Tsallis parameter tunes the importance assigned to rare events in the considered model.
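A minimal numerical sketch of these limiting cases for a discrete distribution (not from the paper; the probability vector is arbitrary):

```python
import math

# Sketch: discrete Tsallis entropy from Definition 1 and its alpha -> 1
# limit, which recovers the Shannon entropy (natural logarithm).
def tsallis(p, alpha):
    # H_alpha^T = (1 - sum p_i^alpha) / (alpha - 1)
    return (1.0 - sum(pi ** alpha for pi in p)) / (alpha - 1.0)

def shannon(p):
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

p = [0.5, 0.3, 0.2]
# alpha = 2: second-order entropy 1 - sum p_i^2
assert abs(tsallis(p, 2.0) - (1 - sum(pi ** 2 for pi in p))) < 1e-12
# alpha close to 1: approaches the Shannon entropy
assert abs(tsallis(p, 1.0001) - shannon(p)) < 1e-3
```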
Highly uncertain insurance policies are less reliable. The uncertainty of the loss associated with an insurance policy can be quantified by the entropy of the corresponding loss distribution. In actuarial practice, transformed data are frequently available as a consequence of deductibles and liability limits. Recent research in statistics has increased the interest in using different entropy measures for risk assessment.

3. Tsallis Entropy Approach for Loss Models

We denote by X the random variable which models the loss corresponding to an insurance policy. We suppose that X is non-negative and denote by $f_X$ and $F_X$ its probability density function and cumulative distribution function, respectively. Let $S_X$ be the survival function of the random variable X, defined by $S_X(x) = P(X > x)$.
We consider truncated and censored random variables obtained from X, which can be used to model situations which frequently appear in actuarial practice as a consequence of using deductibles and policy limits. In the next subsections, analytical expressions for the Tsallis entropy are derived, corresponding to the loss models based on truncated and censored random variables.

3.1. Loss Models Involving Truncation or Censoring from Below

Loss models with left-truncated or censored from below random variables are used when losses are not recorded or reported below a specified threshold, mainly as a result of applying deductible policies. We denote by d the value of the threshold, referred to as the deductible value. According to Klugman et al. [40], there are two approaches used to express the random variable which models the loss, corresponding to the per-payment and per-loss cases, respectively.
In the per-payment case, losses or claims below the value of the deductible may not be reported to the insurance company, generating truncated from below or left-truncated data.
We denote by $X_{lt}(d)$ the left-truncated random variable which models the loss corresponding to an insurance policy with a deductible d in the per-payment case. It can be expressed as $X_{lt}(d) = X \mid X > d$, or equivalently:
$$X_{lt}(d) = \begin{cases} X, & \text{if } X > d \\ \text{not defined}, & \text{if } X \le d. \end{cases}$$
In order to investigate the effect of truncation from below, we use the Tsallis entropy for evaluating the uncertainty corresponding to the loss covered by the insurance company. The following theorem establishes the relationship between the Tsallis entropies of the random variables X and $X_{lt}(d)$. We denote by $H_\alpha^T(X_{lt}(d))$ the per-payment Tsallis entropy with a deductible d.
We denote by $I_A$ the indicator function of the set A, defined by:
$$I_A(x) = \begin{cases} 1, & \text{if } x \in A \\ 0, & \text{otherwise.} \end{cases}$$
In the sequel, the integrals are always assumed to be well defined.
Theorem 1. 
Let X be a non-negative random variable which models the loss corresponding to an insurance policy. Let $\alpha \in \mathbb{R}\setminus\{1\}$ and $d > 0$. The Tsallis entropy $H_\alpha^T(X_{lt}(d))$ of the left-truncated loss random variable corresponding to the per-payment risk model with a deductible d can be expressed as follows:
$$H_\alpha^T(X_{lt}(d)) = S_X^{-\alpha}(d)\left[H_\alpha^T(X) + E_{f_X}\!\left(\frac{f_X^{\alpha-1}(x)-1}{\alpha-1}\, I_{\{0<X<d\}}\right)\right] + \frac{1-S_X^{1-\alpha}(d)}{\alpha-1}.$$
Proof. 
The probability density function of the random variable $X_{lt}(d)$ is given by $f_{X_{lt}(d)}(x) = \frac{f_X(x)}{S_X(d)}$, $x > d$. Therefore, the Tsallis entropy of the random variable $X_{lt}(d)$ can be expressed as follows:
$$H_\alpha^T(X_{lt}(d)) = \frac{1}{(1-\alpha)\,S_X(d)}\, E_{f_X}\!\left[\left(\left(\frac{f_X(x)}{S_X(d)}\right)^{\alpha-1} - 1\right) I_{\{d<X<\infty\}}\right]$$
$$= S_X^{-\alpha}(d)\, E_{f_X}\!\left[\frac{f_X^{\alpha-1}(x)-1}{1-\alpha}\, I_{\{d<X<\infty\}}\right] + \frac{S_X^{1-\alpha}(d)-1}{1-\alpha}$$
$$= S_X^{-\alpha}(d)\left[H_\alpha^T(X) + E_{f_X}\!\left(\frac{f_X^{\alpha-1}(x)-1}{\alpha-1}\, I_{\{0<X<d\}}\right)\right] + \frac{1-S_X^{1-\alpha}(d)}{\alpha-1}. \qquad \square$$
Remark 1. 
For the limiting case $\alpha \to 1$, we obtain the corresponding results for the Shannon entropy from [18].
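For the exponential distribution, where $\int_a^b f^\alpha(x)\,dx$ has an elementary closed form, Theorem 1 can be verified numerically. The following sketch (not from the paper; parameter values are arbitrary) compares the direct entropy of $X \mid X > d$ with the representation of Theorem 1:

```python
import math

# Sketch (exponential case): for X ~ Exp(lam), the integral of f^alpha
# over (a, b) is lam**(alpha-1)/alpha * (exp(-alpha*lam*a) - exp(-alpha*lam*b)).
def int_f_alpha(lam, a, b, alpha):
    eb = 0.0 if b == math.inf else math.exp(-alpha * lam * b)
    return lam ** (alpha - 1) / alpha * (math.exp(-alpha * lam * a) - eb)

lam, alpha, d = 2.0, 1.5, 0.7
S = math.exp(-lam * d)  # survival S_X(d)
H_X = (1 - int_f_alpha(lam, 0, math.inf, alpha)) / (alpha - 1)

# Left side: Tsallis entropy of X | X > d, with density f(x)/S on (d, inf)
lhs = (1 - S ** (-alpha) * int_f_alpha(lam, d, math.inf, alpha)) / (alpha - 1)

# Right side: the representation from Theorem 1; the E-term equals
# (int_0^d f^alpha - F(d)) / (alpha - 1)
e_term = (int_f_alpha(lam, 0, d, alpha) - (1 - S)) / (alpha - 1)
rhs = S ** (-alpha) * (H_X + e_term) + (1 - S ** (1 - alpha)) / (alpha - 1)
assert abs(lhs - rhs) < 1e-12
```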
In the per-loss case corresponding to an insurance policy with a deductible d, all the claims are reported, but only the ones over the deductible value are paid. As only the real losses of the insurer are taken into consideration, this situation generates censored from below data.
We denote by $X_{lc}(d)$ the left-censored random variable which models the loss corresponding to an insurance policy with a deductible d in the per-loss case. As X is censored from below at the point d, the random variable $X_{lc}(d)$ can be expressed as follows:
$$X_{lc}(d) = \begin{cases} X, & \text{if } X > d \\ 0, & \text{if } X \le d. \end{cases}$$
We note that $X_{lc}(d)$ assigns a positive probability mass at zero, corresponding to the case $X \le d$. Hence, $X_{lc}(d)$ is not absolutely continuous, but a mixed random variable, consisting of a discrete and a continuous part. We can remark that the per-payment loss random variable can be expressed as the per-loss one given that the latter is positive.
In the next theorem, the relation between the Tsallis entropy of the random variables X and X l c ( d ) is established.
Theorem 2. 
Let X be a non-negative random variable which models the loss corresponding to an insurance policy. Let $\alpha \in \mathbb{R}\setminus\{1\}$ and $d > 0$. The Tsallis entropy $H_\alpha^T(X_{lc}(d))$ of the left-censored loss random variable corresponding to the per-loss risk model with a deductible d can be expressed as follows:
$$H_\alpha^T(X_{lc}(d)) = H_\alpha^T(X) + E_{f_X}\!\left(\frac{f_X^{\alpha-1}(x)-1}{\alpha-1}\, I_{\{0<X<d\}}\right) - F_X(d)\,\frac{F_X^{\alpha-1}(d)-1}{\alpha-1}.$$
Proof. 
The Tsallis entropy of $X_{lc}(d)$, which is a mixed random variable consisting of a discrete part at zero and a continuous part over $(d, +\infty)$, is given by:
$$H_\alpha^T(X_{lc}(d)) = E_{f_X}\!\left[\frac{f_X^{\alpha-1}(x)-1}{1-\alpha}\, I_{\{d<X<\infty\}}\right] - F_X(d)\,\frac{F_X^{\alpha-1}(d)-1}{\alpha-1}$$
$$= H_\alpha^T(X) + E_{f_X}\!\left[\frac{f_X^{\alpha-1}(x)-1}{\alpha-1}\, I_{\{0<X<d\}}\right] - F_X(d)\,\frac{F_X^{\alpha-1}(d)-1}{\alpha-1}$$
and the conclusion follows. □
Remark 2. 
Let $\alpha \in \mathbb{R}\setminus\{1\}$ and $d > 0$. Then,
$$H_\alpha^T(X_{lc}(d)) - H_\alpha^T(X) = E_{f_X}\!\left[\frac{f_X^{\alpha-1}(x)-1}{\alpha-1}\, I_{\{0<X<d\}}\right] - F_X(d)\,\frac{F_X^{\alpha-1}(d)-1}{\alpha-1}.$$
It results that the Tsallis entropy of the left-censored loss random variable corresponding to the per-loss risk model is greater than the Tsallis entropy of the loss random variable, and the difference can be quantified by the right-hand side of the formula above.
Let $\lambda > 0$, $\alpha \in \mathbb{R}\setminus\{1\}$ and $d > 0$. Let X be $Exp(\lambda)$ distributed and denote $\varphi_{lc}(d, \alpha, \lambda) = H_\alpha^T(X_{lc}(d)) - H_\alpha^T(X)$. Using Theorem 2, we obtain
$$\varphi_{lc}(d, \alpha, \lambda) = \frac{1}{\alpha-1}\left[\frac{\lambda^{\alpha-1}}{\alpha}\left(1 - e^{-\alpha\lambda d}\right) - \left(1 - e^{-\lambda d}\right)^{\alpha}\right].$$
Figure 1 displays the graph of the function $\varphi_{lc}$ for $\lambda = 100$ and different values of the Tsallis entropy parameter $\alpha$.
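The closed form of $\varphi_{lc}$ can be cross-checked against a direct computation of $H_\alpha^T(X_{lc}(d)) - H_\alpha^T(X)$ for the exponential distribution (a sketch, with arbitrary parameter values):

```python
import math

# Sketch: cross-check the closed form of phi_lc for X ~ Exp(lam) against
# a direct computation of H(X_lc(d)) - H(X).
def phi_closed(d, alpha, lam):
    return (lam ** (alpha - 1) / alpha * (1 - math.exp(-alpha * lam * d))
            - (1 - math.exp(-lam * d)) ** alpha) / (alpha - 1)

def phi_direct(d, alpha, lam):
    # H(X_lc(d)) = (1 - F(d)^alpha - int_d^inf f^alpha) / (alpha - 1)
    F = 1 - math.exp(-lam * d)
    tail = lam ** (alpha - 1) / alpha * math.exp(-alpha * lam * d)
    H_lc = (1 - F ** alpha - tail) / (alpha - 1)
    H_X = (1 - lam ** (alpha - 1) / alpha) / (alpha - 1)
    return H_lc - H_X

for d in (0.1, 1.0, 5.0):
    assert abs(phi_closed(d, 0.5, 2.0) - phi_direct(d, 0.5, 2.0)) < 1e-12
```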
Theorem 3. 
Let X be a non-negative random variable which models the loss corresponding to an insurance policy. Let $\alpha \in \mathbb{R}\setminus\{1\}$ and $d > 0$. The Tsallis entropy measures $H_\alpha^T(X_{lt}(d))$ and $H_\alpha^T(X_{lc}(d))$ are connected through the following relationship:
$$H_\alpha^T(X_{lc}(d)) - S_X^{\alpha}(d)\, H_\alpha^T(X_{lt}(d)) = H_\alpha^T\!\left(B_{F_X(d)}\right),$$
where $B_{F_X(d)}$ represents a Bernoulli distributed random variable with parameter $F_X(d)$.
Proof. 
By multiplying (13) with $S_X^{\alpha}(d)$, we obtain:
$$S_X^{\alpha}(d)\, H_\alpha^T(X_{lt}(d)) = H_\alpha^T(X) + E_{f_X}\!\left(\frac{f_X^{\alpha-1}(x)-1}{\alpha-1}\, I_{\{0<X<d\}}\right) + S_X(d)\,\frac{S_X^{\alpha-1}(d)-1}{\alpha-1}.$$
From Theorem 2, we have:
$$H_\alpha^T(X_{lc}(d)) = H_\alpha^T(X) + E_{f_X}\!\left(\frac{f_X^{\alpha-1}(x)-1}{\alpha-1}\, I_{\{0<X<d\}}\right) - F_X(d)\,\frac{F_X^{\alpha-1}(d)-1}{\alpha-1}.$$
By subtracting the two relations above, we obtain:
$$H_\alpha^T(X_{lc}(d)) - S_X^{\alpha}(d)\, H_\alpha^T(X_{lt}(d)) = -F_X(d)\,\frac{F_X^{\alpha-1}(d)-1}{\alpha-1} - S_X(d)\,\frac{S_X^{\alpha-1}(d)-1}{\alpha-1}$$
$$= \frac{1 - F_X^{\alpha}(d) - S_X^{\alpha}(d)}{\alpha-1} = H_\alpha^T\!\left(B_{F_X(d)}\right). \qquad \square$$
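The Bernoulli decomposition $H_\alpha^T(X_{lc}(d)) - S_X^\alpha(d) H_\alpha^T(X_{lt}(d)) = H_\alpha^T(B_{F_X(d)})$ can be illustrated numerically for the exponential distribution (a sketch, with arbitrary parameter values):

```python
import math

# Sketch (exponential case): check the Bernoulli decomposition relating
# the per-loss (censored) and per-payment (truncated) entropies.
lam, alpha, d = 1.5, 2.5, 0.4
S = math.exp(-lam * d)
F = 1 - S
tail = lam ** (alpha - 1) / alpha * math.exp(-alpha * lam * d)  # int_d^inf f^alpha

H_lt = (1 - S ** (-alpha) * tail) / (alpha - 1)   # per-payment (truncated)
H_lc = (1 - F ** alpha - tail) / (alpha - 1)      # per-loss (censored, mixed)
H_bern = (1 - F ** alpha - S ** alpha) / (alpha - 1)  # Tsallis entropy of Bernoulli(F)

assert abs((H_lc - S ** alpha * H_lt) - H_bern) < 1e-12
```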
Now, we denote by $\lambda(x) = \frac{f_X(x)}{S_X(x)}$, for $S_X(x) > 0$, the hazard rate function of the random variable X. In the next theorem, the per-payment simple or residual entropy with a deductible d is expressed in terms of the hazard or risk function of X.
Theorem 4. 
Let X be a non-negative random variable which models the loss corresponding to an insurance policy. Let $\alpha \in \mathbb{R}\setminus\{1\}$ and $d > 0$. The Tsallis entropy of the left-truncated loss random variable corresponding to the per-payment risk model with a deductible d is given by:
$$H_\alpha^T(X_{lt}(d)) = S_X^{-\alpha}(d)\, E_{f_X}\!\left[\frac{\lambda^{\alpha-1}(x)-1}{1-\alpha}\, S_X^{\alpha-1}(x)\, I_{\{d<X<\infty\}}\right] + \frac{1}{\alpha}.$$
Proof. 
From Theorem 1, we have:
$$H_\alpha^T(X_{lt}(d)) = S_X^{-\alpha}(d)\left[H_\alpha^T(X) + E_{f_X}\!\left(\frac{f_X^{\alpha-1}(x)-1}{\alpha-1}\, I_{\{0<X<d\}}\right)\right] + \frac{1-S_X^{1-\alpha}(d)}{\alpha-1}$$
$$= S_X^{-\alpha}(d)\, E_{f_X}\!\left[\frac{f_X^{\alpha-1}(x)-1}{1-\alpha}\, I_{\{d<X<\infty\}}\right] + \frac{1-S_X^{1-\alpha}(d)}{\alpha-1}.$$
Writing $f_X(x) = \lambda(x)\, S_X(x)$, we have:
$$E_{f_X}\!\left[\frac{f_X^{\alpha-1}(x)-1}{1-\alpha}\, I_{\{d<X<\infty\}}\right] = E_{f_X}\!\left[\frac{\lambda^{\alpha-1}(x)-1}{1-\alpha}\, S_X^{\alpha-1}(x)\, I_{\{d<X<\infty\}}\right] + E_{f_X}\!\left[\frac{S_X^{\alpha-1}(x)-1}{1-\alpha}\, I_{\{d<X<\infty\}}\right].$$
Since $E_{f_X}\!\left[S_X^{\alpha-1}(x)\, I_{\{d<X<\infty\}}\right] = \int_d^{\infty} f_X(x)\, S_X^{\alpha-1}(x)\,dx = \frac{S_X^{\alpha}(d)}{\alpha}$, the second term from the relation above becomes:
$$E_{f_X}\!\left[\frac{S_X^{\alpha-1}(x)-1}{1-\alpha}\, I_{\{d<X<\infty\}}\right] = \frac{1}{1-\alpha}\left(\frac{S_X^{\alpha}(d)}{\alpha} - S_X(d)\right).$$
Hence,
$$H_\alpha^T(X_{lt}(d)) = S_X^{-\alpha}(d)\, E_{f_X}\!\left[\frac{\lambda^{\alpha-1}(x)-1}{1-\alpha}\, S_X^{\alpha-1}(x)\, I_{\{d<X<\infty\}}\right] + \frac{\frac{1}{\alpha} - S_X^{1-\alpha}(d)}{1-\alpha} + \frac{1-S_X^{1-\alpha}(d)}{\alpha-1}$$
$$= S_X^{-\alpha}(d)\, E_{f_X}\!\left[\frac{\lambda^{\alpha-1}(x)-1}{1-\alpha}\, S_X^{\alpha-1}(x)\, I_{\{d<X<\infty\}}\right] + \frac{1}{\alpha}. \qquad \square$$
Theorem 5. 
Let X be a non-negative random variable which models the loss corresponding to an insurance policy. Let $\alpha \in \mathbb{R}\setminus\{1\}$. The Tsallis entropy $H_\alpha^T(X_{lt}(d))$ of the left-truncated loss random variable corresponding to the per-payment risk model with a deductible d is independent of d if, and only if, the hazard rate function is constant.
Proof. 
We assume that the hazard rate function is constant, that is, $\lambda(x) = k > 0$ for any $x > 0$. It results that $f_X(x) = k\, S_X(x)$ for any $x > 0$ and, using (17), we obtain:
$$H_\alpha^T(X_{lt}(d)) = S_X^{-\alpha}(d)\, E_{f_X}\!\left[\frac{\lambda^{\alpha-1}(x)-1}{1-\alpha}\, S_X^{\alpha-1}(x)\, I_{\{d<X<\infty\}}\right] + \frac{1}{\alpha}$$
$$= \frac{1-k^{\alpha-1}}{\alpha-1}\, S_X^{-\alpha}(d)\, E_{f_X}\!\left[S_X^{\alpha-1}(x)\, I_{\{d<X<\infty\}}\right] + \frac{1}{\alpha} = \frac{1-k^{\alpha-1}}{\alpha(\alpha-1)} + \frac{1}{\alpha} = \frac{\alpha - k^{\alpha-1}}{\alpha(\alpha-1)},$$
which does not depend on d.
Conversely, assuming that $H_\alpha^T(X_{lt}(d))$ does not depend on d,
$$\frac{\partial H_\alpha^T(X_{lt}(d))}{\partial d} = 0.$$
Using (17), we obtain
$$\alpha\, S_X^{-\alpha-1}(d)\, f_X(d)\, E_{f_X}\!\left[\frac{\lambda^{\alpha-1}(x)-1}{1-\alpha}\, S_X^{\alpha-1}(x)\, I_{\{d<X<\infty\}}\right] + S_X^{-1}(d)\, f_X(d)\, \frac{\lambda^{\alpha-1}(d)-1}{\alpha-1} = 0,$$
i.e.,
$$S_X^{-\alpha}(d)\, E_{f_X}\!\left[\frac{\lambda^{\alpha-1}(x)-1}{1-\alpha}\, S_X^{\alpha-1}(x)\, I_{\{d<X<\infty\}}\right] + \frac{\lambda^{\alpha-1}(d)-1}{\alpha(\alpha-1)} = 0.$$
Using (17) again, the last relation can be expressed as follows:
$$H_\alpha^T(X_{lt}(d)) - \frac{1}{\alpha} + \frac{\lambda^{\alpha-1}(d)-1}{\alpha(\alpha-1)} = 0,$$
which implies
$$\lambda^{\alpha-1}(d) = \alpha - \alpha(\alpha-1)\, H_\alpha^T(X_{lt}(d)),$$
therefore,
$$\lambda(d) = \left[\alpha - \alpha(\alpha-1)\, H_\alpha^T(X_{lt}(d))\right]^{\frac{1}{\alpha-1}}.$$
Using again the hypothesis that $H_\alpha^T(X_{lt}(d))$ does not depend on d, it follows that λ does not depend on d, therefore λ is constant. □
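Since the exponential distribution has a constant hazard rate λ, Theorem 5 predicts that the per-payment entropy does not depend on d and equals $\frac{\alpha - \lambda^{\alpha-1}}{\alpha(\alpha-1)}$. A numerical sketch (not from the paper; parameter values are arbitrary):

```python
import math

# Sketch: for X ~ Exp(lam) the hazard rate is constant, so the per-payment
# entropy H(X_lt(d)) should equal (alpha - lam**(alpha-1)) / (alpha*(alpha-1))
# for every deductible d.
lam, alpha = 2.0, 1.8
expected = (alpha - lam ** (alpha - 1)) / (alpha * (alpha - 1))
for d in (0.0, 0.5, 2.0, 10.0):
    S = math.exp(-lam * d)
    tail = lam ** (alpha - 1) / alpha * math.exp(-alpha * lam * d)  # int_d^inf f^alpha
    H_lt = (1 - S ** (-alpha) * tail) / (alpha - 1)
    assert abs(H_lt - expected) < 1e-9
```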

3.2. Loss Models Involving Truncation or Censoring from Above

Right-truncated or censored from above random variables are used in actuarial models with policy limits. In this case, losses are not recorded or reported at or above a specified threshold. We denote by u, u > 0, the value of the threshold, referred to as the policy limit or liability limit. According to Klugman et al. [40], there are two approaches used to express the random variable which models the loss, corresponding to the per-payment and per-loss cases, respectively.
In the per-payment case, losses or claims above the value of the liability limit may not be reported to the insurance company, generating truncated from above or right-truncated data.
We denote by $X_{rt}(u)$ the right-truncated random variable which models the loss corresponding to an insurance policy with a policy limit u in the per-payment case. It can be expressed as $X_{rt}(u) = X \mid X < u$, or equivalently:
$$X_{rt}(u) = \begin{cases} X, & \text{if } X < u \\ \text{not defined}, & \text{if } X \ge u. \end{cases}$$
The relationship between the Tsallis entropies of the random variables X and $X_{rt}(u)$ is established in the following theorem.
Theorem 6. 
Let X be a non-negative random variable which models the loss corresponding to an insurance policy. Let $\alpha \in \mathbb{R}\setminus\{1\}$ and $u > 0$. The Tsallis entropy $H_\alpha^T(X_{rt}(u))$ of the right-truncated loss random variable corresponding to the per-payment risk model with a policy limit u is given by:
$$H_\alpha^T(X_{rt}(u)) = F_X^{-\alpha}(u)\left[H_\alpha^T(X) + E_{f_X}\!\left(\frac{f_X^{\alpha-1}(x)-1}{\alpha-1}\, I_{\{u<X<\infty\}}\right)\right] + \frac{1-F_X^{1-\alpha}(u)}{\alpha-1}.$$
Proof. 
The probability density function of the random variable $X_{rt}(u)$ is given by $f_{X_{rt}(u)}(x) = \frac{f_X(x)}{F_X(u)}$, $0 < x < u$. Therefore, the Tsallis entropy of the random variable $X_{rt}(u)$ can be expressed as follows:
$$H_\alpha^T(X_{rt}(u)) = \frac{1}{(1-\alpha)\,F_X(u)}\, E_{f_X}\!\left[\left(\left(\frac{f_X(x)}{F_X(u)}\right)^{\alpha-1} - 1\right) I_{\{0<X<u\}}\right]$$
$$= F_X^{-\alpha}(u)\, E_{f_X}\!\left[\frac{f_X^{\alpha-1}(x)-1}{1-\alpha}\, I_{\{0<X<u\}}\right] + \frac{F_X^{1-\alpha}(u)-1}{1-\alpha}$$
$$= F_X^{-\alpha}(u)\left[H_\alpha^T(X) + E_{f_X}\!\left(\frac{f_X^{\alpha-1}(x)-1}{\alpha-1}\, I_{\{u<X<\infty\}}\right)\right] + \frac{1-F_X^{1-\alpha}(u)}{\alpha-1}. \qquad \square$$
In the following theorem, the Tsallis entropy of the right-truncated loss random variable corresponding to the per-payment risk model with a policy limit is expressed in terms of the reversed hazard rate function.
Theorem 7. 
Let X be a non-negative random variable which models the loss corresponding to an insurance policy. Let $\alpha \in \mathbb{R}\setminus\{1\}$ and $u > 0$. The Tsallis entropy $H_\alpha^T(X_{rt}(u))$ of the right-truncated loss random variable corresponding to the per-payment risk model with a policy limit u can be expressed in terms of the reversed hazard rate function $\tau(x) = \frac{f_X(x)}{F_X(x)}$ as follows:
$$H_\alpha^T(X_{rt}(u)) = F_X^{-\alpha}(u)\, E_{f_X}\!\left[\frac{\tau^{\alpha-1}(x)-1}{1-\alpha}\, F_X^{\alpha-1}(x)\, I_{\{0<X<u\}}\right] + F_X^{-\alpha}(u)\, E_{f_X}\!\left[\frac{F_X^{\alpha-1}(x)-1}{1-\alpha}\, I_{\{0<X<u\}}\right] + \frac{1-F_X^{1-\alpha}(u)}{\alpha-1}.$$
Proof. 
The probability density function of the random variable $X_{rt}(u)$ is given by $f_{X_{rt}(u)}(x) = \frac{f_X(x)}{F_X(u)}$, $0 < x < u$. From the proof of Theorem 6, we have:
$$H_\alpha^T(X_{rt}(u)) = F_X^{-\alpha}(u)\, E_{f_X}\!\left[\frac{f_X^{\alpha-1}(x)-1}{1-\alpha}\, I_{\{0<X<u\}}\right] + \frac{1-F_X^{1-\alpha}(u)}{\alpha-1}.$$
Writing $f_X(x) = \tau(x)\, F_X(x)$, we obtain:
$$E_{f_X}\!\left[\frac{f_X^{\alpha-1}(x)-1}{1-\alpha}\, I_{\{0<X<u\}}\right] = E_{f_X}\!\left[\frac{\tau^{\alpha-1}(x)-1}{1-\alpha}\, F_X^{\alpha-1}(x)\, I_{\{0<X<u\}}\right] + E_{f_X}\!\left[\frac{F_X^{\alpha-1}(x)-1}{1-\alpha}\, I_{\{0<X<u\}}\right],$$
and the stated decomposition follows. □
Now, we consider the case of per-loss right censoring. For example, a car insurance policy covers losses up to a limit u, while larger losses are covered by the car owner. We note that the loss model with truncation from above, defined by the random variable $X \mid X < u$, is different from the loss model with censoring from above, which is modeled by the random variable $X_{rc}(u) = \min\{X, u\}$. The latter can be represented as
$$X_{rc}(u) = \begin{cases} X, & \text{if } X < u \\ u, & \text{if } X \ge u. \end{cases}$$
This model, corresponding to the per-loss case, assumes that when the loss is $X \ge u$, the insurance company pays the amount u. Therefore, the insurer pays a maximum amount of u on a claim. We note that the random variable $X_{rc}(u)$ is not absolutely continuous, since it has a positive probability mass $S_X(u)$ at the point u.
In the following theorem, an analytical formula for the entropy corresponding to the random variable X r c ( u ) is obtained.
Theorem 8. 
Let X be a non-negative random variable which models the loss corresponding to an insurance policy. Let $\alpha \in \mathbb{R}\setminus\{1\}$ and $u > 0$. The Tsallis entropy of losses for the right-censored loss random variable corresponding to the per-loss risk model with a policy limit u can be expressed as follows:
$$H_\alpha^T(X_{rc}(u)) = H_\alpha^T(X) + E_{f_X}\!\left(\frac{f_X^{\alpha-1}(x)-1}{\alpha-1}\, I_{\{u<X<\infty\}}\right) - S_X(u)\,\frac{S_X^{\alpha-1}(u)-1}{\alpha-1}.$$
Proof. 
We have:
$$H_\alpha^T(X_{rc}(u)) = E_{f_X}\!\left[\frac{f_X^{\alpha-1}(x)-1}{1-\alpha}\, I_{\{0<X<u\}}\right] - S_X(u)\,\frac{S_X^{\alpha-1}(u)-1}{\alpha-1}$$
$$= H_\alpha^T(X) + E_{f_X}\!\left[\frac{f_X^{\alpha-1}(x)-1}{\alpha-1}\, I_{\{u<X<\infty\}}\right] - S_X(u)\,\frac{S_X^{\alpha-1}(u)-1}{\alpha-1}. \qquad \square$$
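For the exponential distribution, Theorem 8 can be verified numerically by computing the entropy of the mixed variable $\min\{X, u\}$ directly (a sketch with arbitrary parameter values):

```python
import math

# Sketch (exponential case): check Theorem 8 for the right-censored
# (per-loss, policy-limit) model X_rc(u) = min(X, u).
def int_f_alpha(lam, a, b, alpha):
    # int_a^b f^alpha dx for the Exp(lam) density
    eb = 0.0 if b == math.inf else math.exp(-alpha * lam * b)
    return lam ** (alpha - 1) / alpha * (math.exp(-alpha * lam * a) - eb)

lam, alpha, u = 1.0, 2.0, 1.3
S = math.exp(-lam * u)
H_X = (1 - int_f_alpha(lam, 0, math.inf, alpha)) / (alpha - 1)

# Direct entropy of the mixed variable: density on (0, u), point mass S at u
lhs = (1 - int_f_alpha(lam, 0, u, alpha) - S ** alpha) / (alpha - 1)

# Representation from Theorem 8
e_term = (int_f_alpha(lam, u, math.inf, alpha) - S) / (alpha - 1)
rhs = H_X + e_term - S * (S ** (alpha - 1) - 1) / (alpha - 1)
assert abs(lhs - rhs) < 1e-12
```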

3.3. Loss Models Involving Truncation from Above and from Below

We denote by d the deductible and by u the retention limit, with d < u. We denote by $X_{lr}(d, u)$ the loss random variable which models the payments to the policy holder under a combination of deductible and retention limit policies. Following [40], the loss random variable $X_{lr}(d, u)$ can be expressed by:
$$X_{lr}(d, u) = \begin{cases} 0, & \text{if } X \le d \\ X - d, & \text{if } d < X \le u \\ u - d, & \text{if } X > u. \end{cases}$$
The deductible d is applied after the implementation of the retention limit u, which means that if the loss is greater than u, then the maximum payment is $u - d$. The random variable $X_{lr}(d, u)$ is a mixed variable, with an absolutely continuous part over the interval $(0, u-d)$ and two discrete parts: at 0, with probability mass $F_X(d)$, and at $u - d$, with probability mass $S_X(u)$.
In the next theorem, the Tsallis entropy of losses for the right-truncated loss random variable corresponding to the per-loss risk model with a deductible d and a policy limit u is derived.
Theorem 9. 
Let X be a non-negative random variable which models the loss corresponding to an insurance policy. Let $\alpha \in \mathbb{R}\setminus\{1\}$, $d > 0$ and $u > d$. The Tsallis entropy of losses of the right-truncated loss random variable corresponding to the per-loss risk model with a deductible d and a policy limit u is given by:
$$H_\alpha^T(X_{lr}(d, u)) = H_\alpha^T(X) + E_{f_X}\!\left(\frac{f_X^{\alpha-1}(x)-1}{\alpha-1}\, I_{\{0<X<d\}}\right) + E_{f_X}\!\left(\frac{f_X^{\alpha-1}(x)-1}{\alpha-1}\, I_{\{u<X<\infty\}}\right)$$
$$- F_X(d)\,\frac{F_X^{\alpha-1}(d)-1}{\alpha-1} - S_X(u)\,\frac{S_X^{\alpha-1}(u)-1}{\alpha-1}.$$
Proof. 
The generalized probability density function of the random variable $X_{lr}(d, u)$ is given by
$$f_{X_{lr}(d,u)}(x) = F_X(d)\,\delta(x) + f_X(x+d)\, I_{\{0<x<u-d\}} + S_X(u)\,\delta(x-(u-d)),$$
where δ denotes the Dirac delta function.
It results:
$$H_\alpha^T(X_{lr}(d,u)) = -F_X(d)\,\frac{F_X^{\alpha-1}(d)-1}{\alpha-1} + E_{f_X}\!\left[\frac{f_X^{\alpha-1}(x)-1}{1-\alpha}\, I_{\{d<X<u\}}\right] - S_X(u)\,\frac{S_X^{\alpha-1}(u)-1}{\alpha-1}$$
$$= H_\alpha^T(X) + E_{f_X}\!\left[\frac{f_X^{\alpha-1}(x)-1}{\alpha-1}\, I_{\{0<X<d\}}\right] + E_{f_X}\!\left[\frac{f_X^{\alpha-1}(x)-1}{\alpha-1}\, I_{\{u<X<\infty\}}\right]$$
$$- F_X(d)\,\frac{F_X^{\alpha-1}(d)-1}{\alpha-1} - S_X(u)\,\frac{S_X^{\alpha-1}(u)-1}{\alpha-1}. \qquad \square$$
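A numerical sketch of Theorem 9 for the exponential distribution (arbitrary parameter values; the closed-form integral of $f^\alpha$ is elementary):

```python
import math

# Sketch (exponential case): check Theorem 9 for the combined
# deductible-and-limit model X_lr(d, u).
def int_f_alpha(lam, a, b, alpha):
    # int_a^b f^alpha dx for the Exp(lam) density
    eb = 0.0 if b == math.inf else math.exp(-alpha * lam * b)
    return lam ** (alpha - 1) / alpha * (math.exp(-alpha * lam * a) - eb)

lam, alpha, d, u = 1.0, 1.7, 0.5, 2.0
F_d = 1 - math.exp(-lam * d)
S_u = math.exp(-lam * u)
H_X = (1 - int_f_alpha(lam, 0, math.inf, alpha)) / (alpha - 1)

# Direct entropy: masses F(d) at 0 and S(u) at u-d, density f(x+d) on (0, u-d)
lhs = (1 - F_d ** alpha - S_u ** alpha - int_f_alpha(lam, d, u, alpha)) / (alpha - 1)

# Representation from Theorem 9
term_low = (int_f_alpha(lam, 0, d, alpha) - F_d) / (alpha - 1)
term_up = (int_f_alpha(lam, u, math.inf, alpha) - S_u) / (alpha - 1)
rhs = (H_X + term_low + term_up
       - F_d * (F_d ** (alpha - 1) - 1) / (alpha - 1)
       - S_u * (S_u ** (alpha - 1) - 1) / (alpha - 1))
assert abs(lhs - rhs) < 1e-12
```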
The following theorem establishes the relationship between $H_\alpha^T(X_{lr}(d,u))$, the entropy under censoring from above $H_\alpha^T(X_{rc}(u))$ and the entropy under censoring from below $H_\alpha^T(X_{lc}(d))$.
Theorem 10. 
Let X be a non-negative random variable which models the loss corresponding to an insurance policy, and let α ∈ ℝ ∖ {1}. For any d > 0 and u > d , the Tsallis entropy H α T X l r ( d , u ) satisfies the following relationship involving the right-truncated entropy H α T X r t ( u ) :
$$ H_\alpha^T(X_{lr}(d,u)) = -F_X(d)\,\frac{F_X^{\alpha-1}(d)-1}{\alpha-1} + E_{f_X}\!\left[\frac{f_X^{\alpha-1}(x)-1}{\alpha-1}\, I_{(0<X<d)}\right] - S_X(u)\,\frac{S_X^{\alpha-1}(u)-1}{\alpha-1} + F_X^\alpha(u)\, H_\alpha^T(X_{rt}(u)) - F_X(u)\,\frac{F_X^{\alpha-1}(u)-1}{\alpha-1}. $$
Proof. 
By Theorem 9, we have:
$$ H_\alpha^T(X_{lr}(d,u)) = H_\alpha^T(X) + E_{f_X}\!\left[\frac{f_X^{\alpha-1}(x)-1}{\alpha-1}\, I_{(0<X<d)}\right] + E_{f_X}\!\left[\frac{f_X^{\alpha-1}(x)-1}{\alpha-1}\, I_{(u<X<\infty)}\right] - F_X(d)\,\frac{F_X^{\alpha-1}(d)-1}{\alpha-1} - S_X(u)\,\frac{S_X^{\alpha-1}(u)-1}{\alpha-1}. $$
Moreover,
$$ H_\alpha^T(X_{rc}(u)) = H_\alpha^T(X) + E_{f_X}\!\left[\frac{f_X^{\alpha-1}(x)-1}{\alpha-1}\, I_{(u<X<\infty)}\right] - S_X(u)\,\frac{S_X^{\alpha-1}(u)-1}{\alpha-1}, $$
$$ H_\alpha^T(X_{lc}(d)) = H_\alpha^T(X) + E_{f_X}\!\left[\frac{f_X^{\alpha-1}(x)-1}{\alpha-1}\, I_{(0<X<d)}\right] - F_X(d)\,\frac{F_X^{\alpha-1}(d)-1}{\alpha-1}. $$
It follows that:
$$ H_\alpha^T(X_{lr}(d,u)) = H_\alpha^T(X_{rc}(u)) - F_X(d)\,\frac{F_X^{\alpha-1}(d)-1}{\alpha-1} + E_{f_X}\!\left[\frac{f_X^{\alpha-1}(x)-1}{\alpha-1}\, I_{(0<X<d)}\right]. $$
Using the identity between the censored and the truncated entropies,
$$ H_\alpha^T(X_{rc}(u)) = F_X^\alpha(u)\, H_\alpha^T(X_{rt}(u)) - F_X(u)\,\frac{F_X^{\alpha-1}(u)-1}{\alpha-1} - S_X(u)\,\frac{S_X^{\alpha-1}(u)-1}{\alpha-1}, $$
we obtain:
$$ H_\alpha^T(X_{lr}(d,u)) = -F_X(d)\,\frac{F_X^{\alpha-1}(d)-1}{\alpha-1} + E_{f_X}\!\left[\frac{f_X^{\alpha-1}(x)-1}{\alpha-1}\, I_{(0<X<d)}\right] - S_X(u)\,\frac{S_X^{\alpha-1}(u)-1}{\alpha-1} + F_X^\alpha(u)\, H_\alpha^T(X_{rt}(u)) - F_X(u)\,\frac{F_X^{\alpha-1}(u)-1}{\alpha-1}. $$
Figure 2 illustrates the Tsallis entropy of the right-truncated loss random variable X l r ( d , u ) , corresponding to the per-loss risk model with a deductible d and a policy limit u for the exponential distribution with λ = 0.1 .
Figure 2 shows a similar behavior of the Tsallis entropy H α T ( X l r ( d , u ) ) for all the considered values of α around 1: for each of these values, the entropy is decreasing with respect to the deductible d and does not depend on the policy limit u.
Figure 3 represents the Tsallis entropy of losses for the right-truncated loss random variable X l r ( d , u ) corresponding to the per-loss risk model with a deductible d and a policy limit u for the χ 2 distribution, with γ = 30 and for different values of the Tsallis parameter α , in the case d < u .
Figure 3 shows that, for all the considered values of the parameter α , the Tsallis entropy H α T ( X l r ( d , u ) ) exhibits a similar decreasing behavior with respect to the deductible d and does not depend on the policy limit u.
Figure 4 depicts the Tsallis entropy of losses of the right-truncated loss random variable corresponding to the per-loss risk model with a deductible d and a policy limit u for the Weibull distribution, with γ = 0.3 , λ = 1.3 and a = 0 for different values of the Tsallis parameter α , in the case d < u .
Figure 4 highlights that the Tsallis entropy of losses H α T ( X l r ( d , u ) ) is decreasing with respect to d for all the considered values of α . For the values of α close to 1, namely α = 0.9 and α = 1.1, the entropy does not depend on the policy limit u, whereas for α = 0.5 a different behavior is detected: the Tsallis entropy is increasing with respect to the policy limit u, which is realistic from an actuarial point of view. Indeed, increasing the policy limit results in a higher risk for the insurance company.
The conclusions obtained indicate that Tsallis entropy measures with parameter values significantly different from 1 can provide a better loss model involving truncation from above and from below.
Figure 5 displays the Tsallis entropy of losses for the right-truncated loss random variable corresponding to the per-loss risk model with a deductible d and a policy limit u for the Gamma distribution, with γ = 4.5 , λ = 0.1 and a = 0.01 for different values of the Tsallis parameter α , in the case d < u .
Figure 5 reveals the decreasing behavior of the Tsallis entropy H α T ( X l r ( d , u ) ) of losses for all the values of the Tsallis parameter α considered. Moreover, for all the values of α , the Tsallis entropy H α T ( X l r ( d , u ) ) does not depend on the policy limit u.
The following tables present the Tsallis entropy values for the Weibull distribution, corresponding to the analyzed models.
Table 1 illustrates the Tsallis entropy values in case of the Weibull distribution with λ = 0.9585 , γ = 0.3192 and d = 1.1 for different values of the Tsallis parameter α and several values of the policy limit u.
The results in Table 1 show that, for parameter values α ≠ 1, the Tsallis entropy of the right-truncated random variable X r t ( u ) is increasing with respect to the policy limit u, whereas for α = 1 it reduces to the Shannon entropy, which is decreasing with respect to u. From an actuarial perspective, a higher policy limit implies a higher risk for the insurance company, so the entropy of losses should also increase; the Tsallis entropy with α ≠ 1 therefore provides a more realistic risk assessment for X r t ( u ) .
Table 2 displays the values of the Tsallis entropy measures in the case of the Weibull distribution with λ = 0.9585 , γ = 0.3192 and d = 1.2 for different values of the Tsallis parameter α and several values of the policy limit u.
Table 2 exhibits the same pattern as Table 1: for α ≠ 1, the Tsallis entropy of X r t ( u ) increases with the policy limit u, while the Shannon entropy ( α = 1) decreases with u.
Table 3 illustrates the Tsallis entropy values in the case of the Weibull distribution with λ = 0.9585 , γ = 0.3192 and deductible d = 1.3 for various values of the Tsallis parameter α and several values of the policy limit u.
Table 3 confirms this behavior: the Tsallis entropy of X r t ( u ) increases with u for α ≠ 1 and decreases with u for α = 1.
Table 4 reveals the values of all the Tsallis entropy measures analyzed in the case of the Weibull distribution with λ = 0.9585 , γ = 0.3192 and d = 1.4 for several values of the Tsallis parameter α and different values of the policy limit u.
The results displayed in Table 4 show the same monotonicity: for α ≠ 1, the Tsallis entropy of X r t ( u ) increases with the policy limit u, whereas for α = 1 it decreases. Again, this indicates that the right-truncated loss random variable X r t ( u ) is better modeled using the Tsallis entropy measure.
Table 5 displays the Tsallis entropy values in the case of the Weibull distribution with λ = 0.9585 , γ = 0.3192 and deductible d = 1.5 for different values of the Tsallis parameter α and several values of the policy limit u.
Table 5 displays the same pattern: the Tsallis entropy of the right-truncated random variable increases with u for α ≠ 1 and decreases with u for α = 1, confirming that the Tsallis approach gives a more realistic risk evaluation.
From Table 1, Table 2, Table 3, Table 4 and Table 5, we draw the following conclusions. Under the Tsallis entropy approach, when the deductible value d increases, the uncertainty of losses for the insurance company decreases, since the company has to pay smaller amounts; when the policy limit value u increases, the uncertainty of losses increases, since the company has to pay larger amounts. The Tsallis entropy approach is therefore more realistic and flexible, providing a relevant perspective and a useful instrument for loss models.
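The monotonic behavior reported in the tables can be reproduced numerically. The sketch below (ours) computes H_α^T(X_rt(u)) for the Weibull parameters fitted in Section 5 and checks that, for α = 0.5, the right-truncated entropy increases with u:

```python
import numpy as np
from scipy.integrate import quad

shape, scale = 0.3192, 0.9585  # Weibull parameters fitted in Section 5
alpha = 0.5

def pdf(x):
    return (shape / scale) * (x / scale) ** (shape - 1.0) * np.exp(-(x / scale) ** shape)

def cdf(x):
    return 1.0 - np.exp(-(x / scale) ** shape)

def tsallis_right_truncated(u):
    # H_alpha(X_rt(u)) = (1 - F(u)^(-alpha) * int_0^u f^alpha(x) dx) / (alpha - 1);
    # the lower bound 1e-12 avoids the integrable singularity of f at 0.
    integral = quad(lambda x: pdf(x) ** alpha, 1e-12, u)[0]
    return (1.0 - cdf(u) ** (-alpha) * integral) / (alpha - 1.0)

H = [tsallis_right_truncated(u) for u in (10.0, 15.0, 20.0, 25.0)]
print(H)  # increasing in u, as in the tables for alpha < 1
```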

3.4. Loss Models under Inflation

Financial and actuarial models are estimated using observations from past years. Since inflation increases losses, the models must be adjusted to the current level of loss experience; moreover, a projection of the anticipated future losses needs to be performed.
Now, we study the effect of inflation on entropy. Let X be the random variable that models the loss corresponding to a certain year. We denote by F the cumulative distribution function of X and by f the probability density function of X. The random variable that models the loss after one year and under the inflation effect is X ( r ) = ( 1 + r ) X , where r , r > 0 , represents the annual inflation rate. We denote by F X ( r ) the cumulative distribution function of X ( r ) and by f X ( r ) the probability density function of the random variable X ( r ) .
The probability density function corresponding to the random variable X ( r ) is given by:
$$ f_{X(r)}(z) = \frac{1}{1+r}\, f_X\!\left(\frac{z}{1+r}\right), \quad z \in \mathbb{R}. $$
The following theorem derives the relationship between the Tsallis entropies of the random variables X and X ( r ) = ( 1 + r ) X .
Theorem 11. 
Let X be a non-negative random variable which models the loss corresponding to an insurance policy. Let α R { 1 } . The Tsallis entropy of the random variable X ( r ) , which models the loss after one year under inflation rate r, r > 0 , is given by
$$ H_\alpha^T(X(r)) = (1+r)^{1-\alpha}\, H_\alpha^T(X) - \frac{(1+r)^{1-\alpha}-1}{\alpha-1}. $$
Proof. 
Using the definition of the Tsallis entropy, we have:
$$ H_\alpha^T(X(r)) = -E_{f_{X(r)}}\!\left[\frac{f_{X(r)}^{\alpha-1}(z)-1}{\alpha-1}\right] = \frac{1}{\alpha-1}\left(1 - \int_0^\infty f_{X(r)}^\alpha(z)\,dz\right). $$
Using the change of variable $u = \frac{z}{1+r}$, we obtain $\int_0^\infty f_{X(r)}^\alpha(z)\,dz = (1+r)^{1-\alpha}\int_0^\infty f_X^\alpha(u)\,du$, hence
$$ H_\alpha^T(X(r)) = (1+r)^{1-\alpha}\, H_\alpha^T(X) - \frac{(1+r)^{1-\alpha}-1}{\alpha-1}. $$
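For the exponential distribution, where H_α^T has a closed form, Theorem 11 can be verified directly. The following sketch (ours, with illustrative values) uses the fact that (1 + r)X is exponential with rate λ/(1 + r) when X ∼ Exp(λ):

```python
def tsallis_exponential(lam, alpha):
    # Closed form for X ~ Exp(lam): H = (1 - lam**(alpha-1)/alpha) / (alpha - 1).
    return (1.0 - lam ** (alpha - 1.0) / alpha) / (alpha - 1.0)

lam, alpha, r = 0.1, 0.5, 0.04  # illustrative loss rate, Tsallis parameter, inflation rate

# Left-hand side: entropy of X(r) = (1 + r) X ~ Exp(lam / (1 + r)).
lhs = tsallis_exponential(lam / (1.0 + r), alpha)
# Right-hand side: the formula of Theorem 11.
rhs = (1.0 + r) ** (1.0 - alpha) * tsallis_exponential(lam, alpha) \
      - ((1.0 + r) ** (1.0 - alpha) - 1.0) / (alpha - 1.0)

print(lhs, rhs)  # equal, as Theorem 11 states
```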
Theorem 12. 
Let X be a non-negative random variable which models the loss corresponding to an insurance policy, and let α ∈ ℝ ∖ {1}. For any r > 0 , the Tsallis entropy of the random variable X ( r ) , which models the loss after one year under inflation rate r, is larger than that of X, and it is an increasing function of r.
Proof. 
Let r > 0 . We denote by
$$ \psi(r) = H_\alpha^T(X(r)) - H_\alpha^T(X) = (1+r)^{1-\alpha} H_\alpha^T(X) - H_\alpha^T(X) - \frac{(1+r)^{1-\alpha}-1}{\alpha-1} = \left((1+r)^{1-\alpha}-1\right)\left(H_\alpha^T(X) - \frac{1}{\alpha-1}\right). $$
We have:
$$ \frac{d}{dr}\,\psi(r) = (1+r)^{-\alpha}\left(1 - (\alpha-1)\,H_\alpha^T(X)\right) = (1+r)^{-\alpha}\, E_{f_X}\!\left[f_X^{\alpha-1}(x)\right] > 0, $$
so ψ is increasing on ( 0 , ∞ ) and, since ψ ( 0 ) = 0 , we have ψ ( r ) > 0 for every r > 0 . Therefore
$$ H_\alpha^T(X(r)) > H_\alpha^T(X), $$
and H α T X ( r ) is an increasing function of r.
The results obtained show that inflation increases the entropy, i.e., the degree of uncertainty of losses is larger than in the case without inflation; moreover, this uncertainty increases with the inflation rate.

4. Tsallis Entropy Approach for Survival Models

In this section, we derive residual and past entropy expressions for some survival models, including the proportional hazard and the proportional reversed hazard models. Relevant results in this field have been obtained by Sachlas and Papaioannou [18], Gupta and Gupta [19], Di Crescenzo [41] and Sankaran and Gleeja [42].
Let X and Y be random variables with cumulative distribution functions F and G, probability density functions f and g and survival functions F ¯ and G ¯ , respectively. We denote by λ X and λ Y the hazard rate functions of the random variables X and Y, respectively.

4.1. The Proportional Hazard Rate Model

Definition 3. 
The random variables X and Y satisfy the proportional hazard rate model (see Cox [43]) if there exists θ > 0 such that
$$ S_Y(x) = S_X^\theta(x) \quad \text{for every } x > 0. $$
Equivalently, the random variables X and Y satisfy the proportional hazard rate model if the hazard rate function of Y is proportional to that of X, i.e., λ Y ( x ) = θ λ X ( x ) for every x > 0 ; see Cox [43].
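A short sketch (ours, with illustrative values) of this equivalence for an exponential baseline: if X ∼ Exp(λ), then S_Y = S_X^θ makes Y exponential with rate θλ, so the hazard of Y is θ times the hazard of X:

```python
import numpy as np

lam, theta = 0.5, 2.3              # illustrative baseline rate and PH parameter
x = np.linspace(0.1, 5.0, 50)

S_X = np.exp(-lam * x)             # survival function of X ~ Exp(lam)
f_X = lam * S_X                    # density of X
S_Y = S_X ** theta                 # proportional hazard model
f_Y = theta * S_X ** (theta - 1.0) * f_X   # induced density of Y

hazard_X = f_X / S_X               # constant, equal to lam
hazard_Y = f_Y / S_Y               # constant, equal to theta * lam
print(np.allclose(hazard_Y, theta * hazard_X))  # True
```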
In the next theorem, the Tsallis entropy of the left-truncated random variable Y l t ( d ) under the proportional hazard rate model is derived.
Theorem 13. 
Let X and Y be non-negative random variables. Let α R { 1 } and d > 0 . Under the proportional hazard rate model given in (28), the Tsallis entropy of the left-truncated random variable Y l t ( d ) corresponding to the per-payment risk model with a deductible d can be expressed as follows:
$$ H_\alpha^T(Y_{lt}(d)) = -\frac{\theta}{(\alpha-1)\, S_X^\theta(d)}\, E_{f_X}\!\left[ S_X^{\theta-1}(x)\left( \left(\frac{\theta\, S_X^{\theta-1}(x)\, f_X(x)}{S_X^\theta(d)}\right)^{\alpha-1} - 1 \right) I_{(d<X<\infty)} \right]. $$
Proof. 
From (28), we obtain f_Y(x) = θ S_X^{θ−1}(x) f_X(x). It follows that:
$$ H_\alpha^T(Y_{lt}(d)) = -\frac{1}{(\alpha-1)\, S_Y(d)}\, E_{f_Y}\!\left[\left(\left(\frac{f_Y(x)}{S_Y(d)}\right)^{\alpha-1} - 1\right) I_{(d<X<\infty)}\right] $$
$$ = -\frac{\theta}{(\alpha-1)\, S_X^\theta(d)}\, E_{f_X}\!\left[S_X^{\theta-1}(x)\left(\left(\frac{\theta\, S_X^{\theta-1}(x)\, f_X(x)}{S_X^\theta(d)}\right)^{\alpha-1} - 1\right) I_{(d<X<\infty)}\right]. $$
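As a numerical check of Theorem 13 (our own sketch, with illustrative values): for X ∼ Exp(λ), the PH model gives Y ∼ Exp(θλ), whose residual Tsallis entropy equals, by memorylessness and shift invariance of the Tsallis entropy, the entropy of Exp(θλ); this can be compared with the expectation expression above:

```python
import numpy as np
from scipy.integrate import quad

lam, theta, alpha, d = 0.2, 1.7, 1.4, 3.0   # illustrative values

f_X = lambda x: lam * np.exp(-lam * x)
S_X = lambda x: np.exp(-lam * x)

# Direct value: Y ~ Exp(theta * lam); by memorylessness (and shift invariance of
# the Tsallis entropy) the residual entropy equals the entropy of Exp(theta * lam).
mu = theta * lam
H_direct = (1.0 - mu ** (alpha - 1.0) / alpha) / (alpha - 1.0)

# The expectation expression of Theorem 13.
integrand = lambda x: f_X(x) * S_X(x) ** (theta - 1.0) * (
    (theta * S_X(x) ** (theta - 1.0) * f_X(x) / S_X(d) ** theta) ** (alpha - 1.0) - 1.0)
H_thm = -theta / ((alpha - 1.0) * S_X(d) ** theta) * quad(integrand, d, np.inf)[0]

print(H_direct, H_thm)  # agree
```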

4.2. The Proportional Reversed Hazard Rate Model

Definition 4. 
The random variables X and Y satisfy the proportional reversed hazard rate model [43] if there exists θ > 0 such that
$$ F_Y(x) = F_X^\theta(x) \quad \text{for every } x > 0. $$
In the next theorem, the Tsallis entropy of the right-truncated random variable Y r t ( u ) under the proportional reversed hazard rate model is derived.
Theorem 14. 
Let X and Y be non-negative random variables. Let α R { 1 } and u > 0 . Under the proportional reversed hazard rate model given in (30), the Tsallis entropy of the right-truncated random variable Y r t ( u ) corresponding to the per-payment risk model with a policy limit u can be expressed as follows:
$$ H_\alpha^T(Y_{rt}(u)) = -\frac{\theta}{(\alpha-1)\, F_X^\theta(u)}\, E_{f_X}\!\left[F_X^{\theta-1}(x)\left(\left(\frac{\theta\, F_X^{\theta-1}(x)\, f_X(x)}{F_X^\theta(u)}\right)^{\alpha-1} - 1\right) I_{(0<X<u)}\right]. $$
Proof. 
From (30), we obtain f_Y(x) = θ F_X^{θ−1}(x) f_X(x). It follows that:
$$ H_\alpha^T(Y_{rt}(u)) = -\frac{1}{(\alpha-1)\, F_Y(u)}\, E_{f_Y}\!\left[\left(\left(\frac{f_Y(x)}{F_Y(u)}\right)^{\alpha-1} - 1\right) I_{(0<X<u)}\right] $$
$$ = -\frac{\theta}{(\alpha-1)\, F_X^\theta(u)}\, E_{f_X}\!\left[F_X^{\theta-1}(x)\left(\left(\frac{\theta\, F_X^{\theta-1}(x)\, f_X(x)}{F_X^\theta(u)}\right)^{\alpha-1} - 1\right) I_{(0<X<u)}\right]. $$

5. Applications

We used a real database from [18], representing the Danish fire insurance losses recorded during the 1980–1990 period [44,45,46], where losses range from MDKK 1.0 to MDKK 263.250 (millions of Danish kroner). The average loss is MDKK 3.385, while 25% of losses are smaller than MDKK 1.321 and 75% of losses are smaller than MDKK 2.967.
The data from the database [18] were fitted with a Weibull distribution, and the maximum likelihood estimates of the shape parameter, c^ = 0.3192, and of the scale parameter, τ^ = 0.9585, were obtained.
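A sketch of such a fit (ours; we simulate Weibull data as a stand-in, since the Danish loss records themselves are not reproduced here):

```python
import numpy as np
from scipy import stats

# Simulated stand-in for the loss data, drawn from the fitted model.
data = stats.weibull_min.rvs(c=0.3192, scale=0.9585, size=5000,
                             random_state=np.random.default_rng(7))

# Maximum likelihood fit of a two-parameter Weibull (location fixed at 0).
c_hat, loc, tau_hat = stats.weibull_min.fit(data, floc=0.0)
print(c_hat, tau_hat)  # close to the generating values 0.3192 and 0.9585
```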
The results displayed in Table 1, Table 2, Table 3, Table 4 and Table 5 can be used to compare the values of the following entropy measures:
  • The Tsallis entropy H α T ( X ) corresponding to the random variable X which models the loss;
  • The Tsallis entropy of the left-truncated loss and, respectively, censored loss random variable corresponding to the per-payment risk model with a deductible d, namely H α T ( X l t ( d ) ) and, respectively, H α T ( X l c ( d ) ) ;
  • The Tsallis entropy of the right-truncated and, respectively, censored loss random variable corresponding to the per-payment risk model with a policy limit u, denoted by H α T ( X r t ( u ) ) and, respectively, H α T ( X r c ( u ) ) ;
  • The Tsallis entropy of losses of the right-truncated loss random variable corresponding to the per-loss risk model with a deductible d and a policy limit u, H α T ( X l r ( d , u ) ) .
In the case of the Weibull distribution, for the parameter values λ = 0.9585 and γ = 0.3192 , for d = 1.1, 1.2, 1.3, 1.4, 1.5, for u = 10 , u = 15 , u = 20 , u = 25 and for different values of the Tsallis entropy parameter α in a neighborhood of 1, we draw the following conclusions. The values of the Tsallis entropy for α = 1 coincide with those obtained in [18]. For values of the Tsallis parameter α lower than 1, the values of the corresponding entropy measures increase, while for values of α greater than 1, they decrease, as we can also notice from Figure 3. This behavior allows a higher degree of flexibility for modeling the truncated and censored loss random variables in actuarial models.

6. Conclusions

In this paper, an entropy-based approach for risk assessment in the framework of loss models and survival models involving truncated and censored random variables was developed.
By using the Tsallis entropy, the effect of some partial insurance schemes, such as inflation, truncation and censoring from above and truncation and censoring from below was investigated.
Analytical expressions for the per-payment and per-loss entropies of losses were derived. Moreover, closed formulas for the entropy of losses corresponding to the proportional hazard rate model and the proportional reversed hazard rate model were obtained.
The results obtained point out that the entropy depends on the deductible and on the policy limit, and that inflation increases the entropy, which means that the degree of uncertainty of losses increases compared with the case without inflation. The use of entropy measures allows risk assessment for actuarial models involving truncated and censored random variables.
The approach was illustrated on a real database of Danish fire insurance losses recorded between 1980 and 1990 [44,45,46], where losses range from MDKK 1.0 to MDKK 263.250, with an average loss of MDKK 3.385. The data were fitted with a Weibull distribution, with maximum likelihood estimates c^ = 0.3192 for the shape parameter and τ^ = 0.9585 for the scale parameter. The values of the Tsallis entropies for α = 1 coincide with those from [18]; for α below 1 the entropy values increase, while for α above 1 they decrease, as Figure 3 also shows.
The paper extends several results obtained in this field; see, for example, Sachlas and Papaioannou [18].
The results show that, for α ≠ 1, the Tsallis entropy of the right-truncated loss random variable increases with the policy limit u, whereas for α = 1 the Shannon entropy decreases with u. Since a larger policy limit means a larger risk for the insurance company, the entropy of losses should increase with u; the Tsallis entropy approach with α ≠ 1 therefore evaluates the risk of the right-truncated loss random variable more realistically.
Therefore, we can conclude that the Tsallis entropy approach for actuarial models involving truncated and censored random variables provides a new and relevant perspective, since it allows a higher degree of flexibility for the assessment of risk models.

Author Contributions

All authors contributed equally to the paper. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by a grant of the Romanian Ministry of Education and Research, CNCS—UEFISCDI, project number PN-III-P4-ID-PCE-2020-1112, within PNCDI III.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors would like to express their gratitude to the anonymous referees for their valuable suggestions and comments.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Koukoumis, C.; Karagrigoriou, A. On Entropy-type Measures and Divergences with Applications in Engineering, Management and Applied Sciences. Int. J. Math. Eng. Manag. Sci. 2021, 6, 688–707. [Google Scholar] [CrossRef]
  2. Iatan, I.; Dragan, M.; Preda, V.; Dedu, S. Using Probabilistic Models for Data Compression. Mathematics 2022, 10, 3847. [Google Scholar] [CrossRef]
  3. Li, S.; Zhuang, Y.; He, J. Stock market stability: Diffusion entropy analysis. Phys. A 2016, 450, 462–465. [Google Scholar] [CrossRef]
  4. Miśkiewicz, J. Improving quality of sample entropy estimation for continuous distribution probability functions. Phys. A 2016, 450, 473–485. [Google Scholar] [CrossRef]
  5. Toma, A.; Karagrigoriou, A.; Trentou, P. Robust Model Selection Criteria Based on Pseudodistances. Entropy 2020, 22, 304. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  6. Moretto, E.; Pasquali, S.; Trivellato, B. Option pricing under deformed Gaussian distributions. Phys. A 2016, 446, 246–263. [Google Scholar] [CrossRef]
  7. Remuzgo, L.; Trueba, C.; Sarabia, S.M. Evolution of the global inequality in greenhouse gases emissions using multidimensional generalized entropy measures. Phys. A 2015, 444, 146–157. [Google Scholar] [CrossRef] [Green Version]
  8. Sheraz, M.; Dedu, S.; Preda, V. Volatility Dynamics of Non-Linear Volatile Time Series and Analysis of Information Flow: Evidence from Cryptocurrency Data. Entropy 2022, 24, 1410. [Google Scholar] [CrossRef]
  9. Toma, A.; Leoni-Aubin, S. Robust portfolio optimization using pseudodistances. PLoS ONE 2015, 10, e0140546. [Google Scholar] [CrossRef] [Green Version]
  10. Nayak, A.S.; Rajagopal, S.A.K.; Devi, A.R.U. Bipartite separability of symmetric N-qubit noisy states using conditional quantum relative Tsallis entropy. Phys. A 2016, 443, 286–295. [Google Scholar] [CrossRef]
  11. Pavlos, G.P.; Iliopoulos, A.C.; Zastenker, G.N.; Zelenyi, L.M. Tsallis non-extensive statistics and solar wind plasma complexity. Phys. A 2015, 422, 113–135. [Google Scholar] [CrossRef]
  12. Singh, V.P.; Cui, H. Suspended sediment concentration distribution using Tsallis entropy. Phys. A 2014, 414, 31–42. [Google Scholar] [CrossRef]
  13. Balakrishnan, N.; Buono, F.; Longobardi, M. A unified formulation of entropy and its application. Phys. A 2022, 596, 127214. [Google Scholar] [CrossRef]
  14. Ebrahimi, N. How to measure uncertainty in the residual life distributions. Sankhya 1996, 58, 48–57. [Google Scholar]
  15. Ebrahimi, N.; Pellerey, F. New partial ordering of survival functions based on the notion of uncertainty. J. Appl. Probab. 1995, 32, 202–211. [Google Scholar] [CrossRef]
  16. Baxter, L.A. A note on information and censored absolutely continuous random variables. Stat. Decis. 1989, 7, 193–197. [Google Scholar] [CrossRef]
  17. Zografos, K. On some entropy and divergence type measures of variability and dependence for mixed continuous and discrete variables. J. Stat. Plan. Inference 2008, 138, 3899–3914. [Google Scholar] [CrossRef]
  18. Sachlas, A.; Papaioannou, T. Residual and past entropy in actuarial science. Methodol. Comput. Appl. Probab. 2014, 16, 79–99. [Google Scholar] [CrossRef]
  19. Gupta, R.C.; Gupta, R.D. Proportional reversed hazard rate model and its applications. J. Stat. Plan. Inference 2007, 137, 3525–3536. [Google Scholar] [CrossRef]
  20. Di Crescenzo, A.; Longobardi, M. Entropy-based measure of uncertainty in past lifetime distributions. J. Appl. Probab. 2002, 39, 430–440. [Google Scholar] [CrossRef]
  21. Messelidis, C.; Karagrigoriou, A. Contingency Table Analysis and Inference via Double Index Measures. Entropy 2022, 24, 477. [Google Scholar] [CrossRef] [PubMed]
  22. Anastassiou, G.; Iatan, I.F. Modern Algorithms of Simulation for Getting Some Random Numbers. J. Comput. Anal. Appl. 2013, 15, 1211–1222. [Google Scholar]
  23. Pardo, L. Statistical Inference Based on Divergence Measures; Chapman & Hall/CRC: Boca Raton, FL, USA, 2006. [Google Scholar]
  24. Toma, A. Model selection criteria using divergences. Entropy 2014, 16, 2686–2698. [Google Scholar] [CrossRef] [Green Version]
  25. Belzunce, F.; Navarro, J.; Ruiz, J.; del Aguila, Y. Some results on residual entropy function. Metrika 2004, 59, 147–161. [Google Scholar] [CrossRef]
  26. Vonta, F.; Karagrigoriou, A. Generalized measures of divergence in survival analysis and reliability. J. Appl. Probab. 2010, 47, 216–234. [Google Scholar] [CrossRef] [Green Version]
  27. Tsallis, C. Possible generalization of Boltzmann–Gibbs statistics. J. Stat. Phys. 1988, 52, 479–487. [Google Scholar] [CrossRef]
  28. Tsallis, C.; Mendes, R.S.; Plastino, A.R. The role of constraints within generalized nonextensive statistics. Phys. A 1998, 261, 534–554. [Google Scholar] [CrossRef]
  29. Tsallis, C.; Anteneodo, A.; Borland, L.; Osorio, R. Nonextensive statistical mechanics and economics. Phys. A 2003, 324, 89–100. [Google Scholar] [CrossRef] [Green Version]
  30. Tsallis, C. Introduction to Nonextensive Statistical Mechanics; Springer Science Business Media, LLC: Berlin/Heidelberg, Germany, 2009. [Google Scholar]
  31. Furuichi, S. Information theoretical properties of Tsallis entropies. J. Math. Phys. 2006, 47, 023302. [Google Scholar] [CrossRef] [Green Version]
  32. Furuichi, S. On uniqueness theorems for Tsallis entropy and Tsallis relative entropy. IEEE Trans. Inf. Theory 2005, 51, 3638–3645. [Google Scholar] [CrossRef] [Green Version]
  33. Trivellato, B. Deformed exponentials and applications to finance. Entropy 2013, 15, 3471–3489. [Google Scholar] [CrossRef] [Green Version]
  34. Trivellato, B. The minimal k-entropy martingale measure. Int. J. Theor. Appl. Financ. 2012, 15, 1250038. [Google Scholar] [CrossRef]
  35. Preda, V.; Dedu, S.; Sheraz, M. New measure selection for Hunt-Devolder semi-Markov regime switching interest rate models. Phys. A 2014, 407, 350–359. [Google Scholar] [CrossRef]
  36. Preda, V.; Dedu, S.; Gheorghe, C. New classes of Lorenz curves by maximizing Tsallis entropy under mean and Gini equality and inequality constraints. Phys. A 2015, 436, 925–932. [Google Scholar] [CrossRef]
  37. Miranskyy, A.V.; Davison, M.; Reesor, M.; Murtaza, S.S. Using entropy measures for comparison of software traces. Inform. Sci. 2012, 203, 59–72. [Google Scholar] [CrossRef] [Green Version]
  38. Preda, V.; Dedu, S.; Sheraz, M. Second order entropy approach for risk models involving truncation and censoring. Proc. Rom.-Acad. Ser. Math. Phys. Tech. Sci. Inf. Sci. 2016, 17, 195–202. [Google Scholar]
  39. Shannon, C.E. A mathematical theory of communication. Bell Syst. Tech. J. 1948, 27, 379–423. [Google Scholar]
  40. Klugman, S.A.; Panjer, H.H.; Willmot, G.E. Loss Models: From Data to Decisions; John Wiley and Sons: New York, NY, USA, 2004. [Google Scholar]
  41. Di Crescenzo, A. Some results on the proportional reversed hazards model. Stat. Probab. Lett. 2000, 50, 313–321. [Google Scholar] [CrossRef]
  42. Sankaran, P.G.; Gleeja, C.L. Proportional reversed hazard and frailty models. Metrika 2008, 68, 333–342. [Google Scholar] [CrossRef]
  43. Cox, D.R. Regression models and life-tables. J. R. Stat. Soc. 1972, 34, 187–220. [Google Scholar] [CrossRef]
  44. McNeil, A.J. Estimating the tails of loss severity distributions using extreme value theory. ASTIN Bull. 1997, 27, 117–137. [Google Scholar] [CrossRef] [Green Version]
  45. Pigeon, M.; Denuit, M. Composite Lognormal-Pareto model with random threshold. Scand. Actuar. J. 2011, 3, 177–192. [Google Scholar] [CrossRef]
  46. Resnick, S.I. Discussion of the Danish data on large fire insurance losses. ASTIN Bull. 1997, 27, 139–151. [Google Scholar] [CrossRef]
Figure 1. The graph for λ = 100 and different values of the Tsallis entropy parameter α .
Entropy 24 01654 g001
Figure 2. The Tsallis entropy H α T ( X l r ( d , u ) ) of losses for the right-truncated loss random variable corresponding to the per-loss risk model with a deductible d and a policy limit u for the exponential distribution, with λ = 0.1 and different values of the Tsallis parameter α .
Entropy 24 01654 g002
Figure 3. The Tsallis entropy H α T ( X l r ( d , u ) ) of losses of the right-truncated loss random variable corresponding to the per-loss risk model with a deductible d and a policy limit u for the χ² distribution, with γ = 30 and different values of the Tsallis parameter α .
Entropy 24 01654 g003
Figure 4. The Tsallis entropy of losses H α T ( X l r ( d , u ) ) of the right-truncated loss random variable corresponding to the per-loss risk model with a deductible d and a policy limit u for the Weibull distribution, with γ = 0.3 , λ = 1.3 , a = 0 and different values of the Tsallis parameter α .
Entropy 24 01654 g004
Figure 5. The Tsallis entropy H α T ( X l r ( d , u ) ) of losses of the right-truncated loss random variable corresponding to the per-loss risk model with a deductible d and a policy limit u for the Gamma distribution, with γ = 4.5 , λ = 0.1 , a = 0.01 and different values of the Tsallis parameter α .
Entropy 24 01654 g005
Table 1. Tsallis entropy values for the Weibull distribution for: λ = 0.9585 , γ = 0.3192 , d = 1.1 .
| α | u | H_α^T(X) | H_α^T(X_lt(d)) | H_α^T(X_lc(d)) | H_α^T(X_rt(u)) | H_α^T(X_rc(u)) | H_α^T(X_lr(d,u)) |
|-----|----|---------|---------|--------|---------|---------|---------|
| 0.5 | 10 | 5.434 | 5.5005 | 5.3837 | 3.7994 | 4.1067 | 4.0564 |
| 0.5 | 15 | | | | 4.5725 | 4.7622 | 4.712 |
| 0.5 | 20 | | | | 4.9845 | 5.0914 | 5.0411 |
| 0.5 | 25 | | | | 5.2006 | 5.2582 | 5.2079 |
| 0.9 | 10 | 2.5446 | 2.5778 | 2.5156 | 2.2220 | 2.3504 | 2.3214 |
| 0.9 | 15 | | | | 2.4306 | 2.488 | 2.4591 |
| 0.9 | 20 | | | | 2.5054 | 2.5279 | 2.4989 |
| 0.9 | 25 | | | | 2.5156 | 2.5314 | 2.5396 |
| 1 | 10 | 2.20865 | 2.2369 | 1.091 | 2.316 | 2.0827 | 2.0792 |
| 1 | 15 | | | | 2.2534 | 2.1767 | 2.1732 |
| 1 | 20 | | | | 2.2248 | 2.2004 | 2.1969 |
| 1 | 25 | | | | 2.2140 | 2.2064 | 2.203 |
| 1.5 | 10 | 1.26474 | 1.278 | 1.2521 | 1.2094 | 1.24799 | 1.2353 |
| 1.5 | 15 | | | | 1.2503 | 1.2626 | 1.2499 |
| 1.5 | 20 | | | | 1.2609 | 1.2644 | 1.2518 |
| 1.5 | 25 | | | | 1.2637 | 1.2647 | 1.2525 |
| 2 | 10 | 0.8459 | 0.8524 | 0.8396 | 0.8279 | 0.8433 | 0.837 |
| 2 | 15 | | | | 0.8415 | 0.8457 | 0.83944 |
| 2 | 20 | | | | 0.8448 | 0.8459 | 0.8395 |
| 2 | 25 | | | | 0.8456 | 0.8459 | 0.8396 |
Table 2. Tsallis entropy values for the Weibull distribution with λ = 0.9585, γ = 0.3192, d = 1.2.

| α | u | H_α^T(X) | H_α^T(X_lt(d)) | H_α^T(X_lc(d)) | H_α^T(X_rt(u)) | H_α^T(X_rc(u)) | H_α^T(X_lr(d,u)) |
|-----|----|----------|----------------|----------------|----------------|----------------|------------------|
| 0.5 | 10 | 5.434 | 5.5043 | 5.33 | 3.7994 | 4.1067 | 4.0027 |
| 0.5 | 15 | | | | 4.5725 | 4.7622 | 4.6583 |
| 0.5 | 20 | | | | 4.98457 | 5.091 | 4.9874 |
| 0.5 | 25 | | | | 5.2006 | 5.2582 | 5.1542 |
| 0.9 | 10 | 2.5446 | 2.5796 | 2.4829 | 2.222 | 2.35 | 2.2887 |
| 0.9 | 15 | | | | 2.43 | 2.488 | 2.4264 |
| 0.9 | 20 | | | | 2.5054 | 2.5279 | 2.4662 |
| 0.9 | 25 | | | | 2.5314 | 2.5396 | 2.4779 |
| 1 | 10 | 2.20865 | 2.2384 | 1.0621 | 2.31601 | 2.0827 | 2.09548 |
| 1 | 15 | | | | 2.2534 | 2.1767 | 2.1894 |
| 1 | 20 | | | | 2.2248 | 2.2004 | 2.2131 |
| 1 | 25 | | | | 2.214 | 2.2064 | 2.2192 |
| 1.5 | 10 | 1.26474 | 1.2786 | 1.2364 | 1.2094 | 1.2479 | 1.2197 |
| 1.5 | 15 | | | | 1.2503 | 1.2626 | 1.2343 |
| 1.5 | 20 | | | | 1.2609 | 1.2644 | 1.2362 |
| 1.5 | 25 | | | | 1.26373 | 1.2647 | 1.2364 |
| 2 | 10 | 0.8459 | 0.85275 | 0.8311 | 0.82792 | 0.8433 | 0.8285 |
| 2 | 15 | | | | 0.8415 | 0.8457 | 0.8309 |
| 2 | 20 | | | | 0.8448 | 0.8459 | 0.8395 |
| 2 | 25 | | | | 0.8448 | 0.8459 | 0.8311 |
Table 3. Tsallis entropy values for the Weibull distribution with λ = 0.9585, γ = 0.3192, d = 1.3.

| α | u | H_α^T(X) | H_α^T(X_lt(d)) | H_α^T(X_lc(d)) | H_α^T(X_rt(u)) | H_α^T(X_rc(u)) | H_α^T(X_lr(d,u)) |
|-----|----|----------|----------------|----------------|----------------|----------------|------------------|
| 0.5 | 10 | 5.434 | 5.508 | 5.2754 | 3.7994 | 4.1067 | 3.9481 |
| 0.5 | 15 | | | | 4.5725 | 4.7622 | 4.6036 |
| 0.5 | 20 | | | | 4.9845 | 5.0914 | 4.9328 |
| 0.5 | 25 | | | | 5.2006 | 5.2582 | 5.0996 |
| 0.9 | 10 | 2.5446 | 2.5812 | 2.4491 | 2.222 | 2.3504 | 2.2549 |
| 0.9 | 15 | | | | 2.4306 | 2.488 | 2.3926 |
| 0.9 | 20 | | | | 2.5054 | 2.5279 | 2.4324 |
| 0.9 | 25 | | | | 2.5314 | 2.5396 | 2.4441 |
| 1 | 10 | 2.2086 | 2.2398 | 1.03212 | 2.316 | 2.08275 | 2.11294 |
| 1 | 15 | | | | 2.2534 | 2.1767 | 2.20691 |
| 1 | 20 | | | | 2.2248 | 2.2004 | 2.23 |
| 1 | 25 | | | | 2.214 | 2.2064 | 2.2366 |
| 1.5 | 10 | 1.2647 | 1.2792 | 1.2199 | 1.2094 | 1.2479 | 1.2031 |
| 1.5 | 15 | | | | 1.2503 | 1.2626 | 1.2178 |
| 1.5 | 20 | | | | 1.2609 | 1.2644 | 1.2196 |
| 1.5 | 25 | | | | 1.2637 | 1.2647 | 1.2199 |
| 2 | 10 | 0.8459 | 0.853 | 0.8219 | 0.8279 | 0.8433 | 0.8193 |
| 2 | 15 | | | | 0.8415 | 0.8457 | 0.8218 |
| 2 | 20 | | | | 0.8448 | 0.8459 | 0.8219 |
| 2 | 25 | | | | 0.8456 | 0.8459 | 0.8219 |
Table 4. Tsallis entropy values for the Weibull distribution with λ = 0.9585, γ = 0.3192, d = 1.4.

| α | u | H_α^T(X) | H_α^T(X_lt(d)) | H_α^T(X_lc(d)) | H_α^T(X_rt(u)) | H_α^T(X_rc(u)) | H_α^T(X_lr(d,u)) |
|-----|----|----------|----------------|----------------|----------------|----------------|------------------|
| 0.5 | 10 | 5.43403 | 5.51157 | 5.2201 | 3.79949502 | 4.106747228 | 3.892825 |
| 0.5 | 15 | | | | 4.57256 | 4.762289 | 4.54836725 |
| 0.5 | 20 | | | | 4.98450397 | 5.0914 | 4.87748487 |
| 0.5 | 25 | | | | 5.200639 | 5.25823 | 5.04431057 |
| 0.9 | 10 | 2.54465658 | 2.5828658 | 2.414517 | 2.22206 | 2.3504043 | 2.220265 |
| 0.9 | 15 | | | | 2.430659 | 2.488092 | 2.35795 |
| 0.9 | 20 | | | | 2.50543 | 2.5279 | 2.39778476 |
| 0.9 | 25 | | | | 2.531 | 2.539648 | 2.4095 |
| 1 | 10 | 2.20865 | 2.24113 | 1.00127 | 2.31601 | 2.08275 | 2.13136 |
| 1 | 15 | | | | 2.25345 | 2.17672 | 2.22534 |
| 1 | 20 | | | | 2.22484 | 2.20041 | 2.24902 |
| 1 | 25 | | | | 2.21409 | 2.20649 | 2.25511 |
| 1.5 | 10 | 1.2647449 | 1.2798459 | 1.20262623 | 1.209498976566 | 1.24799613 | 1.1858774 |
| 1.5 | 15 | | | | 1.2626 | 1.2626 | 1.2004 |
| 1.5 | 20 | | | | 1.2609 | 1.2644668 | 1.2023481 |
| 1.5 | 25 | | | | 1.2637 | 1.2647078 | 1.2025891 |
| 2 | 10 | 0.8459 | 0.853296 | 0.812209 | 0.82795 | 0.84332 | 0.8096 |
| 2 | 15 | | | | 0.84159 | 0.8457620 | 0.8120436 |
| 2 | 20 | | | | 0.84482638 | 0.84591692 | 0.81219852 |
| 2 | 25 | | | | 0.84564 | 0.84592715 | 0.8122087 |
Table 5. Tsallis entropy values for the Weibull distribution with λ = 0.9585, γ = 0.3192, d = 1.5.

| α | u | H_α^T(X) | H_α^T(X_lt(d)) | H_α^T(X_lc(d)) | H_α^T(X_rt(u)) | H_α^T(X_rc(u)) | H_α^T(X_lr(d,u)) |
|-----|----|----------|----------------|----------------|----------------|----------------|------------------|
| 0.5 | 10 | 5.43403331 | 5.51497466 | 5.16428 | 3.799495 | 4.1067472 | 3.8369939886 |
| 0.5 | 15 | | | | 4.57256 | 4.762289 | 4.4925361 |
| 0.5 | 20 | | | | 4.98450397 | 5.0914 | 4.82165 |
| 0.5 | 25 | | | | 5.2006 | 5.25823 | 4.9884794 |
| 0.9 | 10 | 2.54465658 | 2.5843899676 | 2.3792109 | 2.22206 | 2.3504 | 2.1849586855 |
| 0.9 | 15 | | | | 2.430659 | 2.488092 | 2.322646 |
| 0.9 | 20 | | | | 2.50543 | 2.52792 | 2.362478 |
| 0.9 | 25 | | | | 2.531 | 2.5396 | 2.3742 |
| 1 | 10 | 2.20865 | 2.24240 | 0.96975 | 2.31601 | 2.08275 | 2.15055 |
| 1 | 15 | | | | 2.25345 | 2.17672 | 2.24452 |
| 1 | 20 | | | | 2.22484 | 2.20041 | 2.26821 |
| 1 | 25 | | | | 2.21409 | 2.20649 | 2.27429 |
| 1.5 | 10 | 1.2647449 | 1.2803868 | 1.184659 | 1.209 | 1.24799613 | 1.16791 |
| 1.5 | 15 | | | | 1.25035 | 1.2626 | 1.18253 |
| 1.5 | 20 | | | | 1.2609 | 1.2644668 | 1.18438 |
| 1.5 | 25 | | | | 1.2637 | 1.2647 | 1.18462 |
| 2 | 10 | 0.8459 | 0.85354 | 0.801878 | 0.82795 | 0.8433243 | 0.79927 |
| 2 | 15 | | | | 0.8415947 | 0.845762 | 0.8017 |
| 2 | 20 | | | | 0.844826 | 0.8459169 | 0.801867 |
| 2 | 25 | | | | 0.84564 | 0.845927 | 0.801877 |
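One trend visible across Tables 1–5 is that, for α = 0.5, the combined-model entropy H_α^T(X_lr(d,u)) grows as the policy limit u widens at a fixed deductible d, since a larger limit enlarges the support of the doubly truncated loss. A small numerical check of this trend, assuming the standard Tsallis functional for the truncated density and an illustrative SciPy Weibull parametrization (shape 0.3192, scale 1) that need not match the paper's (λ, γ):

```python
from scipy.integrate import quad
from scipy.stats import weibull_min

def truncated_tsallis(dist, alpha, d, u):
    """Tsallis entropy (alpha != 1) of `dist` truncated to (d, u)."""
    mass = dist.cdf(u) - dist.cdf(d)          # mass kept after truncation
    val, _ = quad(lambda x: (dist.pdf(x) / mass) ** alpha, d, u)
    return (1.0 - val) / (alpha - 1.0)

# Entropy of the doubly truncated Weibull loss as the policy limit u grows,
# with the deductible fixed at d = 1.1 (parameters illustrative only).
dist = weibull_min(0.3192)
entropies = [truncated_tsallis(dist, 0.5, 1.1, u) for u in (10, 15, 20, 25)]
```

Under these assumptions the list is strictly increasing in u, mirroring the qualitative behavior of the H_α^T(X_lr(d,u)) column for α = 0.5.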
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.


Preda, V.; Dedu, S.; Iatan, I.; Cernat, I.D.; Sheraz, M. Tsallis Entropy for Loss Models and Survival Models Involving Truncated and Censored Random Variables. Entropy 2022, 24, 1654. https://doi.org/10.3390/e24111654

