Risks, Volume 5, Issue 4 (December 2017) – 14 articles

Cover Story: The double Pareto lognormal distribution is flexible enough to model the heavy tails and skewness found in insurance claim data, as illustrated by the cover image. Embedding this distribution in a generalized linear model provides a powerful tool for modelling insurance claims arising from a variety of risk factors. The EM algorithm developed in our paper allows the parameters of the model to be estimated, often more easily than for the more commonly used generalized beta distribution of the second kind.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
Article
A General Framework for Incorporating Stochastic Recovery in Structural Models of Credit Risk
by Albert Cohen and Nick Costanzino
Risks 2017, 5(4), 65; https://doi.org/10.3390/risks5040065 - 20 Dec 2017
Cited by 3 | Viewed by 3317
Abstract
In this work, we introduce a general framework for incorporating stochastic recovery into structural models. The framework extends the approach to recovery modeling developed in Cohen and Costanzino (2015, 2017) and provides a systematic way to include different recovery processes in a structural credit model. The key observation is that the partial-information gap between the firm's manager and the market is captured via a distortion of the probability of default. This distortion is computed by what is essentially a Girsanov transformation and reflects the untangling of the recovery process from the default probability. Our framework can be thought of as an extension of Ishizaka and Takaoka (2003) and, in the spirit of their work, we provide several examples, including bounded recovery and a jump-to-zero model. A convenient feature of our framework is that, given prices from any one-factor structural model, it provides a systematic way to compute the corresponding prices with stochastic recovery. The framework also offers a way to analyze the correlation between Probability of Default (PD) and Loss Given Default (LGD), as well as the term structure of recovery rates.
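The paper builds its stochastic-recovery framework on top of one-factor structural models. As a minimal baseline sketch of such a model (this is the classical Merton setup, not the authors' framework; all function names and parameter values here are illustrative assumptions), the default probability and the PD-times-LGD decomposition it generalizes can be written as:

```python
from math import log, sqrt, erf

def norm_cdf(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def merton_default_prob(V0, D, mu, sigma, T):
    """Default probability in the classical one-factor Merton model:
    default occurs if the firm value V_T falls below the debt face value D."""
    d2 = (log(V0 / D) + (mu - 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    return norm_cdf(-d2)

def expected_loss(V0, D, mu, sigma, T, recovery_rate):
    # With a *constant* recovery rate, expected loss factorises as PD * LGD;
    # the paper's contribution is to replace the constant by a stochastic
    # recovery process while keeping the structural-model prices usable.
    pd = merton_default_prob(V0, D, mu, sigma, T)
    return pd * (1.0 - recovery_rate)
```

As expected, the default probability increases with leverage (higher D for a given V0), which is the kind of comparative static the stochastic-recovery extension preserves.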
Article
Stable Weak Approximation at Work in Index-Linked Catastrophe Bond Pricing
by Krzysztof Burnecki and Mario Nicoló Giuricich
Risks 2017, 5(4), 64; https://doi.org/10.3390/risks5040064 - 16 Dec 2017
Cited by 10 | Viewed by 3523
Abstract
We consider the problem of approximating tail probabilities in the general compound renewal process framework, where severity data are assumed to follow a heavy-tailed law in which only the first moment is assumed to exist. Using the weak convergence of compound renewal processes to α-stable Lévy motion, we derive such weak approximations. Their applicability is then highlighted in the context of an existing, classical, index-linked catastrophe bond pricing model, and in doing so we specialize these approximations to the case of a compound time-inhomogeneous Poisson process. We emphasize a unique feature of our approximation: it only demands finiteness of the first moment of the aggregate loss process. Finally, a numerical illustration is presented in which the behavior of our approximations is compared with both Monte Carlo simulations and first-order single risk loss process approximations, and compares favorably.
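The Monte Carlo benchmark the authors compare against can be sketched in a few lines. The following is a crude illustration (not the paper's stable weak approximation): it estimates a tail probability of a compound Poisson aggregate loss with Pareto severities whose index lies in (1, 2), so the mean is finite but the variance is not, matching the paper's heavy-tail assumption. All parameter values are illustrative.

```python
import random

def pareto_heavy(alpha, xm=1.0, rng=random):
    # Inverse-transform sample from a Pareto(alpha) law with scale xm;
    # for 1 < alpha < 2 the mean is finite but the variance is infinite.
    u = rng.random()
    return xm / u ** (1.0 / alpha)

def tail_prob_compound_poisson(level, lam, alpha, n_paths=4000, seed=1):
    """Crude Monte Carlo estimate of P(S > level) over one period, where
    S is the sum of N Pareto(alpha) losses and N ~ Poisson(lam)."""
    rng = random.Random(seed)
    exceed = 0
    for _ in range(n_paths):
        # Sample N by counting exponential inter-arrival times up to time 1.
        t, n = 0.0, 0
        while True:
            t += rng.expovariate(lam)
            if t > 1.0:
                break
            n += 1
        s = sum(pareto_heavy(alpha, rng=rng) for _ in range(n))
        if s > level:
            exceed += 1
    return exceed / n_paths
```

The weak approximation in the paper replaces this sampling by a limit distribution, which is far cheaper for the deep-tail levels relevant to catastrophe bond triggers.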
Editorial
Special Issue “Actuarial and Financial Risks in Life Insurance, Pensions and Household Finance”
by Luca Regis
Risks 2017, 5(4), 63; https://doi.org/10.3390/risks5040063 - 5 Dec 2017
Cited by 1 | Viewed by 2799
Abstract
The aim of the Special Issue is to address some of the main challenges individuals and companies face in managing financial and actuarial risks when dealing with their investment/retirement or business-related decisions [...]
Article
An Analysis and Implementation of the Hidden Markov Model to Technology Stock Prediction
by Nguyet Nguyen
Risks 2017, 5(4), 62; https://doi.org/10.3390/risks5040062 - 24 Nov 2017
Cited by 25 | Viewed by 10014
Abstract
Future stock prices depend on many internal and external factors that are not easy to evaluate. In this paper, we use the Hidden Markov Model (HMM) to predict the daily stock prices of three actively traded stocks: Apple, Google, and Facebook, based on their historical data. We first use the Akaike information criterion (AIC) and Bayesian information criterion (BIC) to choose the number of states for the HMM. We then use the models to predict the closing prices of these three stocks using both single-observation and multiple-observation data. Finally, we use the predictions as signals for trading these stocks. The criterion tests showed that an HMM with two states worked best among two, three, and four states for the three stocks. Our results also demonstrate that the HMM outperformed the naïve method in forecasting stock prices, and that active traders using the HMM obtained a higher return than those using the naïve forecast for Facebook and Google stocks. The stock price prediction method has a significant impact on stock trading and derivative hedging.
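The AIC/BIC model-selection step described above is mechanical once each candidate HMM has been fitted. A minimal sketch (the parameter count assumes a Gaussian-emission HMM; the log-likelihood values in any real use would come from the fitting step, which is not reproduced here):

```python
from math import log

def aic(log_likelihood, n_params, n_obs=None):
    # Akaike information criterion; n_obs is unused but kept for a
    # uniform signature with bic.
    return -2.0 * log_likelihood + 2.0 * n_params

def bic(log_likelihood, n_params, n_obs):
    # Bayesian information criterion penalises parameters by log(n_obs).
    return -2.0 * log_likelihood + n_params * log(n_obs)

def hmm_n_params(n_states, n_features=1):
    # Free parameters of a Gaussian-emission HMM: initial distribution,
    # transition matrix rows, and a mean/variance per state and feature.
    return (n_states - 1) + n_states * (n_states - 1) + 2 * n_states * n_features

def pick_n_states(loglik_by_states, n_obs, criterion=bic):
    # loglik_by_states: {n_states: fitted log-likelihood}; pick the
    # state count that minimises the chosen criterion.
    return min(loglik_by_states,
               key=lambda k: criterion(loglik_by_states[k], hmm_n_params(k), n_obs))
```

Because BIC's penalty grows with the sample size, it tends to favour the smaller state counts, consistent with the paper's finding that two states worked best.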
Article
Bounded Brownian Motion
by Peter Carr
Risks 2017, 5(4), 61; https://doi.org/10.3390/risks5040061 - 17 Nov 2017
Cited by 9 | Viewed by 4567
Abstract
Diffusions are widely used in finance due to their tractability. Driftless diffusions are needed to describe ratios of asset prices under a martingale measure. We provide a simple example of a tractable driftless diffusion which also has a bounded state space.
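Carr's construction is specific to the paper, but the object it delivers can be illustrated generically. The sketch below simulates a different, well-known driftless diffusion with bounded state space, dX = σ√(X(1−X)) dW on [0,1], whose volatility vanishes at the boundaries (a Wright-Fisher-type process; this is an assumption for illustration, not the paper's process):

```python
import random
from math import sqrt

def simulate_bounded_driftless(x0=0.5, sigma=0.5, T=1.0, n_steps=1000, seed=7):
    """Euler scheme for dX = sigma * sqrt(X (1 - X)) dW: a driftless
    diffusion whose volatility vanishes at 0 and 1, keeping the path in
    the unit interval (clipped to guard against discretisation overshoot)."""
    rng = random.Random(seed)
    dt = T / n_steps
    x = x0
    path = [x]
    for _ in range(n_steps):
        dw = rng.gauss(0.0, sqrt(dt))
        x += sigma * sqrt(max(x * (1.0 - x), 0.0)) * dw
        x = min(max(x, 0.0), 1.0)
        path.append(x)
    return path
```

A driftless process confined to a bounded interval is exactly what is needed to model, say, a ratio of asset prices that is constrained to stay between known bounds under a martingale measure.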
Article
Optimal Claiming Strategies in Bonus Malus Systems and Implied Markov Chains
by Arthur Charpentier, Arthur David and Romuald Elie
Risks 2017, 5(4), 58; https://doi.org/10.3390/risks5040058 - 8 Nov 2017
Cited by 4 | Viewed by 4211
Abstract
In this paper, we investigate the impact of drivers' accident-reporting strategies within a Bonus-Malus system. We exhibit the induced modification of the corresponding class-level transition matrix and derive the optimal reporting strategy for rational drivers. The hunger for bonuses induces optimal thresholds below which drivers do not claim their losses. Mathematical properties of the induced class-level process are studied. A convergent numerical algorithm is provided for computing such thresholds, and realistic numerical applications are discussed.
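The "hunger for bonuses" mechanism can be sketched directly: a rational driver absorbs small losses and reports only those above a threshold, which changes the class-level transitions. The snippet below is a stylised illustration using a simple -1/+2 bonus-malus rule (the class structure and the threshold value are illustrative assumptions; the paper's algorithm for computing the optimal thresholds is not reproduced):

```python
def reported_claims(losses, threshold):
    """Hunger-for-bonus rule: the driver pays losses below the
    (class-dependent) threshold out of pocket and reports only the rest."""
    return [x for x in losses if x > threshold]

def next_class(current, n_reported, n_classes=10, malus_per_claim=2):
    # Stylised -1/+2 bonus-malus transition: one class down per claim-free
    # year, `malus_per_claim` classes up per reported claim, clipped to
    # the available classes 0 .. n_classes - 1.
    if n_reported == 0:
        return max(current - 1, 0)
    return min(current + malus_per_claim * n_reported, n_classes - 1)
```

Replacing "number of accidents" by "number of reported claims" in the transition rule is precisely the modification of the class-level transition matrix that the paper analyses.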
Article
An EM Algorithm for Double-Pareto-Lognormal Generalized Linear Model Applied to Heavy-Tailed Insurance Claims
by Enrique Calderín-Ojeda, Kevin Fergusson and Xueyuan Wu
Risks 2017, 5(4), 60; https://doi.org/10.3390/risks5040060 - 7 Nov 2017
Cited by 9 | Viewed by 4958
Abstract
Generalized linear models might not be appropriate when the probability of extreme events is higher than that implied by the normal distribution. Extending the method for estimating the parameters of a double Pareto lognormal (DPLN) distribution in Reed and Jorgensen (2004), we develop an EM algorithm for the heavy-tailed double-Pareto-lognormal generalized linear model. The DPLN distribution is obtained as a mixture of a lognormal distribution with a double Pareto distribution. In the associated generalized linear model, the location parameter is equal to a linear predictor, which is used to model insurance claim amounts for various data sets. The performance is compared with those of the generalized beta (of the second kind) and lognormal distributions.
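The mixture structure of the DPLN makes it easy to sample from: log Y is a normal variate plus an asymmetric Laplace variate, the latter expressible as a difference of scaled exponentials (the representation from Reed and Jorgensen; the GLM interpretation in the comment is the paper's setup, while the function name and parameters are illustrative):

```python
import random
from math import exp

def sample_dpln(nu, tau, alpha, beta, rng=None):
    """One draw from the double Pareto-lognormal distribution via the
    representation log Y = nu + tau*Z + E1/alpha - E2/beta, with Z standard
    normal and E1, E2 independent standard exponentials. In the GLM of the
    paper, nu would be the linear predictor x'b for each observation."""
    rng = rng or random
    z = rng.gauss(0.0, 1.0)
    e1 = rng.expovariate(1.0)
    e2 = rng.expovariate(1.0)
    return exp(nu + tau * z + e1 / alpha - e2 / beta)
```

The exponential terms e1/alpha and e2/beta produce the Pareto-type upper and lower tails; the smaller alpha is, the heavier the upper tail of the simulated claims.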
Article
A Review and Some Complements on Quantile Risk Measures and Their Domain
by Sebastian Fuchs, Ruben Schlotter and Klaus D. Schmidt
Risks 2017, 5(4), 59; https://doi.org/10.3390/risks5040059 - 7 Nov 2017
Cited by 7 | Viewed by 3423
Abstract
In the present paper, we study quantile risk measures and their domain. Our starting point is that, for a probability measure Q on the open unit interval and a wide class L_Q of random variables, we define the quantile risk measure ϱ_Q as the map that integrates the quantile function of a random variable in L_Q with respect to Q. The definition of L_Q ensures that ϱ_Q cannot attain the value +∞ and cannot be extended beyond L_Q without losing this property. The notion of a quantile risk measure is a natural generalization of that of a spectral risk measure and provides another view of the distortion risk measures generated by a distribution function on the unit interval. In this general setting, we prove several results on quantile or spectral risk measures and their domain, with special consideration of the expected shortfall. We also present a particularly short proof of the subadditivity of expected shortfall.
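The special case of expected shortfall illustrates the definition concretely: ES at level α is the quantile risk measure obtained when Q is the uniform distribution on (α, 1), i.e., the average of the quantile function over that interval. An empirical sketch (the grid-based integration and quantile convention are implementation choices, not the paper's notation):

```python
def empirical_quantile(sample, u):
    # Empirical quantile function: the u-th order statistic of the sample.
    s = sorted(sample)
    n = len(s)
    idx = min(int(u * n), n - 1)
    return s[idx]

def expected_shortfall(sample, alpha, n_grid=1000):
    """ES_alpha as a quantile risk measure with Q = uniform on (alpha, 1):
    average the empirical quantile function over a midpoint grid on that
    interval."""
    us = [alpha + (1 - alpha) * (i + 0.5) / n_grid for i in range(n_grid)]
    return sum(empirical_quantile(sample, u) for u in us) / n_grid
```

For a sample of losses 1, ..., 100, ES at level 0.9 is the average of the top ten losses, 95.5, matching the familiar "average beyond VaR" reading of expected shortfall.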
Article
Non-Parametric Integral Estimation Using Data Clustering in Stochastic Dynamic Programming: An Introduction Using Lifetime Financial Modelling
by Gaurav Khemka and Adam Butt
Risks 2017, 5(4), 57; https://doi.org/10.3390/risks5040057 - 31 Oct 2017
Cited by 6 | Viewed by 3150
Abstract
This paper considers an alternative way of structuring stochastic variables in a dynamic programming framework where the model structure dictates that numerical methods of solution are necessary. Rather than estimating integrals within a Bellman equation using quadrature nodes, we use nodes drawn directly from the underlying data. An example of the application of this approach is presented using individual lifetime financial modelling. The results show that data-driven methods lead to the smallest losses in accuracy compared with quadrature and quasi-Monte Carlo approaches, using historical data as a base. These results hold for both a single stochastic variable and multiple stochastic variables, and are significant for improving the computational accuracy of lifetime financial models and other models that employ stochastic dynamic programming.
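The core substitution is simple to state: inside a Bellman step, the conditional expectation E[f(X)] is approximated by a weighted sum over nodes taken from the data (for example, cluster centroids weighted by cluster size) rather than over quadrature nodes. A minimal sketch of that estimator (the function name and uniform-weight default are illustrative; the paper's clustering procedure is not reproduced):

```python
def data_node_expectation(f, nodes, weights=None):
    """Estimate E[f(X)] as a weighted sum of f over nodes taken directly
    from the data (e.g., cluster centroids with weights proportional to
    cluster sizes); weights default to uniform over the nodes."""
    if weights is None:
        weights = [1.0 / len(nodes)] * len(nodes)
    return sum(w * f(x) for x, w in zip(nodes, weights))
```

Because the nodes come from the empirical distribution, the estimator needs no distributional assumption on X, which is the "non-parametric" aspect of the title.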
Article
Optional Defaultable Markets
by Mohamed N. Abdelghani and Alexander V. Melnikov
Risks 2017, 5(4), 56; https://doi.org/10.3390/risks5040056 - 23 Oct 2017
Cited by 1 | Viewed by 3072
Abstract
The paper deals with defaultable markets, one of the main research areas of mathematical finance. It proposes a new approach to the theory of such markets using techniques from the calculus of optional stochastic processes on unusual probability spaces, an approach that has not been presented before. This is a foundational paper containing a number of fundamental results on the modeling of defaultable markets, the pricing and hedging of defaultable claims, and the probability of default under such conditions. Moreover, several important examples are presented: a new pricing formula for a defaultable bond and a new pricing formula for a credit default swap. Furthermore, some results on the absence of arbitrage for markets on unusual probability spaces and markets with default are also provided.
Article
Optimal Form of Retention for Securitized Loans under Moral Hazard
by Georges Dionne and Sara Malekan
Risks 2017, 5(4), 55; https://doi.org/10.3390/risks5040055 - 21 Oct 2017
Cited by 1 | Viewed by 3538
Abstract
We address the moral hazard problem of securitization using a principal-agent model where the investor is the principal and the lender is the agent. Our model considers structured asset-backed securitization with a credit enhancement (tranching) procedure. We assume that the originator can affect the default probability and the conditional loss distribution. We show that the optimal form of retention must be proportional to the pool's default loss, even in the absence of systemic risk, when the originator can affect the conditional loss-given-default rate; yet the current regulations propose a constant retention rate.
(This article belongs to the Special Issue Information and Market Efficiency)
Article
Exposure as Duration and Distance in Telematics Motor Insurance Using Generalized Additive Models
by Jean-Philippe Boucher, Steven Côté and Montserrat Guillen
Risks 2017, 5(4), 54; https://doi.org/10.3390/risks5040054 - 25 Sep 2017
Cited by 45 | Viewed by 8578
Abstract
In Pay-As-You-Drive (PAYD) automobile insurance, the premium is set based on the distance traveled, while in usage-based insurance (UBI) the driving patterns of the policyholder are also considered. In those schemes, drivers who drive more pay a higher premium than those with the same characteristics who drive only occasionally, because the former are more exposed to the risk of accident. In this paper, we analyze the simultaneous effect of distance traveled and exposure time on the risk of accident using Generalized Additive Models (GAMs). We carry out an empirical application and show that the expected number of claims (1) stabilizes once a certain accumulated distance driven is reached and (2) is not proportional to the duration of the contract, in contradiction with insurance practice. Finally, we propose a rating system that takes into account both exposure time and distance traveled in the premium calculation. We believe this is the trend the automobile insurance market will follow with the advent of telematics data.
Article
Bayesian Modelling, Monte Carlo Sampling and Capital Allocation of Insurance Risks
by Gareth W. Peters, Rodrigo S. Targino and Mario V. Wüthrich
Risks 2017, 5(4), 53; https://doi.org/10.3390/risks5040053 - 22 Sep 2017
Cited by 6 | Viewed by 5813
Abstract
The main objective of this work is to provide a detailed step-by-step guide to the development and application of a new class of efficient Monte Carlo methods for solving practically important problems faced by insurers under the new solvency regulations. In particular, a novel Monte Carlo method for calculating capital allocations for a general insurance company is developed, with a focus on coherent capital allocation that is compliant with the Swiss Solvency Test. The data used are based on the balance sheet of a representative stylized company. For each line of business in that company, allocations are calculated for the one-year risk, with dependencies based on correlations given by the Swiss Solvency Test. Two different approaches to dealing with parameter uncertainty are discussed, and simulation algorithms based on (pseudo-marginal) sequential Monte Carlo algorithms are described and their efficiency analysed.
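The basic Monte Carlo allocation that more sophisticated schemes (such as the paper's sequential Monte Carlo algorithms) refine can be sketched directly: under an expected-shortfall risk measure, each line of business is charged its average loss on the scenarios where the total loss lands in the tail. This is a crude illustration of Euler-style allocation, not the paper's method; the scenario format is an assumption.

```python
def es_allocation(scenarios, alpha):
    """Monte Carlo Euler-style allocation of expected shortfall: each line
    of business is charged its average loss over the scenarios in which the
    *total* loss lies in the upper (1 - alpha) tail.
    scenarios: list of per-scenario loss vectors, one entry per line."""
    totals = [sum(s) for s in scenarios]
    n_tail = max(1, int(round((1 - alpha) * len(scenarios))))
    # Indices of the worst scenarios by total loss.
    tail_idx = sorted(range(len(scenarios)),
                      key=lambda i: totals[i], reverse=True)[:n_tail]
    n_lines = len(scenarios[0])
    return [sum(scenarios[i][j] for i in tail_idx) / n_tail
            for j in range(n_lines)]
```

By construction, the per-line allocations sum to the expected shortfall of the total loss, which is the coherence (full allocation) property required of the capital allocation.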
Article
The Impact of Risk Management in Credit Rating Agencies
by A. Seetharaman, Vikas Kumar Sahu, A. S. Saravanan, John Rudolph Raj and Indu Niranjan
Risks 2017, 5(4), 52; https://doi.org/10.3390/risks5040052 - 21 Sep 2017
Cited by 9 | Viewed by 7369
Abstract
An empirical study was conducted to determine the impact of different types of risk on the performance management of credit rating agencies (CRAs). The risks were classified as operational, market, business, financial, and credit, and all five variables were analysed to ascertain their impact on the performance of CRAs. In addition to identifying the significant variables, the study set out a structured framework for future research. The five independent variables were tested statistically using structural equation modelling (SEM). The results indicated that market risk, financial risk, and credit risk have a significant impact on the performance of CRAs, whereas operational risk and business risk, though important, do not have a significant influence. This finding has significant implications for the examination and inter-firm evaluation of CRAs.