Financial and Actuarial Mathematics (FAM), TU Wien, Austria
 
EAJ 2014

2nd European Actuarial Journal (EAJ)
Conference & Educational Workshop

TU Vienna, September 8-12, 2014

Sponsors

Vienna Insurance Group
Drei-Banken Versicherungs-AG
HDI Versicherung AG
Gen Re - General Reinsurance AG
Munich RE - Münchener Rückversicherungs-Gesellschaft
arithmetica
fintegral
Sparkassen Versicherung AG Vienna Insurance Group
Milliman
Springer Science+Business Media

Organizers

Actuarial Association of Austria - Aktuarvereinigung Österreichs (AVÖ)
Vienna University of Technology, Financial and Actuarial Mathematics Group

Abstracts



Abstracts (Talks & Posters)

... in alphabetical order


Contributed Talk, Section: Non-Life Insurance Mathematics
Thursday (Sept. 11, 2014), 16:05 - 16:30, session / room C

AFONSO Lourdes

CMA & FCT Universidade Nova de Lisboa, Portugal 

Measuring the impact of a bonus-malus system in finite and continuous time ruin probabilities, for large portfolios in motor insurance

We consider a "classical" approach for the bonus-malus system applied to a large motor insurance portfolio where premium adjustments are done according to policyholders claim count record. By adapting the model introduced by Afonso et al. (2009), we evaluate numerically the impact in finite time ruin probabilities under a continuous time risk process when we allow posterior adjustments to the annual premia, after having observed the claims record of each policyholder in each year. We consider two bonus-malus systems with real commercial scales where well known optimal premium scales are applied, such as Norberg (1976), Borgan et al. (1981), Gilde and Sundt (1989) and Andrade e Silva and Centeno (2005).

In all scenarios we use real data from the automobile third-party liability portfolio of an insurance company operating in the Portuguese market. Based on the data provided, we fitted a mixed Poisson distribution whose structure function is Inverse Gaussian for the annual number of claims in the portfolio.

According to the model, at the beginning of each year we need to evaluate each policyholder's class position in the bonus system, which depends on the relevant past annual numbers of claims, so that we can assign each premium for the year. To be in a position to evaluate the ruin probabilities, we further need the annual aggregate claims in the portfolio, so that we can compute the surplus process in each year.
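
A minimal simulation sketch of this yearly loop (Python; the class boundaries, transition rule, premium scale and claim-count parameters below are illustrative placeholders, not the commercial scale or the fitted Poisson-Inverse Gaussian parameters of the paper, and ruin is checked only at year ends, whereas the paper evaluates ruin in continuous time within each year):

    import numpy as np

    rng = np.random.default_rng(1)
    n, T = 10_000, 10                                 # policyholders, years
    scale = np.array([0.6, 0.8, 1.0, 1.3, 1.6])       # hypothetical premium scale
    cls = np.full(n, 2)                               # all start in the middle class
    theta = rng.wald(1.0, 2.0, size=n)                # Inverse Gaussian frailty per insured
    base_premium, u = 100.0, 50_000.0                 # base premium and initial surplus

    for year in range(T):
        u += base_premium * scale[cls].sum()          # premium income for the year
        claims = rng.poisson(0.1 * theta)             # mixed Poisson claim counts
        u -= rng.exponential(900.0, claims.sum()).sum()   # aggregate claim amounts
        if u < 0:
            print(f"ruin in year {year + 1}")
            break
        # hypothetical rule: one class down if claim-free, two classes up per claim
        cls = np.clip(cls - (claims == 0) + 2 * claims, 0, len(scale) - 1)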

For the insurer's bonus system used in this work, we provide figures for the "expected" distribution of policies across the classes over time, as well as in stationarity; considering the different premium scales used, we then show figures with the evolution of the portfolio premia over time. Finally, we show figures for finite-time ruin probabilities in different chosen years, at intermediate times and at a time considered close enough to give an ultimate ruin probability. It is very interesting to see how the different optimal scales behave over time, as well as their comparison with the actual commercial scale and the importance of a "proper" choice of the loading coefficient.

Joint work with Rui M.R. Cardoso, Alfredo D. Egídio dos Reis and Gracinda R. Guerreiro.

References:
[1] Afonso, L.B., Egidio dos Reis, A.D., and Waters, H.R. (2009). Calculating continuous time ruin probabilities for a large portfolio with varying premiums. ASTIN Bulletin, 39(1):117-136.
[2] Andrade e Silva, J. and Centeno, M.L. (2005). A note on bonus scales. Journal of Risk and Insurance, 72(4):601-607.
[3] Borgan, O., Hoem, J., and Norberg, R. (1981). A non-asymptotic criterion for the evaluation of automobile bonus systems. Scandinavian Actuarial Journal, 1981(3):165-178.
[4] Gilde, V. and Sundt, B. (1989). On bonus systems with credibility scales. Scandinavian Actuarial Journal, 1989 (1):13-22.
[5] Norberg, R. (1976). A credibility theory for automobile bonus systems. Scandinavian Actuarial Journal, 1976(2):92-107.


Contributed Talk, Section: Life and Pension Insurance Mathematics
Wednesday (Sept. 10, 2014), 16:05 - 16:30, session / room E

ALAI Daniel

University of Kent, UK 

A multivariate Tweedie lifetime model: censoring and truncation

We establish model calibration for a multivariate Tweedie distribution in the presence of truncated and censored observations; estimation is based on the method of moments. The multivariate Tweedie distribution we consider incorporates dependence in a pool of lives via a common stochastic component. Pools may be interpreted in various ways, from nation-wide cohorts to employer-based pension annuity portfolios. In general, the common stochastic component is representative of systematic longevity risk, which is not accounted for in standard life tables and actuarial models used for annuity pricing and reserving.

This is joint work with Zinoviy Landsman (University of Haifa) and Michael Sherris (University of New South Wales).


Invited Plenary Talk
Wednesday (Sept. 10, 2014), 9:20 - 10:10, main lecture hall (FH1)

ALBRECHER Hansjörg

Professor for Insurance Mathematics, Department of Actuarial Science, University of Lausanne, Switzerland 

Insurance risk and the cost of capital

The development of rules for the determination of premiums under solvency capital requirements is a classical topic in insurance. In recent years the cost-of-capital method for the determination of risk margins has been advocated, with a particular suggestion for the size of the cost-of-capital rate. In this talk a framework will be developed which considers the viewpoint of regulators, investors and policyholders at the same time, leading to a quantitative approach towards interpreting and justifying the size of such a rate. Some practical implications of this approach are discussed in the context of Solvency II and the Swiss Solvency Test.


Contributed Talk, Section: Risk Management and Solvency II
Wednesday (Sept. 10, 2014), 14:30 - 14:55, session / room D

ALM Jonas

Chalmers University of Technology, Göteborg, Sweden 

Signs of dependence in non-life insurance data

We study the yearly reports sent by Swedish insurers to the Financial Supervisory Authority (Finansinspektionen). These reports contain liability predictions made by actuaries at the different companies, and the time evolution of these predictions will give us an idea of how yearly losses are distributed. We are particularly interested in (1) signs of dependence between losses on different lines of business within a company, and (2) signs of dependence between companies for losses on a specific line of business. The former is of great importance for an individual insurer's aggregation of risk, while the latter is important for financial stability. Data from the reports show that there exist dependencies both within and between companies, and we give a suggestion of how to model these dependencies.


Contributed Talk, Section: Non-Life Insurance Mathematics
Wednesday (Sept. 10, 2014), 15:40 - 16:05, session / room F

ASSA Hirbod

Institute for Financial and Actuarial Mathematics, University of Liverpool, UK

On optimal reinsurance policy with distortion risk measures and premiums

In this paper, we consider the problem of optimal reinsurance design when the risk is measured by a distortion risk measure and the premium is given by a distortion risk premium. First, we show how the optimal reinsurance design problems of the ceding company, the reinsurance company and the social planner can be formulated in the same way. Second, by introducing "marginal indemnification functions", we characterize the optimal reinsurance contracts. We show that, for an optimal policy, the associated marginal indemnification function only takes the values zero and one. Finally, we see how the roles of the market preferences and premiums, on the one hand, and of the total risk, on the other, are separated.
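
A sketch of this characterization in symbols (notation ours): writing the indemnification function as the integral of its marginal,

    I(x) = \int_0^x h(t)\,dt, \qquad 0 \le h \le 1,

the result states that an optimal contract admits a marginal indemnification function h taking only the values 0 and 1, so the optimal treaty is a combination of layers of the underlying risk.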


Contributed Talk, Section: Mathematical Finance with Applications in Insurance
Wednesday (Sept. 10, 2014), 15:40 - 16:05, session / room A

AZCUE Pablo

Universidad Torcuato Di Tella, Argentina

Optimal dividend problem for a two-dimensional insurance risk process

We consider two branches of an insurance company, each of which pays half of the amount of each claim and receives premiums at a different rate. The surpluses are modeled as compound Poisson processes, and ruin occurs when the surpluses leave the positive quadrant. We optimize the combined expected cumulative discounted dividend payments of the two branches. We consider both the case in which there is a positive constant transaction cost (impulse problem) and the one without costs (continuous problem). These are two-dimensional optimization problems which involve integro-differential equations with free boundaries. We prove that the optimal value functions are the smallest viscosity solutions of the corresponding HJB equations and find the optimal strategy in some particular cases.

This is a joint work with Nora Muler (Universidad Torcuato Di Tella) and Zbigniew Palmowski (University of Wroclaw).


Contributed Talk, Section: Non-Life Insurance Mathematics
Wednesday (Sept. 10, 2014), 13:30 - 13:55, session / room F

BADOUNAS Ioannis

University of Piraeus, Greece

Robust loss reserving regression models with random coefficients

In insurance practice, it is well known that the presence of outlier events can lead to misestimation of the overall reserve in the chain-ladder method when we consider a log-linear regression model based on the assumption that the coefficients are fixed and identical from one observation to another. By relaxing the fixed-coefficients assumption and applying a regression with randomly varying coefficients, we observe a similar phenomenon, i.e. misestimation of the overall reserves. The lack of robustness of estimators in loss reserving regression with random coefficients on log-incremental payments motivated the development of this paper. Our proposal is to apply robust statistical procedures to loss reserving estimation when the regression coefficients are random. Our robust model is also extended to the case where different types of claims are affected by different unobservable risk characteristics.

Joint work with Georgios Pitselis (University of Piraeus and KU Leuven).

Keywords: robust, random regression coefficients, loss reserving


Contributed Talk, Section: Mathematical Finance with Applications in Insurance
Thursday (Sept. 11, 2014), 13:30 - 13:55, session / room B

BELLINI Fabio

Dipartimento di Statistica e Metodi Quantitativi, Università di Milano-Bicocca, Italy 

Return risk measurement: Orlicz-type measures of risk

Risk measures and premium principles based on Orlicz norms were introduced into the actuarial literature by Haezendonck and Goovaerts (1982). The so-called Haezendonck-Goovaerts risk measures are a class of coherent risk measures that generalizes the Expected Shortfall and is becoming increasingly popular (see for example Goovaerts et al., 2012, and the references therein).
In this paper we characterize Orlicz risk measures in two different ways: either by exploiting their natural correspondence with the shortfall risk measures introduced by Foellmer and Schied (2002), or via the correspondence with equivalent expected utility principles (see for example Denuit et al., 2006), of which Orlicz risk measures are a positively homogeneous version.
We show that, contrary to the common use of risk measures, which assess the stochastic nature of the value of a financial position, Orlicz measures of risk assess the stochastic nature of its returns.
These axiomatic foundations of Orlicz risk measures naturally lead to several generalizations, obtained by relaxing expected utility to variational preferences (Maccheroni et al., 2006) or to homothetic preferences (Cerreia-Vioglio et al., 2011, Laeven and Stadje, 2013).
We also consider the case of ambiguity over the Young function Φ in the definition of the Orlicz risk measure, as well as the case of a state-dependent Φ.
From a purely mathematical point of view, the obtained functionals can be seen in a unified way as suprema of Orlicz norms on a suitable rearrangement-invariant Banach space.
We study the properties of these generalized Orlicz risk measures, provide their dual representations, and analyze their optimized translation-invariant extensions, which generalize the class of Haezendonck-Goovaerts risk measures. An application to an optimal risk sharing problem is also provided.
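
For orientation, the Haezendonck-Goovaerts class referred to above can be written as follows (standard notation from the cited literature): for a Young function \Phi and a level \alpha \in (0,1), the Orlicz premium H_\alpha(X) of a nonnegative risk X is the unique solution h of E[\Phi(X/h)] = 1 - \alpha, and

    \pi_\alpha(X) = \inf_{x \in \mathbb{R}} \big\{ x + H_\alpha\big((X - x)_+\big) \big\}

is the associated optimized, translation-invariant Haezendonck-Goovaerts risk measure; for \Phi(t) = t it reduces to the Expected Shortfall at level \alpha.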

Joint work with R. Laeven (University of Amsterdam) and E. Rosazza Gianin (Università di Milano-Bicocca).

References:
[1] Cerreia-Vioglio, S., Maccheroni, F., Marinacci, M., and Montrucchio, L. (2011). Uncertainty averse preferences. Journal of Economic Theory, 146, 1275-1330.
[2] Denuit, M., J. Dhaene, M.J. Goovaerts, R. Kaas and R.J.A. Laeven (2006). Risk measurement with equivalent utility principles. In: Ruschendorf, Ludger (Ed.), Risk Measures: General Aspects and Applications (special issue), Statistics and Decisions 24, 1-26.
[3] Foellmer, H. and A. Schied (2002). Convex measures of risk and trading constraints. Finance & Stochastics 6, 429-447.
[4] Goovaerts, M.J., D. Linders, K. Van Weert and F.Tank (2012). On the interplay between distortion, mean value, and Haezendonck-Goovaerts risk measures. Insurance: Mathematics and Economics 51, 10-18.
[5] Haezendonck, J. and Goovaerts M.J. (1982). A new premium calculation principle based on Orlicz norms. Insurance: Mathematics and Economics 1, 41-53.
[6] Laeven, R.J.A. and Stadje, M.A. (2013). Entropy coherent and entropy convex measures of risk. Mathematics of Operations Research 38, 265-293.
[7] Maccheroni, F., Marinacci, M. and Rustichini, A. (2006). Ambiguity aversion, robustness, and the variational representation of preferences. Econometrica 74, 1447-1498.


Invited Plenary Talk (Mini Course: 120 Minutes)
Monday (Sept. 8, 2014), 15:30 - 16:30, and Tuesday (Sept. 9, 2014), 16:40 - 17:40, main lecture hall (FH1)

BERNARD Carole

Professor at the Department of Statistics and Actuarial Science, University of Waterloo, Canada 

A new approach to assessing model risk on dependence in high dimensions

A central problem for regulators and risk managers concerns the risk assessment of an aggregate portfolio defined as the sum of d individual dependent risks Xi. This problem is mainly a numerical issue once the joint distribution of (X1,X2,...,Xd) is fully specified. Unfortunately, while the marginal distributions of the risks Xi are often known, their interaction (dependence) is usually either unknown or only partially known, implying that any computed risk measure of the portfolio is subject to model uncertainty.

Previous academic research has focused on the maximum and minimum possible values of a given risk measure of the portfolio, in the case in which only the marginal distributions are known. This approach leads to wide bounds, as all information on the dependence is ignored.

We show how to integrate in a natural way available information on the multivariate dependence and provide easy-to-compute bounds for the risk measure at hand. We observe that incorporating the information of a well-fitted multivariate model may, or may not, lead to much tighter bounds, a feature that also depends on the risk measure used. We illustrate this point by showing that the Value-at-Risk at a very high confidence level (as used in Basel II) is typically prone to very high model risk, even if one knows the multivariate distribution almost completely.

Our results make it possible to determine which risk measures can benefit from adding dependence information (i.e., leading to narrower bounds when used to assess portfolio risk), and, hence, to identify those models for which it would be meaningful to develop accurate multivariate models.
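
The unconstrained bounds mentioned above can be approximated numerically with the rearrangement algorithm of Embrechts, Puccetti and Rüschendorf (2013). The following sketch (Python; the Pareto marginals are chosen purely for illustration) computes an approximate sharp upper bound for the VaR of the sum when only the marginals are fixed:

    import numpy as np

    def ra_upper_var(qfs, alpha=0.99, N=2000, max_iter=200):
        """Rearrangement algorithm: approximate sharp upper bound on
        VaR_alpha(X_1 + ... + X_d) given only the marginal quantile functions."""
        u = alpha + (np.arange(N) + 0.5) * (1 - alpha) / N    # upper-tail grid
        X = np.column_stack([qf(u) for qf in qfs])
        rng = np.random.default_rng(0)
        for j in range(X.shape[1]):
            rng.shuffle(X[:, j])                              # random start
        bound = -np.inf
        for _ in range(max_iter):
            for j in range(X.shape[1]):
                others = X.sum(axis=1) - X[:, j]
                # oppositely order column j w.r.t. the sum of the other columns
                X[np.argsort(others), j] = np.sort(X[:, j])[::-1]
            new = X.sum(axis=1).min()
            if np.isclose(new, bound):
                break
            bound = new
        return bound

    pareto_q = lambda u: (1 - u) ** -0.5 - 1      # three Pareto(2) risks,
    print(ra_upper_var([pareto_q] * 3))           # dependence left unspecified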

This is joint work with Steven Vanduffel (Vrije Universiteit Brussel).


Contributed Talk, Section: Non-Life Insurance Mathematics
Thursday (Sept. 11, 2014), 15:40 - 16:05, session / room C

BIARD Romain

Laboratoire de mathématiques de Besançon, Université de Franche-Comté, France 

Fractional Poisson process: long-range dependence and applications in ruin theory

We study a renewal risk model in which the surplus process of the insurance company is modeled by a compound fractional Poisson process.

We establish the long-range dependence property of this non-stationary process. Some results for the ruin probabilities are presented under various assumptions on the distribution of the claim sizes.
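
For orientation: the fractional Poisson process of order \mu \in (0,1] is a renewal process whose i.i.d. waiting times W have the Mittag-Leffler survival function

    P(W > t) = E_\mu(-t^\mu), \qquad E_\mu(z) = \sum_{k=0}^{\infty} \frac{z^k}{\Gamma(\mu k + 1)},

so the waiting times are heavy-tailed for \mu < 1 (which drives the long-range dependence), and the process reduces to the classical Poisson process for \mu = 1.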

Joint work with Bruno Saussereau (Laboratoire de mathématiques de Besançon).


Invited Plenary Talk (Mini Course: 180 Minutes)
Monday (Sept. 8, 2014), 9:00 - 10:30 and 13:40 - 15:10, main lecture hall (FH1)

BIFFIS Enrico

Professor of actuarial finance at the Imperial College Business School, London, UK 

Some old and new problems in insurance contract design

In the first part of the mini course I will consider the design of traditional life insurance contracts and variable annuities in the presence of adverse selection. I will first revisit standard approaches to modelling selective withdrawals, and then outline a model where the policyholders' mortality risk profile can be represented in terms of a frailty process shaped by the relative attractiveness of different contract benefits in different states of the world. I will present some practical examples of optimal contract design and tests for adverse selection.

In the second part of the mini course I will discuss the design of some innovative risk sharing arrangements. I will first look at longevity risk transfers, address the issue of collateralization in longevity swaps, and discuss the design of longevity linked securities that might appeal to investors more familiar with the catastrophe bond format. I will then look at Value-of-In-Force (VIF) monetization, and outline a model to compare the economic sale and contingent loan format within the Solvency II framework. The results will be illustrated with case studies based on real world portfolios of a large global insurer.


Contributed Talk, Section: Risk Management and Solvency II
Thursday (Sept. 11, 2014), 13:55 - 14:20, session / room D

BIGNOZZI Valeria

University of Firenze, Italy

How superadditive can a risk measure be?

Risk measures that are not subadditive may penalize the aggregation of risk, by inducing portfolio requirements that are larger than those of undiversified positions. This happens, for instance, for Value-at-Risk (VaR), as well as for convex shortfall risk measures. In this paper we characterize the potential for superadditivity that any risk measure may exhibit, considering both dependence uncertainty and the effect of portfolio size.

It is shown that for the vast majority of risk measures in use or of interest, this corresponds to calculating the smallest dominating coherent risk measure (SDCRM). We show that this risk measure often exists and is identified with the notion of extreme-aggregation risk measure introduced in this paper. Explicit results are provided for the class of distortion risk measures, where the SDCRM is again a distortion risk measure, and for the class of shortfall risk measures, where the SDCRM is given by an expectile.
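
For reference, the expectile e_\tau(X) appearing in the shortfall case is the asymmetric-least-squares analogue of a quantile (standard definition):

    e_\tau(X) = \arg\min_{m \in \mathbb{R}} \; E\big[\, \tau\,((X-m)_+)^2 + (1-\tau)\,((m-X)_+)^2 \,\big],

which reduces to the mean at \tau = 1/2 and is a coherent risk measure for \tau \ge 1/2.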

Joint work with Ruodu Wang and Andreas Tsanakas.


Invited Plenary Talk
Thursday (Sept. 11, 2014), 9:50 - 10:40, main lecture hall (FH1)

CAIRNS Andrew

Professor of financial mathematics at Heriot-Watt University, Edinburgh, United Kingdom 

Multi-population mortality modelling

There are many situations in life insurance, pensions and elsewhere where we need to model and forecast future rates of mortality for several populations simultaneously. Sometimes this might be at the national population level (different countries; males and females), but often we also wish to model sub-populations that have potentially different characteristics from the national population. We will discuss some different approaches to these problems and outline recent progress in developing new models.


Contributed Talk, Section: Non-Life Insurance Mathematics
Wednesday (Sept. 10, 2014), 16:05 - 16:30, session / room F

CANI Arian

Department of Actuarial Science, University of Lausanne, Switzerland 

An application of optimal reinsurance in the classical risk model

In this article we consider the surplus process of an insurance company within the Cramér-Lundberg framework, extended by the possibility of controlling its performance by means of dynamic reinsurance. Our aim is to find a dynamic reinsurance strategy that maximizes a performance measure introduced by Højgaard & Taksar.

Using analytical methods, we identify the value function as a particular solution to the associated Hamilton-Jacobi-Bellman equation. This approach leads to an implementable numerical method for determining (approximating) the value function and the optimal reinsurance strategy. Furthermore, we give some examples illustrating the applicability of this method.

Joint work with Stefan Thonhauser (University of Lausanne).


Contributed Talk, Section: Risk Management and Solvency II
Thursday (Sept. 11, 2014), 15:40 - 16:05, session / room D

CHOO Weihao

MSIG Holdings (Asia) Pte Ltd, Singapore and Macquarie University, Sydney, Australia

Percentile rank gap as a measure of dependence between two variables at different percentiles

This paper proposes and studies a new statistic, called the "percentile rank gap", to quantify dependence between two variables at different percentiles. This statistic captures dependencies exhibited, for example, in stock markets, where moderate returns are weakly dependent but extreme returns are highly dependent and often linked to a major correction. Another example is insurance, where large natural catastrophe losses from related business lines happen simultaneously but "average" losses are weakly dependent.

The percentile rank gap is calculated from the copula, and is expressed as conditional tail expectations of percentile ranks. Percentile rank gap lies between -1 and 1, with higher values indicating stronger dependence. For example, the percentile rank gap of a Gumbel copula starts below 1 then increases to 1, reflecting imperfect lower tail dependence and perfect upper tail dependence. For a Clayton copula exhibiting perfect lower tail dependence, percentile rank gap starts at 1, then decreases.

Percentile rank gap satisfies several "coherence" properties. Countermonotonicity, independence and comonotonicity yield percentile rank gaps of -1, 0 and 1, respectively. In addition, the percentile rank gap increases with correlation order; therefore positively dependent variables have a positive percentile rank gap, and vice versa. Lastly, taking a weighted average of the percentile rank gap across all percentiles yields Spearman's correlation.
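
The paper's exact normalization is not reproduced here, but a Monte Carlo sketch of a statistic with the three anchor values above can be built from the conditional tail expectation of percentile ranks. In the sketch below (Python), the function rank_gap is a hypothetical stand-in equal to 1 under comonotonicity, 0 under independence and -1 under countermonotonicity, and the Clayton sample is drawn via the standard gamma-frailty construction:

    import numpy as np

    rng = np.random.default_rng(7)
    n, theta = 200_000, 2.0

    # Clayton(theta) sample via the gamma-frailty (Marshall-Olkin) construction
    w = rng.gamma(1 / theta, size=n)
    e = rng.exponential(size=(n, 2))
    U = (1 + e[:, 0] / w) ** (-1 / theta)
    V = (1 + e[:, 1] / w) ** (-1 / theta)

    def rank_gap(U, V, u):
        """Hypothetical normalization of E[V | U > u]: 1 if V = U (comonotone),
        0 under independence (then E[V | U > u] = 1/2), -1 if V = 1 - U."""
        return (2 * V[U > u].mean() - 1) / u

    for u in (0.5, 0.9, 0.99):
        print(u, round(rank_gap(U, V, u), 3))   # decays: Clayton has a weak upper tail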

Keywords: local dependence, copula, Spearman correlation, coherence


Contributed Talk, Section: Life and Pension Insurance Mathematics
Wednesday (Sept. 10, 2014), 14:55 - 15:20, session / room E

CHRISTIANSEN Marcus

Institute of Insurance Science, University of Ulm, Germany

Integral equations for moments and loss distributions in multistate life and health insurance models

For risk management in life and health insurance, a mathematical key quantity is the probability distribution of the random future liabilities of an insurance contract. We focus on unsystematic biometric risk, modelling the randomness of the future health status of individual policyholders by semi-Markovian multistate models. We derive integral equations and partial differential equations for the moment generating function and higher-order conditional moments of the future liabilities. From the moments we can then construct approximations of the loss distribution. Furthermore, we discuss how to represent the loss distribution directly via an integral equation.

Joint work with Franck Adekambi, University of Johannesburg.


Poster Presentation, Section: Risk Management and Solvency II
Wednesday (Sept. 10, 2014), 17:30 - 19:00, at the Welcome Reception, main lecture hall (FH1)

CLARAMUNT BIELSA M. Mercè

Dept. Matemàtica Econòmica, Financera i actuarial, Facultat d'Economia i Empresa, Universitat de Barcelona, Spain 

Optimal stop-loss reinsurance under the maximization of the joint survival probability

The stop-loss contract stands out among reinsurance contracts in the insurance market. It presents an interesting property: it is optimal if the criterion used is minimizing the variance of the insurer's cost. We analyse this contract over one period from the point of view of the insurer and the reinsurer. The optimal stop-loss contract is obtained when the criterion used is the maximization of the joint survival probability. We consider two different optimization problems.

In the first one, the reinsurance premium is fixed, as are the initial values of the reserves of the insurer and the reinsurer, and the parameters of the reinsurance are chosen to maximize the joint survival probability.

It is usually considered that the reinsurance premium is a function of the parameters of the stop-loss reinsurance and of the total cost. In that case, the reinsurer would apply one of the usual criteria for the calculation of the premium, for instance the expected value, variance or standard deviation principles. We adopt as the criterion for the calculation of the reinsurer's premium the maximization of the joint survival probability, taking as fixed both the parameters of the reinsurance contract and the initial values of the reserves of the insurer and the reinsurer. Thus, in the second optimization problem, the joint survival probability is considered as a function of the reinsurance premium.
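
In one-period notation (ours, not necessarily the paper's): if S is the total claim amount, M the stop-loss retention, u_1, u_2 the initial reserves and P_1, P_2 the premiums kept by the insurer and the reinsurer, the objective being maximized is the joint survival probability

    J = P\big( u_1 + P_1 - \min(S, M) \ge 0, \;\; u_2 + P_2 - (S - M)_+ \ge 0 \big),

first over the contract parameters with the reinsurance premium fixed, and then as a function of the reinsurance premium itself.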

Joint work with Anna Castañer (Universitat de Barcelona).


Contributed Talk, Section: Life and Pension Insurance Mathematics
Wednesday (Sept. 10, 2014), 13:55 - 14:20, session / room E

COSTABILE Massimo

Department of Economics, Statistics and Finance, University of Calabria, Italy

A trinomial lattice to evaluate variable annuities with guaranteed minimum withdrawal benefits under a regime-switching model

We consider the problem of evaluating variable annuities with a guaranteed minimum withdrawal benefit under a regime-switching model. Regime-switching lognormal models are appropriate to describe the long-term returns that are typical in insurance applications and, moreover, two regimes are enough to describe the asset value evolution. The proposed model is based on a lattice that approximates the investment fund value dynamics. This choice is motivated by the fact that lattice-based models are simple and effective tools widely used in evaluation problems when closed-form solutions are not available. Moreover, their extension is straightforward when additional provisions, such as surrender options, are embedded into the policy, without resorting to time-consuming Monte Carlo simulations. We assume that at the contract inception the policyholder pays a lump sum that is invested in a fund made up of equities of the same kind, whose value evolves according to a regime-switching lognormal model. The policyholder has the right to make periodical withdrawals from the investment fund until the initial investment is fully recouped. A trinomial lattice is constructed to approximate the dynamics of the investment fund value in one regime, and the same lattice is used to approximate the evolution of the fund value in the other regimes. A problem arises because withdrawals make the lattice non-recombining, and the evaluation problem becomes computationally unmanageable even when a small number of time steps is considered. To overcome this drawback, we develop the trinomial lattice in such a way that after each withdrawal the three branches emanating from each node recombine with those stemming from adjacent nodes. Transition probabilities are computed so that the first two moments of the discrete process match those of the corresponding continuous-time process in each regime. Finally, the contract evaluation is conducted through the usual backward induction procedure, and the insurance fee is obtained as the solution of the equation that makes the contract actuarially fair. Numerical results show the consistency of the proposed model.
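
A sketch of the moment-matching step (Python; the drift and volatility values, the symmetric space step and the probability formulas below are the standard first-two-moment match for a trinomial tree, stated under our own conventions rather than the paper's exact parameterization):

    import numpy as np

    def trinomial_probs(r, sigma, dt, dx):
        """Match mean and second moment of the one-step log-fund increment
        in one regime, for moves of size +dx, 0, -dx over a step dt."""
        nu = r - 0.5 * sigma**2                     # drift of the log-price
        m1, m2 = nu * dt, sigma**2 * dt + (nu * dt) ** 2
        pu = 0.5 * (m2 / dx**2 + m1 / dx)
        pd = 0.5 * (m2 / dx**2 - m1 / dx)
        pm = 1.0 - pu - pd
        assert min(pu, pm, pd) >= 0, "enlarge dx (e.g. dx = sigma_max * sqrt(3 dt))"
        return pu, pm, pd

    # two illustrative regimes sharing one lattice (same dx), as in the abstract
    dt = 1 / 12
    dx = 0.25 * np.sqrt(3 * dt)                     # sized on the high-volatility regime
    for sigma in (0.15, 0.25):                      # calm vs. turbulent regime
        print(trinomial_probs(r=0.03, sigma=sigma, dt=dt, dx=dx))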

Joint work with Ivar Massabò, University of Calabria, Italy.


Invited Plenary Talk (Mini Course: 180 Minutes)
Monday (Sept. 8, 2014), 10:50 - 12:20, and Tuesday (Sept. 9, 2014), 10:50 - 12:20, main lecture hall (FH1)

CZADO Claudia

Professor for applied mathematical statistics at the Center of Mathematics, Technische Universität München, Munich, Germany 

Pair-Copula constructions of multivariate copulas with applications

Copulas are used to characterize the dependence among several components and to build multivariate models for financial and insurance data. In this short course we will introduce the concept of copulas and discuss standard classes such as the elliptical and Archimedean copulas. These are restricted in their dependence patterns, for instance with respect to symmetry, tail independence or exchangeability. In contrast, the flexible class of regular vine (R-vine) copula models can accommodate tail asymmetry and allows for different dependence patterns for different pairs of variables. R-vine copulas are based on a pair-copula construction (PCC) using only bivariate copulas as building blocks. Estimation, simulation and model selection are shown with examples using the R-package VineCopula, which contains functions for statistical inference of vine copulas and tools for exploratory data analysis and selection of bivariate copulas.
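
The following sketch (Python; the R-package VineCopula mentioned above automates this and much more) simulates from a three-dimensional D-vine built from Clayton pair-copulas, using only the bivariate h-functions (conditional distribution functions) and their inverses; the parameter values are arbitrary:

    import numpy as np

    def h(u, v, t):      # Clayton h-function: C(u | v) = dC(u, v)/dv
        return v ** (-t - 1) * (u ** -t + v ** -t - 1) ** (-1 - 1 / t)

    def h_inv(w, v, t):  # inverse of h in its first argument
        return ((w * v ** (t + 1)) ** (-t / (t + 1)) - v ** -t + 1) ** (-1 / t)

    def sim_dvine3(n, t12, t23, t13_2, rng):
        """D-vine on 1-2-3: pairs c12, c23 in tree 1, c13|2 in tree 2
        (sampling algorithm of Aas et al., 2009)."""
        w = rng.uniform(size=(n, 3))
        u1 = w[:, 0]
        u2 = h_inv(w[:, 1], u1, t12)
        # invert F(u3 | u1, u2) = h_{13|2}( h_23(u3 | u2) | h_12(u1 | u2) )
        a = h(u1, u2, t12)
        u3 = h_inv(h_inv(w[:, 2], a, t13_2), u2, t23)
        return np.column_stack([u1, u2, u3])

    U = sim_dvine3(100_000, 2.0, 2.0, 0.8, np.random.default_rng(3))
    print(np.corrcoef(U, rowvar=False).round(2))    # quick dependence check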

References:
[1] Aas, K., C. Czado, A. Frigessi and H. Bakken (2009). Pair-copula constructions of multiple dependence. Insurance: Mathematics and Economics 44(2), pp. 182-198.
[2] Brechmann, E. C. and U. Schepsmeier (2013). Modeling dependence with C- and D-vine copulas: The R-package CDVine. Journal of Statistical Software 52(3), pp. 1-27.
[3] Czado, C. (2010). Pair-copula constructions of multivariate copulas. In P. Jaworski, F. Durante, W. Härdle, and T. Rychlik (Eds.), Copula Theory and Its Applications. Berlin: Springer.
[4] Dissmann, J., E. Brechmann, C. Czado, and D. Kurowicka (2013). Selecting and estimating regular vine copulae and application to financial returns. Computational Statistics and Data Analysis 59(1), pp. 52-69.
[5] Kurowicka, D. and R. M. Cooke (2006). Uncertainty Analysis with High Dimensional Dependence Modelling. Chichester: John Wiley.
[6] Kurowicka, D. and H. Joe (Eds.) (2011). Dependence Modeling: Vine Copula Handbook. Singapore: World Scientific Publishing Co.
[7] Schepsmeier, U., J. Stoeber, and E. C. Brechmann (2013). VineCopula: Statistical inference of vine copulas. R package version 1.2.


Contributed Talk, Section: Non-Life Insurance Mathematics
Wednesday (Sept. 10, 2014), 14:55 - 15:20, session / room D

DAHMS Rene

Switzerland

Reserve risk dependencies under Solvency II and IFRS 4 perspective

(draft version of a corresponding paper)

Nowadays, in particular under the Solvency II, SST and IFRS 4 perspectives, it is required to specify the uncertainty corresponding to the estimation of the expectation of the outstanding insurance liabilities. In order to do so, some assumptions about the distribution of the estimated losses are often made and actuaries estimate the corresponding parameters; for instance, one assumes a lognormal distribution and estimates the mean and the variance. For a single portfolio this has been studied by several authors. In this talk we look at collections of portfolios, use Linear Stochastic Reserving Methods (LSRMs) to couple them in an often natural way, and estimate the covariance matrix of the corresponding reserve risk.


Contributed Talk, Section: Non-Life Insurance Mathematics
Wednesday (Sept. 10, 2014), 15:40 - 16:05, session / room B

DASSIOS Angelos

London School of Economics, UK 

Dynamic contagion processes in finance and insurance

Dynamic contagion processes are motivated by default contagion in financial mathematics. The model is inspired by stochastic intensity models but also by branching theory. It goes beyond Cox processes (i.e. processes with a stochastic intensity), as it incorporates a positive feedback element, combining the characteristics of both Cox and Hawkes processes.
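
Concretely, following Dassios and Zhao (2011), the process has stochastic intensity

    \lambda(t) = a + (\lambda_0 - a)e^{-\delta t} + \sum_{k:\, s_k \le t} Y_k e^{-\delta (t - s_k)} + \sum_{i:\, T_i \le t} Z_i e^{-\delta (t - T_i)},

where the jumps Y_k arrive at the times s_k of an external Poisson process (the Cox ingredient) and the jumps Z_i occur at the process's own event times T_i (the Hawkes, or self-exciting, ingredient).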

We will give a branching process definition and survey important results such as the generating functions for the distribution of the number of points over a finite interval, the distribution of the "intensity", and some important asymptotics. The main tools used are the infinitesimal generator and various useful martingales. We will apply these results to a credit risk problem.

The potential applications in non-life insurance mathematics are even more numerous. We will use the process as a claim arrivals process in general insurance and develop results concerning the probability of ruin. A simple change of measure using a generalisation of the Esscher transform is an additional mathematical tool used. We will also demonstrate that dynamic contagion processes are an excellent tool for modelling delayed, incurred but not reported and incurred but not settled claims. This application will motivate the branching process definition. We will also provide results for the probability of ruin for such models.

We will also explore multidimensional extensions, which should be extremely useful for modelling cross-contagion in credit risk. Extensions involving a diffusion element will also be discussed. Although the mathematics becomes more involved, there are already useful results as well as methods for stochastic simulation of these processes.

Most of the work is joint with Hongbiao Zhao of Xiamen University.


Contributed Talk, Section: Mathematical Finance with Applications in Insurance
Wednesday (Sept. 10, 2014), 13:30 - 13:55, session / room B

DEELSTRA Griselda

Université libre de Bruxelles, Belgium 

Modelling correlated processes in a multivariate Lévy framework with applications

Both in finance and insurance, the modelling of correlated processes is a hot topic. In an insurance framework, the modelling of different sub-portfolios versus a global portfolio is e.g. an important issue. In finance, different derivatives have several underlying assets. Moreover, a multivariate version of a structural default model (with jumps) is useful to quantify the bilateral credit value adjustment and the bilateral debt value adjustment for equity contracts (see e.g. Ballotta and Fusai 2014).

Ballotta and Bonfiglioli (2014) address the issue of modelling with a multivariate exponential Lévy model via a two-factor linear representation of the assets' log-returns, obtained as a linear combination of two independent Lévy processes representing the systematic risk factor and the idiosyncratic shock, respectively. This model proves to be fairly general, as it can be applied to any Lévy process, and fairly flexible, as it accommodates the full range of dependencies. This paper, however, bases the calibration method upon a convolution condition.
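
In formulas, the linear representation just described reads (notation assumed):

    Y_i(t) = X_i(t) + a_i\, Z(t), \qquad i = 1, \dots, n,

with Z a common Lévy process (the systematic risk factor), X_i independent idiosyncratic Lévy processes and loadings a_i steering the dependence; the convolution condition mentioned above ties the law of X_i + a_i Z to a prescribed marginal law for Y_i, and it is this constraint that the present paper relaxes.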

The goal of our paper is to relax this convolution constraint and to study calibration methods incorporating the correlation in another way. Therefore, we first study the pricing of products written on several assets in a multivariate exponential Lévy model. Using the Esscher transform for multidimensional semimartingales, we relate exchange and quanto options to European call and put options. We derive an FFT-based pricing formula for exchange and quanto options. We present a fast calibration method to the vanilla market and to the exchange and quanto market. We illustrate this method in a subordinated Brownian motion framework.

For insurance, an interesting question to explore is the impact of a changing correlation between sub-portfolios on the value of the economic capital needed in the framework of Solvency II.

Joint work with Laura Ballotta and Gregory Rayée.


Contributed Talk, Section: Risk Management and Solvency II
Wednesday (Sept. 10, 2014), 13:55 - 14:20, session / room D

DEVOLDER Pierre

UCL - Catholic University of Louvain, Belgium 

Time consistency of Solvency 2 measurement for long term guarantees

The future Solvency II regulation for insurance introduces a risk metric taking into account all the risks involved in an insurance contract. In particular, in life insurance, financial and longevity risks play a central role.

But the proposed risk measurement is essentially based on a one-year perspective; for long-term life insurance products, this methodology can imply important distortions and induce non-optimal strategies in terms of investment.

The purpose of this presentation is to discuss various alternatives to Solvency II that take the time horizon into account in the measurement of financial and longevity risks for long-term guarantees and in the computation of the solvency capital requirement. In particular we will discuss time consistency and dynamic risk measures.

Joint work with Adrien Lebegue (UCL).

Keywords: longevity risk, market risk, solvency requirement, dynamic risk measure


Contributed Talk, Section: Life and Pension Insurance Mathematics
Wednesday (Sept. 10, 2014), 16:05 - 16:30, session / room C

DODD Erengul (Ozkok)

Department of Mathematical Sciences, University of Southampton, UK

The effect of model uncertainty on the pricing of critical illness insurance

We present methodology for estimating net premium rates in critical illness insurance (CII) and we price CII products sold in the UK market, using data supplied by the Continuous Mortality Investigation in the UK. Our methodology requires the estimation of dates of diagnosis of the critical illness or death, given a date of settlement of the corresponding claim [2, 3]. We also adjust the exposure for a given observation period to allow for claims where diagnosis occurs in the observation period but settlement does not, i.e. for incurred-but-not-settled (IBNS) claims. These adjustments are implemented using the distribution of the time delay between the dates of diagnosis and settlement of a claim, the so-called claim delay distribution [1]. The choice of this delay distribution can potentially affect the diagnosis rates and hence the premium rates. Therefore we consider model uncertainty and discuss the effect of two different claim delay distributions - a 2-parameter standard distribution and a 3-parameter heavy-tailed distribution - on the net premium rates for different CII products. We also explore the sensitivity of premium rates to the interest rate [4].
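
The exposure adjustment works roughly as follows (a sketch in our notation): if F_D denotes the claim delay distribution and the observation period ends at time T, a diagnosis at calendar time t is settled, and hence observed, by the census date with probability F_D(T - t), so the central exposure at time t is scaled accordingly,

    \tilde{E}(t) = E(t)\, F_D(T - t).

This makes the diagnosis-rate estimates, and through them the premium rates, sensitive to the chosen delay distribution, which is precisely the model uncertainty studied in the talk.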

Joint work with G. Streftaris, H. R. Waters and A. D. Stott (Heriot-Watt University, Edinburgh, UK).

References:
[1] Ozkok, E., Streftaris, G., Waters, H. R. & Wilkie, A. D. (2012a). Bayesian modelling of the time delay between diagnosis and settlement for critical illness insurance using a Burr generalised-linear-type model. Insurance: Mathematics and Economics 50, 266-279.
[2] Ozkok, E., Streftaris, G., Waters, H.R., and Wilkie, A.D. (2012b). Modelling critical illness claim diagnosis rates I: Methodology. Scandinavian Actuarial Journal, DOI:10.1080/03461238.2012.728537.
[3] Ozkok, E., Streftaris, G., Waters, H.R., and Wilkie, A.D. (2012c). Modelling critical illness claim diagnosis rates II: Results. Scandinavian Actuarial Journal, DOI:10.1080/03461238.2012.728538.
[4] Ozkok-Dodd, E., Streftaris, G., Waters, H.R., and Stott, A.D. (2014). The effect of model uncertainty on the pricing of critical illness insurance. Submitted to Annals of Actuarial Science.


Invited Plenary Talk
Friday (Sept. 12, 2014), 9:50 - 10:40, main lecture hall (FH1)

DOTTERWEICH Alexander

Director, KPMG, Munich, Germany
Member of the KPMG Competence Center for implementation of Solvency II 

The importance of actuarial projections in risk management

Actuarial projections, typically designed as projections of future cash flows, are not unknown territory for insurance companies. They are already established for a wide range of purposes, for example for MCEV calculations, for pricing or for further economic analyses. The importance of actuarial projections in risk management is now increasing substantially, in particular with regard to "good practice" risk management approaches and to enhanced regulatory requirements.
By reference to practical examples and illustrations, key topics for the further development of the projection landscape will be examined in this presentation. This concerns, for instance, questions regarding the granularity and quality of projections for a future Forward Looking Assessment of Own Risks, and related issues regarding more fundamental projection principles. In addition, consequences for the projection landscape that result from the required governance framework under Solvency II, e.g. the interaction between the risk management function and the actuarial function, will be addressed.


Contributed Talk, Section: Non-Life Insurance Mathematics
Wednesday (Sept. 10, 2014), 15:40 - 16:05, session / room D

DYGAS Pawel

Group Risk Management, Solvency II Internal Group Model, UNIQA Insurance Group plc, Vienna, Austria

From Solvency II Standard Approach to true dependencies - shrinkage in Non-Life Insurance

One of the main issues for internal models under the Solvency II framework is risk aggregation. This topic becomes even more important when only Non-Life Underwriting Risk is taken into account. The uncertainty of estimated parameters makes it impossible for insurance undertakings to rely purely on data-driven statistics in their dependency modelling. Therefore undertakings are obliged to use methods which incorporate expert judgment into the estimation process.

Due to the required one-year horizon, only a few historical observations are available for dependency estimation ("small n"). The granularity of risk modelling must be at least equal to the setting given by the Solvency II Standard Formula, which means premium as well as reserve risk must be modelled at the line-of-business level ("large p"). We show a solution of this classical "large p, small n" correlation estimation problem using the shrinkage approach proposed by Ledoit and Wolf. Our proposed approach estimates the parameters of a (Gaussian) copula at the lowest required granularity, based on historical observations, fitted single distributions of risks and expert judgment.
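
A minimal sketch of the linear shrinkage step (Python; the Ledoit-Wolf optimal intensity is replaced here by a fixed illustrative delta, and the Standard-Formula-style target matrix is a placeholder):

    import numpy as np

    def shrink_corr(R_sample, R_target, delta):
        """Linear shrinkage of a sample correlation matrix toward an
        expert-judgment target matrix."""
        R = delta * R_target + (1 - delta) * R_sample
        np.fill_diagonal(R, 1.0)
        return R

    rng = np.random.default_rng(0)
    p, n = 12, 8                    # "large p, small n": 12 segments, 8 years
    X = rng.standard_normal((n, p))
    R_sample = np.corrcoef(X, rowvar=False)            # noisy, rank-deficient
    R_target = np.full((p, p), 0.25)                   # placeholder expert matrix
    np.fill_diagonal(R_target, 1.0)
    R = shrink_corr(R_sample, R_target, delta=0.6)     # delta: illustrative only
    print(np.linalg.eigvalsh(R).min())                 # shrinkage restores full rank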

Expert judgment assumptions are derived from the parameters used in the Solvency II Standard Formula. We show how correlations from the Solvency II Standard Approach can be adapted to reflect the granularity of premium and reserve risk presented separately at the line-of-business level.

We present different estimation possibilities in our proposed setting. For each possibility, theoretical shrinkage factors are shown and estimators of these factors are analyzed.

We compare our methodology based on fitted single distributions with an alternative version which uses rank correlation.

We also analyze the impact of adding the assumption of no negative correlation between risk categories, as we believe that Non-Life risks do not hedge each other.

Finally, we present a case study in which the performance of our dependency modelling approach is compared to pure estimation from historical data. In particular, the aspect of parameter stability over successive years of estimation is assessed.


Contributed Talk, Section: Non-Life Insurance Mathematics
Wednesday (Sept. 10, 2014), 13:55 - 14:20, session / room F

EISELE Karl-Theodor

Institut de Recherche Mathématique Avancée, University of Strasbourg, France 

Prediction of claim provisions with Hachemeister credibility for development patterns

We consider a multivariate model for loss prediction with several contracts for each accident year. The model includes a Hachemeister credibility part where the design matrix represents standard development patterns for cumulative quotas. These patterns can be found by methods of discriminant analysis. The credibility estimator yields a mean development pattern. Inverting this mean development pattern allows for an estimation of the final losses of the accident years.
Volume vectors may be useful in addition, but are not essential parts of the model.

Joint work with Saida Guettouche (University of Strasbourg).

Keywords: multivariate IBNR model, Hachemeister's credibility


Contributed Talk, Section: Non-Life Insurance Mathematics
Wednesday (Sept. 10, 2014), 14:55 - 15:20, session / room A

EISENBERG Julia

FAM, Vienna University of Technology, Austria 

Optimal consumption under a stochastic interest rate

We consider an individual or household whose income is modeled by a deterministic process. Our target is to maximize the expected discounted consumption over a finite time horizon under the assumption of a stochastic interest rate, describing macroeconomic and/or microeconomic changes in the considered system. We derive an explicit expression for the value function and the optimal strategy via the Hamilton-Jacobi-Bellman approach. An example illustrates the obtained results.


Contributed Talk, Section: Mathematical Finance with Applications in Insurance
Thursday (Sept. 11, 2014), 16:05 - 16:30, session / room F

EKSI-ALTAY Zehra

Institute for Statistics and Mathematics, WU Vienna University of Economics and Business, Austria

EM algorithm for Markov chain observed via Gaussian noise and point processes information

Continuous-time partial-information models are common in finance, economics and insurance. A partial-information setting may arise for a number of non-exclusive reasons, such as the latent nature of model variables and potentially noisy observations. We consider a setting in which the latent variable follows a Markov chain observed via diffusive and point process information. Such a setting can be used in the modelling of sovereign default risk and non-life insurance risk, where the price data and the default or event history constitute the diffusive and point process information, respectively. In order to estimate the unknown parameters and infer the unobserved latent variables in such models, we use the innovations approach to non-linear filtering and extend the earlier work of Elliott (1993) [1]. In particular, we obtain an EM algorithm for the setting where the state variable follows a Markov chain observed via diffusive and point process information. We then test the speed, efficiency and accuracy of the algorithm in a simulation analysis.
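
For intuition, here is a discrete-time, Gaussian-observations-only analogue of the E/M iteration (Python); the talk's setting is continuous-time with an additional point-process channel and uses the innovations form of the filter, which this sketch does not reproduce:

    import numpy as np
    from scipy.stats import norm

    def baum_welch(y, A, mu, sig, p0, iters=50):
        """EM for a K-state Markov chain observed in Gaussian noise."""
        T, K = len(y), len(mu)
        for _ in range(iters):
            B = norm.pdf(y[:, None], mu, sig)             # emission densities
            alpha, c = np.zeros((T, K)), np.zeros(T)      # scaled forward pass
            alpha[0] = p0 * B[0]; c[0] = alpha[0].sum(); alpha[0] /= c[0]
            for t in range(1, T):
                alpha[t] = (alpha[t-1] @ A) * B[t]
                c[t] = alpha[t].sum(); alpha[t] /= c[t]
            beta = np.ones((T, K))                        # scaled backward pass
            for t in range(T - 2, -1, -1):
                beta[t] = A @ (B[t+1] * beta[t+1]) / c[t+1]
            gam = alpha * beta                            # P(state_t | y_1..y_T)
            xi = sum(np.outer(alpha[t], B[t+1] * beta[t+1]) * A / c[t+1]
                     for t in range(T - 1))               # expected transition counts
            A = xi / xi.sum(1, keepdims=True)             # M-step updates
            p0, w = gam[0], gam.sum(0)
            mu = gam.T @ y / w
            sig = np.sqrt((gam * (y[:, None] - mu) ** 2).sum(0) / w)
        return A, mu, sig, p0

    # toy usage: two regimes with different observation means
    rng = np.random.default_rng(0)
    y = np.concatenate([rng.normal(0, 1, 300), rng.normal(2, 1, 300)])
    A0 = np.array([[0.9, 0.1], [0.1, 0.9]])
    est = baum_welch(y, A0, np.array([-1.0, 1.0]), np.array([1.0, 1.0]),
                     np.array([0.5, 0.5]))
    print(est[1])                                         # estimated state means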

This is a joint work with Ruediger Frey.

References:
[1] R. J. Elliott. New finite-dimensional filters and smoothers for noisily observed Markov chains. IEEE Transactions on Information Theory, 39(1):265-271, 1993.


Contributed Talk, Section: Life and Pension Insurance Mathematics
Thursday (Sept. 11, 2014), 16:05 - 16:30, session / room A

FLICI Farid

Center of Research in Applied Economics for Development (CREAD) Algiers, Algeria 

Using specific life-tables for life annuities calculation: the case of Algeria

The Algerian life insurance market is still little developed. The products on offer are unattractive, and much of the subscription volume is due to compulsory insurance: travel, securing bank credit, etc. Until 2006, no distinction was made between life and non-life insurance. Since 2006 [law 06-04 of 20th February 2006], insurance companies have been obliged to separate their activities by branch, and each branch must be self-balanced. The task is even more complicated in life insurance: if we consider the number of subscribers per insurer, we observe that the insured portfolios are still small. The smaller the portfolio, the greater the portfolio risk, especially for long-term contracts such as life annuities. The most important risk is related to the choice of an adequate life table. Algerian insurers still use general life tables for life annuity calculations. However, the annuitants' mortality curve cannot be assumed to follow that of the rest of the population. It is necessary to take this element into account and construct a specific life table for the Algerian annuitant population. This is the main objective of the present paper.

The problem we face is the shortage of data on annuitants' mortality. To construct a specific life table, we need a very large population presenting specific common characteristics. In Algeria, the private annuitant population is still small. The idea is to use data from the public pension system: the population of pensioners is assumed to represent the annuitant population better than the global population does.

To this end, we use the death observations of the last three years (2010-2012) and construct an average life table for the period. Life annuity calculation is based on the female life table. We note that the proportion of women in the pensioners' portfolio currently in payment is still small, so the quality of the resulting female life table may not be satisfactory. The idea is, once the male life table is given, to deduce the female table by using the age-sex ratio calculated from the global life table for the same period. Here we assume that the sex ratio of mortality is the same for the two considered populations.

Finally, the utility of the resulting life table, compared with the global life table, is tested on a portfolio composed of 100 annuitants.

Keywords: specific life-table, public pension, private annuity, mortality models, Algeria


Poster Presentation, Section: Life and Pension Insurance Mathematics
Wednesday (Sept. 10, 2014), 17:30 - 19:00, at the Welcome Reception, main lecture hall (FH1)

FRAGKOS Nikos

Department of Statistics, Athens University of Economics and Business, Greece

The interplay between social security pension and saving plans: The economic value of tax incentives

(extended version of abstract)

We assume that individuals would create Individual Pension Plan accounts as a complementary investment plan to their Social Security.
One way to make people interested in such an investment is for the government to give tax incentives:
Assume there are people in different tax brackets as a function of their income.
Suppose also that there will be investment accounts similar to the American IRAs (or different, depending on the country), sold by banks, investment houses or perhaps private pension companies, as in Turkey.
The system will work as follows:
The citizen will invest a predetermined amount into the personal account every month, which will be invested in financial portfolios of his or her choice.
(The amount will have a minimum value, so that the proceeds will have some economic value at the end of the accumulation period. There will also be an upper value.)
The yearly amount saved/invested will be tax-deductible.
We may also assume that the investment returns will be tax-free.
This will create a second benefit for the participant, but at the same time it will give rise to a shortfall (a decrease) in government revenues.
The proposal of this study is to create a mechanism through which the government will, in a sense, get back what it has given throughout the accumulation period via the tax incentives.

Joint work with Irini Dimitriyadis (Bahcesehir University Istanbul, Turkey).


Contributed Plenary Talk
Friday (Sept. 12, 2014), 11:50 - 12:15, main lecture hall (FH1)

FREY Ruediger

Institute for Statistics and Mathematics, WU Vienna, Austria

Contagion effects and collateralized credit value adjustments for credit default swaps

(draft version of a corresponding paper)

The talk is concerned with counterparty credit risk for credit default swaps in the presence of default contagion. In particular, we study the impact of default contagion on credit value adjustments such as the BCCVA (Bilateral Collateralized Credit Value Adjustment) and on the performance of various collateralization strategies. We use a credit risk model with partial information and default contagion for our analysis. We find that contagion effects have a substantial impact on the effectiveness of popular collateralization strategies. We go on to derive improved collateralization strategies that account for contagion. The theoretical results are complemented by a simulation study.

Joint work with Lars Rösler, WU Vienna.


Contributed Talk, Section: Life and Pension Insurance Mathematics
Wednesday (Sept. 10, 2014), 14:30 - 14:55, session / room B

FUCHS Sebastian

Lehrstuhl für Versicherungsmathematik, Technische Universität Dresden, Germany

Bivariate copulas: transformations, asymmetry and measures of concordance

We study a group of transformations on the collection of all real functions on the unit square. These transformations map the collection of all bivariate copulas into itself. For every copula, they generate a variety of new copulas; this is of particular interest with regard to asymmetric copulas and may also be useful for proving that certain real functions on the unit square are indeed copulas. Some of these transformations preserve symmetry and the value of any measure of concordance, but others do not. The talk provides examples of asymmetric copulas which arise as transformations of symmetric ones, and it also emphasizes the potential of asymmetric copulas in modeling joint life insurance.

Joint work with Klaus D. Schmidt (TU Dresden).


Invited Plenary Talk
Wednesday (Sept. 10, 2014), 10:30 - 11:20, main lecture hall (FH1)

FURRER Hansjörg

Head of Quantitative Risk Management - Division Insurance, Swiss Financial Market Supervisory Authority FINMA, Switzerland 

FINMA's model review approach and future challenges

Switzerland is one of the pioneers in the field of risk-based solvency capital requirements for insurance supervision. The Swiss Solvency Test (SST) was put into force on 1 January 2011, following a five-year phasing-in period. The SST is based on market-consistent valuation principles and requires the quantification of at least market, credit and insurance risk. Insurance companies are allowed to use internal valuation and risk models for SST purposes, provided the models have been approved by the Swiss Financial Market Supervisory Authority (FINMA). In this talk, we first present the main characteristics of the SST and make comparisons with the Solvency II capital requirements. We then present the experiences and challenges FINMA encounters during the review process. Special emphasis is given to some selected mathematical problems that arise in this context.


Contributed Talk, Section: Mathematical Finance with Applications in Insurance
Thursday (Sept. 11, 2014), 14:30 - 14:55, session / room B

GHOSSOUB Mario

Imperial College London, UK 

Cost-efficient contingent claims under nonlinear pricing

In complete frictionless securities markets under uncertainty, it is well-known that in the absence of arbitrage opportunities, there exists a unique linear positive pricing rule, which induces a state-price density. Dybvig (1988) showed that the cheapest way to acquire a certain distribution of a consumption bundle (or security) is when this bundle is anti-comonotonic with the state-price density, i.e., arranged in reverse order of the state-price density. In this paper, we look at extending Dybvig's ideas to markets with imperfections represented by a nonlinear pricing rule. We consider an investor in a securities market where the pricing rule is "law-invariant" with respect to a capacity (e.g., Choquet pricing as in Araujo et al. (2011), Chateauneuf et al. (1996) and Cerreia-Vioglio et al. (2012)). The investor holds a security with a random payoff X and his problem is that of buying the cheapest contingent claim Y on X, subject to some constraints on the performance of the contingent claim and on its level of risk exposure. The cheapest such claim is called cost-efficient. If the capacity satisfies standard continuity and a property called strong diffuseness introduced in Ghossoub (2011), we show the existence of a cost-efficient claim, and we show that a cost-efficient claim is anti-comonotonic with the underlying security's payoff X. Strong diffuseness is satisfied by a large collection of capacities, including all distortions of diffuse probability measures. As an illustration, we consider the case of a Choquet pricing functional with respect to a capacity and the case of a Choquet pricing functional with respect to a distorted probability measure. Finally, we consider a simple example in which we derive an explicit form for a cost-efficient claim.
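
In the classical linear benchmark that the paper generalizes: with pricing rule c(Y) = E[\xi Y] for a state-price density \xi with continuous distribution F_\xi, Dybvig's cheapest claim with prescribed payoff distribution F is

    Y^* = F^{-1}\big(1 - F_\xi(\xi)\big),

which is anti-comonotonic with \xi; the result described above replaces the linear functional by a Choquet (capacity-based) pricing rule and establishes anti-comonotonicity with the underlying payoff X instead.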


Poster Presentation, Section: Risk Management and Solvency II
Wednesday (Sept. 10, 2014), 17:30 - 19:00, at the Welcome Reception, main lecture hall (FH1)

HEINY Johannes

University of Copenhagen, Denmark 

Asymptotic theory for large sample covariance matrices

In risk management, an appropriate assessment of the dependence structure of multivariate data plays a crucial role for the trustworthiness of the obtained results. The case of heavy-tailed components is of particular interest.

We consider asymptotic properties of sample covariance matrices of such heavy-tailed, high-dimensional time series, where both the dimension and the sample size tend to infinity simultaneously.

Joint work with Thomas Mikosch.


Contributed Talk, Section: Mathematical Finance with Applications in Insurance
Wednesday (Sept. 10, 2014), 14:55 - 15:20, session / room F

HENTSCHEL Felix

Institute of Insurance Science, University of Ulm, Germany

Optimal consumption and investment decisions under time varying risk attitudes

The continuous time consumption-investment problem was originally solved by Merton (1969) for a time-separable power utility with a constant relative risk aversion coefficient γ. In Merton (1971) the solution is extended to the more general class of HARA utility functions, but still the risk attitude remains constant over time. Since financial contracts might have a long time horizon, one would expect that the attitude towards risk could change during the considered period.

Several extensions have been made in the literature to study the consumption-investment problem under more general utility functions and to account for time-varying attitudes towards risk. One approach is to drop the time separability of the utility function by including habit formation (see e.g. Constantinides (1990)); this extension accounts for the habituation of individuals to their current level of consumption. Another approach is to consider a time-varying coefficient of risk aversion γ(t) (see e.g. Steffensen (2011)) to account for changes in an individual's attitude towards risk over time.

In our setup we combine both approaches and consider the continuous-time optimal consumption and investment problem for an investor with habit formation and a time-varying coefficient γ(t). We solve the problem in a complete market and suggest different shapes for γ(t). In a numerical analysis, we consider the effects of different time-varying coefficients on the optimal consumption and investment decisions. Furthermore, we calibrate γ(t) in such a way that we can replicate popular investment strategies, such as well-known rules of thumb.
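
As a baseline (the classical result, not the paper's): with constant relative risk aversion γ, one risky asset with drift μ and volatility σ, and risk-free rate r, Merton's optimal fraction of wealth in the risky asset is

\[
\pi^{*} \;=\; \frac{\mu - r}{\gamma\, \sigma^{2}},
\]

so one heuristically expects a time-varying analogue of the form π(t) ≈ (μ − r)/(γ(t) σ²); the habit-formation component modifies this baseline further.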


Contributed Talk, Section: Mathematical Finance with Applications in Insurance
Wednesday (Sept. 10, 2014), 13:55 - 14:20, session / room A

HERNÁNDEZ Miguel Camilo

Mathematics Department, University of Los Andes, Bogotá, Colombia

Optimal dividend payments problem under time of ruin constraints: Exponential case

(extended version of the abstract)

In this paper we study the classical optimal dividend payments problem under a constraint on the time of ruin. We begin by stating the problem as a stochastic control problem (P1) with value function V(x) and use duality theory to obtain the Lagrange dual function V_L(x), which defines the dual problem (P2). We show uniqueness of the optimal barrier strategy b* for (P2) and derive an explicit formula for V_L(x). Finally, we prove that the solution to (P1) is obtained as the minimal solution to (P2) over all L ≥ 0. We also present a series of numerical examples.
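
A plausible formalisation of the dual construction (our notation and our choice of constraint form): with discount rate q, dividend strategy D, ruin time τ and a constraint of the form E_x[h(τ)] ≥ c on the time of ruin, the Lagrange dual function is

\[
V_{L}(x) \;=\; \sup_{D}\; \mathbb{E}_{x}\!\left[ \int_{0}^{\tau} e^{-q t}\, dD_{t} \;+\; L\bigl( h(\tau) - c \bigr) \right],
\qquad L \ge 0,
\]

and the primal value is recovered as V(x) = inf over L ≥ 0 of V_L(x).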

Joint work with Mauricio Junca (University of Los Andes).


Contributed Talk, Section: Risk Management and Solvency II
Thursday (Sept. 11, 2014), 16:05 - 16:30, session / room D

HIRHAGER Karin

COR&FJA Austria Ges.m.b.H., Vienna, Austria

Conditional distortion risk measures, conditional (weighted) expected shortfall and application to risk capital allocation

Based on conditional lower quantiles and measurable upper envelopes, we define conditional distortion risk measures in a general modelling setup involving conditional expectations based on sigma-integrability. Within this setup we give an overview of the properties of conditional distortion risk measures. We then define conditional expected shortfall via an explicitly given density with stochastic risk levels and show its connection to conditional distortion risk measures. Further, we point out the link to dynamic risk measures and prove a supermartingale property. We also introduce weighted conditional expected shortfall, which likewise arises as a special case of conditional distortion risk measures. In a next step we introduce contributions to conditional weighted expected shortfall and list several of their properties. In particular, it is possible to derive the contribution of a subportfolio to the whole portfolio in order to identify the main risks.
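
For reference, the unconditional prototype: given a distortion function g (increasing, with g(0) = 0 and g(1) = 1), the distortion risk measure of a non-negative loss X is

\[
\rho_{g}(X) \;=\; \int_{0}^{\infty} g\bigl( \mathbb{P}(X > x) \bigr)\, dx,
\]

and expected shortfall at level α corresponds to the distortion g(u) = min(u/(1−α), 1); the talk develops the conditional analogues built from conditional lower quantiles.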

Joint work with Jonas Hirz and Uwe Schmock (Vienna University of Technology).


Contributed Talk, Section: Life and Pension Insurance Mathematics
Thursday (Sept. 11, 2014), 14:55 - 15:20, session / room A

HIRZ Jonas

FAM, Vienna University of Technology, Austria

Modelling annuity portfolios with extended CreditRisk+

Using an extended version of the credit risk model CreditRisk+, we develop a flexible framework to estimate stochastic life tables and to model annuity portfolios, including actuarial reserves. Deaths are driven by common stochastic risk factors which may be interpreted as death causes, like neoplasms, circulatory diseases or idiosyncratic components. This approach provides an efficient, numerically stable algorithm for an exact calculation of the one-period loss distribution where various sources of risk are considered. As required by many regulators, we can then derive risk measures for the one-period loss distribution, such as value-at-risk and expected shortfall. In particular, our model allows stress testing and, therefore, offers insight into how certain health scenarios influence the annuity payments of an insurer. Such scenarios may include improvements in health treatment and better medication. Using publicly available data, we provide estimation procedures for model parameters, including classical approaches as well as MCMC methods. We conclude with a real-world example using Australian death data.
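
Models of the CreditRisk+ family reduce the one-period loss distribution to compound Poisson sums, which admit an exact Panjer-type recursion. A minimal single-factor sketch in Python (toy parameters of our choosing; the paper's multi-factor extension with death causes as common risk factors is considerably richer):

    import numpy as np

    def panjer_compound_poisson(lam, severity_pmf, n_max):
        # Aggregate-loss pmf of a compound Poisson sum via Panjer's recursion.
        # lam: Poisson frequency; severity_pmf[j] = P(claim size = j);
        # n_max: largest aggregate loss level evaluated.
        f = np.asarray(severity_pmf, dtype=float)
        g = np.zeros(n_max + 1)
        g[0] = np.exp(-lam * (1.0 - f[0]))            # P(aggregate loss = 0)
        for n in range(1, n_max + 1):
            j = np.arange(1, min(n, len(f) - 1) + 1)
            g[n] = (lam / n) * np.sum(j * f[j] * g[n - j])
        return g

    # toy portfolio: expected number of deaths 2.5, loss sizes 1..3 (hypothetical)
    pmf = panjer_compound_poisson(2.5, [0.0, 0.5, 0.3, 0.2], 50)
    var_995 = np.searchsorted(np.cumsum(pmf), 0.995)  # 99.5% value-at-risk level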

Joint work with Uwe Schmock (TU Vienna) and Pavel Shevchenko (CSIRO, Sydney).


Plenary Talk
Thursday (Sept. 11, 2014), 17:30 - 17:55, main lecture hall (FH1)

HOLZER Helmut

Actuary and Honorary President of the Actuarial Association of Austria (AVÖ), Austria

History of Actuarial Science in Austria & at TU Vienna

The profession of actuaries has a very old tradition in Austria. Actuaries have been working since the middle of the 19th century within the old Austro-Hungarian Monarchy. At the beginning of the 20th century the profession gained more and more importance, leading to the establishment of an Austro-Hungarian Association for Actuarial Science in 1904. After the Second World War, actuaries helped to reconstruct the insurance business in Austria. Since then, both the education and the importance of actuaries have grown constantly: their fields of activity have expanded, and their role within these fields has been strengthened by national legislation and international regulations.

Education has developed in parallel with these growing requirements. Until 1990, the only dedicated programme was a so-called Short Academic Study in Actuarial Science at the Technical University of Vienna, lasting six semesters and imparting basic knowledge of actuarial science. Advanced knowledge could be acquired only through national and international seminars or through a combination of practice and theory.

Because of the new demands on actuarial knowledge caused by the deregulation of the insurance industry following the EC directives, a full academic programme was established at TU Vienna in 1991/1992 as the fourth pillar of Technical Mathematics, lasting ten semesters. In the course of the Bologna Process the regulations changed in 2002: the programme now consists of a Bachelor's programme lasting three years followed by a Master's programme lasting two years. The first part corresponds to a large extent to the former Short Academic Study, and the second part to the second section of the programme in Technical Mathematics for Actuaries.

Another way to become an actuary is to study Mathematics, supplemented by actuarial lectures. In this case the student has to obtain a full academic degree in Mathematics and afterwards attend and pass examinations in additional specialised actuarial lectures. Mathematics can be studied at nearly every Austrian university, e.g. the Universities of Vienna, Salzburg, Graz (University and Technical University) and Linz, with additional lectures in Deterministic and Financial Mathematics, Risk Theory, Stochastic Processes, Mathematical Statistics and Probability Theory. However, these lectures are normally highly specialised and at a very advanced level, and therefore do not treat the general characteristics of these topics.

In addition, the Actuarial Association of Austria organises seminars for advanced education in actuarial mathematics, and since 2000 courses on special problems in actuarial mathematics have been organised at the University of Salzburg in cooperation with the association. To support actuaries, the Actuarial Association of Austria (AVÖ) was established in 1971 as a reactivation of the former Austro-Hungarian Association for Actuarial Science of 1904. Since then, the association has had to adapt repeatedly to the expansion of actuarial fields of activity and their growing importance.

The main duties of the association today are the protection and support of actuarial interests, the development and enforcement of standards of professional conduct, the positioning of the actuarial career, the support of actuarial research and teaching, cooperation with universities and other actuarial associations, and the promotion of contacts among members.


Contributed Plenary Talk
Thursday (Sept. 11, 2014), 11:50 - 12:15, main lecture hall (FH1)

ALBERS Thomas & ZACHARIAS Mario

Co-Practice Leader Life Modelling at Milliman, Düsseldorf, Germany 

Does the standard formula really suit my company?

Is the use of the standard formula appropriate for your company and its risk profile? Assessing the suitability of the standard formula is going to be one of next year's main challenges due to the introduction of Solvency II.

Using concrete Solvency II examples the session will introduce various validation processes (with focus on the market and life risk modules and their sub-modules) and elaborate on their applications. The session will also deal with the issue of the overall approach to the modelling of the corresponding risks and the choice of the parameters. Finally, the presentation will outline some validation approaches relating to the aggregation formulas.


Contributed Talk, Section: Life and Pension Insurance Mathematics
Thursday (Sept. 11, 2014), 14:30 - 14:55, session / room A

HUBER Laurent J.

RiskLab, ETH Zürich, Switzerland 

Bayesian mortality trends analysis

We present a Bayesian framework for the modeling and the projection of mortality rates. The model maps the survival probability surface to an n-dimensional risk factor process taking into account dependencies contained in the surface. This way, the random effects and the systemic risk of mortality are separated. Calibration of the model is obtained using Markov chain Monte-Carlo methods, risk factor trends are then extrapolated, and future mortality rates are predicted. Additionally, this modeling framework yields a natural closing methodology for the simulated life tables akin to the mortality curve of Heligman and Pollard. We apply the model to Swiss male mortality data and offer some comparison to the Lee-Carter model. Lastly, sensitivity analysis of life expectancy is performed.


Invited Plenary Talk
Wednesday (Sept. 10, 2014), 11:20 - 12:10, main lecture hall (FH1)

JASCHKE Stefan

Head of Quantitative Methods at Munich Re, Munich, Germany 

Challenges in the risk management of life reinsurance

There are numerous challenges for the life insurance industry: a combination of low interest rates, increased competition from the banking and fund industry in savings products, regulatory requirements according to Solvency II (affecting European insurers), additional requirements for large variable annuity writers (to the extent they are classified as globally systemically important insurers), reduced trust in financial institutions, and significant changes in the way hedging instruments are collateralized and priced. These challenges require new product designs and new approaches to risk management.

The talk will include

  • an overview of the recent developments in the banking and insurance industry regarding valuation and risk assessment
  • a description of our general approach to pricing and hedging and
  • selected specific questions from the reinsurance business, which entail quantitative challenges.

Poster Presentation, Section: Non-Life Insurance Mathematics
Wednesday (Sept. 10, 2014), 17:30 - 19:00, at the Welcome Reception, main lecture hall (FH1)

KLEINERT Florian

University of Manchester, UK

A formula for the ruin probability in finite time driven by Lévy processes

We study the problem of finite-horizon ruin probabilities for surplus processes driven by Lévy processes. As is well known, closed-form solutions for such problems typically do not exist, and hence numerical methods are required. To this end we introduce a new algorithm based on Carr's 'Canadisation' technique (Carr (1998)), which means we work on a stochastic time grid. Following this, we derive an explicit formula for the finite-time ruin probability for a large class of Lévy processes. The formula is valid for any Lévy process whose law at an independent, exponentially distributed time is a (possibly infinite) mixture of exponentials. This includes compound Poisson processes and Brownian motion plus (hyper)exponential jumps, but also the recently introduced rich class of so-called meromorphic Lévy processes (Kyprianou et al. (2012)). We provide error bounds, illustrate the results with some numerics, and compare them to cases where explicit formulas for the ruin probability on a deterministic time grid are known (Brownian motion, compound Poisson processes with exponentially distributed claims).
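
The object of study, in standard notation: for a Lévy process X modelling the fluctuations of the surplus with initial capital u, the finite-time ruin probability is

\[
\psi(u, T) \;=\; \mathbb{P}\Bigl( \inf_{0 \le t \le T} \bigl( u + X_{t} \bigr) < 0 \Bigr),
\]

and 'Canadisation' replaces the deterministic horizon T by a sum of independent exponential times, for which this infimum functional becomes tractable.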

This is joint work with Kees van Schaik (University of Manchester).


Contributed Talk, Section: Risk Management and Solvency II
Wednesday (Sept. 10, 2014), 16:05 - 16:30, session / room D

KOLEV Nikolai

Department of Statistics, University of Sao Paulo, Brazil 

A new class of bivariate distributions with risk management applications

The main goal is to link two known and popular versions of the bivariate lack-of-memory property in a new class A of bivariate continuous distributions. It turns out that the sum of the conditional hazard rates of the class A is characterized by a linear function of both arguments. We are convinced that the class introduced is promising for modelling dynamic ageing dependence, being much more realistic than the virtual "non-ageing world". The class A is very flexible, including symmetric and asymmetric continuous distributions, with possible singularities, and distributions which are positive or negative quadrant dependent. Such a variety of bivariate distributions helps to choose the "right" model consistent with the physical nature of the observations.

We suggest considering the sum of the conditional hazard rates as a measure of the riskiness of the portfolio. The geometric interpretation of the class A, its multivariate version and risk management applications will be discussed.

Joint work with Jayme Pinto.


Poster Presentation, Section: Non-Life Insurance Mathematics
Wednesday (Sept. 10, 2014), 17:30 - 19:00, at the Welcome Reception, main lecture hall (FH1)

KRAUS Daniel

Applied Statistics, Technische Universität München, Germany

Using R vine copulas to explain dependencies of health care costs

Using R vine copulas in order to explain the dependencies of insurance data has become very popular. The reasons for this are manifold. High dimensional multivariate data sets with complex dependence structures (e.g. high tail dependence) can be modeled using only bivariate copulas by a pair copula construction (Aas et al (2009)). This procedure results in flexible model fitting with results that are easy to interpret.

The data set we will consider contains the total costs and deductibles of outpatient, inpatient and dental treatments for the years 2005 to 2007 for a large number of insured individuals. Further, for each person, individual and demographic variables such as age, gender and ZIP code are provided.

Using 9-dimensional R vine copulas, we will investigate the dependence structure of the three different cost types across the three years. The likelihood of the resulting model is compared to that of models using several truncated R vine copulas with clustered tree structures, for example grouping the variables by years or categories.

Finally, we fit a model containing only the costs from 2005 and 2006 and use it to predict the health care costs of the year 2007 and compare the predictions with the actual results.

Reference:
[1] Aas, K., C. Czado, A. Frigessi, and H. Bakken. 2009. Pair-copula constructions of multiple dependence. Insurance Mathematics and Economics 44: 182-198.


Contributed Talk, Section: Non-Life Insurance Mathematics
Thursday (Sept. 11, 2014), 14:55 - 15:20, session / room E

KUDRYAVTSEV Andrey

St. Petersburg State University, Russia

Some aspects of nonparametric regression applications to rate making

(extended version of abstract)

The idea of rate making is to estimate the rate P(y_j) on the basis of loss data x_j and risk factor information y_j, j = 1, ..., n. Although the construction of P(y_j) is usually based on a parametric (regression) model, nonparametric regression can also be used.

The basic idea of the latter approach is to use the Rosenblatt density estimator, modified by an adjustment for risk factors. The desired rate estimate is then usually based on this estimate of the loss distribution density (conditional on the risk factors). In other words, the key aspects of this approach are not only the choice of the kernel function, which is an important element of nonparametric regression, but also the construction of an appropriate partition of the risk factor set. The latter actually constitutes a tariff classification.

The partitioning estimator (based on an indicator function) is the simplest and offers a compromise between these two aspects. However, its result is rather elementary, as the estimator is a sample mean. Other kernel functions lead to more consistent estimators, but special attention should be paid to possible contradictions between the two aspects mentioned above. Some recommendations for choosing kernel functions and for partitioning the risk factor set are given in the report.
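
A minimal sketch of the kernel idea (our illustration with simulated data, not the authors' implementation): a Nadaraya-Watson type estimate of the expected loss given a one-dimensional risk factor, where replacing the smooth kernel by an indicator of tariff cells yields the partitioning (sample-mean) estimator mentioned above:

    import numpy as np

    def kernel_rate(y0, y, x, h, kernel="gauss"):
        # Nadaraya-Watson estimate of E[loss | risk factor = y0];
        # kernel="indicator" reproduces the partitioning (cell-mean) estimator.
        u = (y - y0) / h
        if kernel == "gauss":
            w = np.exp(-0.5 * u**2)                  # smooth Gaussian weights
        else:
            w = (np.abs(u) <= 0.5).astype(float)     # tariff cell of width h
        return np.sum(w * x) / np.sum(w)

    rng = np.random.default_rng(1)
    age = rng.uniform(18, 70, 5000)                  # simulated risk factor
    loss = rng.gamma(2.0, 100 + 3 * age)             # losses increasing with age
    print(kernel_rate(40.0, age, loss, h=5.0))       # smoothed rate at age 40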

Joint work with Mirbulat B. Sikhov (Al Farabi Kazakh National University, Kazakhstan).

Keywords: nonparametric regression, rate making, Rosenblatt density estimator, partitioning risk factor set


Contributed Talk, Section: Non-Life Insurance Mathematics
Thursday (Sept. 11, 2014), 14:55 - 15:20, session / room C

LEHTOMAA Jaakko

University of Helsinki, Finland

Asymptotic behaviour of ruin probabilities in a general discrete risk model using moment indices

We study the rough asymptotic behaviour of a general economic risk model in a discrete setting. Both financial and insurance risks are taken into account. The loss during the first n years is modelled as a random variable B_1 + A_1B_2 + ... + A_1···A_{n-1}B_n, where A_i corresponds to the financial risk and B_i to the insurance risk of year i. Risks of the same year i are not assumed to be independent.

The main result shows that ruin probabilities exhibit power law decay under general assumptions. The objective is to give a complete characterisation of the relevant quantities that describe the speed at which the ruin probability vanishes as the amount of initial capital grows. These quantities can be expressed as maximal moments, called moment indices, of suitable random variables. In addition to the study of ultimate ruin, the case of finite time interval ruin is considered.
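
In standard notation (our summary, following the reference below): the moment index of a random variable X is

\[
I(X) \;=\; \sup\bigl\{\, r \ge 0 \;:\; \mathbb{E}\bigl[ (X^{+})^{r} \bigr] < \infty \,\bigr\},
\]

and the power-law decay takes the schematic form

\[
\lim_{u \to \infty} \frac{\log \psi(u)}{\log u} \;=\; -\kappa,
\]

where ψ(u) is the ruin probability with initial capital u and the exponent κ is expressed through the moment indices of suitable random variables built from the A_i and B_i.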

Keywords: insurance mathematics; ruin theory; moment index; perpetuity; heavy-tailed

References:
Jaakko Lehtomaa. Asymptotic behaviour of ruin probabilities in a general discrete risk model using moment indices. Journal of Theoretical Probability, pages 1-26, 2014, doi: 10.1007/s10959-014-0547-y.


Contributed Talk, Section: Non-Life Insurance Mathematics
Thursday (Sept. 11, 2014), 14:30 - 14:55, session / room E

LEMAIRE Jean

Wharton School, University of Pennsylvania, USA 

The use of annual mileage as a rating variable

Auto insurance companies are at a crossroads. Several variables commonly used, such as gender and territory, are being questioned by regulators. Insurers are being pressured to find new variables that predict accidents more accurately, are socially acceptable, and are under the control of policyholders. Annual mileage seems an ideal candidate. The recent development of GPS systems, on-board computers, and telematics devices, and the rapid decrease in price of these new technologies, should induce carriers to explore ways to introduce Pay-As-You-Drive insurance.

We use the unique database of a major insurer in Taiwan to investigate whether annual mileage should be introduced as a rating variable in third-party liability insurance. The database includes car information from the largest manufacturer in Taiwan, insurance characteristics from a major carrier, and odometer readings from a chain of shops performing oil changes and routine repairs, for over a quarter million policy-years. We find that annual mileage is an extremely powerful predictor of the number of claims at-fault. Its significance, as measured by Wald’s chi-square and its associated p-value, by far exceeds that of all other variables, including bonus-malus. This conclusion applies independently of all other variables possibly included in rating. The inclusion of mileage as a new variable should, however, not take place at the expense of bonus-malus systems; rather the information contained in the bonus-malus premium level complements the value of annual mileage. An accurate rating system should therefore include annual mileage and bonus-malus as the two main building blocks, possibly supplemented by the use of other variables like age, territory, and engine cubic capacity. While Taiwan has specific characteristics (high traffic density, mild bonus-malus system, limited compulsory auto coverage), our results are so strong that we can confidently conjecture that they extend to all affluent countries.

Joint work with Sojung Park and Kili Wang.


Contributed Talk, Section: Risk Management and Solvency II
Wednesday (Sept. 10, 2014), 13:30 - 13:55, session / room D

LINDE Marc

BELTIOS P&C GmbH, Köln, Germany

Analytical and simulation-based approaches for quantifying multi-year non-life insurance risk within ORSA / FLAOR processes under Solvency II

Non-life insurance risk is usually composed of reserve risk and premium risk (see, e.g., [3], [1]). Reserve risk relates to claims that have already occurred in the past (previous accident years), whereas premium risk relates to claims that will occur in the future (future accident years). So far, in practice, the separation between reserve risk and premium risk is very strict and in some cases even totally different stochastic modelling approaches are applied.

Non-life insurance risk is typically considered over an ultimate time horizon, which means that uncertainty about future claims development, due to claims that have already occurred in the past (reserve risk) and due to claims that will occur in the future (premium risk), is quantified up to final settlement. Within the new regulatory framework of Solvency II, a time horizon of one year is taken into account, which means that uncertainty about future claims development is quantified for one calendar year only (see, e.g., [4], [3]).

In the context of Solvency II, insurance companies are also required to perform a forward looking assessment of own risks (FLAOR) as part of the risk management system (see [EIOPA 2013]). For this purpose, non-life insurance risk has to be modelled in a multi-year context (usually an intermediate-term time horizon of 3-5 years is taken into account). In [1], Diers and Linde introduced a concept for defining and quantifying multi-year premium risk, multi-year reserve risk and hence multi-year non-life insurance risk in terms of the prediction uncertainty of the corresponding multi-year claims development results.

Within our presentation we briefly recap the basic concept and central definitions of the multi-year claims development result as introduced in [1]. Following that, we present analytical closed-form expressions for the prediction error of the multi-year claims development result in the most common reserving models - namely in the additive reserving model and the chain ladder reserving model. Next to the analytical approach we will present a simulation-based approach for quantifying multi-year non-life insurance risk which is referred to as stochastic ‘Re-Reserving’ (see [2]). We illustrate the theoretical results by means of a numerical example, where we compare the results from the analytical and simulation-based approaches. Finally we demonstrate how both approaches can be integrated into FLAOR processes for non-life insurance companies.
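
For orientation, in simplified notation (ours): if R̂_n denotes the reserve estimate at calendar year n and X_{n+1} the payments of calendar year n+1, the one-year claims development result of [4] is

\[
\mathrm{CDR}_{n+1} \;=\; \widehat{R}_{n} \;-\; \bigl( X_{n+1} + \widehat{R}_{n+1} \bigr),
\]

and the multi-year quantities of [1] measure the prediction uncertainty of the aggregate of such results over an m-year horizon, for previous and future accident years together.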

This is a joint work with Dorothea Diers (University of Ulm).

References:
[1] Diers, D., and Linde, M. (2013), The multi-year non-life insurance risk in the additive loss reserving model, Insurance: Mathematics and Economics, 52(3), 590-598.
[2] Diers, D., Eling, M., Kraus, C., and Linde, M. (2013), Multi-year non-life insurance risk, The Journal of Risk Finance, 14(4), 353-377.
[3] Ohlsson, E., and Lauzeningks, J. (2009), The one-year non-life insurance risk, Insurance: Mathematics and Economics, 45(2), 203-208.
[4] Merz, M., and Wuethrich, M. V. (2008), Modelling the claims development result for solvency purposes, Casualty Actuarial Society E-Forum, 542-568.


Invited Plenary Talk (Mini Course: 180 Minutes)
Tuesday (Sept. 9, 2014), 9:00 - 10:30 and 13:40 - 15:10, main lecture hall (FH1)

LOISEL Stéphane

Professor at Institute of Actuarial Science and Finance, University Claude Bernard of Lyon 1, France 

Modeling, monitoring and managing longevity risk

In this short course, we present classical approaches and new ideas to model longevity risk using population dynamics methods.

We investigate online detection problems: how does one optimally detect changepoints in two-population longevity models under a false alarm constraint? We also discuss some financial risks associated with longevity-related contracts, as well as simulation issues from a risk management / Solvency II perspective.


Poster Presentation, Section: Non-Life Insurance Mathematics
Wednesday (Sept. 10, 2014), 17:30 - 19:00, at the Welcome Reception, main lecture hall (FH1)

MACCI Claudio

University of Rome Tor Vergata, Italy 

Asymptotic results for empirical means of independent geometric distributed random variables, and applications to weak records

We consider independent geometrically distributed random variables which satisfy suitable hypotheses (in particular, they can have different geometric distributions). We study large and moderate deviations for their empirical means. Moreover, motivated by the relevance of weak records in insurance, we also present asymptotic results for sequences of weak records of i.i.d. discrete random variables. In fact, it is known that such weak records can be expressed in terms of sums of independent geometrically distributed random variables.

Joint work with Barbara Pacchiarotti.


Contributed Talk, Section: Risk Management and Solvency II
Thursday (Sept. 11, 2014), 13:30 - 13:55, session / room F

MAINIK Georg

TU Munich, Germany 

Risk aggregation with empirical margins: Latin hypercubes, empirical copulas, and convergence of sum distributions

This talk is dedicated to risk aggregation in multivariate models constructed by plugging empirical margins into a copula and computing the aggregated risk distribution via Monte Carlo. This approach is often chosen in practice if marginal distributions are not known in closed form. Another related method is the sample reordering by Iman and Conover, also known as Latin Hypercube Sampling with dependence. The unique algorithmic tractability of the sample reordering method allows for bottom-up sampling of hierarchic dependence structures.

Despite their popularity in practice, mathematical proofs for the convergence of the aggregated risk distributions in both methods mentioned above have been missing so far.

The most surprising outcome of my study is that a CLT for these estimates does not hold. As it turns out, the underlying mathematical problem goes beyond classic functional CLTs for empirical copulas. The convergence results that are available include strong uniform consistency and a sufficient criterion for the convergence rate O(n^{-1/2}) in probability. In particular, all copulas with bounded densities satisfy this criterion. Examples with unbounded densities include bivariate Clayton and Gauss copulas. The convergence results are not specific to the component sum and hold also for any other componentwise non-decreasing aggregation function.
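
A minimal sketch of the reordering construction analysed in the talk (all distributions and parameters are our illustrative choices): the empirical margins are sorted and re-indexed by the ranks of a copula sample, and the aggregate distribution is then estimated from the reordered sums:

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(0)
    n = 10_000

    # dependence template: a Gauss copula sample with correlation 0.7 (our choice)
    rho = 0.7
    z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)
    u = norm.cdf(z)                                  # copula sample on [0,1]^2

    # empirical margins (placeholders for two observed loss samples)
    x1 = rng.lognormal(0.0, 1.0, n)
    x2 = rng.pareto(3.0, n)

    # reordering: each margin receives the ranks of the copula sample
    x1_dep = np.sort(x1)[np.argsort(np.argsort(u[:, 0]))]
    x2_dep = np.sort(x2)[np.argsort(np.argsort(u[:, 1]))]

    s = x1_dep + x2_dep                              # aggregated risk
    print(np.quantile(s, 0.995))                     # empirical 99.5% quantile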


Poster Presentation, Section: Mathematical Finance with Applications in Insurance
Wednesday (Sept. 10, 2014), 17:30 - 19:00, at the Welcome Reception, main lecture hall (FH1)

MAKOGIN Vitalii

Taras Shevchenko University of Kyiv, Ukraine

Example of a Gaussian self-similar field with stationary rectangular increments that is not a fractional Brownian sheet

In the classical Black-Scholes pricing model the randomness of the stock price is due to Brownian motion. It had been suggested that one should replace the standard Brownian motion by a fractional Brownian motion. In the present talk we consider fractional Brownian sheet (multiparameter process). This Gaussian self-similar random field is an extension of a fractional Brownian motion.

We consider fields which are self-similar with respect to every coordinate, with an individual index for each. Such fields are usually called anisotropic, and in the Brownian case they are known as Brownian sheets. The investigation of self-similar random fields was motivated by the evidence of self-similarity in phenomena from climatology and environmental sciences. It is known, however, that investigations of problems in climatology and in financial mathematics often use the same stochastic models, so applications of self-similar fields in finance and actuarial science are to be expected.

It is known that a fractional Brownian motion is a self-similar process with stationary increments. So, fractional Brownian motion is unique in the sense that the class of all fractional Brownian motions coincides with that of all Gaussian self-similar processes with stationary increments.

The fractional Brownian sheet has stationary rectangular increments. The properties of the fractional Brownian sheet and of fractional Brownian motion seem to be quite similar. The aim of this talk is to answer the following question:
Is the fractional Brownian sheet the unique Gaussian self-similar field with stationary rectangular increments?
The answer is no, and we present an example of a Gaussian self-similar field with stationary rectangular increments that is not a fractional Brownian sheet.
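
For reference (standard definition; the counterexample of the talk is precisely a field not of this form): the fractional Brownian sheet with Hurst indices H_1, H_2 ∈ (0,1) is the centred Gaussian field with covariance

\[
\mathbb{E}\bigl[ B^{H_1,H_2}(s_1, s_2)\, B^{H_1,H_2}(t_1, t_2) \bigr]
\;=\; \prod_{i=1}^{2} \tfrac{1}{2}\Bigl( s_i^{2H_i} + t_i^{2H_i} - |s_i - t_i|^{2H_i} \Bigr).
\]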

We prove some properties of the covariance function of self-similar fields with stationary rectangular increments. Using the Lamperti transformation, we obtain necessary and sufficient conditions on the covariance function of a stationary field for the corresponding self-similar field to have stationary rectangular increments.

Joint work with Yuliya Mishura (Taras Shevchenko University of Kyiv).

References:
[1] Genton, M.G., Perrin, O., Taqqu, M.S.: Self-similarity and Lamperti transformation for random fields. Stochastic Models 23, 397-411 (2007).
[2] Makogin, V.I., Mishura, Yu.S.,: Strong limit theorems for anisotropic self-similar fields. Modern Stochastics: Theory and Application 1, 1-22 (2014).
[3] Makogin, V., Mishura, Yu.: Example of a Gaussian self-similar field with stationary rectangular increments that is not a fractional Brownian sheet. Preprint, (2014), arXiv:1403.1215.


Poster Presentation, Section: Non-Life Insurance Mathematics
Wednesday (Sept. 10, 2014), 17:30 - 19:00, at the Welcome Reception, main lecture hall (FH1)

MARTÍNEZ-MERINO Luisa Isabel

Department of Statistics and Operation Research, University of Cádiz, Spain

A multivariate extension of the stop-loss order with applications in insurance

In insurance, an important reason for quantifying losses in the tail of distributions is to compare risks, and for this purpose stochastic orders can be used. One of the most popular stochastic orders among univariate risks is the stop-loss order (also named the increasing convex order). In this work, we suggest a generalization of the stop-loss order to the multivariate setting to compare portfolios of risks. This new stochastic order is closely related to the multivariate risk measures recently introduced by Cousin and Di Bernardino (2013, 2014). In particular, we extend a stop-loss order preserving property for these measures from the case of Archimedean copulas (as established by Hürlimann, 2014) to the case of conditionally increasing copulas.

Joint work with Miguel A. Sordo and Alfonso Suarez-Llorens.


Contributed Talk, Section: Life and Pension Insurance Mathematics
Wednesday (Sept. 10, 2014), 13:30 - 13:55, session / room E

MIKUS Georg

Frankfurt School of Finance and Management, Germany

Valuation of partial and suboptimal surrender in Guaranteed Minimum Withdrawal Benefits for life

Variable annuities are long-established investment products linked to one or more underlying reference asset(s). They usually come with some guarantee, such as minimum annual withdrawals. For the popular variant of Guaranteed Minimum Withdrawal Benefits, which for some years now has also been offered for life (GLWB), the common option of early surrender has been analyzed by means of PDE-based and least-squares Monte Carlo (LSMC) methods. The former can be relatively fast, while the latter allows for including baskets of reference assets and stochastic volatility and/or interest rates, which is adequate for long-term investments such as GLWBs. We employ the latter approach, also include stochastic volatility, and compare our results with earlier studies, e.g. Holz et al. (2012) and Kling et al. (2013).

The early surrender option calls for an extended modelling approach when the client is given the freedom to withdraw arbitrary amounts between the minimum guaranteed amount and the entire account value, corresponding to partial (up to full) surrender while being charged penalty fees. The valuation problem for this option in the LSMC context has been briefly discussed in Dai et al. (2008) and Bauer et al. (2010) and was approached numerically by Forsyth and Vetzal (2014). We propose an approach based on the LSMC algorithm, discuss the computational challenges it poses, and show some simulation results.

We also investigate the sensitivity of fair fees to assumptions on the surrender strategy of the policyholder, which can be influenced by laws, such as an obligatory notice about the existence of a secondary market, and other exogenous factors. A rationally optimal surrender strategy, giving an upper bound on the fair guarantee fee, is compared to fair fees resulting from various suboptimal surrender behaviours.
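
A bare-bones LSMC sketch for an early-surrender decision (our simplified setup: a single GBM account, full surrender only, a flat 5% surrender penalty; the paper's partial withdrawals, GLWB cash flows and stochastic volatility are omitted):

    import numpy as np

    def lsmc_surrender_value(paths, maturity_payoff, surrender_value, r, dt):
        # Backward induction: regress discounted continuation values on a
        # polynomial basis of the account value; surrender when the cash
        # surrender value exceeds the estimated continuation value.
        n_steps = paths.shape[1] - 1
        disc = np.exp(-r * dt)
        value = maturity_payoff(paths[:, -1])
        for t in range(n_steps - 1, 0, -1):
            x = paths[:, t]
            basis = np.column_stack([np.ones_like(x), x, x**2])
            coef, *_ = np.linalg.lstsq(basis, disc * value, rcond=None)
            exercise = surrender_value(x) > basis @ coef
            value = np.where(exercise, surrender_value(x), disc * value)
        return disc * value.mean()

    # toy example: GBM account, 5% surrender penalty (all parameters hypothetical)
    rng = np.random.default_rng(2)
    n, m, dt, r, sigma = 20_000, 40, 0.25, 0.02, 0.2
    dlog = (r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * rng.standard_normal((n, m))
    paths = 100.0 * np.exp(np.cumsum(np.hstack([np.zeros((n, 1)), dlog]), axis=1))
    print(lsmc_surrender_value(paths, lambda a: np.maximum(a, 100.0),
                               lambda a: 0.95 * a, r, dt))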

Joint work with Wolfgang M. Schmidt.

References:
[1] Bauer, D., Bergmann, D., Kiesel, R., On the risk-neutral valuation of life insurance contracts with numerical methods in view, ASTIN Bulletin 40: 65-95 (2010).
[2] Dai, M., Kwok, Y. K., Zong, J., Guaranteed minimum withdrawal benefit in variable annuities, Mathematical Finance 18, 595-611 (2008).
[3] Forsyth, P., Vetzal, K., An optimal stochastic control framework for determining the cost of hedging of variable annuities, Journal of Economic Dynamics and Control 44 (2014) 29-53.
[4] Holz, D., Kling, A., Ruß, J., GMWB for life: An analysis of lifelong withdrawal guarantees, Zeitschrift für die gesamte Versicherungswissenschaft 101, 305-325 (2012).
[5] Kling, A., Ruez, F., Ruß, J., The Impact of Policyholder Behavior on Pricing, Hedging, and Hedge Efficiency of Withdrawal Benefit Guarantees in Variable Annuities, Working Paper, Institut für Finanz- und Aktuarwissenschaften Ulm, Germany (2011).


Contributed Talk, Section: Life and Pension Insurance Mathematics
Wednesday (Sept. 10, 2014), 14:30 - 14:55, session / room E

MILHAUD Xavier

ENSAE ParisTech and CREST (LFA), France

Tree-based estimators in censored regression: applications to segmentation and reserving in life insurance

The use of regression trees as a tool for high-dimensional classification and regression problems has boomed since the publication of [1]. Initially designed to estimate nonparametrically the conditional mean of a response given a vector of covariates, this popular technique is here adapted to deal with both density estimation and right-censored data. We derive key nonasymptotic results for tree-based estimators following the growing procedure, as well as consistency results concerning the pruning algorithm. Following the work of [2], applications to real life insurance datasets illustrate the utility of the method and demonstrate its effectiveness in selecting the risk factors with the greatest impact on the phenomenon of interest.

Joint work with Olivier Lopez (ENSAE ParisTech and CREST (LFA)) and Pierre Thérond (ISFA Lyon).

Keywords: regression trees, loss function, consistency, KM weights.

References:
[1] Breiman, L. et al. (1984), Classification and Regression Trees, Chapman and Hall.
[2] Olbricht, W. (2012), Tree-based methods: a useful tool for life insurance; European Actuarial Journal (2), 1, 129-147.


Invited Plenary Talk
Thursday (Sept. 11, 2014), 11:00 - 11:50, main lecture hall (FH1)

MISCHLER Claus

Head of German Product Development, Standard Life, Frankfurt, Germany 

Guarantees in the stress field between customer needs and financial viability

German insurance clients are well-known for their risk-averse attitude and their preference for life-long guarantees, even if these guarantees are dearly bought at the expense of overall returns. These traditional guarantees, which promise clients a fixed interest rate over the whole lifetime of the contract and a yearly declared total interest, are coming under growing pressure due to their high costs. The ongoing low-yield environment forces insurance companies to come up with new ideas. While many insurers still haven't dared to change their products, last year a handful of companies introduced new models for the German market: some work on the basis of bullet guarantees only, others introduced mechanisms protecting clients against the risks of volatile markets.


Contributed Talk, Section: Mathematical Finance with Applications in Insurance
Wednesday (Sept. 10, 2014), 13:30 - 13:55, session / room A

MULER Nora

Universidad Torcuato di Tella, Argentina

Optimal dividends for collaborating insurance companies

We consider two insurance companies which have an agreement to collaborate in the following way: when the surplus of one of the companies becomes negative, the other one has the obligation to cover the deficit as long as its own surplus is greater than or equal to this deficit (if this is not the case, the first company goes to ruin and the second one continues on its own). The problem consists of maximizing the average of the expected discounted dividends paid by the two companies up to the time of ruin (of both companies). We model the uncontrolled surplus of each company as independent Cramér-Lundberg processes. This is a two-dimensional problem, and the boundaries between the action and non-action regions are unknown (free boundaries). We prove that the optimal value function is the smallest viscosity supersolution of the corresponding Hamilton-Jacobi-Bellman equation. We show numerical examples in the symmetric case, in which the boundary between the optimal action and non-action regions is a curve.

Joint work with Hansjoerg Albrecher (University of Lausanne) and Pablo Azcue (Universidad Torcuato di Tella).


Invited Plenary Talk (Mini Course: 120 Minutes)
Monday (Sept. 8, 2014), 16:40 - 17:40, and Tuesday (Sept. 9, 2014), 15:30 - 16:30, main lecture hall (FH1)

MÜLLER Alfred

Professor for stochastics and quantitative methods in economics at the Department of Mathematics, University of Siegen, Germany 

Modeling, measuring and comparing dependent risks

In this presentation we deal with methods for modeling, measuring and comparing dependent risks. First we consider the measurement of univariate risks by risk measures, following the axiomatic approach of coherent and convex risk measures, with particular attention to the relatively new concept of expectiles as a risk measure. Then we look at the comparison of risks by stochastic orders like the stop-loss order and the usual stochastic order.

The main part, however, will deal with multivariate risks. After introducing the concept of copulas for modeling dependent risks, we look at methods for the comparison of dependent risks. This is an important topic for actuaries, as it helps to understand how dependence between different risks affects the aggregate risk of a business line or a whole company. We will introduce the most relevant concepts of comparing dependence of risks like supermodular ordering and orthant ordering, and we will demonstrate how mass transfer principles can be used to better understand these concepts, and to prove many interesting results.
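
For reference, following [1]: the expectile at level τ ∈ (0,1) is the generalized quantile obtained from an asymmetrically weighted quadratic loss,

\[
e_{\tau}(X) \;=\; \operatorname*{arg\,min}_{m \in \mathbb{R}} \; \mathbb{E}\Bigl[ \tau \bigl( (X - m)^{+} \bigr)^{2} + (1 - \tau) \bigl( (m - X)^{+} \bigr)^{2} \Bigr];
\]

for τ = 1/2 it reduces to the mean, and for τ ≥ 1/2 expectiles are coherent risk measures.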

Literature:
[1] Fabio Bellini, Bernhard Klar, Alfred Müller and Emanuela Rosazza Gianin (2014). Generalized quantiles as risk measures. Insurance: Mathematics and Economics 54, pp. 41-48.
[2] Alfred Müller (2013). Duality Theory and Transfers for Stochastic Order Relations. In: Stochastic Orders in Reliability and Risk. Lecture Notes in Statistics 208, pp. 41-57.
[3] Alfred Müller and Dietrich Stoyan (2002). Comparison methods for stochastic models and risks. John Wiley & Sons, Chichester, xii+330 pages.


Poster Presentation, Section: Non-Life Insurance Mathematics
Wednesday (Sept. 10, 2014), 17:30 - 19:00, at the Welcome Reception, main lecture hall (FH1)

NI Weihong

Institute for Actuarial and Financial Mathematics, Department of Mathematical Sciences, University of Liverpool, UK

Ruin Probabilities with Dependence on the Number of Claims Within Fixed Window Time

We analyse the ruin probability for a Cramér-type renewal risk process in which the inter-arrival time depends on the number of claims that have arrived within a fixed time window in the past. This adjusted model can be analysed through the construction of a regenerative process, whose properties are employed in the further analysis. Asymptotic results on ruin probabilities for different regimes of the claim distributions are examined and discussed, followed by illustrative examples.

Joint work with Corina Constantinescu and Zbigniew Palmowski.


Invited Plenary Talk
Wednesday (Sept. 10, 2014), 16:40 - 17:30, main lecture hall (FH1)

NORBERG Ragnar

Research Officer (Chercheur) at the University of Lyon 1, Lyon, France (Professor of statistics at London School of Economics, London, United Kingdom - emeritus since 2010) 

On marked point processes and their applications in insurance

The talk starts with a friendly introduction to marked point processes and their associated counting processes and martingales. Then it proceeds to three distinct, yet intertwined, aspects of the theory: modelling is a matter of specifying the intensities, which are the fundamental model entities with a clear interpretation as instantaneous transition probabilities; prediction is a matter of calculating conditional expected values of functionals of the process, which involves stochastic calculus (and can be made simple); computation is a matter of solving ordinary or partial integro-differential equations, looking for shortcuts (ODEs replacing PDEs) and looking out for pitfalls (non-smoothness points that cannot be detected by inspection of the equations). The unifying power and versatility of the model framework are demonstrated with examples from risk theory, life insurance, and non-life insurance.
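
The basic building block referred to above, in standard notation: if N counts the marked points and λ is its intensity, then

\[
M(t) \;=\; N(t) - \int_{0}^{t} \lambda(s)\, ds
\]

is a martingale, and the interpretation λ(t)dt ≈ P(event in [t, t+dt) | past) is what makes intensities the natural modelling objects.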


Contributed Talk, Section: Mathematical Finance with Applications in Insurance
Thursday (Sept. 11, 2014), 14:55 - 15:20, session / room F

O'HAGAN Adrian

University College Dublin, Ireland

A model-based clustering approach to data reduction for actuarial modelling

(longer abstract, 2 pages)

In the recent past, actuarial modelling has migrated from deterministic approaches towards the use of stochastic scenarios. Such projections are useful to an insurer who wishes to examine the distribution of emerging earnings across a range of future economic and mortality scenarios. The use of nested stochastic processes dramatically increases required computational time. This is particularly true for products with heavy optionality, which are becoming more popular in the marketplace. Incremental savings can be made as computing power expands and as coding is optimised. However, much more comprehensive savings are possible using a compressed version of the original data in the stochastic model.

This involves the synthesis of “model points”: a relatively small number of policies that efficiently represent the data at large. Traditionally this has been achieved using variations on the distance to nearest neighbour and k-means nonparametric clustering approaches. The aim of this paper is to investigate how model-based clustering can be applied to actuarial data to produce high quality model points for stochastic projections. This is feasible since insurance policies typically have a number of associated location variables, allowing them to be modelled spatially. This is achieved using the standard Gaussian mixture model and automated using the freely available R package Mclust.

High quality historical data on a large set of 110,000 variable annuity policies has been provided by Milliman for the conduct of this research, under the guidance of Mr Craig Reynolds and Mr Avi Freedman, Principal and Consulting Actuaries with Milliman, Seattle. The location variables are a series of net present values for revenue, expense and benefit outcomes across a range of five economic scenarios. The size variable is the total account value in force for each policy.

The model-based clustering approaches are contrasted with both the weighted distance-to-nearest-neighbour approach and the outcome when the full, uncompressed data is used. The model points produced under each regime are compared for forecast accuracy at a range of compression levels for various stochastically generated scenarios. The results are validated using the Milliman actuarial pricing model.
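
A minimal Python analogue of the workflow described above (illustrative only; the study itself uses R's Mclust and the Milliman data): fit a Gaussian mixture to the policy location variables and output one size-weighted model point per cluster:

    import numpy as np
    from sklearn.mixture import GaussianMixture

    def model_points(locations, size, n_points):
        # Fit a Gaussian mixture to the location variables and return one
        # size-weighted model point (centroid, total weight) per cluster.
        gmm = GaussianMixture(n_components=n_points, covariance_type="full",
                              random_state=0).fit(locations)
        labels = gmm.predict(locations)
        points, weights = [], []
        for k in range(n_points):
            mask = labels == k
            if not mask.any():                      # skip empty clusters
                continue
            points.append(np.average(locations[mask], axis=0, weights=size[mask]))
            weights.append(size[mask].sum())
        return np.array(points), np.array(weights)

    rng = np.random.default_rng(3)
    loc = rng.normal(size=(5000, 10))               # stand-in NPV location variables
    av = rng.lognormal(1.0, 0.8, 5000)              # account values (size variable)
    pts, wts = model_points(loc, av, n_points=50)   # 100x compression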

Joint work with Colm Ferrari.


Contributed Talk, Section: Non-Life Insurance Mathematics
Thursday (Sept. 11, 2014), 13:55 - 14:20, session / room E

PEÑA SÁNCHEZ Inmaculada

MAPFRE, Madrid, Spain

Extreme Value Theory: a practical application based on estimating the large claims in Non-Life Insurance

Within the framework of the Solvency II project, with its higher solvency capital requirements, insurance companies must control and quantify all risks inherent in the insurance business.

This study focuses on the risk of large claims: claims of low frequency and high severity. Treating all claims as a whole within a single model is not appropriate; large claims behave differently and should therefore be treated separately. A correct analysis of large claims provides the tools needed to improve the solvency and stability of insurance companies.

Extreme Value Theory provides the solid fundamentals and statistical tools needed for the statistical modelling of rare events. The Generalized Extreme Value distribution and the Generalized Pareto distribution are the main models for such large claims.

The aim of this research is to estimate a model for large claims using Extreme Value Theory for a portfolio of Professional Civil Liability and Industrial Civil Liability products.

The results confirm the presence of large claims in the sense of Extreme Value Theory. On the one hand, we address the choice of the optimal threshold that determines when a claim is to be considered large; on the other hand, we confirm that the Generalized Pareto distribution is a valid model for the excesses over this threshold.
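
A minimal peaks-over-threshold sketch in Python (simulated stand-in data; the threshold and all parameters are our illustrative choices, not the portfolio's):

    import numpy as np
    from scipy.stats import genpareto

    def fit_excesses(claims, threshold):
        # Fit a Generalized Pareto Distribution to the excesses over a threshold.
        excesses = claims[claims > threshold] - threshold
        xi, _, beta = genpareto.fit(excesses, floc=0.0)   # location fixed at 0
        return xi, beta, excesses.size

    # simulated heavy-tailed claims standing in for the liability portfolio
    rng = np.random.default_rng(4)
    claims = rng.pareto(2.0, 100_000) * 1_000
    u = 5_000
    xi, beta, n_exc = fit_excesses(claims, threshold=u)

    # 99.9% claim quantile implied by the fitted GPD tail
    p, n = 0.999, claims.size
    q = u + genpareto.ppf(1 - (1 - p) * n / n_exc, xi, scale=beta)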


Contributed Talk, Section: Economics of Insurance
Thursday (Sept. 11, 2014), 13:55 - 14:20, session / room F

PETER Richard

Institute for Risk Management and Insurance, LMU Munich, Germany 

Endogenous information and adverse selection under loss prevention

(draft version of a corresponding paper)

We examine the endogenous value of information in an insurance market where there is potential adverse selection in the efficiency of the loss prevention technology. We show that introducing observable preventive effort for all risk types alleviates, and may even overcome, classification risk: if people can adjust their loss prevention behavior to the information acquired, information can be valuable.

Allowing for loss prevention does not change the ordering of the value of information across disclosure regimes compared to a situation without loss prevention opportunities. In particular, a first-best efficient risk allocation does not necessarily deter information acquisition. This has important public policy implications for the areas of genetic testing, HIV testing and product liability.

Joint work with Andreas Richter (LMU Munich) and Paul Thistle (University of Nevada Las Vegas).

Keywords: information value, loss prevention, adverse selection


Poster Presentation, Section: Economics of Insurance
Wednesday (Sept. 10, 2014), 17:30 - 19:00, at the Welcome Reception, main lecture hall (FH1)

PETER Richard

Institute for Risk Management and Insurance, LMU Munich, Germany 

Risk management and saving: income effects and background risk

(draft version of a corresponding paper)

We study the interplay of intertemporal risk management and saving decisions. We define risk management broadly by allowing the activity to influence the severity of loss, the probability of loss or both simultaneously. Due to the similar cost-benefit structure of risk management and saving decisions a substitution effect arises whose implications we analyze for changes in income and background risk. Typically, the direct effects for risk management and saving move in the same direction but because of substitution net effects become a priori ambiguous. We resolve this ambiguity by deriving necessary and sufficient conditions. Our paper cautions against the use of single-instrument models as spurious results will emerge.

Joint work with Annette Hofmann (HSBA).

Keywords: risk management, saving, income effects, background risk, substitution


Contributed Talk, Section: Life and Pension Insurance Mathematics
Thursday (Sept. 11, 2014), 13:55 - 14:20, session / room A

PICHLER Alois

Department of Industrial Economics and Technology Management, Norwegian University of Science and Technology (NTNU), Norway 

Insurance pricing under ambiguity

An actuarial model is typically selected by applying statistical methods to empirical data. The actuary then employs the selected model when pricing or reserving an individual insurance contract, as the selected model provides complete knowledge of the distribution of the potential claims. However, the empirical data are random and the model selection process is subject to errors, so that exact knowledge of the underlying distribution is in practice never available. The actuary thus finds her- or himself in an ambiguous position, where deviating probability measures are equally justifiable model selections.

This talk employs distances of probability measures (the Wasserstein distance) to quantify the deviation from a selected model. The distance justifies premiums and reserves even when they are based on erroneous model selections.

The method applies to the Net Premium Principle and extends to the well-established Conditional Tail Expectation (CTE) and to many other related premium principles. To demonstrate the relations and to simplify computations, explicit formulas are provided for the Conditional Tail Expectation of standard life insurance contracts.
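
The ambiguity is measured by the Wasserstein distance of order p,

\[
W_{p}(P, Q) \;=\; \Bigl( \inf_{\pi \in \Pi(P, Q)} \int d(x, y)^{p} \; \pi(dx, dy) \Bigr)^{1/p},
\]

the infimum running over all couplings π of P and Q. Loosely speaking (our paraphrase), premiums computed from the selected model P remain justified for every alternative model Q in a Wasserstein ball {Q : W_p(P, Q) ≤ ε} around P.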


Contributed Talk, Section: Risk Management and Solvency II
Thursday (Sept. 11, 2014), 14:55 - 15:20, session / room D

PITSELIS Georgios

Department of Statistics & Insurance Science, University of Piraeus, Greece

Some new developments on credibility risk measures and applications

In this paper we present some new developments on credible risk measures, in order to capture the risk of an individual insurer's contract (or financial sector) as well as the industry-wide risk. These new measures are: the credible value at risk (CrVaR), the credible conditional tail expectation (CrCTE), the credible tail conditional median (CrTCM) and the credible quantile tail expectation (CrQTE). The idea is extended to quantile regression credible risk measures, where the variables of interest (e.g. losses or expected returns) depend on some covariates (risk or financial components). Regression credible risk measures provide more complete tools than the usual risk measures (i.e. VaR, CTE) in capturing the individual insurer's risk and the industry's risk. Applications of these credible risk measures are also presented.
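
Schematically (our notation, following the classical credibility pattern; the paper's exact constructions may differ): each credible risk measure blends the individual and the industry-wide estimate with a credibility weight Z ∈ [0, 1],

\[
\mathrm{CrVaR}_{\alpha} \;=\; Z \cdot \mathrm{VaR}_{\alpha}^{\text{individual}} \;+\; (1 - Z) \cdot \mathrm{VaR}_{\alpha}^{\text{industry}},
\]

with analogous expressions for CrCTE, CrTCM and CrQTE.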


Contributed Talk, Section: Non-Life Insurance Mathematics
Thursday (Sept. 11, 2014), 14:30 - 14:55, session / room C

PSARRAKOS Georgios

Department of Statistics and Insurance Science, University of Piraeus, Greece

On risk models with claims following inverse Gaussian distribution

In this talk, the inverse Gaussian claim size distribution is considered in some stochastic models of risk theory, such as the classical, Erlang(2) and Phase-type(2) risk processes. Note that some results in the classical risk model can be generalized to the Inverse Gaussian process, see Dufresne and Gerber (1993) and Morales (2004). Closed formulas are given for quantities in ruin theory, using the generalized incomplete gamma function introduced by Chaudhry and Zubair (1994), see also Dutang et al. (2013) for further applications in risk theory. Illustrative examples are given to evaluate our results.
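
For reference, the generalized incomplete gamma function of Chaudhry and Zubair (1994), used here for the closed formulas, is

\[
\Gamma(\alpha, x; b) \;=\; \int_{x}^{\infty} t^{\alpha - 1}\, e^{-t - b/t}\, dt, \qquad b \ge 0,
\]

which reduces to the usual incomplete gamma function for b = 0.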

References:
[1] Chaudhry, M.A. and Zubair, S.M. (1994). Generalized incomplete gamma functions with applications, Journal of Computational and Applied Mathematics 55, 99-124.
[2] Dufresne, F. and Gerber, H.U. (1993). The probability of ruin for the Inverse Gaussian and related processes, Insurance: Mathematics and Economics 12, 9-22.
[3] Morales, M. (2004). Risk theory with the generalized inverse Gaussian Lévy process. ASTIN Bulletin 34, 361-377.
[4] Dutang, C., Lefevre, C. and Loisel, S. (2013). On an asymptotic rule A+B/u for ultimate ruin probabilities under dependence by mixing. Insurance: Mathematics and Economics 53, 774-785.


Contributed Talk, Section: Non-Life Insurance Mathematics
Thursday (Sept. 11, 2014), 13:55 - 14:20, session / room C

RAGULINA Olena

Department of Probability Theory, Statistics and Actuarial Mathematics, Taras Shevchenko National University of Kyiv, Ukraine

Analytic properties of the ruin probabilities in risk models with investments

We consider a generalization of the classical risk model when the premium intensity depends on the current surplus of the insurance company (see [1]). All surplus is invested in a risky asset, the price of which follows a geometric Brownian motion. Our main aim is to show that if the premium intensity grows rapidly with increasing surplus, then an exponential bound for the infinite-horizon ruin probability holds under certain conditions in spite of the fact that all surplus is invested in the risky asset in contrast to the results of [2, 3]. To this end, we allow the surplus process to explode. To be more precise, we let the premium intensity be a quadratic function. In addition, we investigate the question concerning the probability of explosion of the surplus process between claim arrivals in detail.

We also consider the classical risk model when all surplus is invested in risk-free and risky assets proportionally, and the price of the risky asset follows a jump process. Continuity and differentiability properties of the infinite-horizon and finite-horizon ruin probabilities are investigated. These results are used to obtain analytic expressions and uniform statistical estimates for the ruin probabilities, as well as to solve optimal control problems (see [4]).
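A minimal Monte Carlo sketch of the first set-up (an Euler discretization with purely illustrative parameters; the quadratic premium intensity c(x) = c0 + c2 x^2 and all other choices below are our own, not the authors'):

    # Euler-scheme Monte Carlo: premium intensity c(x) = c0 + c2 * x^2,
    # full investment of the surplus in a geometric Brownian motion,
    # compound Poisson claims with exponential severities.
    set.seed(1)
    ruin_prob <- function(x0 = 5, T = 10, dt = 1/100, c0 = 1.1, c2 = 0.01,
                          mu = 0.04, sigma = 0.2, lambda = 1, claim_mean = 1,
                          nsim = 1000) {
      ruined <- logical(nsim)
      for (k in 1:nsim) {
        x <- x0
        for (i in 1:(T / dt)) {
          x <- x + (c0 + c2 * x^2) * dt +                    # premium income
               x * (mu * dt + sigma * sqrt(dt) * rnorm(1))   # risky investment
          if (runif(1) < lambda * dt) x <- x - rexp(1, 1 / claim_mean)
          if (x < 0) { ruined[k] <- TRUE; break }
          if (x > 1e8) break   # surplus "explodes" upwards: no ruin on this path
        }
      }
      mean(ruined)
    }
    ruin_prob()   # estimated finite-horizon ruin probability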

Joint work with Yuliya Mishura.

References:
[1] Mishura, Yu., Perestyuk, M. and Ragulina, O.: Ruin probability in a risk model with a variable premium intensity and risky investments, (2014), arXiv:1403.7150.
[2] Frolova, A., Kabanov, Yu. and Pergamenshchikov, S. (2002), In the insurance business risky investments are dangerous. Finance and Stochastics 6(2), pp. 227-235.
[3] Pergamenshchikov, S. and Zeitouny, O. (2006), Ruin probability in the presence of risky investments. Stochastic Processes and their Applications 116(2), pp. 267-278.
[4] Ragulina, O. (2014), Maximization of the survival probability by franchise and deductible amounts in the classical risk model. Springer Optimization and Its Applications 90, Modern Stochastics and Applications, pp. 287-300.


Contributed Talk, Section: Life and Pension Insurance Mathematics
Wednesday (Sept. 10, 2014), 13:55 - 14:20, session / room C

REGIS Luca

IMT Institute for Advanced Studies, Lucca, Italy 

A three factor cohort-based model for the mortality surface

We refine the two-factor cohort-based model for the mortality surface introduced by Jevtić et al. (2013) and propose a three-factor model coupled with a new calibration procedure. The intensity of each generation is given by the sum of three correlated state variables following Ornstein-Uhlenbeck processes, and the correlation matrix is what identifies each generation. The fit of the mortality surface proves quite satisfactory, as does the forecasting ability. The addition of a third factor is needed to capture in-sample and out-of-sample old-age survival probabilities remarkably well. Other new features of the proposed model are the following: (i) we set the same observation point for all cohorts, implying that we use all available information for the calibration, (ii) the intensity initially observed is perfectly matched for each cohort, (iii) the survival function of each cohort is guaranteed to be decreasing over the life-span, (iv) the instantaneous correlation among intensities of different cohorts is meaningful. Different calibration procedures are illustrated and compared.

Joint work with Elena Vigna (University of Torino).

References:
[1] Jevtić, P., Luciano, E. and E. Vigna, 2013. Mortality surface by means of continuous time cohort models, Insurance: Mathematics and Economics 53, 122-133.


Poster Presentation, Section: Mathematical Finance with Applications in Insurance
Wednesday (Sept. 10, 2014), 17:30 - 19:00, at the Welcome Reception, main lecture hall (FH1)

RIBAS Carmen

Departament de Matemàtica Econòmica, Financera i Actuarial, University of Barcelona, Spain

An investment-consumption model with life insurance and time-inconsistent preferences

Life insurance and life settlements are studied in an investment-consumption model in a continuous-time setting. First, a consumption and portfolio rules problem with life insurance is described. The problem is studied for the cases of CRRA and CARA utility functions. Special attention is devoted to the effects of changing the instantaneous discount rate of time preference. The model is described for different (deterministic) discount functions. Time-consistent equilibria are derived for models with time-inconsistent preferences. Next, life settlements are introduced for the problem when there is just one risk-free asset. For the model with life settlements, the problem of finding the optimal moment for selling the contract is analysed.

Joint work with Jesus Marin-Solano.


Invited Plenary Talk
Thursday (Sept. 11, 2014), 9:00 - 9:50, main lecture hall (FH1)

RYAN Daniel

Head of Population Risk & Data Analytics R&D at Swiss Re, London, United Kingdom 

The future of human longevity

We are all aware of the personal challenges of living longer, and the concern over whether these additional years will be spent in a state of good health. Mortality experience analyses highlight the relative and absolute importance of gender, wealth, the presence of prior disease, and adverse risk factors. Insurers and reinsurers are constantly looking for new and better proxies for the underlying risk, as we have seen in recent years through the increased interest in profiling individuals based on where and how they live. In this talk we will look at the future of human longevity and consider some techniques which enable us to explore relationships in the human life-table data more fully.


Invited Plenary Talk
Friday (Sept. 12, 2014), 14:20 - 15:10, main lecture hall (FH1)

SCHACHERMAYER Walter

Head of the Mathematical Finance group, Department of Mathematics, University of Vienna, Austria 

From Doob's inequality to model-free super-hedging

The limitations of specific models of financial markets, such as the Black-Scholes model, are increasingly noted. On the other hand, the model-free approach has recently gained great interest. Given the prices of plain vanilla options, this approach allows one to deduce the possible prices of certain exotic options, using only the principle of no arbitrage. For example, for exotic options pertaining to the maximal price of an underlying asset during a given period, we find an interesting connection to the classical Doob inequalities pertaining to the maximal function of a martingale. These inequalities allow for a financial interpretation as a model-free super-hedge. We also present a recent general theorem, due to B. Bouchard and M. Nutz, relating martingale inequalities with pathwise super-hedges.
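As an illustration of the connection (the L^2 case, stated here in the trajectorial form known from the literature the talk builds on, so the constants are indicative): for a nonnegative price path S with running maximum \bar{S}_t = \sup_{u \le t} S_u, one has the pathwise bound

    \bar{S}_T^2 \;\le\; 4\,S_T^2 \;-\; 4\int_0^T \bar{S}_t \,\mathrm{d}S_t .

The integral is the outcome of a self-financing position of -4\bar{S}_t units in the asset; it has zero expectation when S is a martingale, so taking expectations recovers Doob's classical L^2 inequality \mathbb{E}[\bar{S}_T^2] \le 4\,\mathbb{E}[S_T^2], while the pathwise statement itself is a model-free super-hedge of the claim \bar{S}_T^2.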


Contributed Talk, Section: Non-Life Insurance Mathematics
Wednesday (Sept. 10, 2014), 14:30 - 14:55, session / room F

SCHELLDORFER Jürg

AXA Winterthur, Switzerland

GLM model diagnostics in claims reserving using R

Generalized linear models (GLMs) are widely used to determine the one-year or run-off uncertainty of outstanding loss reserves. An important issue is to check the adequacy of the assumptions of the underlying GLMs by means of model diagnostics. In this talk, we will present a wide range of GLM diagnostic tools which help in finding the appropriate GLM. We will discuss the main steps in identifying an adequate GLM capturing the relevant structure in the claims data. We use the statistical software R and show how it can be used for reserve risk modeling purposes. All techniques are illustrated on development triangles from a large Swiss insurance company.
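A minimal R sketch of the kind of workflow the talk discusses, on a simulated incremental triangle (data and model choices are illustrative): fit the over-dispersed Poisson chain-ladder GLM and inspect standard residual diagnostics.

    # Simulated incremental triangle (hypothetical data), over-dispersed
    # Poisson chain-ladder GLM, and two standard diagnostic plots.
    set.seed(1)
    tri <- expand.grid(acc = 1:6, dev = 1:6)
    tri <- tri[tri$acc + tri$dev <= 7, ]                 # observed upper triangle
    tri$incr <- rpois(nrow(tri), 1000 * exp(-0.5 * (tri$dev - 1)))

    fit <- glm(incr ~ factor(acc) + factor(dev),
               family = quasipoisson(link = "log"), data = tri)
    summary(fit)                                         # dispersion, coefficients

    res <- residuals(fit, type = "deviance")
    par(mfrow = c(1, 2))
    plot(fitted(fit), res, xlab = "fitted", ylab = "deviance residuals")
    qqnorm(res); qqline(res)                             # distributional check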

Joint work with Lukas Meier and Maximilien Vila, ETH Zürich.

References:
[1] Faraway, J. J. (2006). Extending the Linear Model with R. Chapman and Hall/CRC.
[2] McCullagh, P. and Nelder, J. A. (1989). Generalized Linear Models. Chapman and Hall.
[3] Wüthrich, M. V. and Merz, M. (2008). Stochastic Claims Reserving Methods in Insurance. Wiley Finance.
[4] R Core Team. R: A Language and Environment for Statistical Computing. http://www.R-project.org/. Vienna, Austria.


Contributed Talk, Section: Life and Pension Insurance Mathematics
Wednesday (Sept. 10, 2014), 13:30 - 13:55, session / room C

SCHILLING Katja

Institute of Insurance Science, University of Ulm, Germany 

Decomposing life insurance liabilities into risk factors

Life insurance liabilities are influenced by various risk sources such as equity, interest, and mortality. Although it is common to measure the total risk by advanced stochastic models, the question of how to allocate the randomness of life insurance liabilities to different risk sources is not very well understood. Nevertheless, in order to be able to devise adequate risk management strategies or to improve product design, insurance companies need to assess the relative importance of each risk source.

In the first part of the talk, we review several decomposition methods from the literature, among others variance decomposition, Taylor expansion, and sensitivity analysis. We demonstrate by simple examples that all these methods have some undesirable properties. Therefore, we derive an alternative decomposition method, primarily motivated by the martingale representation theorem, in order to allocate the total risk to different risk sources. All risk sources are modeled as diffusion processes, with the exception of the counting process describing the number of survivors. With the help of Itô's Lemma as well as the Clark-Ocone theorem from Malliavin calculus the decomposition is specified.

In the second part of the talk, we apply the proposed decomposition approach to several life insurance liabilities. In particular, we consider various types of annuity conversion options. Based on a framework similar to that of Kling et al. (2014), which allows a joint analysis of financial and longevity guarantees, we first determine the total risk implied by the considered annuity conversion options. Then we derive the respective risk contributions of the risk sources equity, interest and mortality by means of the proposed decomposition method. Since the resulting risk contributions are random variables, we are able to analyze the risk structure in detail; among others, we are able to quantify the relative importance of each risk source. We show that different product designs imply significantly different risk contributions of the three risk sources equity, interest and mortality, and that (partially) hedging the financial risks has different effects on these product-specific risk decompositions. This allows us to derive valuable insights for risk management and product design.

Joint work with Daniel Bauer (Georgia State University, USA), Marcus C. Christiansen (University of Ulm, Germany) and Alexander Kling (Institut für Finanz- und Aktuarwissenschaften (ifa), Germany).

References:
[1] Kling, A., Ruß, J., Schilling, K. (2014). Risk analysis of annuity conversion options in a stochastic mortality environment. ASTIN Bulletin, 44, pp 197-236.


Invited Plenary Talk
Thursday (Sept. 11, 2014), 16:40 - 17:30, main lecture hall (FH1)

SCHLÖGL Michael

Head of Motor Insurance Department and Actuarial Department Non-Life, Wiener Städtische Versicherung AG - Vienna Insurance Group, Vienna, Austria 

Milestones on the way to an internal model in non-life insurance - experiences from the dialogue with the supervisors

Vienna Insurance Group and especially Wiener Städtische want to apply for a (partial) internal model in non-life. The main challenges of the development in the context of Solvency II will be pointed out. Emphasis will be given to the feedback from the supervisors and how it influenced the improvement of the model. The talk will touch on the implementation of methods and processes to fulfil the requirements. Furthermore, statistical and practical topics like parameterization, validation, automation and documentation will be covered.


Contributed Talk, Section: Mathematical Finance with Applications in Insurance
Wednesday (Sept. 10, 2014), 16:05 - 16:30, session / room B

SCHMECK Maren

University of Cologne, Germany 

Exploring deviations of mean reverting price processes from standard models

Often simple standard processes do not capture price dynamics well. Here we propose a model for mean-reverting price dynamics in which we separate intrinsic properties from those initiated by external sources. The connection is made via a stochastic time change, where we take an Ornstein-Uhlenbeck process with jumps as base process and a time change that is absolutely continuous.

For example, electricity spot prices show mean-reverting behaviour and exhibit jumps. Changes in temperature influence the demand for electricity and thus the prices, and are therefore assumed to cause a stochastic mean reversion rate, stochastic volatility and stochastic jump intensity.

We specify properties of the model, consider simulation and propose a calibration procedure.
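A minimal simulation sketch of such a time-changed base process (all parameters and the choice of activity rate are illustrative assumptions, not the calibrated model):

    # Time-changed Ornstein-Uhlenbeck process with jumps: base dynamics
    # dX = theta (mu - X) dtau + sigma dW(tau) + jumps, run on an absolutely
    # continuous stochastic clock dtau = a(t) dt; here the activity rate a(t)
    # is a squared Brownian path, standing in for e.g. temperature effects.
    set.seed(1)
    n <- 5000; dt <- 1/250
    theta <- 5; mu <- 50; sigma <- 4; lam <- 10; jmean <- 5
    a    <- (1 + 0.5 * cumsum(sqrt(dt) * rnorm(n)))^2    # stochastic activity rate
    dtau <- a * dt                                       # time-change increments
    x <- numeric(n); x[1] <- mu
    for (i in 2:n) {
      jump <- if (runif(1) < lam * dtau[i]) rexp(1, 1 / jmean) else 0
      x[i] <- x[i - 1] + theta * (mu - x[i - 1]) * dtau[i] +
              sigma * sqrt(dtau[i]) * rnorm(1) + jump
    }
    plot((1:n) * dt, x, type = "l", xlab = "t", ylab = "spot price")

Note how the single activity rate a(t) simultaneously scales the mean reversion speed, the volatility and the jump intensity, which is exactly the phenomenon described above.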

Joint work with Svetlana Borovkova.


Invited Plenary Talk
Friday (Sept. 12, 2014), 9:00 - 9:50, main lecture hall (FH1)

SCHMIDLI Hanspeter

Professor for stochastics and actuarial mathematics at the Institute of Mathematics, University of Cologne, Germany 

On the calculation of risk measures based on dividends and capital injections

The classical measure of risk in non-life insurance is the ruin probability. Because a measure based on ruin probabilities is similar to the VaR in finance, it has several drawbacks. For example, the time to ruin and the deficit at ruin do not play a role. Therefore, several alternative measures have been introduced in the last few years. Many of these measures are based on dividends and capital injections. For the calculation of such a measure, Gerber-Shiu functions turn out to be helpful. In this talk, we develop a method to calculate Gerber-Shiu functions, and then study the problem of the calculation of the discounted expected value of capital injections.


Contributed Talk, Section: Life and Pension Insurance Mathematics
Thursday (Sept. 11, 2014), 15:40 - 16:05, session / room A

SCHMIDT Jan-Philipp

Institut für Finanz- und Aktuarwissenschaften (ifa), Ulm 

The best of both worlds: analysis of policyholder behavior with multivariate adaptive regression splines

The aim of this talk is to assess whether multivariate adaptive regression splines are convenient for analyzing and modeling policyholder behavior in health insurance. The method provides a quantification of the impact and importance of several covariates (age, contract duration, gender, sales channel, development of premiums, etc.) on the policyholders' decisions. To this end, we perform a case study for a very large portfolio of German long-term private health insurance contracts (n=180,000).

Multivariate adaptive regression splines inherit advantages of both worlds: on the one hand, they operate locally in high-dimensional problems similar to stepwise linear regression models, and on the other hand, the model building procedure resembles the tree-growing algorithm of classification and regression trees (see [1] for details). The model building procedure has very useful aspects for practical applications: it is adaptive, and its additive structure allows easy interpretation of results.
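A hedged sketch of such an analysis in R, using the CRAN package earth (an implementation of multivariate adaptive regression splines); the portfolio here is simulated and all variable names are hypothetical:

    # MARS with a logistic GLM layer for a binary lapse indicator.
    library(earth)
    set.seed(1)
    policies <- data.frame(age            = runif(5000, 20, 70),
                           duration       = runif(5000, 0, 30),
                           premium_change = rnorm(5000, 0.02, 0.05))
    policies$lapse <- rbinom(5000, 1, plogis(-2 - 0.03 * policies$age +
                                             4 * policies$premium_change))

    fit <- earth(lapse ~ age + duration + premium_change, data = policies,
                 glm = list(family = binomial),   # logistic link for lapse
                 degree = 2)                      # allow pairwise interactions
    summary(fit)   # selected hinge functions (the additive structure)
    evimp(fit)     # importance ranking of the covariates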

Understanding policyholder behavior is essential for the risk and portfolio management of health insurance contracts, because lapse represents a major risk in health insurance business, which is calculated with techniques similar to those of life insurance. Risk-based solvency regimes such as Solvency II require an assessment of the lapse risk in the calculation of solvency capital.

We compare our results with traditional actuarial approaches (i.e. lapse tables and other predictive models). Multivariate adaptive regression splines appear to be a very useful addition to the actuarial toolbox for the risk and portfolio management in insurance.

Joint work with Marcus C. Christiansen, Florian Ullrich and Hans-Joachim Zwiesler (Institute of Insurance Science, University of Ulm, Germany).

References:
[1] Hastie T., Tibshirani R., Friedman J. (2001) The Elements of Statistical Learning. Springer, New York


Contributed Talk, Section: Non-Life Insurance Mathematics
Thursday (Sept. 11, 2014), 16:05 - 16:30, session / room E

SELCH Daniela

Chair of Mathematical Finance, Technische Universität München, Germany

A multivariate claim number process with simultaneous claim arrivals

Recent events like floods, hurricanes, and other environmental catastrophes have shown the importance of accounting for dependence between different types of risks in insurance modeling. Neglecting dependence can lead to severe underestimation of risk from a portfolio perspective. We present a realistic, yet mathematically tractable model to describe the joint behavior of multiple claim arrival processes. The processes are derived from independent Poisson processes by introducing a Lévy subordinator as common stochastic clock. The model supports simultaneous claim arrivals and captures the often observable phenomenon of overdispersion in claim count data. A very efficient simulation routine is available, and distributional properties like the Laplace transform, probability mass function, and (mixed) moments can be derived in closed form. A convenient approximation for the loss in a large portfolio is given as well. Furthermore, it is studied how the model affects pricing and risk management of (re-)insurance products.
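A minimal simulation sketch of the construction (a gamma process as an illustrative choice of Lévy subordinator; parameters are arbitrary):

    # Two claim count processes driven by independent Poisson processes
    # evaluated at a common gamma subordinator (the common stochastic clock).
    set.seed(1)
    n <- 1000; dt <- 0.01; b <- 5
    dLambda <- rgamma(n, shape = b * dt, rate = b)  # subordinator increments, mean dt
    lam1 <- 2; lam2 <- 3
    dN1 <- rpois(n, lam1 * dLambda)                 # conditionally Poisson increments
    dN2 <- rpois(n, lam2 * dLambda)

    var(dN1) / mean(dN1)      # > 1: overdispersion from the random clock
    cor(dN1, dN2)             # > 0: dependence from the common clock
    sum(dN1 > 0 & dN2 > 0)    # increments with simultaneous arrivals in both lines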

Joint work with M. Scherer.


Contributed Talk, Section: Non-Life Insurance Mathematics
Thursday (Sept. 11, 2014), 15:40 - 16:05, session / room E

SORDO Miguel A.

Department of Statistics and Operations Research, University of Cádiz, Spain

Comparison of conditional distributions in portfolios of dependent risks

Given a portfolio of risks, we study the marginal behavior of the i-th risk under an adverse event, such as an unusually large loss in the portfolio or, in the case of a portfolio with a positive dependence structure, an unusually large loss for another risk. By considering some particular conditional risk distributions, we formalize, in several ways, the intuition that the i-th component of the portfolio is riskier when it is part of a positively dependent random vector than when it is considered alone. We also study, given two random vectors with a fixed dependence structure, the circumstances under which the existence of some stochastic orderings among their marginals implies an ordering among the corresponding conditional risk distributions.

Joint work with Alfonso Suárez-Llorens and Alfonso J. Bello.


Contributed Talk, Section: Life and Pension Insurance Mathematics
Wednesday (Sept. 10, 2014), 15:40 - 16:05, session / room E

SPREEUW Jaap

Cass Business School, City University London, UK 

Projecting mortality rates by a Markov chain

In this paper, we present a mortality projection model where future stochastic changes in mortality are driven by a finite state hierarchical Markov chain. This model is inspired by the one discussed in Norberg (2013). Rather than involving specific causes of death which may diminish over time, we will look at mortality in aggregate terms only. A basic parametric model like Makeham is chosen for the initial mortality, although generalizations can be accommodated.

In our Markov model, a jump of the process to the next state leads to a change in mortality over time. The focus is on fitting the model, which in general contains a relatively small number of parameters, to real mortality data from several countries. For fixed values of the transition intensities, the successive factors of mortality change are estimated using a criterion of minimum weighted average quadratic distance between observed and expected mortality rates. It is shown that the optimal factors are found very efficiently through a recursive scheme. The calculations are fast, even for a large number of states, and can be performed on a spreadsheet.
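An illustrative projection sketch under one possible reading of the model (initial Makeham mortality multiplied by a cumulative change factor per jump of the chain; for simplicity the sketch leaves the number of states unbounded, and all parameters are hypothetical):

    # Initial Makeham mortality mu(x) = A + B c^x, multiplied by a factor r
    # per jump of the chain, whose jumps arrive with intensity lambda.
    set.seed(1)
    A <- 5e-4; B <- 7e-5; cc <- 1.09     # Makeham parameters
    r <- 0.97; lambda <- 2               # change factor and jump intensity
    mu <- function(x, k) (A + B * cc^x) * r^k

    years <- 0:30
    K     <- cumsum(rpois(length(years), lambda))  # state reached in each year
    plot(years, mu(65, K), type = "s",
         xlab = "projection year", ylab = "projected mu(65)")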

In general, the change factors imply a reduction in mortality. As the number of states in the finite-state process is increased, a smoother fit is achieved with smaller mortality improvements happening more frequently.

For each country, generation effects are assessed by fitting the model for different generations. In order to forecast mortality rates, the change factors are extrapolated using time series techniques. This also permits the estimation of key mortality indices like complete expectation of life and annuity values.

This is joint work with Iqbal Owadally.

References:
[1] Norberg, R. (2013). Optimal hedging of demographic risk in life insurance. Finance and Stochastics 17(1), 197-222.


Invited Plenary Talk
Friday (Sept. 12, 2014), 11:00 - 11:50, main lecture hall (FH1)

STEFFENSEN Mogens

Professor in life insurance mathematics at the Department of Mathematical Sciences, University of Copenhagen, Denmark 

From utility optimization to good advice and good product design

We discuss three different problems, the structure of their solutions, and their relation to practical challenges concerning pension savings advice and product development. The three problems deal with the optimal consumption-investment plan for an individual or a household in the cases where a) the consumption-investment control in a stochastic framework is constrained to be deterministic, see [1], b) preferences are formulated in terms of growth in smooth consumption rather than consumption itself, see [2], and c) risk aversion and elasticity of intertemporal substitution are separated in the presence of uncertain lifetime and access to a life insurance market. The three problems and the structure of their solutions are quite different, but they share the ability to shed light on important practical questions in personal finance and insurance and unveil appealing theoretical challenges.

References:
[1] Christiansen, M. and Steffensen, M. (2013). Deterministic mean-variance-optimal consumption and investment. Stochastics 85, pp. 620-636.
[2] Bruhn, K. and Steffensen, M. (2013). Optimal smooth consumption and annuity design. Journal of Banking & Finance 37 (8), pp. 2693-2701.


Contributed Talk, Section: Risk Management and Solvency II
Thursday (Sept. 11, 2014), 14:30 - 14:55, session / room F

STEPCHENKO Darja

Riga Technical University, Latvia

Assessment of the risk function using the Analytical Network Process

The aim of this paper is to improve the analysis, assessment and management of the risk function in order to ensure a more sensitive and sophisticated risk coverage in accordance with the Solvency II regime requirements. The authors offer an approach to implementing and managing the risk function in line with the Solvency II directive framework by integrating the Analytical Network Process into the decision-making process of an insurance company. The Analytical Network Process is combined with SWOT analysis to evaluate an insurance company's activity and choice of strategy, with the purpose of ensuring further successful development and financial stability. The Analytical Network Process allows measuring the dependencies and feedbacks among decision elements and strategic factors in hierarchical or non-hierarchical structures; thus, it can be used to analyse complicated and sensitive interrelationships between decision levels and attributes. The most valuable advantage of using the Analytical Hierarchy Process in insurance is the possibility of including tangible and intangible strategic factors and elements in the decision-making process of an insurance company for specified functions or fields of steering, analysis and management. Moreover, the authors have prepared a case study on the practical usage of the Analytical Network Process using the example of a non-life insurance company. In the case study the authors identify the most preferable strategy through a detailed analysis of the risk function using the Analytical Network Process. According to the authors, the Analytical Network Process can be considered part of the risk culture of an insurance company, since it helps to increase the employees' knowledge of the nature of risk and its influence on the development and results of an insurance company. In order to achieve the stated objective, the authors use theoretical and methodological analysis of the scientific literature, as well as analytical, comparative, mathematical and statistical methods, to study the elements of an insurance company's risk function evaluation.

The paper is joint work with Prof. Irina Voronova (Riga Technical University).

Keywords: risk function, SWOT analysis, Analytical Hierarchy Process, Analytical Network Process, the Solvency II Directive


Poster Presentation, Section: Non-Life Insurance Mathematics
Wednesday (Sept. 10, 2014), 17:30 - 19:00, at the Welcome Reception, main lecture hall (FH1)

STØVE Bård

Department of Mathematics, University of Bergen, Norway

Recognizing and visualizing copulas: an approach using local Gaussian approximation

(draft version of a corresponding paper)

In this paper we examine the relationship between a newly developed local dependence measure, the local Gaussian correlation, and standard copula theory. We are able to describe characteristics of the dependence structure in different copula models in terms of the local Gaussian correlation. Further, we construct a goodness-of-fit test for bivariate copula models. An essential ingredient of this test is the use of a canonical local Gaussian correlation and Gaussian pseudo-observations, which make the test independent of the margins, so that it is a genuine test of the copula structure. A Monte Carlo study reveals that the test performs very well compared to a commonly used alternative test. We also propose two types of diagnostic plots which can be used to investigate the cause of a rejected null hypothesis. Finally, our methods are applied to a "classical" insurance data set.
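A hedged sketch of the basic estimation step, assuming the interface of the CRAN package localgauss (by the authors and co-workers): estimate the local Gaussian correlation of Gaussian pseudo-observations from a copula sample.

    # Clayton copula sample; its lower-tail dependence should surface as a
    # local correlation rho close to 1 in the lower-left region.
    library(localgauss)
    library(copula)
    set.seed(1)
    u  <- rCopula(1000, claytonCopula(2))   # copula sample on [0,1]^2
    z  <- qnorm(u)                          # Gaussian pseudo-observations
    lg <- localgauss(z[, 1], z[, 2])        # local Gaussian fits on a grid
    head(lg$par.est)                        # local parameter estimates, incl. rho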

This is joint work with Geir Drage Berentsen, Dag Tjøstheim and Tommy Nordbø.


Contributed Talk, Section: Life and Pension Insurance Mathematics
Wednesday (Sept. 10, 2014), 15:40 - 16:05, session / room C

STREFTARIS George

Department of Actuarial Mathematics and Statistics, Heriot-Watt University, Edinburgh, UK

Parameter and model uncertainty in the estimation of settlement delay and diagnosis rates in critical illness insurance

In critical illness insurance the delay between the diagnosis of a critical illness and the corresponding settlement is important, as it can potentially affect the liabilities of an insurance company. We develop appropriate generalised-linear-type models to investigate this delay, using data supplied by the Continuous Mortality Investigation in the UK [1]. More specifically, log-normal, Burr, generalised gamma and generalised beta error distributions are fitted to the delay data, which include a number of missing cases where the date of diagnosis is not recorded. The analysis is performed under a Bayesian framework, using Markov chain Monte Carlo estimation techniques. The methodology that we employ allows the inclusion of various claim risk factors (e.g. gender, benefit amount, policy duration, etc.) in a single model and provides estimates of non-recorded dates of diagnosis, while also accounting for parameter and model uncertainty. Bayesian variable selection is also performed, to identify the factors with the highest impact on settlement delay. Claim diagnosis rates are then estimated and smoothed, for individual or aggregated causes of claim, under a Poisson model with exposure adjusted for claims diagnosed in the observation period using the fitted claim delay distribution [2,3].

Joint work with Erengul (Ozkok) Dodd (University of Southampton, UK), Andrew Stott and Howard Waters (Heriot-Watt University, Edinburgh, UK).

References:
[1] Ozkok, E., Streftaris, G., Waters, H. R. & Wilkie, A. D. (2012). Bayesian modelling of the time delay between diagnosis and settlement for critical illness insurance using a Burr generalised-linear-type model. Insurance: Mathematics and Economics 50, 266-279.
[2] Ozkok, E., Streftaris, G., Waters, H.R., and Wilkie, A.D. (2012). Modelling critical illness claim diagnosis rates I: Methodology. Scandinavian Actuarial Journal, DOI:10.1080/03461238.2012.728537.
[3] Ozkok, E., Streftaris, G., Waters, H.R., and Wilkie, A.D. (2013). Modelling critical illness claim diagnosis rates II: Results. Scandinavian Actuarial Journal, DOI:10.1080/03461238.2012.728538.


Contributed Talk, Section: Mathematical Finance with Applications in Insurance
Wednesday (Sept. 10, 2014), 14:30 - 14:55, session / room A

SZÖLGYENYI Michaela

Johannes Kepler University Linz, Austria 

On dividend maximization in hidden Markov models

De Finetti proposed to use accumulated discounted future dividend payments as a valuation principle for a homogeneous insurance portfolio.

In this talk we will study the dividend maximization problem in a diffusion model whose drift is driven by a hidden Markov chain. This leads to a joint filtering and stochastic optimization problem. We will discuss both the Bayesian case, where the Markov chain does not jump, and the Markov switching case, and compare the results. Further, we will discuss the issue of admissibility of the resulting dividend policies, which leads to the question of existence and uniqueness of solutions for a certain class of stochastic differential equations.

We therefore state an existence and uniqueness result for this class of SDEs.

Joint work with Gunther Leobacher (University of Linz) and Stefan Thonhauser (University of Lausanne).


Contributed Talk, Section: Mathematical Finance with Applications in Insurance
Wednesday (Sept. 10, 2014), 13:55 - 14:20, session / room B

VAN WEVERBERG Christopher

Université Libre de Bruxelles, Belgium 

Explosion time for the Wishart process

The Wishart process was introduced by Bru (1991) and is a stochastic process on the cone of positive semidefinite symmetric matrices. Cuchiero et al. (2012) provided the mathematical foundation for stochastically continuous affine processes on this cone.
The Wishart process was brought into finance by Gourieroux, Jasiak, and Sufana (2004) and Gourieroux and Sufana (2003, 2004). It has found applications in multivariate option pricing (da Fonseca, Grasselli, and Tebaldi (2008)) and in the yield curve context (Gourieroux and Sufana (2003)).

The explosion time T associated with a fixed number θ and an asset price process is defined as the supremum over all time instants t such that the moments of order θ of the asset price remain finite.
Andersen and Piterbarg (2007) demonstrate that many stochastic volatility models such as the Heston model (1993) have the undesirable property that moments of order higher than one can become infinite in finite time. As arbitrage-free price computation for a number of important fixed income products involves forming expectations of functions with super-linear growth, such lack of moment stability is of significant practical importance.
By analyzing the Riccati system associated with affine processes (on the canonical state spaces of Dai and Singleton (2000)) via the transform formula, Glasserman and Kim (2010) fully characterize the regions of exponents in which exponential moments of a given process do not explode at any time or explode at a given time. Jena, Kim and Xing (2012) extend the results of Glasserman and Kim to the canonical state space ℝ_+^m × ℝ^n.

In our paper, we consider the Laplace transform and the integrated Laplace transform of the Wishart process. The integrated Laplace transform has direct applications in the computation of interest-rate derivatives. In particular, we consider the explicit Laplace transforms given for instance in Gnoatto and Grasselli (2014), Ahdida and Alfonsi (2013), Kang and Kang (2013) and Gauthier and Possamaï (2009), and study under which assumptions the domain of the Laplace transform can be extended.

Joint work with Griselda Deelstra (Université Libre de Bruxelles) and Martino Grasselli (University of Padova and ESILV).


Invited Plenary Talk
Friday (Sept. 12, 2014), 13:30 - 14:20, main lecture hall (FH1)

VANDAELE Nele

Risk Adviser, Group Risk Strategy Support, KBC Group, Brussels, Belgium 

Solvency II: challenges from a risk perspective

It is a well-known fact that the implementation of Solvency II is a major change for every European insurance company. Not only do the capital requirements become risk-based, but the linearity of the capital requirements under Solvency I is also abandoned and replaced by an aggregation of requirements based on the Var-Covar method. Furthermore, calculating the solvency position alone is not sufficient, as major importance is also attached to risk governance and to the risk reporting/disclosure requirements under Pillar II and Pillar III of Solvency II, respectively.
Firstly, we will zoom in on a number of implications of the Pillar I calculations, among others on the governance of an insurance company. Secondly, the implications for risk management due to the ORSA (Own Risk and Solvency Assessment) requirement are discussed.


Contributed Talk, Section: Non-Life Insurance Mathematics
Thursday (Sept. 11, 2014), 13:30 - 13:55, session / room E

VERBELEN Roel

KU Leuven, Belgium 

Loss modelling with mixtures of Erlang distributions

Modeling data on claim sizes is crucial when pricing insurance products. Such loss models require, on the one hand, the flexibility of nonparametric density estimation techniques to describe the insurance losses and, on the other hand, the ability to quantify the risk analytically. Mixtures of Erlang distributions with a common scale are very versatile, as they are dense in the space of positive continuous distributions (Tijms, 1994, p. 163). At the same time, it is possible to work analytically with this kind of distribution. Closed-form expressions for quantities of interest, such as the Value-at-Risk (VaR) and the Tail-Value-at-Risk (TVaR), can be derived, as well as appealing closure properties (Lee and Lin (2010), Willmot and Lin (2011) and Klugman et al. (2012)). In particular, using these distributions in aggregate loss models leads to an analytical form of the corresponding aggregate loss distribution, which avoids the need for simulations to evaluate the model.
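A minimal sketch of this analytic tractability: the distribution function of an Erlang mixture is an explicit weighted sum of gamma distribution functions, so a VaR follows by numerical inversion (weights, shapes and scale below are illustrative):

    # Erlang mixture with common scale: explicit cdf and VaR by inversion.
    w      <- c(0.5, 0.3, 0.2)    # mixing weights
    shapes <- c(1, 3, 7)          # integer (Erlang) shape parameters
    theta  <- 10                  # common scale

    pmix <- function(x) sapply(x, function(s)
      sum(w * pgamma(s, shape = shapes, scale = theta)))

    VaR <- function(p) uniroot(function(x) pmix(x) - p,
                               lower = 0, upper = 1e4)$value
    VaR(0.995)   # 99.5% quantile of the mixture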

In actuarial science, claim severity data is often censored and/or truncated due to policy modifications such as deductibles and policy limits. Lee and Lin (2010) formulate a calibration technique based on the EM algorithm for fitting mixtures of Erlangs with a common scale parameter to complete data. Here, we construct an adjusted EM algorithm which is able to deal with censored and truncated data, inspired by McLachlan and Peel (2001) and Lee and Scott (2012). Using the developed R program, we demonstrate the approximation strength of mixtures of Erlangs and model e.g. the left truncated Secura Re data from Beirlant et al. (2004), and use the mixtures of Erlangs approach to price an excess-of-loss reinsurance contract.

We next discuss how to model dependent losses using the multivariate extension of this class, introduced by Lee and Lin (2012), in case of censoring and/or truncation. We demonstrate the effectiveness of the improved fitting procedure on a real data set.

Joint work with Katrien Antonio (KU Leuven, University of Amsterdam), Lan Gong, Andrei Badescu and Sheldon Lin (University of Toronto).

References:
[1] Beirlant, J., Goegebeur, Y., Segers, J., Teugels, J., De Waal, D., and Ferro, C. (2004). Statistics of Extremes: Theory and Applications. Wiley Series in Probability and Statistics. Wiley.
[2] Klugman, S. A., Panjer, H. H., and Willmot, G. E. (2012). Loss models: from data to decisions, volume 715. Wiley.
[3] Lee, G. and Scott, C. (2012). EM algorithms for multivariate Gaussian mixture models with truncated and censored data. Computational Statistics & Data Analysis, 56(9):2816 – 2829.
[4] Lee, S. C. and Lin, X. S. (2010). Modeling and evaluating insurance losses via mixtures of Erlang distributions. North American Actuarial Journal, 14(1):107.
[5] Lee, S. C., and Lin, X. S. (2012). Modeling dependent risks with multivariate Erlang mixtures. ASTIN Bulletin, 42(1):153–180.
[6] McLachlan, G. and Peel, D. (2001). Finite mixture models. Wiley.
[7] Tijms, H. C. (1994). Stochastic models: an algorithmic approach. Wiley.
[8] Willmot, G. E. and Lin, X. S. (2011). Risk modelling with the mixed Erlang distribution. Applied Stochastic Models in Business and Industry, 27(1):2–16.


Poster Presentation, Section: Life and Pension Insurance Mathematics
Wednesday (Sept. 10, 2014), 17:30 - 19:00, at the Welcome Reception, main lecture hall (FH1)

VIDAL-MELIÁ Carlos

Department of Financial Economics and Actuarial Science, University of Valencia, Spain

Integrating retirement and long-term care (LTC) annuities using a notional defined contribution (NDC) framework

With the aim of improving the efficiency of LTC insurance and universalizing its coverage, this paper develops a multistate overlapping generations model (MOLG) that integrates retirement and LTC annuities into a generic NDC framework. The results achieved in the numerical example we present make sense and show an optimal integration of both annuities into the NDC framework. Our model has many practical implications for policymakers because it can be implemented without too much difficulty, it would help to mitigate individual risk, it would universalize LTC coverage at a "fixed" cost, it would make it easier to adapt the system to changing realities, and it would discourage politicians from making promises about future LTC benefits without the necessary funding support. The model is in line with observations made by Murtaugh et al. (2001) and Webb (2009) in that the risks of LTC needs and longevity are negatively correlated, and therefore, as Brown & Warshawsky (2013) and Davidoff (2009) point out, the integration of LTC and retirement annuities into the same system may broaden its appeal. Pitacco (2002) also proposed the establishment of an LTC insurance scheme embedded within the retirement pension system as a way of improving the diffusion of LTC insurance cover. Similarly, Barr (2010) gives sound reasons for extending social security to provide mandatory cover for LTC. Finally, the paper by Ventura-Marco & Vidal-Meliá (2014a) shows that the NDC framework could be useful for this purpose.

Joint work with Javier Pla-Porcel (GMS Management Solutions S.L.) and Manuel Ventura-Marco (University of Valencia).

Keywords: NDC, Pay-as-you-go, Retirement, LTC Insurance, Social Security


Contributed Talk, Section: Mathematical Finance with Applications in Insurance
Thursday (Sept. 11, 2014), 16:05 - 16:30, session / room B

VIEHMANN Thomas

B&W Deloitte GmbH, Cologne, Munich, Vienna

Your economic scenario set passed the tests. So is it good?

Insurance valuation and risk measurement heavily rely on actuarial projections. One of the assumptions with the largest impact on the results comes in the form of Monte Carlo paths of an economic model. We take a look at how life insurance companies use these economic scenarios for projections in day-to-day practice.

The projection models typically consist of two layers, with a complex insurance company model layer on top of the economic scenario layer. While the economic models themselves are very similar to those used in other areas of financial mathematics, life insurance projections require very long projection terms. Also, the projection horizon typically lies beyond the terms of the instruments available for model calibration. In some areas where there is no clear guidance from theory and markets, standard practice has been established. In other areas, the situation is less clear. We highlight practical challenges in economic modelling and the validation of the scenarios.
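One standard validation check in this context is a martingale test: under the risk-neutral measure, deflated asset prices should have expectation one at every horizon. A minimal R sketch on self-generated scenarios (a flat rate and a lognormal equity model, purely for illustration):

    # Martingale ("1 = 1") test on simulated risk-neutral equity scenarios:
    # the deflated price should average to 1 at every horizon, up to Monte
    # Carlo error.
    set.seed(1)
    n_sims <- 10000; horizon <- 40; r <- 0.02; sigma <- 0.2
    Z    <- matrix(rnorm(n_sims * horizon), n_sims, horizon)
    logS <- t(apply((r - 0.5 * sigma^2) + sigma * Z, 1, cumsum))  # annual steps
    defl <- exp(logS) / matrix(exp(r * (1:horizon)),
                               n_sims, horizon, byrow = TRUE)
    round(colMeans(defl), 3)                      # should stay close to 1
    round(apply(defl, 2, sd) / sqrt(n_sims), 3)   # Monte Carlo standard error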


Contributed Talk, Section: Life and Pension Insurance Mathematics
Thursday (Sept. 11, 2014), 13:30 - 13:55, session / room A

VIGNA Elena

University of Torino and Collegio Carlo Alberto, Italy 

Mean-variance target-based optimisation in DC plan with stochastic salary

We solve a mean-variance optimisation problem in the accumulation phase of a defined contribution pension scheme, in a multi-asset framework with stochastic investment opportunities and stochastic contribution. We provide the general form of the efficient frontier, the optimal investment strategy, and the ruin probability. We show that the mean-variance approach is equivalent to a "user-friendly" target-based problem, and that the ruin probability can be controlled through the choice of the target. We find closed-form solutions when the stochastic interest rate follows mean-reverting dynamics, the market consists of cash, one bond and one stock, and the salary follows a lognormal distribution with and without mean reversion. Numerical applications illustrate the impact of risk aversion and salary assumptions on the efficient frontier and the optimal investment strategy.

Joint work with: Francesco Menoncin (University of Brescia) and Simone Scotti (LPMA - Université Paris Diderot).

Keywords: Mean-variance approach, defined contribution pension scheme, efficient frontier, stochastic interest rate, risk aversion, ruin probability, martingale method


Contributed Talk, Section: Economics of Insurance
Thursday (Sept. 11, 2014), 15:40 - 16:05, session / room F

WAGNER Joël

Department of Actuarial Science Extranef, University of Lausanne, Switzerland 

What transaction costs are acceptable in life insurance products from the policyholders’ viewpoint?

In this paper we consider two life insurance contracts, a mutual fund and a risk-free investment as alternative investment forms. The first two products are a life insurance investment with point-to-point capital guarantee and a classical participating contract with annual (cliquet-style) interest rate guarantee and participation in the insurer's surplus. We suppose that the three risky investments are based on the same stochastic financial underlying, which can be characterized by a geometric Brownian motion. For the life insurance products we focus only on the savings part, and we assume that the insurance products are priced in a risk-adequate manner for a given upfront contribution and that possible transaction costs are carried by the policyholders. The policyholder assesses the different investment opportunities (life insurance products, fund product, risk-free investment) using various financial performance and utility measures. For selected types of risk profiles we assess the utility position and the preference of the investor for the different investments. On the basis of this analysis, we study what levels of costs are allowed to make all products equally rewarding for the investor (here: all investments yield the same utility). Our first findings indicate that insurance providers need to be very careful with the level of costs allowed in their products, depending on the risk preferences of the targeted policyholder groups.

Joint work with Hato Schmeiser (University of St. Gallen, CH).

Keywords: life insurance products, financial guarantees, utility measure, transaction costs


Contributed Talk, Section: Risk Management and Solvency II
Wednesday (Sept. 10, 2014), 14:55 - 15:20, session / room C

WAN Cheng

Towers Watson, China

Market consistent valuation of liabilities in relation to longevity risk in Switzerland: Application of Swiss coherent mortality model for Swiss pension funds and life insurance business: a practical approach

The life business of insurance companies and pension funds in Switzerland is exposed to longevity risk to a great extent. This is due to the legal guarantees of the Swiss pension legislation as well as steadily growing life expectancy. The Swiss Solvency Test (SST) for insurance companies is based on the market consistent valuation of liabilities. There have been discussions around the application of the SST to Swiss pension funds, and it can now be applied voluntarily.

About 25% of the Swiss population was born abroad, initially coming to Switzerland as a qualified workforce, and this trend continues. New investigations confirm that in Switzerland life expectancy depends on socio-economic factors like income, profession and education, and that for qualified workers it can be on average 3-5 years longer than for non-skilled workers. Mortality models should be able to take such features of the Swiss labor force into account and reflect them in the liability modelling for pensions, to ensure that the longevity risk on long-life pensions is priced in a market-consistent way. From our point of view this could help insurance companies develop products for pension buy-outs. At the moment there is a dearth of financial products to combat longevity risk, with a lack of buy-in and very limited buy-out solutions available for bulk annuity transactions from pension funds to insurance companies. The solutions that do exist frequently come at a very high price, and many pension funds are in deficit on a buy-out basis.

Our newly developed coherent stochastic mortality model is based on a reference population of 13 developed countries (incl. Switzerland). The projections for the Swiss population take into account longevity trends in this reference population and reflect the Swiss life expectancy level and its development over the last 40 years. The aim of this publication is to develop examples demonstrating how longevity risk could be evaluated within internal valuation and risk models for SST purposes, reflecting the risk specific to a pensioner population (either in a pension fund or in an insurance portfolio). Further, we would like to address basis risk, as there are only two publicly available generational mortality tables for pension funds (which do not differ substantially), and show how different socio-economic groups could be taken into account.

The research is a joint work with Ljudmila Bertschi (Towers Watson, Switzerland).

Keywords: Longevity risk, Mortality forecasts, Pension fund liabilities, Risk budgeting, Switzerland, LPP/BVG 2010, Plat model, Lee-Carter model, SAINT model, Human Mortality Database


Contributed Talk, Section: Mathematical Finance with Applications in Insurance
Thursday (Sept. 11, 2014), 13:55 - 14:20, session / room B

WANG Meng (Simon)

IFAM, University of Liverpool, UK

Optimal retention levels for excess-of-loss reinsurance

We propose a new method to find the optimal retention levels for an excess-of-loss reinsurance policy by keeping the probability of ruin at an acceptable level. Given that the minimum capital requirement of an insurance company over a specific time interval is equal to the Value at Risk of the infimum process, we introduce a new risk measure for individual risks. This risk measure enables us to find the retention levels of an excess-of-loss policy using a method developed in Assa (2014). More specifically, we can find the retention level by comparing the distortion function induced by the ruin probability with that of the risk premium. Furthermore, we develop a numerical algorithm to find the retention levels. Our result can be considered a complement to the existing literature in that we find the optimal retention level when the tolerance level for the ruin probability is given, as opposed to finding the ruin probability when the retention levels are given (see Albrecher and Haas (2011)).

References:
[1] Assa, Hirbod, On Optimal Reinsurance Policy with Distortion Risk Measures and Premiums (June 11, 2014). Available at SSRN: http://ssrn.com/abstract=2448678.
[2] Albrecher, H. and Haas, S. (2011). Ruin theory with excess of loss reinsurance and reinstatements. Applied Mathematics and Computation, 217(20).


Contributed Talk, Section: Risk Management and Solvency II
Thursday (Sept. 11, 2014), 13:30 - 13:55, session / room D

WEBER Stefan

Leibniz Universität Hannover, Germany

Systemic risk measures

Systemic risk refers to the risk that the financial system is susceptible to failures due to the characteristics of the system itself. The tremendous cost of this type of risk requires the design and implementation of tools for the efficient macroprudential regulation of financial institutions. The talk proposes a novel approach to measuring systemic risk.

Key to our construction is the philosophy that there is no distinction between risk and capital requirements, as recently described in Artzner, Delbaen & Koch-Medina (2009). Such an approach is ideal for regulatory purposes. The suggested systemic risk measures express systemic risk in terms of capital endowments of the financial firms. These endowments constitute the eligible assets of the procedure. Acceptability is defined in terms of cash flows to the entire society and specified by a standard acceptance set of an arbitrary scalar risk measure. Random cash flows can be derived conditional on the capital endowments of the firms within a large class of models of financial systems. These may include both local and global interaction. The resulting systemic risk measures are set-valued and allow a mathematical analysis on the basis of set-valued convex analysis.

We explain the conceptual framework and the definition of systemic risk measures, provide algorithms for their computation, and illustrate their application in numerical case studies - e.g. in the network models of Eisenberg & Noe (2001), Cifuentes, Shin & Ferrucci (2005), and Amini, Filipovic & Minca (2013).
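For concreteness, a minimal R sketch of the Eisenberg & Noe (2001) clearing mechanism mentioned above, for a hypothetical three-bank system: the clearing payment vector is the fixed point p = min(pbar, e + t(Pi) p), found here by simple iteration.

    # Eisenberg-Noe clearing payments for a hypothetical 3-bank system.
    pbar <- c(10, 8, 6)              # total interbank liabilities of each bank
    L <- rbind(c(0, 6, 4),           # L[i, j]: nominal amount i owes j
               c(4, 0, 4),
               c(3, 3, 0))
    Pi <- L / pbar                   # relative liabilities matrix
    e  <- c(2, 1, 5)                 # outside assets

    p <- pbar
    for (k in 1:100)
      p <- pmin(pbar, pmax(0, e + as.vector(t(Pi) %*% p)))
    p                                # p[i] < pbar[i] flags a defaulting bank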

This is joint work with Zachary G. Feinstein (Washington University in St. Louis) and Birgit Rudloff (Princeton University).


Contributed Talk, Section: Mathematical Finance with Applications in Insurance
Thursday (Sept. 11, 2014), 15:40 - 16:05, session / room B

WERNER Ralf

Augsburg University, Germany

Analysis of popular replicating portfolio approaches

We consider the most popular approaches for the construction of replicating portfolios for life insurance liabilities known as cash flow matching and terminal value matching. Solutions to these problems are derived analytically and a detailed comparison is provided. It is shown that the (unique) solutions have fair value equal to the fair value of liabilities. Then, the problems are generalized by relaxing the requirement of static replication to allow for dynamic investment strategies in a numeraire asset with zero present value. A relationship between the solutions to these generalized problems is established, which sheds new light on the relation of the original problems. Finally, it is proved that the fair values of the optimal solutions to the generalized problems remain equal to the fair value of liabilities. Based on numerical examples it is shown that the dynamic investment strategies can be reasonably approximated by linear regression, such that an out-of-sample implementation, as e.g. needed for MCEV and Solvency II calculations, is possible.
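A minimal sketch of the regression approximation mentioned in the last sentence (scenarios, liability and candidate assets are simulated and purely illustrative): terminal value matching by least squares.

    # Regress the simulated terminal liability value on the terminal values
    # of three candidate assets; the coefficients are static portfolio weights.
    set.seed(1)
    n   <- 10000
    S_T <- exp(rnorm(n, 0.02, 0.2))                  # risky asset at T
    A   <- cbind(zcb   = rep(1, n),                  # zero-coupon bond
                 stock = S_T,
                 put   = pmax(1 - S_T, 0))           # put option on the asset
    L_T <- 0.8 + 0.3 * pmax(S_T - 1, 0) + rnorm(n, 0, 0.01)  # liability at T

    fit <- lm(L_T ~ A - 1)    # no intercept: assets only
    coef(fit)                 # candidate replicating weights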

This is joint work with Jan Natolski (Augsburg University).


Contributed Talk, Section: Life and Pension Insurance Mathematics
Wednesday (Sept. 10, 2014), 14:30 - 14:55, session / room C

WIELAND Jochen

Ulm University & Institut für Finanz- und Aktuarwissenschaften (ifa), Germany

Optimizing participating life insurance product designs for both policyholders and insurers under risk based solvency frameworks

Traditional participating life insurance contracts with year-to-year (cliquet-style) guarantees have come under pressure in the current situation of low interest rates and volatile capital markets, in particular when priced in a market consistent valuation framework. In addition, such guarantees lead to rather high capital requirements under risk based solvency frameworks such as Solvency II or the Swiss Solvency Test (SST). Therefore, insurers in several countries have recently developed new forms of participating products with different forms of (typically weaker and/or lower) guarantees that are less risky for the insurer.

In Reuß et al. (2014) it has been shown that such alternative product designs indeed lead to higher capital efficiency, i.e. more stable profits and reduced capital requirements. As a result, the financial risk for the insurer is significantly reduced while preserving the main guarantee features perceived and requested by the policyholder.

Based on these findings, this paper combines the insurer’s and the policyholder’s view by introducing some additional surplus to compensate policyholders for the less valuable guarantees. We particularly calculate combinations of asset allocation and profit participation rate for the different product designs that lead to an identical expected profit for the insurer, but differ with respect to the insurer’s risk and solvency capital requirements as well as with respect to the real-world return distribution for the policyholder.

We show that alternative products can be designed in such a way that the insurer's expected profitability remains unchanged, the insurer's risk and hence capital requirement is substantially reduced, and the policyholder's expected return is increased. We argue that such products might be able to reconcile insurers' and policyholders' interests and serve as an alternative to the rather risky cliquet-style products.

Joint work with Andreas Reuß (ifa) and Jochen Ruß (ifa & Ulm University).

Keywords: Participating Life Insurance, Interest Rate Guarantees, Capital Efficiency, Asset Allocation, Profit Participation Rate, Policyholder’s Expected Return, Solvency Capital Requirements, Solvency II, SST, Market Consistent Valuation

References:
[1] Reuß, A., Ruß, J., Wieland, J. (2014) Participating Life Insurance Contracts under Risk Based Solvency Frameworks: How to increase Capital Efficiency by Product Design. To appear in Innovations in Risk Management, Springer 2014.


Contributed Talk, Section: Non-Life Insurance Mathematics
Wednesday (Sept. 10, 2014), 16:05 - 16:30, session / room A

WONG Bernard

University of New South Wales, Australia 

On the interplay of periodic and continuous strategies in the optimal dividends problem

In the classical optimal dividends problem, dividend decisions are allowed to be made at any point in time (according to a continuous strategy). Depending on the surplus process that is considered and whether dividend payouts are bounded or not, optimal strategies are generally of a band, barrier or threshold type. In reality, dividends are generally paid on a periodic basis. Because of this, the actuarial literature has recently considered strategies where dividends are only allowed to be distributed at (random) discrete times - according to a periodic strategy.

In this paper, we focus on the Brownian risk model. In this context, the optimal continuous and periodic strategies have previously been shown (independently of one another) to be of barrier type. We combine both approaches by considering a hybrid strategy whereby decisions are allowed to be made either at any time at a higher (proportional) transaction cost, or periodically at a lower cost. We study the interplay between both types of dividends, and consider the optimal hybrid strategy. Results are illustrated.

Joint work with Benjamin Avanzi and Vincent Tu.


Contributed Talk, Section: Risk Management and Solvency II
Thursday (Sept. 11, 2014), 14:30 - 14:55, session / room D

YAO Jing

Faculty of Economics, Vrije Universiteit Brussels, Belgium

How robust is the VaR of credit risk portfolios?

In this paper, we assess the magnitude of model uncertainty of credit risk portfolio models, i.e., what is the maximum or minimum Value-at-Risk (VaR) that can be justified given a certain set of information? In the unconstrained homogeneous case, i.e., when the default probabilities, exposures and recovery rates of the different loans are known (and equal) but not their interdependence, some explicit sharp bounds are available in the literature; see for instance Rüschendorf (1982). However, the problem is considerably more complicated when the portfolio is heterogeneous. In this regard, Puccetti and Rüschendorf (2012) and Embrechts et al. (2013) propose the rearrangement algorithm (RA) to approximate the unconstrained VaR bounds of a portfolio that can be heterogeneous. While their numerical examples provide evidence that the RA indeed makes it possible to approximate the sharp bounds accurately, their results also indicate that the gap between worst-case and best-case VaR numbers is typically very high.

Hence, sharpening the VaR bounds by taking dependence information into account is of great practical relevance, but also hard to do, because a lack of sufficiently rich default data means that knowledge of the joint default probabilities is typically out of reach. By contrast, the variance and perhaps also the skewness of the aggregated portfolio can be estimated statistically and can potentially be used as a source of dependence information allowing improvements of the VaR bounds. This idea is inherent in Bernard, Rüschendorf and Vanduffel (2013), who propose a version of the RA that incorporates a variance constraint and show that such a constraint has a significant impact on the VaR bounds. Our paper is a further development of theirs.

We propose an efficient algorithm to approximate sharp VaR bounds in the unconstrained case; in comparison with the earlier algorithms in the literature, the algorithm we propose is guaranteed to always converge to a candidate solution. Furthermore, we are able to adapt the algorithm so that it can deal with higher-order constraints (variance, skewness, kurtosis, ...). A feature of our approach is that we are able to incorporate statistical uncertainty on the moment constraints. We apply the results to real-world credit risk portfolios and we show that in all typical situations VaR assessments performed at high confidence levels (as in Solvency II and Basel III) are not robust and are subject to significant model uncertainty.
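For reference, a compact R sketch of the unconstrained rearrangement algorithm in the spirit of Puccetti and Rüschendorf (2012) and Embrechts et al. (2013) (the discretization and marginals are illustrative; this is not the authors' improved algorithm):

    # Each column of the upper-tail quantile matrix is rearranged oppositely
    # to the sum of the others; the worst-case VaR is approximated by the
    # minimal row sum of the rearranged matrix.
    worst_var <- function(qfuns, alpha = 0.995, n = 1000) {
      p <- alpha + ((1:n) - 0.5) / n * (1 - alpha)   # grid in [alpha, 1]
      X <- sapply(qfuns, function(q) q(p))           # quantile matrix
      for (iter in 1:100) {
        X_old <- X
        for (j in seq_len(ncol(X))) {
          s <- rowSums(X[, -j, drop = FALSE])
          X[, j] <- sort(X[, j])[rank(-s, ties.method = "first")]
        }
        if (identical(X, X_old)) break               # no column changed: done
      }
      min(rowSums(X))
    }

    # Example with three heterogeneous marginals:
    worst_var(list(function(p) qlnorm(p, 0, 1),
                   function(p) qgamma(p, shape = 2, rate = 1),
                   function(p) qexp(p, rate = 0.5)))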

Joint work with Carole Bernard (University of Waterloo), Ludger Rüschendorf (University of Freiburg) and Steven Vanduffel (Vrije Universiteit Brussels).

Keywords: Bernoulli risks, Rearrangement algorithm, Moment bounds, Value-at-Risk


Contributed Talk, Section: Non-Life Insurance Mathematics
Wednesday (Sept. 10, 2014), 14:55 - 15:20, session / room B

YUEN Kam Chuen

Department of Statistics and Actuarial Science, The University of Hong Kong, Hong Kong

Optimal proportional reinsurance for a risk model with dependent classes of insurance business

This research extends the work of Liang and Yuen on optimal dynamic reinsurance with dependent risks under the variance premium principle [1]. Under the expected value premium principle, we consider the optimal proportional reinsurance strategy for a risk model with dependent classes of insurance business. Specifically, we derive closed-form expressions for the optimal strategy and value function by maximizing the expected exponential utility, and present a numerical example to illustrate the impact of a model parameter on the optimal strategy.

This is a joint work with Prof. Zhibin Liang (Nanjing Normal University) and Prof. Ming Zhou (Central University of Finance and Economics, Beijing).

References:
[1] Liang, Z. and Yuen, K.C. (2014). Optimal dynamic reinsurance with dependent risks: variance premium principle. Scandinavian Actuarial Journal, DOI: 10.1080/03461238.2014.892899

