
Vienna Congress on Mathematical Finance - VCMF 2019
Mon–Wed, Sept. 9–11, 2019

VCMF Educational Workshop
Thu–Fri, Sept. 12–13, 2019

Program and Abstracts




 

 


Panel Discussion

Monday, September 9th, 2019, 17:10 - 18:00, Ceremonial Hall 1, building LC

"The big data revolution in mathematical finance"

Panellists:

  • Isabelle Flückiger 
    Managing Director - Accenture, leading the Financial Services Applied Intelligence practice and solutions in Austria, Switzerland, Germany & Russia
  • Nikolaus Hautsch 
    Professor of Finance and Statistics at the University of Vienna
  • Jonas Hirz
    Boston Consulting Group (BCG) and Head of the Data Science Section of the Actuarial Association of Austria (AVÖ)
  • Sebastian Jaimungal 
    Professor at the Department of Statistical Sciences of the University of Toronto, Director of the professional Masters of Financial Insurance program, Chair of the SIAM Activity Group on Financial Mathematics and Engineering (SIAG/FM&E), and Fields-CQAM lab leader for the Systemic Risk Analytics lab.
  • Hannes Mösenbacher
    Chief Risk Officer and Member of Management Board of Raiffeisen Bank International AG

Moderator:

  • Josef Teichmann 
    Professor of Financial Mathematics, ETH Zürich

Abstract:

The financial industry has enthusiastically and profitably embraced big data and computational algorithms such as machine learning to (sometimes seemingly) better substantiate trading and risk management decisions. Specific examples include algorithmic trading, sophisticated pattern recognition methods to find drivers of stock market evolution, neural network approaches to calibration, scenario generation, prediction and many more. This opens new and exciting directions for research in quantitative finance: the development of new statistical methods and tools to treat high-dimensional time series, research on automatic trading, as well as machine learning techniques for traditional fields such as hedging of derivatives or portfolio optimization. Of course, it also raises broader questions related to the impact of the big data revolution on financial stability. In the panel discussion we want to shed light on these new developments from the perspectives of the financial industry, regulators and academia.


 

 


Plenary Talks at the VCMF 2019 Conference


Invited plenary talk: Mon, 9:50-10:40, Ceremonial Hall

Beatrice Acciaio  (London School of Economics)

Causal optimal transport as a tool in time-dependent optimization

In this talk I will illustrate applications of optimal transport in some dynamic optimization problems. To take into account the time-dependent nature of these problems, the causality constraint is imposed on the transport plans. This expresses the fact that no anticipation of information should be used from the original space, when transporting mass to the target space. In particular, we will see how this constrained class of transports offers a natural way to learn the cost function in dynamic generative adversarial models, where the generator is trained to produce evolutions of processes.


Invited plenary talk: Tue, 9:00-9:50, Ceremonial Hall

Fred Espen Benth  (University of Oslo)

Stochastic volatility in energy and commodity forward markets

We introduce stochastic volatility models for the dynamics of forward curves in energy and commodity markets. These models are appropriately defined as operator-valued random processes, based on Ornstein-Uhlenbeck (OU) processes. We discuss Gaussian OU processes leading to Heston-type models, as well as non-Gaussian models extending the Barndorff-Nielsen & Shephard dynamics to infinite dimensions. Issues like simulation and estimation will be discussed, as well as pricing of options. The talk is based on collaborations with Heidar Eyjolfsson, Barbara Ruediger, Iben Simonsen, Andre Suess and Almut Veraart.


Invited plenary talk: Wed, 15:20-16:05, room A

Bruno Bouchard  (Université Paris-Dauphine)

Dual formulation for perfect hedging with price impact

We consider a general market model with price impact inspired by the linear impact model of Bouchard, Loeper and Zou (2016). In the Brownian diffusion case, we show that perfect hedging is feasible and that the super-hedging price can be characterized by a fully non-linear parabolic partial differential equation, from which one can deduce a posteriori a dual formulation under an additional convexity assumption. This dual formulation reduces to a simple optimal control problem, in which the control is the volatility of some auxiliary martingale process. By using pure probabilistic arguments, we then extend this duality result to a general path-dependent setting, and explain how to construct from it a perfect hedging strategy explicitly.


Invited plenary talk: Tue, 9:50-10:40, Ceremonial Hall

Christa Cuchiero  (WU Vienna)

Deep neural networks, generic universal interpolation and controlled differential equations

Deep neural networks can be viewed as discretizations of certain controlled ordinary differential equations. We make use of this perspective to link expressiveness of deep networks to the notion of controllability of dynamical systems. Using this connection, we study an expressiveness property that we call universal interpolation, and show that it is generic in a certain sense. We also show that universal interpolation holds for certain deep neural networks even if large numbers of parameters are left untrained, and instead chosen randomly. This lends theoretical support to the observation that training with random initialization can be successful even when most parameters are largely unchanged through the training.

On a related note we consider calibration of local stochastic volatility and yield curve prediction from finance and view these problems from an optimal control perspective. We parameterize the controls with neural networks and learn them directly from data without performing any kind of interpolation.

The talk is based on joint works with Wahid Khosrawi, Martin Larsson and Josef Teichmann.
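
To make the link between deep networks and controlled ODEs concrete, here is a minimal, self-contained sketch (not from the talk): a residual network whose layers are explicit Euler steps x_{k+1} = x_k + h f(x_k, θ_k) of a controlled ODE, with the layer parameters left random and untrained, echoing the random-initialization theme above. Dimensions and scalings are illustrative assumptions.

```python
import numpy as np

# A deep residual network read as an explicit Euler discretization of the
# controlled ODE dx/dt = f(x(t), theta(t)):  x_{k+1} = x_k + h * f(x_k, theta_k).
# Depth plays the role of time and the layer parameters are the controls.
# Dimensions, scalings and the choice to leave all parameters random (untrained)
# are illustrative assumptions, echoing the random-initialization theme above.

rng = np.random.default_rng(0)
d, depth = 8, 32
h = 1.0 / depth                                # step size ~ 1 / depth

# Random, untrained "controls" theta_k = (A_k, b_k), one per residual block.
A = rng.normal(scale=1.0 / np.sqrt(d), size=(depth, d, d))
b = rng.normal(scale=0.1, size=(depth, d))

def resnet_flow(x0):
    """Propagate an input through the residual network / discretized ODE flow."""
    x = x0
    for k in range(depth):
        # one residual block = one Euler step with vector field tanh(A_k x + b_k)
        x = x + h * np.tanh(A[k] @ x + b[k])
    return x

x0 = rng.normal(size=d)
print(resnet_flow(x0))
```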


Invited plenary talk: Mon, 16:10-17:00, Ceremonial Hall

Paul Embrechts  (ETH Zurich)

Hawkes graphs: A graphical tool for the analysis of multi-type event streams

Multi-type event streams naturally occur in many applications in insurance and finance; standard models are so-called Hawkes, or self-exciting, point processes. Examples include: insurance claims or credit loss events in different lines of business or markets, high-frequency trades executed by several traders, or, in medicine, neuron firing in distinct locations of the brain. An early field of applications of Hawkes processes concerned earthquake modelling. In this talk, based on joint work with Matthias Kirchner, I will discuss a graphical presentation of the statistical analysis of multi-type event streams. An application to high-frequency finance is discussed.
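
For readers unfamiliar with Hawkes processes, the following minimal sketch (not part of the talk) simulates a univariate self-exciting process with exponential kernel via Ogata's thinning algorithm; all parameter values are illustrative assumptions.

```python
import numpy as np

# Univariate Hawkes (self-exciting) process with exponential kernel, simulated by
# Ogata's thinning algorithm.  Intensity:
#   lambda(t) = mu + sum_{t_i < t} alpha * exp(-beta * (t - t_i)).
# The branching ratio alpha/beta < 1 is the expected number of events directly
# triggered by a single event.  All parameter values are illustrative assumptions.

rng = np.random.default_rng(1)
mu, alpha, beta, T = 0.5, 0.8, 1.2, 1000.0

def intensity(t, events):
    past = events[events < t]
    return mu + alpha * np.exp(-beta * (t - past)).sum()

def simulate_hawkes():
    events, t = [], 0.0
    while t < T:
        # valid upper bound for the (decaying) intensity until the next event
        lam_bar = intensity(t, np.array(events)) + alpha
        t += rng.exponential(1.0 / lam_bar)              # candidate point
        if t < T and rng.uniform() * lam_bar <= intensity(t, np.array(events)):
            events.append(t)                             # accept with prob lambda/lam_bar
    return np.array(events)

ev = simulate_hawkes()
# sanity check: the stationary mean intensity is mu / (1 - alpha/beta)
print(len(ev) / T, mu / (1 - alpha / beta))
```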


Invited plenary talk: Wed, 16:05-16:50, room A

Antoine Jacquier  (Imperial College London)

Deep learning and Path-dependent PDEs for rough local stochastic volatility

While rough volatility is now part of the common quantitative finance landscape, its enhanced version, adding a local volatility component, is yet to be grasped. The classical Markovian setup has attracted a lot of attention, whether through theoretical existence results or numerical analysis. However, the rough case (where the stochastic volatility component is driven by a Volterra process) creates further complications. We investigate here the pricing side of the problem. Writing the pricing problem as a solution to a path-dependent PDE, we show how to discretise it to a high-(but finite-)dimensional PDE; this is then tackled through the use of deep learning techniques, by simulating the corresponding BSDE. This is a joint work with Mugad Oumgari.


Invited plenary talk: Mon, 9:00-9:50, Ceremonial Hall

Sebastian Jaimungal  (University of Toronto)

Mean-Field Games with Differing Beliefs for Algorithmic Trading

Even when confronted with the same data, agents often disagree on a model of the real world. Here, we address the question of how interacting heterogeneous agents, who disagree on what model the real world follows, optimize their trading actions. The market has latent factors that drive prices, and agents account for the permanent impact they have on prices. This leads to a large stochastic game, where each agent's performance criterion is computed under a different probability measure. We analyse the mean-field game (MFG) limit of the stochastic game and show that the Nash equilibrium is given by the solution to a non-standard vector-valued forward-backward stochastic differential equation. Under some mild assumptions, we construct the solution in terms of expectations of the filtered states. We prove that the MFG strategy forms an ε-Nash equilibrium for the finite player game. Lastly, we present a least-squares Monte Carlo based algorithm for computing the optimal control and illustrate the results through simulations in a market where agents disagree on the model. Time permitting, this talk will also introduce some notions of Nash deep Q-learning as a model-free approach to solving this and related problems.
This is joint work with Philippe Casgrain, U. Toronto & Citadel.


Closing talk: Wed, 16:50-17:20, room A

Walter Schachermayer  (University of Vienna)

From discrete to continuous time models: Some surprising news on an old topic

We reconsider the approximation of the Black-Scholes model by discrete time models such as the binomial or the trinomial model.

We show that for continuous and bounded claims one may approximate the replication in the Black-Scholes model by trading in the discrete time models. The approximation holds true in measure as well as "with bounded risk", the latter assertion being the delicate issue. The remarkable aspect is that this result applies not only to the well-known binomial model, but to a much wider class of discrete approximating models, including, e.g., the trinomial model. By an example we show that we cannot do the approximation with "vanishing risk".

We apply this result to portfolio optimization and show that, for utility functions with "reasonable asymptotic elasticity", the solutions to the discrete time portfolio optimization converge to their continuous limit, again in a wide class of discretizations including the trinomial model. In the absence of "reasonable asymptotic elasticity", however, surprising pathologies may occur.

Joint work with David Kreps (Stanford University).


 

 


Plenary Lectures at the VCMF 2019 Workshop


Invited lecture: Thu, 9:10-10:40 and Thu, 10:40-12:40, room A

Julien Guyon  (Bloomberg, Columbia Univ., New York Univ.)

The Particle Method for Smile Calibration

The calibration of models to market smiles is a crucial issue for risk management in finance. This used to be done by running time-consuming optimization routines. In this short course we will show how particle methods very efficiently solve a wide variety of smile calibration problems, without resorting to any optimization:

  • Calibration of the local volatility model with stochastic interest rates
  • Calibration of stochastic local volatility models, possibly with stochastic interest rates and stochastic dividend yield
  • Calibration to the smile of a basket of multi-asset local volatility-local correlation models, possibly with stochastic volatility, stochastic interest rates, and stochastic dividend yields
  • Calibration of path-dependent volatility models and path-dependent correlation models
  • Calibration of cross-dependent volatility models

The particle method is a Monte Carlo method where the simulated asset paths interact with each other so as to ensure that a given market smile (or several of them) is fitted. PDE methods typically do not work for these high-dimensional models. The particle method is not only the first available exact simulation-based method for smile calibration. It is also robust, easy to implement, and fast (as fast as a standard Monte Carlo algorithm), as many numerical examples will show. As of today, it is the most powerful tool for solving smile calibration problems. Icing on the cake: there are nice mathematics behind the scenes, namely the theory of McKean stochastic differential equations, propagation of chaos, and a new Malliavin «disintegration by parts» formula.
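
As a rough illustration of the idea, the sketch below simulates a toy stochastic local volatility model and estimates the conditional expectation E[V | S_t = S] across the interacting particles by kernel regression, so that the leverage function satisfies the calibration condition L(t,S)² E[V | S_t = S] = σ_Dupire(t,S)². The flat Dupire surface, the Heston-type variance dynamics, the bandwidth rule and all parameter values are illustrative assumptions, not part of the course material.

```python
import numpy as np

# Minimal particle-method sketch for a stochastic local volatility (SLV) model
#   dS = L(t, S) * sqrt(V) * S dW,   dV = kappa*(theta - V) dt + xi*sqrt(V) dZ,
# with the calibration condition  L(t, S)^2 * E[V | S_t = S] = sigma_dup(t, S)^2.
# The conditional expectation is estimated across the interacting particles by a
# Nadaraya-Watson kernel regression.  The flat Dupire surface, the Heston-type
# variance dynamics, the bandwidth rule and all parameter values are illustrative.

np.random.seed(0)
N, n_steps, T = 2_000, 50, 1.0
dt = T / n_steps
S0, V0 = 100.0, 0.04
kappa, theta, xi, rho = 1.5, 0.04, 0.5, -0.7
sigma_dup = lambda t, S: 0.2 + 0.0 * S          # toy (flat) Dupire local volatility

S = np.full(N, S0)
V = np.full(N, V0)

def cond_exp_V(S, V, bandwidth):
    """Nadaraya-Watson estimate of E[V | S] evaluated at each particle."""
    w = np.exp(-0.5 * ((S[:, None] - S[None, :]) / bandwidth) ** 2)
    return (w @ V) / w.sum(axis=1)

for k in range(n_steps):
    t = k * dt
    h = 1.5 * S.std() * N ** (-0.2) + 1e-8      # rule-of-thumb bandwidth
    leverage = sigma_dup(t, S) / np.sqrt(np.maximum(cond_exp_V(S, V, h), 1e-8))
    dW = np.sqrt(dt) * np.random.randn(N)
    dZ = rho * dW + np.sqrt(1 - rho ** 2) * np.sqrt(dt) * np.random.randn(N)
    Vp = np.maximum(V, 0.0)
    S = S * np.exp(leverage * np.sqrt(Vp) * dW - 0.5 * leverage ** 2 * Vp * dt)
    V = np.abs(V + kappa * (theta - V) * dt + xi * np.sqrt(Vp) * dZ)

# by construction the particles reprice the (toy) local-volatility smile; e.g. ATM call:
print("ATM call estimate:", np.mean(np.maximum(S - S0, 0.0)))
```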

Short Bio of Julien Guyon:

Julien Guyon is a senior quantitative analyst in the Quantitative Research group at Bloomberg L.P., New York. He is also an adjunct professor in the Department of Mathematics at Columbia University and at the Courant Institute of Mathematical Sciences, NYU. Before joining Bloomberg, Julien worked in the Global Markets Quantitative Research team at Societe Generale in Paris for six years (2006-2012), and was an adjunct professor at Universite Paris 7 and Ecole des ponts. He co-authored the book Nonlinear Option Pricing (Chapman & Hall, CRC Financial Mathematics Series, 2014) with Pierre Henry-Labordere. His main research interests include nonlinear option pricing, volatility and correlation modeling, and numerical probabilistic methods. Julien holds a Ph.D. in Probability Theory and Statistics from Ecole des ponts. He graduated from Ecole Polytechnique (Paris), Universite Paris 6, and Ecole des ponts. A big soccer fan, Julien has also developed a strong interest in sports analytics, and has published several articles on the FIFA World Cup, the UEFA Champions League, and the UEFA Euro in top-tier newspapers such as The New York Times, Le Monde, and El Pais, including a new, fairer draw method for the FIFA World Cup.


Invited lecture: Thu, 16:00-17:30 and Fri, 17:30-10:30, room A

Huyên Pham  (University Paris VII Diderot)

Control of McKean-Vlasov systems and applications

This lecture is concerned with the optimal control of McKean-Vlasov equations, which has seen a surge of interest since the emergence of mean-field game theory. Such a control problem corresponds to the asymptotic formulation of an N-player cooperative game under mean-field interaction, and can also be viewed as an influencer strategy problem over an interacting large population. It finds various applications in economics, finance, and the social sciences for modelling the motion of socially interacting individuals and herd behavior. It is also relevant for dealing with intermittence questions that typically arise in risk management.

In the first part, I will focus on the discrete-time case, which extends the theory of Markov decision processes (MDP) to the mean-field interaction context. We give an application with explicit results to a problem of targeted advertising via social networks.

The second part is devoted to the continuous-time framework. We shall first consider the important class of linear-quadratic McKean-Vlasov (LQMKV) control problems, which provides a major source of examples and applications. We show a direct and elementary method for solving LQMKV problems explicitly, based on a mean version of the well-known martingale optimality principle in optimal control and on the completion-of-squares technique. Variations and extensions to the case of infinite horizon, random coefficients and common noise are also addressed. We illustrate our results with an application to a robust mean-variance portfolio selection problem. Next, we present the dynamic programming approach (in other words, the time consistency approach) for the control of general McKean-Vlasov dynamics. In particular, we introduce the recent mathematical tools that have been developed in this context: differentiability in the Wasserstein space of probability measures, the Itô formula along a flow of probability measures, and the Master Bellman equation. Extensions to stochastic differential games of McKean-Vlasov type are also discussed.

Affiliations of Huyên Pham:

  • Professor, University Paris VII Diderot
  • Laboratoire de Probabilités, Statistique et Modélisation (LPSM)
  • Senior research fellow at CREST (Center for Research in Economics and Statistics) - ENSAE (Grande École for Data Science, Economics, Finance and Actuarial science)
  • Chair of Applied Mathematics, John von Neumann Institute, Vietnam National University

In 2007 he was awarded the Louis Bachelier Prize of the Natixis Foundation for Quantitative Research and the Société de Mathématiques Appliquées et Industrielles (SMAI) by the French Academy of Sciences.


Invited lecture: Fri, 13:30-15:00 and Fri, 15:00-16:50, room A

Josef Teichmann  (ETH Zurich)

The role of randomness in deep learning

We consider (recurrent) deep neural networks from the point of view of controlled differential equations and develop a mathematical theory which explains the role of random initializations in learning procedures. Concepts from differential geometry and random projections meet in a surprising way.
This is based on joint works with Christa Cuchiero, Lukas Gonon, Martin Larsson, Lyudmilla Grigoryeva and Juan-Pablo Ortega.

Short Bio of Josef Teichmann:

Josef Teichmann, an Austrian mathematician, has been Professor of Financial Mathematics at ETH Zürich since 2009.

In 2006 he received the "START-Preis" of the Austrian Science Fund (FWF) - the highest Austrian award for young scientists of any discipline.
In 2014 he was awarded the "Louis Bachelier Prize" of the Natixis Foundation for Quantitative Research and the Société de Mathématiques Appliquées et Industrielles (SMAI) by the French Academy of Sciences.
In 2016 he received the "Bob Alting von Geusau Prize" sponsored by the AFIR-ERM Section of the International Actuarial Association (IAA) (together with Mario Wüthrich).

Since 2018 he has been principal investigator of the SNF-Project "Mathematical Finance in the light of Machine Learning" (duration: 4 years, around CHF 1,000,000).


Invited lecture: Thu, 14:00-15:30 and Fri, 15:30-12:20, room A

Luitgard A. M. Veraart  (London School of Economics)

Systemic risk in financial networks

The 2007-2008 financial crisis has highlighted the need for better risk management. In particular one needs to understand the interconnections in financial markets that can give rise to amplification and feedback effects. In this short course we show how network models can be used to model systemic risk in financial systems. We will look at different channels of systemic risk and mechanisms that can trigger default cascades/domino effects in financial systems. We show how network models can be used in macro-prudential stress tests to assess the stability of financial systems. We will also look at the problem of conducting such stress tests in situations where the financial system is not fully observable. We will conclude with some discussion on policy measures and new developments to enhance financial stability.

Short Bio of Luitgard A. M. Veraart:

Luitgard A. M. Veraart is Associate Professor at the Department of Mathematics of the London School of Economics and Political Science (LSE).

Her research interests focus on financial mathematics, particularly risk management in financial markets, financial networks, systemic risk, statistics in finance, optimal investment problems, modelling of energy markets and stochastic volatility models.

In 2019 she was co-winner of the Adams Prize awarded by the University of Cambridge, for achievements in the field of the Mathematics of Networks.
In 2016 she was awarded a George Fellowship by the Houblon-Norman Fund, Bank of England, for her research on systemic risk in financial networks.


 

 


Invited Talks at the VCMF 2019 Conference


Invited talk: Wed, 9:45-10:30, room A

Emmanuel Bacry  (École Polytechnique and Université Paris-Dauphine)

Disentangling and quantifying market participant volatility contributions

Thanks to access to labeled orders on the CAC 40 index future provided by Euronext, we are able to quantify market participants' contributions to the volatility in the diffusive limit. To achieve this result we leverage the branching properties of Hawkes point processes. We find that fast intermediaries (e.g., market-maker type agents) have a smaller footprint on the volatility than slower, directional agents. The branching structure of Hawkes processes also allows us to examine the degree of endogeneity of each agent's behavior. We find that high-frequency traders are more endogenously driven than other types of agents.


Invited talk: Tue, 14:30-15:15, room A

Peter Bank  (TU Berlin)

Trading with transient price impact

We present a tractable model for trading with transient price impact where an investor's trades adversely affect bid- and ask-prices for a risky asset and where market resilience drives the resulting spread back towards zero at an exponential rate. Similar to the literature on models with a constant spread (cf., e.g., Cvitanic and Karatzas (1996), Kallsen and Muhle-Karbe (2010), Czichowsky and Schachermayer (2017)), our dual description of super-replication prices involves the construction of suitable absolutely continuous measures with martingales close to the unaffected reference price. A novel feature in our duality is a liquidity-weighted L2-norm that enters as a measurement of this closeness and that accounts for strategy-dependent spreads. As applications, we establish optimality of buy-and-hold strategies for the super-replication of call options and we prove a verification theorem for utility-maximizing investment strategies. We also describe explicitly optimal strategies in the Bachelier version of our model with exponential utility.
(This is based on joint works with Yan Dolinsky and Moritz Voss.)


Invited talk: Tue, 16:30-17:15, room A

Zachary Feinstein  (Washington University in St. Louis)

Leverage and Capital Ratio Constrained Fire Sales and Price-Mediated Contagion

In this talk we consider a model for price-mediated contagion precipitated by a common exogenous stress to the banking book of all firms in the financial system. In this setting, firms are constrained so as to satisfy leverage or risk-weight based capital ratio requirements. In so doing we compare static and dynamic constructions of this financial contagion. In the case of risk-weighted capital ratio constraints, we use these models to find analytical bounds on the risk-weights for assets as a function of the market liquidity. Under these appropriate risk-weights, we find existence and uniqueness for the joint system of firm behavior and asset prices. A sensitivity analysis of the clearing solutions with respect to the system parameters is undertaken in the static setting to account for uncertainty in these parameters. Different liquidation strategies are also considered, with particular attention to proportional and utility-maximizing equilibrium strategies.
This is joint work with Tathagata Banerjee.


Invited talk: Mon, 11:20-12:05, room A

Damir Filipovic  (EPFL and Swiss Finance Institute)

A machine learning approach to portfolio risk management

This talk presents a computational framework for dynamic portfolio risk management building on machine learning with kernels. We learn the replicating martingale of a portfolio from a finite sample of its terminal cumulative cash flow. The learned replicating martingale is in closed form thanks to a suitable choice of the reproducing kernel Hilbert space. We develop an asymptotic theory and prove convergence and a central limit theorem. We also derive finite sample error bounds and concentration inequalities. Numerical examples show good results with relatively small training sample size. This talk is based on joint work with Lotfi Boudabsa and Lucio Fernandez-Arjona.


Invited talk: Mon, 12:05-12:50, room A

Kathrin Glau  (Queen Mary University of London)

Low Rank Tensor Approximation and Deep Learning for Parametric Option Pricing

Computationally intensive problems in finance are characterized by their intrinsic high-dimensionality, which is often paired with optimizations leading to nonlinearities. While classical numerical methods typically suffer from the curse of dimensionality, machine learning approaches promise to yield fairly accurate results with a method that is scalable in the dimensions. Computationally intensive training phases and the required large sets of training data pose some of the major challenges for the development of new and adequate numerical methods in finance. Merging classical numerical techniques with learning methods, we propose a new approach to option pricing in parametric models. The work is based on [1] and ongoing research with Paolo Colusso and Francesco Statti.

[1] K. Glau, D. Kressner, F. Statti: Low-rank tensor approximation for Chebyshev interpolation in parametric option pricing. preprint 2019.


Invited talk: Tue, 11:55-12:40, room A

Archil Gulisashvili  (Ohio University)

Gaussian Stochastic Volatility Models: Scaling Regimes, Large Deviations, and Moment Explosions

In a Gaussian stochastic volatility model, the evolution of volatility is described by a stochastic process that can be represented as a positive continuous function (the volatility function) of a continuous Gaussian process (the volatility process). If the volatility process exhibits fractional features, then the model is called a Gaussian fractional stochastic volatility model. Important examples of fractional volatility processes are fractional Brownian motion, the Riemann-Liouville fractional Brownian motion, and the fractional Ornstein-Uhlenbeck process. If the volatility process admits a Volterra type representation, then the model is called a Volterra type stochastic volatility model. Forde and Zhang established a large deviation principle for the log-price process in a Volterra type model under the assumptions that the volatility function is globally Hölder continuous and the volatility process is fractional Brownian motion. We prove a similar small-noise large deviation principle under significantly weaker restrictions. More precisely, we assume that the volatility function satisfies a mild local regularity condition, while the volatility process is any Volterra type Gaussian process. Moreover, we establish a sample path large deviation principle for the log-price process in a Volterra type model, and a sample path moderate deviation principle for general Gaussian models. In addition, applications are given to the study of the asymptotic behavior of exit probabilities, call pricing functions, and the implied volatility in various mixed scaling regimes.

Another problem addressed in our work concerns moment explosions for asset price processes. We prove that for such a process in an uncorrelated Gaussian stochastic volatility model, all the moments of order greater than one explode provided that the volatility function grows faster than linearly. Partial results are also obtained for correlated models.


Invited talk: Tue, 11:10-11:55, room A

Julien Guyon  (Bloomberg, Columbia Univ., New York Univ.)

The Joint S&P 500/VIX Smile Calibration Puzzle Solved: A Dispersion-Constrained Martingale Transport Approach

Since VIX options started trading in 2006, many researchers and practitioners have tried to build a model that jointly and exactly calibrates to the prices of S&P 500 (SPX) options, VIX futures and VIX options. So far the best attempts, which used continuous-time jump-diffusion models on the SPX, could only produce an approximate fit. In this article we solve this puzzle using a discrete-time model. Given a VIX future maturity T1, we build a joint probability measure on the SPX at T1, the VIX at T1, and the SPX at T2 = T1 + 30 days which is perfectly calibrated to the SPX smiles at T1 and T2, and the VIX future and VIX smile at T1. Our model satisfies the martingality constraint on the SPX as well as the requirement that the VIX at T1 is the implied volatility of the 30-day log-contract on the SPX. In particular, this proves that the SPX and VIX markets are jointly arbitrage-free. The discrete-time model is cast as a dispersion-constrained martingale transport problem and solved using the Sinkhorn algorithm, in the spirit of De March and Henry-Labordere (2019). We explain how to handle the fact that the VIX future and SPX option monthly maturities do not perfectly coincide, and how to extend the two-maturity model to include all available monthly maturities.
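
As background only, the following sketch shows the plain Sinkhorn iteration for entropically regularized transport between two discrete marginals, which is the algorithmic building block referred to above; the martingale and dispersion constraints of the actual SPX/VIX problem (which require additional dual potentials) are omitted, and the marginals and cost are toy choices.

```python
import numpy as np

# Plain Sinkhorn iterations for entropically regularized transport between two
# discrete marginals -- the algorithmic building block referred to above.  The
# martingale and dispersion constraints of the actual SPX/VIX problem (which
# require extra dual potentials) are omitted; marginals and cost are toy choices.

n, eps = 60, 0.1
x = np.linspace(-2.0, 2.0, n)                   # support of the first marginal
y = np.linspace(-2.0, 2.0, n)                   # support of the second marginal
a = np.exp(-0.5 * x ** 2); a /= a.sum()         # toy marginal weights
b = np.exp(-0.5 * (y / 1.2) ** 2); b /= b.sum()

C = (x[:, None] - y[None, :]) ** 2              # quadratic transport cost
K = np.exp(-C / eps)                            # Gibbs kernel

u, v = np.ones(n), np.ones(n)
for _ in range(2000):                           # alternate the two marginal projections
    u = a / (K @ v)
    v = b / (K.T @ u)

P = u[:, None] * K * v[None, :]                 # regularized transport plan
print("marginal errors:", np.abs(P.sum(1) - a).max(), np.abs(P.sum(0) - b).max())
print("transport cost :", (P * C).sum())
```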


Invited talk: Wed, 9:00-9:45, room A

Nikolaus Hautsch  (University of Vienna)

Limits to Arbitrage in Markets with Stochastic Settlement Latency

Distributed ledger technologies rely on consensus protocols confronting traders with random waiting times until the transfer of ownership is accomplished. This time-consuming settlement process exposes arbitrageurs to price risk and imposes limits to arbitrage. We derive theoretical arbitrage boundaries under general assumptions and show that they increase with expected latency, latency uncertainty, spot volatility, and risk aversion. Using high-frequency data from the Bitcoin network, we estimate arbitrage boundaries due to settlement latency of on average 124 basis points, covering 88% of the observed cross-exchange price differences. We document cross-exchange flows chasing arbitrage opportunities only if we account for transaction cost and settlement latency. Settlement through decentralized systems thus induces non-trivial frictions affecting market efficiency and price formation.


Invited talk: Wed, 13:30-14:15, room A

Blanka Horvath  (King's College and Imperial College London)

Deep pricing and hedging in rough volatility models and beyond

Rough volatility models are by their non-Markovian nature delicate to handle in a pricing and hedging context. On the other hand, deep neural networks provide powerful approximation tools to address these computational challenges. We highlight specific aspects of network architectures with regard to the adaptation of learning to such non-Markovian contexts and draw conclusions from these observations towards more data-driven modelling frameworks.


Invited talk: Tue, 17:15-18:00, room A

Ying Jiao  (Université Claude Bernard Lyon 1)

A branching process approach to default clustering modelling

We study sovereign default clustering in Europe using the approach of continuous-state branching processes with immigration (CBI). We present the theoretical framework in connection with affine processes and Hawkes processes, and explain why it provides a realistic and parsimonious way to describe the clustering phenomena observed in markets.


Invited talk: Mon, 14:55-15:40, room A

Sigrid Källblad  (KTH Royal Institute of Technology)

Stochastic control of measure-valued martingales with applications to robust finance

Motivated by robust pricing problems in mathematical finance, we consider in this talk a specific constrained optimisation problem. Our approach is based on reformulating this problem as an optimisation problem over so-called measure-valued martingales (MVMs) enabling the problem to be addressed by use of dynamic programming methods. In the emerging stochastic control problem MVMs appear as weak solutions to a specific SDE for which we prove existence of solutions; we then show that our control problem satisfies the Dynamic Programming Principle and relate the value function to a certain HJB-type equation. A key motivation for the study of control problems featuring MVMs is that a number of interesting probabilistic problems can be formulated as such optimisation problems; we illustrate this by applying our results to optimal Skorokhod embedding problems as well as robust pricing problems.
The talk is based on joint works with A. Cox, M. Larsson and S. Svaluto.


Invited talk: Tue, 15:15-16:00, room A

Eyal Neuman  (Imperial College London)

Deterministic vs Adaptive Strategies for Optimal Execution with Signals

We consider an optimal trade execution problem where a trader is looking at a short-term price predictive signal while trading. When the trader creates an instantaneous market impact, we first derive precise conditions on the model which determine when the optimal strategy is adaptive and when it should be deterministic (i.e. static). It is also shown that transaction costs of optimal adaptive strategies are substantially lower than the corresponding costs of the optimal static strategy. In the same spirit, in the case of transient impact it is shown that strategies that are not fully adaptive but where the trader observes the signal a finite number of times can dramatically reduce the transaction costs and improve the performance of the optimal static strategy.


Invited talk: Mon, 14:10-14:55, room A

Marcel Nutz  (Columbia University)

Fine Properties of the Optimal Skorokhod Embedding Problem

We study the problem of stopping a Brownian motion at a given distribution ν while optimizing a reward function that depends on the (possibly randomized) stopping time and the Brownian motion. Our first result establishes that the set T(ν) of stopping times embedding ν is weakly dense in the set R(ν) of randomized embeddings. In particular, the optimal Skorokhod embedding problem over T(ν) has the same value as the relaxed one over R(ν) when the reward function is semicontinuous, which parallels a fundamental result about Monge maps and Kantorovich couplings in optimal transport. A second part studies the dual optimization in the sense of linear programming. While existence of a dual solution failed in previous formulations, we introduce a relaxation of the dual problem and establish existence of solutions as well as absence of a duality gap, even for irregular reward functions. This leads to a monotonicity principle which complements the key theorem of Beiglbock, Cox and Huesmann. These results can be applied to characterize the geometry of optimal embeddings through a variational condition.
(Joint work with Mathias Beiglbock and Florian Stebegg)


Invited talk: Wed, 14:15-15:00, room A

Miklos Rasonyi  (Alfred Renyi Institute of Mathematics)

Optimal investment and correlation decay

We consider a trader in an illiquid market with instantaneous price impact which is a power function of the trading speed. He wishes to maximize his expected wealth on long horizons. We investigate how his performance depends on the correlation structure of the price. When the price is Gaussian with stationary increments, we prove an explicit relationship between the speed of correlation decay and the growth rate (as a function of the horizon) of the expected wealth. The results apply, in particular, to fractional Brownian motion.
Joint work with Paolo Guasoni and Zsolt Nika.


Invited talk: Wed, 11:45-12:30, room A

Josef Teichmann  (ETH Zurich)

Representing dynamics through random dynamical systems

We re-discover the paradigm of reservoir computing in stochastic or rough differential equations and prove generalization bounds. This opens a new perspective on randomness in recurrent neural networks and on the approximation of stochastic or rough differential equations. Applications to time series prediction and term structure problems are discussed.

Joint work with Christa Cuchiero, Lukas Gonon, Martin Larsson, Lyudmilla Grigoryeva and Juan-Pablo Ortega.


Invited talk: Wed, 11:00-11:45, room A

Luitgard A. M. Veraart  (London School of Economics)

When does portfolio compression reduce systemic risk?

We analyse the consequences of portfolio compression on systemic risk. Portfolio compression is a post-trading netting mechanism that reduces gross positions while keeping net positions unchanged and it is part of the financial legislation in the US (Dodd-Frank Act) and in Europe (European Market Infrastructure Regulation).
We show that the recovery rate in case of default plays a significant role in determining whether portfolio compression is potentially beneficial.
If recovery rates of defaulting nodes are zero then compression weakly reduces systemic risk.
We also provide a necessary condition under which compression strongly reduces systemic risk.
If recovery rates are positive we show that whether compression is potentially beneficial or harmful for individual institutions does not just depend on the network itself but on quantities outside the network as well. In particular we show that portfolio compression can have negative effects both for institutions that are part of the compression cycle and for those that are not. Furthermore, we show that while a given conservative compression might be beneficial for some shocks it might be detrimental for others. In particular, the distribution of the shock over the network matters and not just its size.
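
A tiny numerical illustration of what compression itself does (separate from the systemic-risk analysis of the talk): on a directed cycle of bilateral exposures, conservative compression subtracts the smallest exposure around the cycle from every leg, reducing gross notionals while leaving each institution's net position unchanged. The exposure matrix below is made up.

```python
import numpy as np

# Toy illustration of conservative portfolio compression (separate from the
# systemic-risk analysis of the talk): on a directed cycle of bilateral exposures,
# subtract the smallest exposure around the cycle from every leg.  Gross notionals
# fall while every institution's net position is unchanged.  Data are made up.

E = np.array([[0.0, 10.0, 0.0],   # E[i, j]: notional owed by institution i to j
              [0.0,  0.0, 7.0],
              [6.0,  0.0, 0.0]])

def net_positions(E):
    return E.sum(axis=0) - E.sum(axis=1)        # incoming minus outgoing per node

cycle = [(0, 1), (1, 2), (2, 0)]                # the compression cycle 0 -> 1 -> 2 -> 0
reduction = min(E[i, j] for i, j in cycle)      # conservative compression amount

E_compressed = E.copy()
for i, j in cycle:
    E_compressed[i, j] -= reduction

print("gross before / after:", E.sum(), E_compressed.sum())
print("net positions unchanged:", np.allclose(net_positions(E), net_positions(E_compressed)))
```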


 

 


Contributed Talks at the VCMF 2019 Conference

Abstracts are posted online once the registration fee has been paid.
See "News" for abstracts recently added to this webpage.


Contributed talk: Mon, 15:10-15:40, room C

Eduardo Abi Jaber (École Polytechnique)

Reconciling rough volatility with jumps

Starting from hyper-rough Volterra Heston models, for which we provide new existence and stability results for any Hurst index in (-1/2,1/2], we construct a Markovian approximating class of one-dimensional Heston-type models parametrized by a fast mean reversion speed and an unconstrained Hurst index. This class not only enjoys closed-form solutions for its Fourier-Laplace transform but is also able to mimic hyper-rough implied-volatility surfaces for any Hurst index in (-1/2,1/2]. More remarkably, for H smaller than -1/2, sending the mean reversion to infinity, we obtain convergence of the reversionary models towards Lévy processes such as the IG-NIG.
Joint work with Ryan McCrickerd.


Contributed talk: Wed, 14:00-14:30, room E

Vilen Abramov (BB&T)

CCAR-consistent yield curve stress testing: from Nelson-Siegel to machine learning

Following the financial crisis of 2008, the regulators established a stress testing framework known as the Comprehensive Capital Analysis and Review (CCAR). The regulatory stress scenarios are macroeconomic and do not define stress values for all the relevant risk factors. In particular, only three Treasury rates are captured in these scenarios – the 3-month, 5-year, and 10-year ones. Banks that are subject to CCAR need to complement CCAR scenarios by defining stress values for the missing risk factors. The Treasury rates corresponding to different nodes are highly correlated. Hence, the changes in the three Treasury rates defined in the regulatory scenarios should somehow impact the other rates. In this paper, we will focus on CCAR-consistent Treasury yield curve stress testing. We will consider three modeling approaches that allow one to "build" the stressed curves under CCAR scenarios. We will start with the Nelson-Siegel (NS) approach, a well-known yield curve smoothing technique. We will show how to convert the changes in the three Treasury rates into changes in the NS parameters in order to "build" a stressed curve. We will then review a more common approach, namely principal component analysis (PCA). The PCA approach fits the scenario generation problem better because it explicitly takes into consideration the correlation between historical changes in rates corresponding to different nodes. Surprisingly, the naive NS approach outperforms PCA. In the case of PCA, we will demonstrate how to convert the changes in the three Treasury rates into changes in the principal components. Finally, we will review the artificial neural network (ANN) approach, a well-known machine learning technique. This approach will allow us to directly link the changes in the three Treasury rates to the changes in the other rates. The performance of these approaches will be assessed via back-testing.
Joint work with Christopher Atchison and Zhengye Bian.
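
As a rough illustration of the Nelson-Siegel step described above, the sketch below solves for the three NS factors from stressed 3-month, 5-year and 10-year rates (with the decay parameter held fixed, a simplifying assumption) and rebuilds the whole stressed curve. The numerical rates and the decay parameter are hypothetical and not taken from the paper.

```python
import numpy as np

# Nelson-Siegel sketch of the curve-completion step: given stressed values of the
# 3-month, 5-year and 10-year Treasury rates, solve for the three NS factors
# (level, slope, curvature) with the decay parameter held fixed, then rebuild the
# whole stressed curve.  Stressed rates and decay parameter are hypothetical.

def ns_loadings(tau, lam=0.6):
    """Nelson-Siegel factor loadings at maturities tau (in years)."""
    x = lam * tau
    slope = (1.0 - np.exp(-x)) / x
    return np.column_stack([np.ones_like(tau), slope, slope - np.exp(-x)])

# maturities pinned down by the regulatory scenario and their stressed levels
pinned_tau = np.array([0.25, 5.0, 10.0])
stressed_rates = np.array([0.001, 0.012, 0.021])        # hypothetical stressed rates

# three equations, three unknown factors: an exactly determined linear system
betas = np.linalg.solve(ns_loadings(pinned_tau), stressed_rates)

# rebuild the full stressed curve at the remaining nodes
all_tau = np.array([0.25, 0.5, 1, 2, 3, 5, 7, 10, 20, 30], dtype=float)
for tau, r in zip(all_tau, ns_loadings(all_tau) @ betas):
    print(f"{tau:5.2f}y  {100 * r:6.3f}%")
```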


Contributed talk: Tue, 17:00-17:30, room B

Sühan Altay (Vienna University of Economics and Business)

Optimal convergence trading with unobservable pricing errors

We study a dynamic portfolio optimization problem related to convergence trading, which is an investment strategy that exploits temporary mispricing by simultaneously buying relatively underpriced assets and selling short relatively overpriced ones with the expectation that their prices converge in the future. We build on the model of Liu and Timmermann (2013) and extend it by incorporating unobservable Markov-modulated pricing errors into the price dynamics of two co-integrated assets. We characterize the optimal portfolio strategies in full and partial information settings both under the assumption of unrestricted and beta-neutral strategies. By using the innovations approach, we provide the filtering equation that is essential for solving the optimization problem under partial information. Finally, in order to illustrate the model capabilities, we provide an example with a two-state Markov chain.
Joint work with Katia Colaneri and Zehra Eksi.


Contributed talk: Mon, 11:50-12:20, room C

Takuji Arai (Keio University)

Pricing and hedging of VIX options for Barndorff-Nielsen and Shephard models

VIX call options for the Barndorff-Nielsen and Shephard models will be discussed. Derivatives written on the VIX, the most popular volatility measure, are traded very actively. In this talk, we give representations of the VIX call option price for the Barndorff-Nielsen and Shephard models: non-Gaussian Ornstein-Uhlenbeck type stochastic volatility models. Moreover, we provide representations of the locally risk-minimizing strategy constructed as a combination of the underlying riskless and risky assets. Note that the obtained representations lend themselves to an efficient numerical method based on the fast Fourier transform. Numerical experiments will therefore be presented in the last part of this talk.


Contributed talk: Wed, 14:30-15:00, room E

Axel Alejandro Araneda (Frankfurt Institute for Advanced Studies)

The fractional Jump-to-Default CEV model: pricing CDS with memory

The CEV model is a well-known formulation in the option pricing literature which extends the classical Black-Scholes approach to address two empirical facts in financial markets: the negative relationship between price and volatility (leverage effect) and the volatility skew. Another important feature of the CEV model is that it allows for bankruptcy. The CEV model is therefore a natural candidate for pricing Credit Default Swaps (CDS), which have become an important and widely used tool in credit risk management. However, in the CEV model the probabilities of hitting zero are very small for realistic parameter values. To address this, Carr and Linetsky added to the CEV dynamics an affine default intensity given as a function of the instantaneous variance, calling the resulting process the Jump-to-Default extended CEV model (JD-CEV). This specification preserves the tractability of the CEV model.
However, several researchers find that CDS exhibit long-range dependence, or a memory effect. To deal with this issue, this research extends the JD-CEV model by using a fractional Brownian motion instead of a classical one. With the help of fractional Itô calculus and the fractional Fokker-Planck equation, we reduce the problem to a non-stationary Feller process with time-varying coefficients, yielding a solution for CDS pricing with memory. In addition, convergence to the fractional and classical CEV approaches is established.
Joint work with Nils Bertschinger.


Contributed talk: Wed, 14:30-15:00, room B

Maria Arduca (Università degli Studi di Milano Bicocca)

A simple approach to duality for systemic risk measures

Broadly speaking, the goal of a systemic risk measure is to secure, through the injection of capital, a system of n financial institutions (or subgroups of the same institution). In the literature, it is standard to define an acceptable level of security by way of an aggregation function of the n individual equity profiles and an acceptability criterion for the aggregated system. Two types of systemic risk measures have been mainly investigated so far depending on whether the aggregation step precedes or is subsequent to the capital injection step. Dual representations for such risk measures have been recently studied, eg in [1], [2], [3] and [4]. The approach of these papers relies on Lagrangian duality techniques and is somewhat different from the standard duality approach employed in risk measure theory.
The objective of this talk is to show that, in the spirit of [5], standard Fenchel-Moreau techniques can be successfully used to establish dual representations for systemic risk measures on general spaces of random variables. This has two advantages. First, it hopefully makes the duality results more accessible to the broader risk measure community.
Second, it allows one to view the standard representations for cash-additive risk measures as special cases of representation results for systemic risk measures. As a byproduct, we obtain a simple proof of the dual representation of utility-based risk measures in a univariate setting.

References:
[1] Armenti and Crépey and Drapeau and Papapantaleon (2018), Multivariate Shortfall Risk Allocation and Systemic Risk, SIAM Journal on Financial Mathematics.
[2] Rudloff and Ararat (2019), Dual representation for systemic risk measures, arXiv:1607.03430v2.
[3] Kromer and Overbeck and Zilch (2016), Systemic risk measures on general measurable spaces, Mathematical Methods of Operations Research.
[4] Chen and Iyengar and Moallemi (2013), An Axiomatic Approach to Systemic Risk, Management Science.
[5] Farkas and Koch-Medina and Munari (2015), Measuring risk with multiple eligible assets, Mathematics and Financial Economics.


Contributed talk: Tue, 17:30-18:00, room C

Michele Azzone (Politecnico di Milano)

Additive normal tempered stable processes for equity derivatives and power law scaling

We introduce a simple model for equity index derivatives. The model generalizes the well-known Lévy normal tempered stable processes (e.g. NIG and VG) with time-dependent parameters. It accurately fits equity index implied volatility surfaces over the whole time range of quoted instruments, including short-horizon (a few days) and long-horizon (years) options. We prove that the model is an additive process constructed using an additive subordinator, which allows the use of classical Lévy-type pricing techniques. We discuss calibration issues in detail: we show that, in terms of mean squared error, calibration is on average two orders of magnitude better than for both Lévy processes and self-similar alternatives. We show that, even though the model loses the classical stationarity property of Lévy processes, it exhibits interesting scaling properties for the calibrated parameters.
Joint work with Roberto Baviera.


Contributed talk: Tue, 12:10-12:40, room E

Julio Backhoff-Veraguas (University of Vienna and TU Wien)

Adapted Wasserstein distances and their role in mathematical finance

The problem of model uncertainty in financial mathematics has received considerable attention in the last years. In this talk I will follow a non-parametric point of view, and argue that an insightful approach to model uncertainty should not be based on the familiar Wasserstein distances. I will then provide evidence supporting the better suitability of the recent notion of adapted Wasserstein distances (also known as Nested Distances in the literature). Unlike their more familiar counterparts, these transport metrics take the role of information/filtrations explicitly into account.
Based on joint work with M. Beiglböck, D. Bartl and M. Eder.


Contributed talk: Tue, 15:00-15:30, room E

Emilio Barucci (Politecnico di Milano)

On the design of Sovereign Bond-Backed Securities

Among the interventions to cope with the legacy of the financial crisis, the European Systemic Risk Board promoted a High level task force on safe assets. The Task force has advanced the proposal to build Sovereign Bond-Backed Securities (SBBS), a securitization of government bonds, see ESRB High-level task force on safe assets (2018) Sovereign bond-backed securities: a feasibility study, Volume I and II.
An attractive feature of these assets is that they could be issued by private intermediaries and that they do not hinge on joint liability of national states, as Eurobonds would. These assets would address some of the issues encountered during the euro crisis: the existence of a safe asset such as the SBBS could help to weaken the bank-sovereign vicious circle, as banks could hold SBBS as a safe asset instead of government bonds, which in many countries are not safe because they incorporate default risk.
The goal of this paper is to test the robustness of the SBBS proposal. We concentrate on its risk features. We show the following results:
 – Considering debt weights rather than GDP weights would render a higher yield rate for all tranches
 – Considering a positive recovery rate would decrease the yield rate of senior tranches and would increase the yield rate of junior tranches
 – Increasing the correlation level would increase the yield rate of senior tranches and would decrease the yield rate of junior tranches
 – Considering a block correlation structure would increase the yield rate of senior tranches and would decrease the yield rate of junior tranches.
We also investigate the feasibility of safe bonds without tranching.
These results show that the correlation structure is not an obstacle to obtaining a safe asset, but it may push the riskiness of junior bonds up significantly.
Joint work with Damiano Brigo, Daniele Marazzina and Marco Francischello.


Contributed talk: Tue, 15:30-16:00, room D

Roberto Baviera (Politecnico di Milano)

A closed formula for illiquid corporate bonds and an application to the European market

We deduce a simple closed formula for illiquid corporate coupon bond prices when liquid bonds with similar characteristics (e.g. maturity) are present in the market for the same issuer. The key model parameter is the time-to-liquidate a position, i.e. the time that an experienced bond trader takes to liquidate a given position on a corporate coupon bond.
The option approach we propose for pricing bonds illiquidity is reminiscent of the celebrated work of Longstaff (1995) on the non-marketability of some non-dividend-paying shares in IPOs. This approach describes a quite common situation in the fixed income market: it is rather usual to find issuers that, besides liquid benchmark bonds, issue some other bonds that either are placed to a small number of investors in private placements or have a limited issue size.
The model considers interest rate and credit spread term structures and their dynamics. We show that illiquid bonds present an additional liquidity spread that depends on the time-to-liquidate aside from credit and interest rate parameters.
We provide a detailed application for two issuers in the European market.
Joint work with Aldo Nassigh and Emanuele Nastasi.


Contributed talk: Mon, 11:50-12:20, room D

Nils Bertschinger (Frankfurt Institute for Advanced Studies)

Financial cross-ownership as a structural explanation for rising stock correlations in crisis times

Systemic risk has been studied extensively since the latest financial crisis. On the one hand, empirical measures of systemic risk have been proposed and applied to market data. A common idea underlying these measures states that asset prices tend to fall collectively during crisis times. Correspondingly, the correlation, in a dynamical and conditional framework, between different assets as well as between single assets and the overall market has been proposed by several scholars as a measure of contagion in turbulent periods. Nowadays, it is an established “stylized fact” that asset correlations rise during crisis times.
On the other hand, network models have made important contributions towards understanding systemic risk from a theoretical perspective. The seminal work of Eisenberg & Noe is especially well known in this respect, forming the basis for numerous studies of financial contagion arising from cross-ownership of debt. Suzuki's extension of this basic model, allowing for cross-ownership of debt as well as equity, provides an interesting alternative viewpoint connecting the valuation of interbank contracts with derivative pricing. The famous Merton model, which relates the debt and equity of a firm to European put and call options respectively, provides an established framework for pricing the credit risk of single firms. Yet, in the case of cross-ownership, firms cannot be considered individually, and risk management has to take into account the potential for financial contagion. Indeed, all firms have to be valued collectively and self-consistently when cross-ownership is taken into account.
Here we show that network valuation models can readily explain the stylized fact concerning the rise of correlations during crisis times. For instance, the bivariate Suzuki model, with cross-ownership of debt only and one external business asset per company, shows that rising correlation emerges as a direct consequence of financial cross-holdings when considered from an ex-ante perspective. In particular, using a combination of analytic results and numerical simulations, we prove that the risk-neutral values of both firms are strongly correlated when they are insolvent – even with uncorrelated business assets. This provides a structural explanation for rising correlations, as firm values remain uncorrelated as long as both firms are solvent.
Joint work with Axel Alejandro Araneda.
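
For reference, the Eisenberg & Noe clearing mechanism mentioned above can be computed by the standard fictitious-default fixed-point iteration p ← min(p̄, e + Πᵀp); the three-bank liability matrix and outside assets in the sketch below are made-up numbers, not data from the paper.

```python
import numpy as np

# Eisenberg-Noe clearing payment vector via the fictitious-default fixed-point
# iteration  p <- min(p_bar, e + Pi^T p),  where Pi[i, j] is the fraction of bank
# i's total obligations owed to bank j.  The three-bank example data are made up.

L = np.array([[0.0, 8.0, 2.0],    # L[i, j]: nominal liability of bank i to bank j
              [4.0, 0.0, 6.0],
              [3.0, 3.0, 0.0]])
e = np.array([2.0, 1.0, 4.0])     # outside (external) assets of each bank

p_bar = L.sum(axis=1)             # total obligations per bank
Pi = L / p_bar[:, None]           # relative liabilities (all p_bar > 0 here)

p = p_bar.copy()                  # start from full payment and iterate downwards
for _ in range(1000):
    p_new = np.minimum(p_bar, e + Pi.T @ p)
    if np.max(np.abs(p_new - p)) < 1e-12:
        break
    p = p_new

print("clearing payments:", p)
print("defaulting banks :", np.where(p < p_bar - 1e-9)[0])
```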


Contributed talk: Tue, 15:30-16:00, room B

Corina Birghila (University of Vienna)

Pareto robust reinsurance contracts

In this talk we attempt to introduce the problem of Pareto robust reinsurance contracts and the connection to distributionally robust optimization. We consider a game theoretic approach, in which both insurer and reinsurer face model ambiguity of underlying losses. Similar to Boonen (2017) and Jiang (2019), we assume that insurance market participants agree to disagree on the probability distribution of losses. The main problem that we aim to study is the convex combination of insurer and reinsurer's risks under different economic and participation constraints. A possible methodology for solving the problem is proposed.


Contributed talk: Mon, 15:10-15:40, room D

Alessandro Calvia (University of Milano-Bicocca)

Risk measures and progressive enlargement of filtrations: a BSDE approach

From the beginning of the 21st century, connections between dynamic risk measures and Backward Stochastic Differential Equations (or BSDEs, for short) have been studied in the literature. BSDEs are well established tools in mathematical finance and, as is known, one can induce dynamic risk measures from their solutions. The theory of g-expectations, developed by S. Peng, paved the way for this connection, that has been studied when the noise driving BSDEs is either a brownian motion [1,4] or a brownian motion and an independent Poisson random measure [3].
Here we consider a class of BSDEs with jumps (BSDEJ) introduced in [2], whose driving noise is given by a brownian motion and a marked point process. Starting from the existence and uniqueness results of the solution (Y,Z,U) of the BSDEJ with fixed terminal time T > 0 provided in [2], we define the induced dynamic risk measure as the functional mapping any essentially bounded terminal condition of the BSDEJ into the first component Y of its solution.
From a financial perspective, this induced dynamic risk measure can be used to assess the riskiness of a future financial position (modelled by the terminal condition) when there are possible default events, described by the marked point process driving the BSDEJ. Another important feature is that the information available to financial agents is progressively updated as these random events occur. This feature is mathematically encoded in the progressive enlargement of a brownian reference filtration. It is proved in [2] that under such a framework it is possible to provide a decomposition of the solution (Y,Z,U) into processes that are solution, between each pair of consecutive random times, of BSDEs driven only by the brownian motion.
The aim of this paper is to show, in the single-jump case to ease the notation, that a similar decomposition also holds for the dynamic risk measure induced by the BSDEJ: we obtain two risk measures, acting respectively before and after the default time. Furthermore, we prove that properties of the driver of the BSDEJ are reflected in desirable properties of the dynamic risk measure, such as monotonicity, convexity, homogeneity, and so on. Finally, we show that the dynamic risk measure is time consistent, focus on its dual representation and provide some examples.
Joint work with Emanuela Rosazza Gianin.

References
[1] P. Barrieu and N. El Karoui. Pricing, hedging, and designing derivatives with risk measures. In Carmona, R. (ed.) Indifference pricing: theory and applications, pages 77-144. Princeton University Press, Princeton, 2009.
[2] I. Kharroubi and T. Lim. Progressive enlargement of filtrations and backward stochastic differential equations with jumps. J. Theoret. Probab. 27(3), 683-724, 2014.
[3] M. C. Quenez and A. Sulem. BSDEs with jumps, optimization and applications to dynamic risk measures. Stochastic Process. Appl. 123(8), 3328-3357, 2013.
[4] E. Rosazza Gianin. Risk measures via g-expectations. Insurance Math. Econom. 39(1), 19-34, 2006.
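In schematic form, and under one common sign convention, the induced dynamic risk measure reads as follows (the driver f, the Brownian motion W and the compensated random measure \tilde\mu below are generic placeholders rather than the precise objects of [2]):
\[
Y_t \;=\; -\xi \;+\; \int_t^T f(s, Y_s, Z_s, U_s)\,ds \;-\; \int_t^T Z_s\,dW_s \;-\; \int_t^T\!\!\int_E U_s(e)\,\tilde\mu(ds, de),
\qquad
\rho_t(\xi) \;:=\; Y_t,
\]
so that properties such as monotonicity, convexity or homogeneity of \rho_t are inherited from corresponding structural assumptions on the driver f.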


Contributed talk: Mon, 12:20-12:50, room C

Jiling Cao (Auckland University of Technology)

Pricing variance swaps under hybrid CEV and stochastic volatility

In this paper, we consider the problem of pricing a variance swap whose underlying asset price dynamics are modelled under a hybrid framework of constant elasticity of variance and stochastic volatility (CEVSV). Applying the multi-scale asymptotic analysis approach, we obtain a semi-closed form approximation of the fair continuous variance strike. We conduct numerical experiments by applying this approximation formula to calculate the square root of the fair continuous variance strike for different values of the parameters. The market data of S&P 500 options are used to calibrate the CEVSV model, and the estimated parameters are then used to compute the values of the square root of the fair continuous variance strike. In addition, we analyse and compare the performance of the CEV model, the CEVSV model and the Heston stochastic volatility model.
Joint work with Jeong-Hoon Kim and Wenjun Zhang.
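For orientation, the quantity being approximated is the fair continuous variance strike, i.e. the level that gives the variance swap zero initial value; writing v_t for the instantaneous variance and Q for the pricing measure,
\[
K_{\mathrm{var}} \;=\; \mathbb{E}^{\mathbb{Q}}\!\left[\frac{1}{T}\int_0^T v_t\,dt\right],
\]
and the "square root of the fair continuous variance strike" reported above is \(\sqrt{K_{\mathrm{var}}}\), quoted in volatility units.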


Contributed talk: Wed, 11:30-12:00, room C

Jun Chen (University of New South Wales)

Application of exponential moving average smoothing to the computation of realized variance for irregularly spaced high-frequency data

We consider the problem of estimating the Integrated Variance (IV) by means of the Realized Variance (RV) computed from irregularly spaced high-frequency data. We study the regular and irregular exponential moving average (EMA) smoothing techniques that are used as sampling schemes for the non-equally spaced time series, in order to convert tick-by-tick (raw) data into equidistant time series. The constructed time series allow us to compute the RV, which is used to assess the performance of the considered sampling schemes based on the accuracy of the IV estimation. A simulation study and an empirical analysis demonstrate that the considered sampling schemes are competitive, resulting in reliable RV estimates. A bias-corrected volatility estimator based on the pre-averaging sampling scheme results in the highest estimation accuracy for both the simulated time series and the empirical dataset.
Joint work with Katja Ignatieva and Vitali Alexeev.
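To illustrate the kind of pipeline described above, the following sketch resamples irregularly spaced ticks onto an equidistant grid with an (irregular) exponential moving average and then computes the realized variance of the resampled series. The simulated tick data and all parameter values are hypothetical; this is not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated irregular ticks: Poisson arrival times and a random-walk log price
T = 6.5 * 3600                            # one trading day in seconds
t = np.cumsum(rng.exponential(2.0, 12_000))
t = t[t < T]
logp = np.cumsum(rng.normal(0.0, 2e-4, size=t.size))

def ema_resample(times, values, grid, tau):
    """Irregular EMA: weights decay as exp(-gap / tau); evaluated at each grid point."""
    out = np.empty(grid.size)
    ema = values[0]
    last_t = times[0]
    j = 0
    for k, g in enumerate(grid):
        # fold in all ticks observed up to grid time g
        while j < times.size and times[j] <= g:
            alpha = 1.0 - np.exp(-(times[j] - last_t) / tau)
            ema += alpha * (values[j] - ema)
            last_t = times[j]
            j += 1
        out[k] = ema
    return out

grid = np.arange(60.0, T, 60.0)           # 1-minute equidistant grid
logp_grid = ema_resample(t, logp, grid, tau=30.0)

returns = np.diff(logp_grid)
rv = np.sum(returns ** 2)                 # realized variance of the resampled series
print("Realized variance estimate:", rv)
```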


Contributed talk: Mon, 11:20-11:50, room E

Troels Sønderby Christensen (Aalborg University)

A dynamic programming approach for optimizing shipping scheduling in the liquefied natural gas market

With the worldwide growth of the peculiar liquefied natural gas (LNG) market, new and interesting challenges have arisen. In this talk, we consider optimizing the shipping scheduling in the LNG supply chain. We take the perspective of an agent who is obliged to both deliver and pick up LNG at various destinations and various points in time, while also being able to charter additional ships to take advantage of profitable market opportunities. Using Bellman's principle of optimality, we propose an approximation of the value function with the purpose of obtaining fast and near-optimal solutions. We exemplify our approach by solving large real-world problems, showcasing the proposed model. Further, the model is compared to standard optimization techniques from the literature that rely on heuristics.
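Schematically, the approach rests on the finite-horizon Bellman recursion, written here in generic notation with s the scheduling state, a the delivery/pick-up/chartering decision and r the per-period profit; the contribution of the talk lies in approximating V_{t+1} so that this recursion remains tractable at real-world scale:
\[
V_t(s) \;=\; \max_{a \in \mathcal{A}(s)} \Big\{\, r_t(s, a) \;+\; \mathbb{E}\big[\,V_{t+1}(S_{t+1}) \,\big|\, S_t = s,\ a\,\big] \Big\},
\qquad \text{with a given terminal condition } V_T .
\]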


Contributed talk: Wed, 09:00-09:30, room C

Katia Colaneri (University of Rome - Tor Vergata)

Value adjustments and dynamic hedging of reinsurance counterparty risk

Reinsurance counterparty credit risk (RCCR) is the risk of a loss arising from the fact that a reinsurer is unable to fulfill its contractual obligations towards the ceding insurer. Despite its importance, existing techniques for managing RCCR are mostly qualitative. The objective of this paper is to study value adjustments and dynamic hedging for RCCR from a quantitative point of view. We propose a novel modelling framework that accounts for contagion effects between the default of the reinsurer and the price of the reinsurance contract. We give a concise model construction via the enlargement of filtration approach. We characterize the value adjustment in a reinsurance contract as the classical solution of a partial integro-differential equation. Hedging strategies for RCCR are derived via a quadratic hedging approach. The paper closes with a simulation study.
Joint work with Claudia Ceci, Ruediger Frey and Verena Koeck.


Contributed talk: Wed, 14:00-14:30, room D

Camilla Damian (WU Vienna University of Economics and Business)

EM algorithm for a CIR process with Markov-modulated mean reversion level and application to Eurozone credit spreads

This project is concerned with parameter estimation for a hidden Markov model where the available information stems from a (possibly multivariate) CIR process whose mean reversion level depends on an unobservable finite-state Markov chain. Employing a suitable transformation of the observation process, we are able to resort to the Expectation Maximization (EM) algorithm to solve this problem. The E-step of the algorithm consists of a nonlinear filtering problem, which we address using the so-called reference probability approach. As in practical applications the available data consist of noisy observations arising discretely in time, we also derive a version of the filters which performs well in a situation where observations only approximate a diffusion. Making use of such robust filters gives rise to convenient discretization schemes, which in turn provide two main advantages. Firstly, numerical experiments show that this improves the stability and efficiency of the EM algorithm considerably. Secondly, working in a discretized setting allows us to obtain an estimate for the diffusion coefficient, which is typically unknown in practical applications.
We will illustrate the performance of the algorithm and the accuracy of the estimates with a simulation study. Moreover, to highlight the practical relevance of such an inference problem, we will present an application in the context of credit risk: the quantitative analysis of sovereign-bond backed securities such as European Safe Bonds (ESBies). To this purpose, we model sovereign CDS spreads (or calibrated hazard rates) of Eurozone countries as CIR processes, whose mean reversion levels depend on a common, unobservable Markov chain. This naturally generates default dependence, and estimation results confirm that the proposed model successfully captures the qualitative features of the time-series, particularly their important co-movements.
Joint work with Rüdiger Frey and Kevin Kurt.
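To fix ideas about the observation model, the sketch below simulates a one-dimensional CIR process whose mean-reversion level switches with an unobservable two-state Markov chain, using a full-truncation Euler scheme. All parameter values are hypothetical, and this is only the data-generating side, not the EM/filtering algorithm of the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical parameters
kappa, sigma = 2.0, 0.15                 # speed of mean reversion, volatility
theta = np.array([0.02, 0.08])           # regime-dependent mean-reversion levels
Q = np.array([[-0.5, 0.5],               # generator of the hidden two-state Markov chain
              [ 1.0, -1.0]])
T, n = 5.0, 5_000
dt = T / n

x = np.empty(n + 1); x[0] = 0.03
m = np.empty(n + 1, dtype=int); m[0] = 0

for k in range(n):
    # switch the hidden chain over [t, t+dt] with probability approx. -Q[i, i] * dt
    i = m[k]
    m[k + 1] = 1 - i if rng.random() < -Q[i, i] * dt else i
    # full-truncation Euler step for dX = kappa * (theta_M - X) dt + sigma * sqrt(X) dW
    xp = max(x[k], 0.0)
    x[k + 1] = x[k] + kappa * (theta[m[k]] - xp) * dt \
               + sigma * np.sqrt(xp) * np.sqrt(dt) * rng.normal()

print("terminal value:", x[-1], "| fraction of time in high regime:", m.mean())
```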


Contributed talk: Mon, 14:40-15:10, room B

Luca De Gennaro Aquino (Grenoble École de Management)

Bounds on multiasset derivatives via neural networks

Using neural networks, we compute bounds on the prices of multiasset derivatives given information on prices of related payoffs. As a main example, we focus on European basket options and include information on the prices of other similar options, with possibly different maturities. We show that, in most cases, adding further constraints gives rise to bounds that are considerably tighter and discuss the maximizing/minimizing copulas achieving such bounds. Our approach follows the literature on (constrained, martingale) optimal transport and, in particular, builds on a recent paper by Eckstein and Kupper (2019, Appl. Math. Optim.).
Joint work with Carole Bernard.


Contributed talk: Tue, 12:10-12:40, room C

Nils Detering (University of California Santa Barbara)

Suffocating Fire Sales

Fire sales are among the major drivers of market instability in modern financial systems. Due to iterated distressed selling and the associated price impact, initial shocks to some institutions can be amplified dramatically through the network induced by portfolio overlaps. In this paper we develop models that allow us to investigate central characteristics that drive or hinder the propagation of distress. We investigate single systems as well as ensembles of systems that are alike, where similarity is measured in terms of the empirical distribution of all defining properties of a system. This approach ensures a great deal of robustness to statistical uncertainty and temporal fluctuations, and we give various applications. A natural characterization of systems resilient to fire sales emerges, and we provide explicit criteria that regulators can readily exploit in order to assess the stability of any system. Moreover, we propose risk management guidelines in the form of minimal capital requirements, and we investigate the effect of portfolio diversification and portfolio overlap. We test our results by Monte Carlo simulations for exemplary configurations, and we can quantify the trade-off between objectives for classical single-firm risk management and those for systemic risk management.
Joint work with Thilo Meyer-Brandis, Konstantinos Panagiotou and Daniel Ritter.


Contributed talk: Tue, 16:30-17:00, room B

Zehra Eksi-Altay (WU Vienna)

Momentum and mean reversion under partial information

We study a dynamic portfolio optimization problem in which stock returns tend to continue over short horizons, so-called momentum, and revert over longer horizons, so-called mean reversion or reversal. We extend the continuous-time framework of Koijen et al. (2009) to a partial information setting, in which the investor (trader) cannot observe in what proportion the drift uncertainty is attributable to mean reversion or momentum. Due to the Gaussian nature of the problem, we use the Kalman filter to obtain the estimated state variables. Since the filtering and stochastic optimal control problems are essentially separable, we obtain the optimal portfolio weights and the optimal value function by standard techniques.
Joint work with Suhan Altay and Katia Colaneri.
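Since the model is conditionally Gaussian, the unobservable momentum and mean-reversion components can be tracked with a standard discrete-time Kalman filter. The snippet below shows a generic linear-Gaussian predict/update step with placeholder system matrices; it indicates only the filtering building block, not the specific state-space form of the paper.

```python
import numpy as np

def kalman_step(m, P, y, A, C, Q, R):
    """One predict/update step of the Kalman filter.

    State:        x_{k+1} = A x_k + w_k,   w_k ~ N(0, Q)
    Observation:  y_k     = C x_k + v_k,   v_k ~ N(0, R)
    (m, P) is the current filtered mean and covariance.
    """
    # predict
    m_pred = A @ m
    P_pred = A @ P @ A.T + Q
    # update with the new observation y
    S = C @ P_pred @ C.T + R
    K = P_pred @ C.T @ np.linalg.inv(S)
    m_new = m_pred + K @ (y - C @ m_pred)
    P_new = (np.eye(len(m)) - K @ C) @ P_pred
    return m_new, P_new
```

Because filtering and control separate in this setting, the filtered mean can then stand in for the unobservable state in the optimal control step.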


Contributed talk: Wed, 14:00-14:30, room C

Andrea Fiacco (University of Oslo)

On the approximation of Lévy driven Volterra processes and their integrals

Volterra processes appear in several applications ranging from turbulence to energy finance, where they are used in the modelling of e.g. temperatures and wind, and of the related financial derivatives. Volterra processes are in general non-semimartingales, and a theory of integration with respect to such processes is in fact not standard. In our work we consider Volterra type processes Y driven by a Lévy noise L, with deterministic kernel, and we propose to approximate them by semimartingales. This is because semimartingales constitute the largest class of integrators for a stochastic integration theory (Itô type integration), well suited for applications where adaptedness or predictability with respect to a given information flow plays an important role. Also, numerical methods are flourishing in the case of semimartingale models. Thus such approximations open the way not only to the study of integration with respect to Y, but also to the study of computational techniques. As an illustration, the Lévy driven Gamma-Volterra processes and their integrals are studied in full detail along with their approximations.
Specifically, our approximation is based on a perturbation of the kernel in such a way that the approximating processes are semimartingales and tend to Y in the Lp-sense for some p greater than or equal to 1.
Concerning stochastic integration with respect to Y, we consider a pathwise type of integration based on fractional calculus, by defining the generalized Lebesgue-Stieltjes integral. For this we define two classes of integrands and integrators for which such an integral is well defined. Taking specifically the case of Volterra processes Y into account, we find conditions that ensure that Y is an appropriate integrator for all integrands X. Then, we exploit the approximations of Y to study the integrals and their approximations.
Finally, we include an algorithm for numerical simulation. We take again as an example the case of a Gamma-Volterra process driven by a symmetric tempered stable Lévy process. As for the integrals, we consider two different integrands, using classical numerical integration with an Euler scheme.
Joint work with Giulia Di Nunno and Erik Hove Karlsen.


Contributed talk: Mon, 11:20-11:50, room B

Tobias Fissler (Imperial College London)

Elicitability of Range-Value-at-Risk

The predictive performance of point forecasts for a statistical functional, such as the mean, a quantile, or a certain risk measure, is commonly assessed in terms of scoring (or loss) functions. A scoring function should be (strictly) consistent for the functional of interest, that is, the expected score should be minimised by the correctly specified functional value. A functional is elicitable if it possesses a strictly consistent scoring function.
In quantitative risk management, the elicitability of a risk measure is closely related to comparative backtesting procedures. As such, it has gained considerable interest in the debate about which risk measure to choose in practice. While this discussion has mainly focused on the dichotomy between Value at Risk (VaR) – a quantile – and Expected Shortfall (ES) – a tail expectation, this talk is concerned with Range Value at Risk (RVaR). RVaR can be regarded as an interpolation of VaR and ES, which constitutes a tradeoff between the sensitivity of the latter and the robustness of the former. Recalling that RVaR is not elicitable, we show that a triplet of RVaR with two VaR components at different levels is elicitable. We characterise the class of strictly consistent scoring functions. Moreover, additional properties of these scoring functions are examined, including the diagnostic tool of Murphy diagrams. The results are illustrated with a simulation study, and we put our approach in perspective with respect to the classical approach of trimmed least squares in robust regression.
This talk is based on joint work with Johanna F. Ziegel (preprint available at arXiv:1902.04489v2).
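For reference, with levels 0 ≤ α < β ≤ 1, a common way to write Range Value at Risk (sign and level conventions differ across the literature) is as the average of the VaR curve,
\[
\mathrm{RVaR}_{\alpha,\beta}(X) \;=\; \frac{1}{\beta - \alpha}\int_{\alpha}^{\beta} \mathrm{VaR}_u(X)\,du,
\]
which recovers VaR in the limit β ↓ α and ES when the average is taken over the entire tail.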


Contributed talk: Wed, 10:00-10:30, room C

Pavel V. Gapeev (London School of Economics)

Projections in enlargements of filtrations under Jacod's hypothesis and pricing of credit default swaps in two-dimensional models with various information flows

We consider the initial and progressive enlargements of a Brownian (reference) filtration with a (positive) random (default) time. We assume Jacod's equivalence hypothesis, that is, the existence of a positive conditional density for the random time with respect to the reference filtration. Then, starting with the predictable representation of a martingale with respect to the initially enlarged reference filtration, we derive explicit expressions for the coefficients which appear in the predictable representation properties of its projections on the progressively enlarged filtration and on the reference filtration. We give some explicit examples of conditional densities of the random (default) times.
We also study a credit risk model of a financial market in which the dynamics of the intensity rates of two default times are described by linear combinations of three independent geometric Brownian motions. The dynamics of two default-free risky asset prices are modelled by two geometric Brownian motions which are dependent on the ones describing the default intensity rates. We obtain closed-form expressions for the rational prices of both risk-free and risky credit default swaps (without and with consideration of counterparty risk) given the reference filtration initially and progressively enlarged by the two default times. The accessible default-free reference filtration is generated by the standard Brownian motions driving the model.
Joint work with Monique Jeanblanc.


Contributed talk: Tue, 11:10-11:40, room C

Laura Garcia-Jorcano (Universidad de Castilla-La Mancha)

Measuring systemic risk using multivariate quantile-located ES models

The recent financial crisis has brought to the surface the need for measuring, monitoring and forecasting the transmission of extreme downside market risk. This paper analyzes and measures the tail systemic risk between the financial systems of Europe, North America, and Asia and their respective financial institutions belonging to different sectors. Our main contribution is the development of a new systemic risk measure, DeltaQL-CoCARES, based on the CoVaR of Adrian and Brunnermeier (2016). Our systemic risk measure clearly improves on measures based on CoVaR, as it better captures the extreme distress dependence between the system and institutions during crisis periods and thus better identifies the relevant institutions to be closely monitored by regulators. For this purpose, we employ a vector autoregression model based on the conditional autoregressive expectile models of Taylor (2008), similar to the VAR of VaR model proposed by White et al. (2005). More concretely, our systemic risk measure improves on the CoVaR measure in several ways:
1) we use the ES instead of the VaR to compute the risk measure of the system and institutions. ES-based systemic risk measures have become more popular as they take into account some stylized facts of financial returns, such as leptokurtosis, skewness and joint tail dependence,
2) the latent ES process is estimated without imposing any assumption on the underlying distribution of the data,
3) our regression based on expectiles has some advantages over the use of quantiles: regression using expectiles is known to be robust to outliers; it is more efficient than quantile regression, because expectiles are more alert than quantiles to the magnitude of infrequent catastrophic losses and depend on both the tail realizations of a random variable and their probability; and the inference on expectiles is much easier than the inference on quantiles,
4) while the CoVaR focuses on the impact of the conditioning institution's return, over the whole distribution, on the conditional quantile of the response variable, we accentuate the degree of distress among the financial connections by directly linking the past of a financial institution to the present of the system, assuming that both are in distress, and
5) our measure is more flexible than CoVaR, as the parameters monitoring the impact of the conditioning variables on the response variable may depend on the location of both variables over their marginal support.
We also propose a systemic stress indicator, SSIES, based on our new systemic risk measure. The main evidence suggests that significant cross volatility and ES effects exist between the system and financial institutions. The performance of the systemic risk measure shows that banks from Europe are the most systemic institutions. The systemic stress indicator SSIES captures the extreme distress dependence between the system and financial institutions during crisis periods and helps to better identify the relevant institutions to be closely monitored by regulators.
Joint work with Lidia Sanchis-Marco.
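For comparison, the benchmark of Adrian and Brunnermeier (2016) on which the proposed measure builds can be written (in one common convention) as the quantile of the system return conditional on institution i sitting at its own VaR level,
\[
\mathbb{P}\Big( X^{\mathrm{sys}} \le \mathrm{CoVaR}^{\,i}_{\alpha} \,\Big|\, X^{i} = \mathrm{VaR}^{\,i}_{\alpha} \Big) \;=\; \alpha,
\qquad
\Delta \mathrm{CoVaR}^{\,i}_{\alpha} \;=\; \mathrm{CoVaR}^{\,i}_{\alpha} - \mathrm{CoVaR}^{\,i,\mathrm{median}}_{\alpha},
\]
whereas the measure proposed above replaces the conditional quantiles by quantile-located expected shortfalls estimated via expectile regressions.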


Contributed talk: Tue, 16:30-17:00, room D

Máté Gerencsér (IST Austria)

Discrete approximations of SDEs with irregular drift

Recent years have seen increasing interest in establishing the rate of convergence of approximation schemes for solutions of stochastic differential equations (SDEs) whose drift is rough. In such situations it is crucial to exploit the regularising effects of the noise.
We introduce a new take on the convergence analysis that, among other things, allows one to establish strong convergence rates for the Euler-Maruyama scheme that are up to 1/2 higher than those obtained with earlier methods.
We discuss several settings including fractional Brownian drivers or higher order Milstein-type approximations.
Joint work with Konstantinos Dareiotis.
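To make the object of study concrete, the snippet below estimates the strong error of the Euler–Maruyama scheme for an SDE with a discontinuous (sign-type) drift, comparing coarse-grid approximations with a fine-grid reference driven by the same Brownian increments. Drift, diffusion and parameters are hypothetical illustrations, not the examples of the talk.

```python
import numpy as np

rng = np.random.default_rng(3)

def drift(x):
    return -np.sign(x)          # irregular (discontinuous) drift

sigma, T, x0 = 1.0, 1.0, 0.5
n_fine, n_paths = 2 ** 12, 20_000
dt_fine = T / n_fine
dW = rng.normal(0.0, np.sqrt(dt_fine), size=(n_paths, n_fine))

def euler(n_steps):
    """Euler-Maruyama with n_steps steps, reusing the fine Brownian increments."""
    m = n_fine // n_steps
    x = np.full(n_paths, x0)
    for k in range(n_steps):
        dWk = dW[:, k * m:(k + 1) * m].sum(axis=1)
        x = x + drift(x) * (T / n_steps) + sigma * dWk
    return x

x_ref = euler(n_fine)           # fine-grid reference solution
for n_steps in (8, 32, 128, 512):
    err = np.sqrt(np.mean((euler(n_steps) - x_ref) ** 2))
    print(f"n = {n_steps:4d}   strong L2 error estimate: {err:.4f}")
```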


Contributed talk: Wed, 13:30-14:00, room C

Michele Giordano (University of Oslo)

Maximum principles for Volterra time change processes

We establish a framework for the study of backward stochastic Volterra integral equations (BSVIE) driven by time-changed Lévy noises.
In fact we shall consider a random measure μ that can be decomposed as the sum of a conditional Gaussian measure and a conditional centered Poisson measure.
In this paper we deal with two information flows:
F_t, namely the smallest right-continuous filtration to which μ is adapted, and G_t, generated by μ and the entire history of the time-change processes; we shall consider the information F as partial with respect to G.
Given a controlled dynamic, we consider the optimization problem of finding the supremum of a functional J(u) over a suitable set A of controls, which we take to be either F- or G-predictable.
We prove both a sufficient and a necessary maximum principle for such a performance functional, showing that in the F-predictable case we can find a solution by projecting the results obtained for the G-predictable case onto the F-predictable one.
We make use of stochastic derivatives. We stress that we cannot use classical Malliavin calculus, as our integrators are neither the Brownian motion nor the centered Poisson random measure. We could use a conditional form of such calculus as introduced by Yablonski; however, we resort to the non-anticipating derivative introduced for martingale random fields as integrators.
The use of the non-anticipating derivative also has the advantage that we do not require more restrictive conditions on domains, since it is already well defined for all L2(P) random variables.
When studying such problems, we come across a BSVIE driven by a noise μ as above: we prove existence and uniqueness results for such BSVIE and we compute an explicit solution in the linear case.
Examples and applications will be presented.
Joint work with Giulia Di Nunno.


Contributed talk: Tue, 14:30-15:00, room E

Nikolay Gudkov (ETH Zurich)

Pricing and hedging of guaranteed minimum benefits using power series approximation techniques

The majority of developed economies are experiencing population ageing, with people living longer due to advances in medicine and lifestyle quality. Such developments have been putting a lot of pressure on governments and pension fund providers, who are exposed to the resulting longevity risk. Variable annuities (VAs) constitute a class of financial products designed to tackle challenges associated with both investment and longevity risk. These contracts enable policyholders to participate in financial markets via linked funds and at the same time provide protection against long-term life contingencies. In this paper, we devise a numerical technique for pricing Guaranteed Minimum Benefit (GMB) riders embedded in Variable Annuities. The method utilises multidimensional transforms of the characteristic function of the underlying stochastic process and enables us to express solutions to the pricing partial differential equations in terms of power series with coefficients known in closed form. Our results demonstrate the high computational efficiency of the series approximation method for the computation of prices and hedge ratios of GMBs under a stochastic volatility and stochastic interest rate modelling framework. The findings of the paper can help insurers with efficient quantification of various risks associated with GMBs in Variable Annuities.
Joint work with Jonathan Ziveyi.


Contributed talk: Wed, 11:30-12:00, room D

Martin Haubold (TU Dresden)

Fractionally time-changed polynomial models

We consider asset pricing models based on polynomial processes, time-changed by an inverse beta-stable subordinator. These models share many properties with rough volatility models; in particular, they are non-Markovian and allow for variance paths of Hölder regularity strictly smaller than one half. Our approach allows us to calculate arbitrary moments, similarly to non-subordinated polynomial models, by replacing the matrix exponential by the matrix Mittag-Leffler function. Having calculated the moments, we use them to approximate the transition density of the process by an orthonormal expansion.
We illustrate our results and numerical methods for several examples of fractionally time-changed polynomial models.
Joint work with Martin Keller-Ressel.
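The key computational ingredient mentioned above is the matrix Mittag-Leffler function taking the place of the matrix exponential in the moment formulas. A naive truncated-series evaluation is sketched below (written with a generic index alpha and a hypothetical generator matrix; this direct summation is adequate only for moderate arguments, and more robust evaluation algorithms exist).

```python
import numpy as np
from scipy.special import gamma
from scipy.linalg import expm

def mittag_leffler_matrix(A, alpha, n_terms=120):
    """Truncated series E_alpha(A) = sum_k A^k / Gamma(alpha*k + 1).

    Naive evaluation; only reliable for moderate spectral radius of A.
    """
    result = np.zeros_like(A, dtype=float)
    term = np.eye(A.shape[0])
    for k in range(n_terms):
        result += term / gamma(alpha * k + 1.0)
        term = term @ A
    return result

# Hypothetical generator matrix acting on a vector of polynomial moments
G = np.array([[0.0, 0.0],
              [0.1, -0.5]])
t, alpha = 1.0, 0.7

# alpha = 1 recovers the matrix exponential of the non-time-changed model
print(np.allclose(mittag_leffler_matrix(G * t, 1.0), expm(G * t)))
print(mittag_leffler_matrix(G * t ** alpha, alpha))
```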


Contributed talk: Tue, 14:30-15:00, room D

Alexander Herbertsson (University of Gothenburg)

Dynamic hedging of CDS index options in Markov chain models

We study hedging of CDS index options in a credit risk model where the default times have intensities driven by a finite-state Markov chain representing the underlying economy. The hedging of the option on the CDS index is done with the CDS index itself, using minimal variance hedging in the Markov chain model. In this setting we derive compact, computationally tractable formulas for the gains processes of both the CDS index and the option on the CDS index, which are used to find highly analytical and convenient expressions for the angle bracket processes corresponding to the quadratic variation and quadratic covariation of these two gains processes. The minimal variance hedging strategy is given by the ratio of the differentials of the two angle bracket processes. The evaluation of the parts involving the CDS index option is handled by translating the Cox framework into a bivariate Markov chain. Due to the potentially very large, but extremely sparse matrices obtained in this reformulation, special treatment is needed to efficiently compute the matrix exponential arising from the Kolmogorov equation, based on results developed in Herbertsson (2018). The finite-state Markov chain model is calibrated to data with perfect fits, and several numerical studies of the minimal variance hedging strategies are performed, as well as other related numerical studies.
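In formulas, writing G^{opt} and G^{idx} for the gains processes of the option and of the CDS index, the minimal variance hedge ratio described above is
\[
\varphi_t \;=\; \frac{d\langle G^{\mathrm{opt}},\, G^{\mathrm{idx}}\rangle_t}{d\langle G^{\mathrm{idx}}\rangle_t},
\]
i.e. the ratio of the differentials of the predictable angle bracket (covariation and variation) processes computed in the Markov chain model.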


Contributed talk: Wed, 14:30-15:00, room D

Rainer Hirk (WU Vienna University of Economics and Business)

A joint model of failures and credit ratings

We propose a novel framework for credit risk modeling, where default or failure information together with rating or expert information are jointly incorporated in the model. These sources of information are modeled as response variables in a multivariate ordinal regression model estimated by a composite likelihood procedure. The proposed framework provides probabilities of default conditional on the rating information and is able to account for missing failure and credit rating information. In our empirical analysis, we apply the proposed framework to a data set of US firms over the period from 1985 to 2014. Different sets of financial ratios constructed from financial statements and market information are selected as bankruptcy predictors in line with prominent literature in failure prediction modeling. We find that the joint model of failures and credit ratings outperforms state-of-the-art failure prediction models and shadow rating approaches in terms of prediction accuracy and discriminatory power.
Joint work with Laura Vana, Kurt Hornik and Stefan Pichler.


Contributed talk: Wed, 13:30-14:00, room B

Jana Hlavinová (WU Vienna University of Economics and Business)

Elicitability and identifiability of systemic risk measures

Estimating different risk measures, such as Value at Risk or Expected Shortfall, for reporting as well as testing purposes is a common task in various financial institutions. The question of evaluating and comparing these estimates is closely related to two concepts already well known in the literature: elicitability and identifiability.
A statistical functional, e.g. a risk measure, is called elicitable if there is a strictly consistent scoring function for it, i.e. a function of two arguments, forecast and realization of a random variable, such that its expectation with respect to the second argument is minimized only by the correct forecast. It is called identifiable if there is a strict identification function, i.e. again a function of two arguments, such that its expectation with respect to the second argument is equal to zero exactly at the correct forecast.
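In symbols, for a functional T defined on a class of distributions and a forecast t, these two notions read
\[
\text{strict consistency:}\quad \mathbb{E}_{Y \sim F}\big[S(t, Y)\big] \;>\; \mathbb{E}_{Y \sim F}\big[S(T(F), Y)\big] \quad \text{for all } t \ne T(F),
\]
\[
\text{strict identification:}\quad \mathbb{E}_{Y \sim F}\big[V(t, Y)\big] = 0 \;\Longleftrightarrow\; t = T(F).
\]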
We introduce these concepts for systemic risk measures defined by Feinstein, Rudloff and Weber (2016). A banking system with n participants is represented by a random vector Y and the quantity of interest is its aggregated outcome, using some nondecreasing aggregation function. The measure of systemic risk is defined as the set of n-dimensional capital allocation vectors k such that the aggregated outcome of Y+k is acceptable under a given scalar risk measure.
We establish the link between the elicitability and/or identifiability of the systemic risk measure and the underlying scalar risk measure, taking two perspectives on the measures of systemic risk that stem from their set-valued nature. Moreover, we study secondary quality criteria of the scoring and identification functions of these measures.
Joint work with Tobias Fissler and Birgit Rudloff.


Contributed talk: Tue, 17:30-18:00, room B

Thijs Kamma (Maastricht University)

Near-optimal investment strategies in incomplete markets

We develop a dual control method for approximating investment strategies in incomplete environments that emerge from the presence of non-traded risk. Convex duality enables the approximation technique to generate lower and upper bounds on the optimal value function. The mechanism rests on closed-form expressions pertaining to the portfolio composition, from which we are able to procure the near-optimal asset allocation explicitly. In a real financial market, we illustrate the accuracy of our approximate method on a dual CRRA utility function that characterizes the preferences of some finite-horizon investor. Negligible duality gaps and insignificant annual welfare losses substantiate the accuracy of the technique.
Joint work with Antoon Pelsser.


Contributed talk: Tue, 15:00-15:30, room C

Sven Karbach (University of Amsterdam, KdVI)

Ornstein-Uhlenbeck processes in Hilbert spaces with state-dependent stochastic volatility

We present a model for the instantaneous variance of an Ornstein-Uhlenbeck process in Hilbert spaces, allowing for state-dependent jumps and yielding an exponential affine form of its characteristic function. The affine form carries over to the characteristic function of the Ornstein-Uhlenbeck process itself and leads to an analytically tractable stochastic volatility model in infinite dimensions.


Contributed talk: Wed, 14:30-15:00, room C

Wahid Khosrawi (ETH Zurich)

Polynomial Semimartingales

We extend the class of polynomial processes to incorporate examples beyond stochastic continuity. Such an extension has recently been provided in the affine case, and we show how similar results can be obtained by developing a suitable two-parameter analogue of the theory of finite-dimensional one-parameter semigroups. In particular, we show how this new class of processes can be characterized by the polynomial structure of their semimartingale characteristics.
Joint work with Thorsten Schmidt (University of Freiburg).


Contributed talk: Mon, 15:10-15:40, room E

Sofonias Alemu Korsaye (University of Geneva, Swiss Finance Institute)

Smart SDFs

We introduce model-free Smart Stochastic Discount Factors (S-SDFs) minimizing various notions of SDF variability under general convex constraints on pricing errors, which can be motivated by particular market frictions, asymptotic APT-type no-arbitrage assumptions or a need for regularization in large arbitrage-free asset markets. S-SDFs give rise to new nonparametric SDF bounds for testing asset pricing models, under more general assumptions on a model's ability to price cross-sections of assets. They arise from a simple transformation of the optimal payoff in a penalized dual portfolio selection problem with uniquely determined penalization function. We demonstrate the properties of S-SDFs induced by various economically motivated pricing error penalizations, which can load on a sparse set of endogenously selected securities and can produce a more robust pricing performance. We then show how pricing error and dual portfolio weight sparsity can be made compatible with tractability, i.e., smoothness, of the corresponding dual portfolio problem. For such settings, we develop the relevant methodology for the empirical analysis of S-SDFs. Lastly, we demonstrate the properties and the improved out-of-sample pricing performance of S-SDFs in various APT settings where SDF-regularization naturally matters.
Joint work with Alberto Quaini and Fabio Trojani.


Contributed talk: Wed, 12:30-13:00, room B

Adriano Koshiyama (University College London)

Generative adversarial networks for financial trading strategies

Systematic trading strategies are algorithmic procedures that allocate assets aiming to optimize a certain performance criterion. To obtain an edge in a highly competitive environment, the analyst needs to properly fine-tune the strategy, or discover how to combine weak signals in novel alpha-creating ways. Both aspects, namely fine-tuning and combination, have been extensively researched using several methods, but emerging techniques such as Generative Adversarial Networks can have an impact on both. Therefore, our work proposes the use of Conditional Generative Adversarial Networks (cGANs) for trading strategy calibration and aggregation. To this purpose, we provide a full methodology on: (i) the training and selection of a cGAN for time series data; (ii) how each sample is used for strategy calibration; and (iii) how all generated samples can be used for ensemble modelling. To provide evidence that our approach is well grounded, we have designed an experiment with multiple trading strategies, encompassing 579 assets. We compared cGANs with an ensemble scheme and model validation methods, both suited for time series. Our results suggest that cGANs are a suitable alternative for strategy calibration and combination, providing outperformance when the traditional techniques fail to generate any alpha.
Joint work with Nick Firoozye and Philip Treleaven.


Contributed talk: Mon, 14:10-14:40, room D

Gabriela Kovacova (WU Wien)

Time consistency of the mean-risk problem

The mean-risk problem is a well known and extensively studied problem in Mathematical Finance. Its aim is to identify portfolios that maximize the expected terminal value and at the same time minimize the risk. The usual approach in the literature is to combine the two to obtain a problem with a single objective. This scalarization, however, comes at the cost of time inconsistency.
In this work, we show that these difficulties disappear by considering the problem in its natural form, that is, as a vector optimization problem. As such, the mean-risk problem can be shown, under mild assumptions, to satisfy an appropriate notion of time consistency.
Additionally, the upper images, whose boundaries are the efficient frontiers, recurse backwards in time. We argue that this represents a Bellman's principle appropriate for a vector optimization problem: a set-valued Bellman’s principle. Furthermore, we provide conditions under which this recursion can be directly used to compute the efficient frontiers backwards in time.
Joint work with Birgit Rudloff.


Contributed talk: Mon, 15:10-15:40, room B

Anastasis Kratsios (ETH Zurich)

Universal Approximation Theorems

The universal approximation theorem established the density of specific families of neural networks in the space of continuous functions and in certain Bochner-Lebesgue spaces, defined between any two Euclidean spaces. We extend and refine this result by proving that there exist dense neural network architectures on a larger class of function spaces and that these architectures may be written down using only a finite number of functions. Moreover, we show that every separable function space which is metrizable can be naturally represented within a larger feature space in which a dense neural network architecture exists. We prove that, upon appropriately randomly selecting the neural network architecture's activation function, we may still obtain a dense set of neural networks with positive probability. We use this to overcome the difficulty of appropriately selecting an activation function in more exotic architectures.
Conversely, we show that given any neural network architecture on a set of continuous functions between two T0 topological spaces, there exists a unique finest topology on that set of functions that makes the neural network architecture into a universal approximator.
Several examples are considered throughout the paper. The existence of a universal feature space in which any separable metric function space can be approximated is also proven.
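As a toy illustration of the classical density statement that the paper extends, the snippet below fits single-hidden-layer networks (random hidden weights, least-squares output layer) to a continuous target on [0, 1] and reports the uniform error; the activation, widths and target function are arbitrary examples, not the architectures studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

target = lambda x: np.sin(2 * np.pi * x) + 0.3 * np.cos(5 * np.pi * x)

x = np.linspace(0.0, 1.0, 2_000)
y = target(x)

for width in (5, 20, 100):
    # one hidden layer of tanh ridge functions with random slopes and centres
    slopes = rng.normal(0.0, 20.0, size=width)
    centres = rng.uniform(0.0, 1.0, size=width)
    H = np.tanh(np.outer(x, slopes) - slopes * centres)   # tanh(slope * (x - centre))
    # least-squares fit of the linear output layer
    coef, *_ = np.linalg.lstsq(H, y, rcond=None)
    err = np.max(np.abs(H @ coef - y))
    print(f"width = {width:3d}   sup-norm error on the grid: {err:.3f}")
```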


Contributed talk: Wed, 13:30-14:00, room D

Kevin Kurt (WU Wien)

Sovereign bond-backed securities as a new safe asset for the Eurozone: a dynamic credit risk perspective

The creation of a market in so-called European Safe Bonds (ESBies) is a highly debated proposal to improve the European monetary system. From a credit risk perspective, ESBies form the senior tranche of a CDO backed by a diversified portfolio of sovereign bonds from all members of the euro area. We propose a novel credit risk model for the hazard rates of the obligors to analyze price dynamics and assess the market risk associated with such products. Our model captures salient features of the credit spread dynamics of euro area member states and is at the same time fairly tractable. We consider a reduced-form model with conditionally independent default times; the default intensities of the different obligors are modelled by CIR-type jump processes whose mean-reversion levels and jump intensities are modulated by a common Markov process. Two special cases, one where the modulating process is a finite Markov chain and another one where it is an affine process, give rise to semi-explicit (explicit up to the solution of ODE systems) pricing formulae. This in turn allows computationally tractable calibration of the underlying hazard rates via single-name credit products. The pricing of credit portfolio products is done by Fourier inversion methods. Additionally, we provide hedging results for the junior tranche of the underlying bond portfolio.
Joint work with Rüdiger Frey and Camilla Damian.


Contributed talk: Mon, 12:20-12:50, room D

Vladimir S. Ladyzhets (University of Connecticut)

Probability space of regression models and its applications to credit and operational risks

We introduce the notion of a probability space of regression models and discuss its applications to stress testing based on the macroeconomic scenarios provided by the Federal Reserve Bank (FRB). The probability space of regression models L=(M,P) consists of a set of regression models M and a probability measure P, which is based on a model's "quality", i.e. its ability to fit historical data and to forecast the future values of the target variable. The set of regression models M is assembled by selecting various combinations of input variables with different lags, transformations, etc., and varying the historical data sets that are used for model building and validation. It is assumed that the model set M is "complete" in the sense that it exhausts all the regression models that it is possible to build given the available historical data and independent variables. Each model m from the set M yields a scenario y(t;m) for the target variable y, and thus the probability space of regression models L=(M,P) allows us to build a probability distribution for Y(t) for each projection time t. As an example, we demonstrate how those distributions can be used to estimate the risk capital reserves required by the regulators from large U.S. banks for credit and operational risks.


Contributed talk: Tue, 17:00-17:30, room C

Marc Lagunas Merino (University of Oslo)

Pricing and hedging unit-linked policies under rough fractional stochastic volatility (RFSV) models

Unit-linked products have become more popular in the insurance business during the past years. This type of policy is usually financed by the down payment of a single premium by the insured, and the insurer grants a certain payoff in case of occurrence of the insured event. The particularity of this product is that its payoff is given by the maximum of the value of a number of fund shares and an amount guaranteed by the insurer.
Recent literature has shown that log-volatility behaves essentially as a fractional Brownian motion. Empirical evidence has also been found showing that the Hurst exponent H is close to 0.1 for a great variety of assets across different markets. This suggests that the log-volatility process in asset pricing may be not only fractional but also rough, i.e. H<1/2.
In a joint work with Prof. Salvador Ortiz-Latorre and Prof. David Baños we have developed a complete market model for pricing unit-linked products, using fractional volatility dynamics to replicate the stylized facts observed in the volatility. We also give the replicating strategy for this insurance product, as well as a comparison of the prices obtained across different guaranteed amounts using classical models versus our proposed RFSV model.
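With n the number of fund shares, S_T the fund unit value at maturity and G the guaranteed amount, the payoff described above can be written as
\[
\max\big(n S_T,\ G\big) \;=\; G \;+\; n\,\max\big(S_T - G/n,\ 0\big),
\]
so the contract decomposes into the guarantee plus n call options on the fund with strike G/n; this is why the choice of volatility dynamics (rough fractional versus classical) directly affects both the price and the replicating strategy.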


Contributed talk: Mon, 11:50-12:20, room B

Olivier Le Courtois (emlyon business school)

Mean-risk and stochastic dominance: a comparison of efficient frontiers

The similarity of the mean variance (MV) efficient set and the second order stochastic dominance (SSD) efficient set inspires a further examination of the relationship between the mean variance skewness (MVS) and the third order stochastic dominance (TSD) efficient sets. We reinterpret the mean-risk and stochastic dominance decision rules in the more general framework of the Pareto improvement method, and make a comprehensive comparison of the efficient portfolio sets. We revise the MVS-TSD consistency concluded by previous literature, and specify two TSD algorithms that provide a good trade-off between discriminatory power and optimal portfolio performance.
Joint work with Olivier Le Courtois.


Contributed talk: Tue, 12:10-12:40, room D

Hanwu Li (Bielefeld University)

Optimal consumption with Hindy-Huang-Kreps preference

In this talk, we investigate the optimal consumption problem where the recursive utility is of Hindy-Huang-Kreps type and budget feasibility constraints are imposed. We first establish an existence and uniqueness result and then present an infinite-dimensional version of the Kuhn-Tucker theorem, which gives sufficient and necessary conditions for optimality. Besides, this optimization problem can be studied in a dynamic form, and we show that if a consumption plan is optimal at time 0, then it is also optimal at later times.


Contributed talk: Mon, 11:20-11:50, room C

Zhuoqun Liang (Stockholm School of Economics)

Stochastic volatility models for VIX option pricing

As a new class of volatility derivatives, VIX options offer investors a way to trade volatility and have been a practical tool for hedging and risk management. An important fact is that VIX options have upward implied volatility skew, which is opposite to equity options. Pricing VIX options is challenging, and there is no consensus on adequate pricing models.
In this paper, we adopt the framework proposed by Ballotta and Rayée (2017) and examine the performance of six representative models on VIX options. These models vary in the number of stochastic volatility factors, the jump-diffusion structure and the sources of the leverage effect. The Variance Gamma process is chosen as the type of jump in our modelling due to its properties and convenience in factor construction. As far as numerical schemes for option pricing are concerned, we apply the COS method developed by Fang and Oosterlee (2008) to increase speed. Parameters are divided into two groups, spot variance and structural parameters, and an iterative two-step estimation procedure is used in the calibration. In consideration of speed, we use a local optimization method and randomized starting points. Out-of-sample tests are also conducted.
By comparison, the best performing model we have found is characterized by one stochastic volatility factor, risk factors of both diffusive and jump nature, and dependence between the diffusions. The model performs satisfactorily across moneyness and maturities, and adding further volatility factors does not improve model performance in general. Models without jumps do not perform well, especially for short-dated options; whereas asset distributions generated by a pure jump process with infinite activity can perform much better when we incorporate the leverage effect directly through correlated jumps. This is in accordance with the empirical findings of Carr et al. (2002), although we still find diffusion components necessary in the one stochastic volatility factor case. A more flexible model, characterized by two stochastic volatility factors, two types of risk factors and two sources of leverage effect, does not show advantages for short-dated options, but can perform well in the long run.
Joint work with Laura Ballotta.
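For readers unfamiliar with the numerical scheme mentioned above, here is a minimal COS-method sketch pricing a European call under geometric Brownian motion, chosen only because its characteristic function is simple and an analytic benchmark exists; the VIX-option models of the paper would enter solely through a different characteristic function.

```python
import numpy as np
from scipy.stats import norm

# Model / contract parameters (hypothetical illustration)
S0, K, r, sigma, T = 100.0, 105.0, 0.02, 0.25, 0.75
N = 256

# Characteristic function of y = ln(S_T / K) under Black-Scholes
x0 = np.log(S0 / K)
mu = x0 + (r - 0.5 * sigma ** 2) * T
cf = lambda s: np.exp(1j * s * mu - 0.5 * sigma ** 2 * T * s ** 2)

# Truncation range from the cumulants of y
c1, c2, L = mu, sigma ** 2 * T, 10.0
a, b = c1 - L * np.sqrt(c2), c1 + L * np.sqrt(c2)

k = np.arange(N)
u = k * np.pi / (b - a)

def chi(c, d):
    """Integral of exp(y) * cos(u * (y - a)) over [c, d]."""
    return (np.cos(u * (d - a)) * np.exp(d) - np.cos(u * (c - a)) * np.exp(c)
            + u * (np.sin(u * (d - a)) * np.exp(d)
                   - np.sin(u * (c - a)) * np.exp(c))) / (1.0 + u ** 2)

def psi(c, d):
    """Integral of cos(u * (y - a)) over [c, d]."""
    out = (np.sin(u[1:] * (d - a)) - np.sin(u[1:] * (c - a))) / u[1:]
    return np.concatenate(([d - c], out))

# Cosine coefficients of the call payoff K * (exp(y) - 1)^+ on [0, b]
V = 2.0 / (b - a) * K * (chi(0.0, b) - psi(0.0, b))

terms = np.real(cf(u) * np.exp(-1j * u * a)) * V
terms[0] *= 0.5                                   # first term gets weight 1/2
cos_price = np.exp(-r * T) * terms.sum()

# Black-Scholes benchmark
d1 = (np.log(S0 / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * np.sqrt(T))
d2 = d1 - sigma * np.sqrt(T)
bs_price = S0 * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

print("COS price:", cos_price, "  Black-Scholes price:", bs_price)
```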


Contributed talk: Wed, 10:00-10:30, room B

Thomas Liebmann (Thomas Liebmann)

Subordination, conditional expectations, and integration by parts

We present an approach to develop various formulae and estimators for conditional expectations in a setting with subordination and we show how to use the same technique to obtain integration by parts formulae.
Ameliorating a given payoff via conditional expectations, for instance, requires conditioning on future values of the pricing kernel. Conditioning the value of a function of the underlying process on previous values of the process from simulated paths is required when estimating the early exercise barrier of Bermudan options, in order to approximate both the exercise barrier and the value of American options.
We look at a setting with subordination because it is a powerful tool to construct and investigate classes of processes. Time-changing Brownian motions by Lévy subordinators leads e.g. to Variance Gamma, Inverse Gaussian, and Meixner processes, and many classes of subordinators are subclasses of the class of Generalized Gamma Convolutions. Multidimensional Lévy processes can be obtained by subordinating multi-dimensional processes or by extending the subordination approach and using multiple independent subordinators time-changing multiple independent multidimensional Lévy processes.
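As a concrete instance of the subordination construction referred to above, the following sketch simulates a Variance Gamma path as a Brownian motion with drift run on an independent gamma clock; all parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical Variance Gamma parameters
theta, sigma, nu = -0.1, 0.2, 0.3   # drift and vol of the subordinated BM, variance rate of the gamma clock
T, n = 1.0, 1_000
dt = T / n

# Gamma subordinator increments: mean dt, variance nu * dt
dG = rng.gamma(shape=dt / nu, scale=nu, size=n)

# Subordinated Brownian motion: X_t = theta * G_t + sigma * W_{G_t}
dX = theta * dG + sigma * np.sqrt(dG) * rng.normal(size=n)
X = np.concatenate(([0.0], np.cumsum(dX)))

print("X_T =", X[-1], "   total elapsed business time G_T =", dG.sum())
```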


Contributed talk: Tue, 11:10-11:40, room E

Felix-Benedikt Liebrich (LMU Munich)

Robustness vs. tractability: the class (S) property

In robust finance, Knightian uncertainty is often captured by a non-dominated set of priors, probability measures on the future states of the world. This usually comes at the cost of losing tractability; in contrast to the dominated setting, advanced functional analytic tools are often not available anymore. For such a set of probabilities, we suggest the class (S) property: there is an alternative set of probabilities containing all the information encoded by the initial priors such that the mass of each measure is concentrated on a quasi sure support, a unique part of the possible future states of the world. The class (S) property allows us to analyse precisely when analytic tools – such as the identification of countably additive measures with order continuous functionals, order completeness of the resulting function spaces, or Grothendieck's Lemma – fail and when they can be recovered. Moreover, probabilistic and measure-theoretic assumptions regularly made in the literature gain a functional analytic interpretation when the underlying model is of class (S). We illustrate our theoretical findings in the context of volatility uncertainty.
Joint work with Marco Maggis and Gregor Svindland.


Contributed talk: Mon, 12:20-12:50, room B

Oliver Lubos (University of Duisburg-Essen)

Natural hedging with fix and floating strike guarantees

The paper analyzes minimum return rate guarantees (MRRGs), including fixed guarantee rates prevailing for the whole contract horizon as well as floating guarantee rates which are linked to the interest rate evolution. In a complete arbitrage-free market where the asset and bond price dynamics are given by Gaussian processes, we obtain closed-form pricing solutions for both guarantee schemes. Differences in the guarantee costs are then explained by the difference between the arbitrage-free values of the fixed and floating rate guarantees and the difference between the cumulated volatilities resulting from forward and simple volatilities. We then consider the perspective of asset liability management, i.e. we analyze the sensitivities of the asset and liability side to changes in the interest rate. We show that a combination of fixed and floating strike guarantees enables a natural hedge against changes in the interest rate.
Joint work with Antje Mahayni and Katharina Stein.


Contributed talk: Mon, 14:10-14:40, room C

Assad Majid (TU Dresden)

A comparison principle between classical and rough Heston models

Extending some results on Volterra integral equations (cf. Gatheral and Keller-Ressel, 2018), we prove a comparison principle for classical and rough Heston models. In particular, we show that the classical moment generating function (MGF) represents a time-changed lower bound of the MGF in the rough case. Applying these results to the theory of moment explosions, this statement can be transferred to a comparison principle for the asymptotic steepness of implied volatility in the rough and classical Heston model. Moreover, all results can be stated in a more general framework, using models with arbitrary convolution kernels instead of the rough Heston power-law kernel.
Joint work with Martin Keller-Ressel.


Contributed talk: Wed, 09:00-09:30, room B

Ludovic Mathys (University of Zurich)

Intra-horizon expected shortfall and risk structure in models of jumps

The present article deals with intra-horizon risk in models of jumps. Our general understanding of intra-horizon risk is similar to the approach taken by Bakshi and Panayotov (2010). In particular, we believe that quantifying market risk by strictly relying on point-in-time measures cannot be deemed a satisfactory approach in general. Instead, we argue that complementing this approach by studying measures of risk that capture the magnitude of losses potentially incurred over the full length of a trading horizon is necessary when dealing with (m)any financial positions. To address this issue, we propose an intra-horizon analogue of the expected shortfall for general profit-and-loss processes and discuss some of its properties. Our intra-horizon expected shortfall is well-defined for (m)any popular classes of Lévy processes encountered when modeling market dynamics and constitutes a coherent measure of risk, as introduced in Cheridito et al. (2004). On the computational side, we provide a simple method to derive the intra-horizon expected shortfall inherent to popular Lévy dynamics. Our general technique relies on results for maturity-randomized first-passage probabilities and allows for a derivation of diffusion and jump risk contributions. These theoretical results are finally discussed in an empirical analysis, where Lévy models are calibrated to data and our intra-horizon expected shortfall is compared to other measures of risk.
Joint work with Walter Farkas and Nikola Vasiljevic.


Contributed talk: Mon, 12:20-12:50, room E

Andrea Mazzoran (University of Padova)

A forward model for power markets based on branching processes.

We propose and investigate a model for forward power price dynamics based on continuous branching processes with immigration. The proposed model describes the forward price dynamics and exhibits jump clustering features. A similar approach for power markets was already exploited by other authors, like F.E. Benth and F. Paraschiv, who adopted a Gaussian framework for describing the forward curve dynamics. The novelty of our approach consists in combining the basic features of branching processes in order to get a realistic and parsimonious model setting. We discuss the no-arbitrage issues and the futures dynamics in the present modelling framework. We outline a possible methodology for model calibration.
Joint work with Giorgia Callegaro, Simone Scotti and Carlo Sgarra.


Contributed talk: Wed, 09:00-09:30, room D

Alexander Molitor (Goethe-Universität)

Prospective strict no-arbitrage and the fundamental theorem of asset pricing under transaction costs

In finite discrete time market models with proportional transaction costs, the no-arbitrage property does not imply the existence of a separating probability measure since there can still exist an approximate arbitrage. Schachermayer (2004) addressed this problem by introducing the robust no-arbitrage condition and showed that it is equivalent to the existence of a strictly consistent price system.
In this talk, we introduce the concept of prospective strict no-arbitrage that is a variant of the strict no-arbitrage property from Kabanov, Rasonyi, and Stricker (2002). The prospective strict no-arbitrage condition is slightly weaker than robust no-arbitrage, and it implies that the set of portfolios attainable from zero initial endowment is closed in probability. A weak version of prospective strict no-arbitrage allows us to establish a fundamental theorem of asset pricing with consistent price systems which are not necessarily strict, i.e., the consistent frictionless prices may lie on the boundary of the bid-ask spread.
On the technical level, a crucial difference to Schachermayer (2004) and Kabanov-Rasonyi-Stricker (2003) is that we prove closedness without having at hand that the null-strategies form a linear space.
Joint work with Christoph Kühn.


Contributed talk: Tue, 11:40-12:10, room D

Cosimo Munari (University of Zurich)

Robust portfolio selection under regulatory constraints

In a capital adequacy framework, risk measures are used to determine the minimal amount of capital that a financial institution has to raise and invest in a portfolio of pre-specified eligible assets in order to pass a given capital adequacy test. From a capital efficiency perspective, it is important to be able to do so at the lowest possible cost and to identify the corresponding optimal portfolios. We study the existence and uniqueness of such optimal portfolios as well as their stability with respect to perturbations of the underlying capital position. This behavior is naturally linked to the continuity properties of the set-valued map that associates to each capital position the corresponding set of optimal portfolios. Upper semicontinuity can be established under fairly natural assumptions. Lower semicontinuity is more elusive. While it is always satisfied in a polyhedral setting, it generally fails otherwise, even when the reference risk measure is convex. However, lower semicontinuity can often be established for portfolios that are close to being optimal. Besides capital adequacy, our results have a variety of natural applications to pricing, hedging, and systemic risk measurement.
The talk is based on a joint paper with Michel Baes and Pablo Koch-Medina and is to appear in Mathematical Finance.


Contributed talk: Tue, 11:40-12:10, room E

Max Nendel (Bielefeld University)

Semigroup envelopes and Markov processes under nonlinear expectation

Nonlinear expectations, as introduced by S. Peng, are closely related to monetary risk measures. Nonlinear expectations naturally appear in the context of pricing under model uncertainty, e.g. drift uncertainty (g-expectation) or volatility uncertainty (G-expectation). In this talk, we demonstrate how Markov processes under nonlinear expectations arise from solutions to certain fully nonlinear PDEs, where the Knightian uncertainty is in the generator. In the case of Lévy processes this corresponds to ambiguity in the Lévy triplet. The results rely on the consideration of nonlinear semigroups and a nonlinear version of Kolmogorov's extension theorem. As an application, we provide sufficient conditions for families of Lévy processes, O-U processes and geometric Brownian motions that guarantee the solvability of the related fully nonlinear partial (integro-)differential equation. We further show that the solution admits a stochastic representation in terms of a Markov process under a nonlinear expectation.
This presentation is based on joint works with Robert Denk, Michael Kupper and Michael Röckner.


Contributed talk: Wed, 11:00-11:30, room B

Zsolt Nika (Pázmány Péter Catholic University)

Log-optimal investments and adaptive strategies (based on Stochastic Gradient)

One major question in portfolio optimization is how to construct an optimal investment strategy when the stock price has long memory. I will present a family of models in discrete time, called the Conditionally Gaussian model family, which can incorporate long memory in such a way that a log-optimal solution exists and can be calculated numerically.
Next, I will introduce two approximate solutions which are computationally feasible. The numerical solution in these cases can be obtained by simply evaluating a function instead of computing numerical integrals.
One of the approximate solutions suggests threshold-type strategies. The essence of this strategy class is that we are able to construct a stochastic gradient algorithm that converges to the optimal strategy. The method works even if the stock price has long memory.
For numerical results, I will demonstrate the log-optimal solution on a model constructed in the spirit of the Fractional Stochastic Volatility model. This model, which has long memory, belongs to the Conditionally Gaussian family and has several statistical advantages. For the results obtained with the stochastic gradient method we rely on somewhat simpler processes.
This presentation relies on the paper:
Z. Nika, M. Rásonyi, "Log-optimal portfolios with memory effect", 2018 and on a manuscript in preparation.


Contributed talk: Tue, 15:00-15:30, room B

Sascha Offermann (University of Duisburg-Essen)

Participating life insurance contracts with periodic premium payments under regime switching

We consider participating life insurance products and their benefits to the insured. Our main focus is on the impact of different contribution schemes, i.e. when and how much the insured contributes. This is important since the insured's benefits depend on the performance of the investment strategy conducted by the product provider. In addition, we consider the interactions of the premium contribution scheme, embedded guarantees, and different management rules accounting for a regime switch in the risk profile of the insurance company. We shed light on two main effects on the optimal expected utility of the insured. First, the combination of contribution scheme, embedded guarantees and management rule has an impact on the investment risk of the insured. The second main effect is a price effect: assuming that the guarantees are fairly priced, the guarantee costs also depend on all the above mentioned factors.
Joint work with Antje Mahayni and Katharina Stein.


Contributed talk: Wed, 10:00-10:30, room D

José Orihuela (Murcia University)

Mackey constraints for Lebesgue risk measures

The following James type result is proved:
"Let A be a closed, convex, bounded and not weakly compact subset of a Banach space E. Let us fix an absolutely convex and weakly compact subset W of ε, a continuous linear functional ε0* with z0*(A) > 0, and ε > 0.
Then there is a continuous linear form x0* such that
| x0*(w) - z0*(w) | < ε
for every w in W, x0* does not attains its infimum on A but x0*(A) > 0."
As a consequence we show that a Fatou coherent monetary utility function u is a Lebesgue function if, and only if, the inf-convolution u*v is Fatou for every Fatou coherent monetary utility function v. We shall present the corresponding result for the Jouini-Schachermayer-Touzi representation theorem of concave monetary utility functions with the Lebesgue property too.
Joint work with Freddy Delbaen.


Contributed talk: Wed, 11:00-11:30, room D

Salvador Ortiz-Latorre (University of Oslo)

A Hull-White formula for fractional volatility Lévy models

The aim of this talk is to present a variant of the classical Heston model which allows for jumps and fractional volatility. The log price process is modelled by a jump-diffusion and the stochastic volatility process is correlated not only with the Brownian motion driving the asset price but also with the asset price jumps. In addition, the volatility process has a fractional component. In this setup, we first find a martingale representation for the future expected volatility by means of Malliavin calculus. Then, we find a Hull and White type decomposition formula for the price of European derivatives. This decomposition provides a way of finding approximation formulas for the prices of these derivatives as well as studying the implied volatility behaviour.
Joint work with Marc Lagunas-Merino.


Contributed talk: Mon, 11:20-11:50, room D

Natalie Packham (Hochschule für Wirtschaft und Recht Berlin)

Rating migration processes based on conditional transition matrices

We develop a model for credit rating migration. Since defaults depend on the state of the economy, rating transitions will in general be neither Markov nor time-homogeneous. We therefore include an economic state variable and jointly model the pair of economic state and rating class as a time-homogeneous Markov chain. First, we study various properties and the asymptotic behaviour of the resulting rating process. Second, we provide a mathematical framework to study the impact of different rating philosophies, such as point-in-time (PIT) and through-the-cycle (TTC) ratings. Finally, we provide empirical results based on a history of observed default frequencies.
Joint work with Michael Kalkbrener.


Contributed talk: Tue, 11:10-11:40, room D

Hyungbin Park (Seoul National University)

Sensitivity analysis of long-term cash flows

This talk discusses a sensitivity analysis of long-term cash flows. The price of the cash flow at time zero is given by the pricing operator of a Markov diffusion acting on the cash flow function. We study the extent to which the price of the cash flow is affected by small perturbations of the underlying Markov diffusion. The main tool is the Hansen-Scheinkman decomposition, which is a method to express the cash flow in terms of eigenvalues and eigenfunctions of the pricing operator. The sensitivities of long-term cash flows can be represented via simple expressions in terms of the eigenvalue and the eigenfunction.
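For orientation, the decomposition used here can be sketched in generic form (our notation, not taken from the talk): if φ is a positive eigenfunction of the pricing semigroup, i.e. P_t φ = e^{-λt} φ, then

P_t f(x) = e^{-\lambda t}\, \varphi(x)\, \widehat{E}_x\!\left[ \frac{f(X_t)}{\varphi(X_t)} \right],

where \widehat{E} denotes expectation under the eigen-measure associated with the pair (λ, φ); sensitivities of long-term prices are then expressed through perturbations of the eigenvalue λ and the eigenfunction φ.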


Contributed talk: Tue, 11:10-11:40, room B

Mathias Pohl (University of Vienna)

Robust risk aggregation with neural networks

We consider settings in which the distribution of a multivariate random variable is partly ambiguous. We assume the ambiguity lies on the level of dependence structure, and that the marginal distributions are known. Furthermore, a current best guess for a distribution, called reference measure, is available. We work with the set of distributions that are both close to the given reference measure in a transportation distance (e.g. the Wasserstein distance), and additionally have the correct marginal structure. The goal is to find upper and lower bounds for integrals of interest with respect to distributions in this set. The described problem appears naturally in the context of risk aggregation. When aggregating different risks, the marginal distributions of these risks are known and the task is to quantify their joint effect on a given system. This is typically done by applying a meaningful risk measure to the sum of the individual risks. For this purpose, the stochastic interdependencies between the risks need to be specified. In practice the models of this dependence structure are however subject to relatively high model ambiguity. The contribution of this paper is twofold: Firstly, we derive a dual representation of the considered problem and prove that strong duality holds. Secondly, we propose a generally applicable and computationally feasible method, which relies on neural networks, in order to numerically solve the derived dual problem. The latter method is tested on a number of toy examples, before it is finally applied to perform robust risk aggregation in a real world instance.
Joint work with Stephan Eckstein and Michael Kupper.
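In symbols, the optimization problem described above can be sketched as follows (our notation): with reference measure \hat\mu, fixed marginals \mu_1,\dots,\mu_d, transportation-distance radius \rho and a function f of the aggregate risk, one seeks the bounds

\sup_{\mu} \int f \, d\mu \quad \text{subject to} \quad \mu \in \Pi(\mu_1,\dots,\mu_d), \quad \mathcal{W}(\mu,\hat\mu) \le \rho,

(and the corresponding infimum), where \Pi(\mu_1,\dots,\mu_d) denotes the distributions with the prescribed marginals and \mathcal{W} a transportation distance such as the Wasserstein distance; the talk derives a dual of this problem and solves the dual numerically with neural networks.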


Contributed talk: Tue, 15:30-16:00, room C

Jan Pospíšil (University of West Bohemia)

Robustness and sensitivity analyses for rough fractional stochastic volatility models

In this talk we perform a robustness and sensitivity analysis of several continuous-time rough fractional stochastic volatility (RFSV) models with respect to the process of market calibration. The analyses should validate the hypothesis on the importance of roughness in the volatility process dynamics. An empirical study is performed on a data set of Apple Inc. equity options traded on four different days in April and May 2015. In particular, results for the RFSV, rBergomi and aRFSV models are provided.


Contributed talk: Tue, 17:00-17:30, room D

Christian Pötz (Queen Mary University of London)

Efficient pricing and exposure calculation for early-exercise options using Chebyshev Interpolation

In this talk, we introduce a unified framework for the efficient pricing and exposure calculation of early-exercise (American, Bermudan) and barrier options. Formulating the pricing problem as a discrete dynamic programming problem enables us to approximate the value function with Chebyshev polynomials in the backward induction. The structure of the algorithm allows us to shift all model-dependent calculations into an offline phase prior to the time stepping. We exploit this simple polynomial structure in the context of XVA calculation. Our method allows a highly efficient evaluation of the option's credit exposure, even for a large number of simulated risk factors. We extend the new approach to multivariate early-exercise options if the underlying process has multivariate normally distributed increments. The proposed method is flexible in terms of model choice, payoff profiles and asset classes. Numerical experiments confirm the flexibility and efficiency of the method.
Joint work with Kathrin Glau and Ricardo Pachon.
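As a rough illustration of the kind of backward induction described above (not the authors' implementation), the following Python sketch prices a Bermudan put under Black-Scholes by interpolating the value function with Chebyshev polynomials in log-price and computing the conditional expectations by Gauss-Hermite quadrature; all parameter values are arbitrary.

import numpy as np
from numpy.polynomial import chebyshev as C
from numpy.polynomial.hermite import hermgauss

# toy parameters (our assumptions, not taken from the talk)
S0, K, r, sigma, T = 100.0, 100.0, 0.03, 0.2, 1.0
n_ex, n_nodes, n_gh = 12, 60, 40      # exercise dates, Chebyshev nodes, quadrature nodes
dt = T / n_ex

a, b = np.log(S0) - 4 * sigma * np.sqrt(T), np.log(S0) + 4 * sigma * np.sqrt(T)
to_unit = lambda x: np.clip(2 * (x - a) / (b - a) - 1, -1.0, 1.0)

u_nodes = np.cos((2 * np.arange(n_nodes) + 1) * np.pi / (2 * n_nodes))   # Chebyshev points
x_nodes = a + (u_nodes + 1) * (b - a) / 2                                # log-price nodes
z, w = hermgauss(n_gh)
z, w = np.sqrt(2.0) * z, w / np.sqrt(np.pi)                              # E[g(Z)], Z ~ N(0,1)

payoff = lambda x: np.maximum(K - np.exp(x), 0.0)

coef = C.chebfit(u_nodes, payoff(x_nodes), n_nodes - 1)                  # value at maturity
for _ in range(n_ex - 1):
    # continuation value at the nodes: discounted conditional expectation of the interpolant
    x_next = x_nodes[:, None] + (r - 0.5 * sigma ** 2) * dt + sigma * np.sqrt(dt) * z[None, :]
    cont = np.exp(-r * dt) * (C.chebval(to_unit(x_next), coef) @ w)
    coef = C.chebfit(u_nodes, np.maximum(payoff(x_nodes), cont), n_nodes - 1)

# one last continuation step from t = 0 to the first exercise date
x0 = np.log(S0) + (r - 0.5 * sigma ** 2) * dt + sigma * np.sqrt(dt) * z
price = np.exp(-r * dt) * (C.chebval(to_unit(x0), coef) @ w)
print("Bermudan put via Chebyshev backward induction:", round(price, 4))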


Contributed talk: Wed, 12:30-13:00, room C

Dragana Radojicic (Vienna University of Technology)

Random arrival times for the LOB (Limit Order Book) in the discrete time approximation

We introduce a stochastic model for the discrete time and space dynamics of a limit order book, driven by a simple symmetric random walk in which each step takes a random amount of time. We introduce a two-parameter process V(n,U) which represents the volume of orders at time n with price U. We fix an integer spread parameter μ>1 and assume that at each step of the random walk a new order is placed at distance μ above the mid-price. Moreover, two different execution mechanisms are obtained, named Type I trades (the next trade occurs after an excursion to the next running maximum) and Type II trades (the mid-price falls quickly and then goes up by μ). We define the key quantity, the avalanche length, as an avalanche period of trade executions, but allow a small window without any execution event.
Joint work with Friedrich Hubalek and Thorsten Rheinländer.


Contributed talk: Wed, 09:30-10:00, room C

Stefan Rigger (University of Vienna)

Interacting particle systems, default cascades and the M1-topology

The M1-topology is one of the lesser-known topologies originally introduced by Skorokhod. An advantageous feature of this topology (in comparison to the more widely used J1-topology) is that it is particularly well-suited to deal with monotone functions. We prove a tightness result for processes that can be decomposed into a continuous and a monotone part and highlight its relevance to the study of particle systems. As a financial application, we establish a connection between mean field limits of interacting particle systems and systemic risk.
Joint work with Christa Cuchiero and Sara Svaluto-Ferro.


Contributed talk: Wed, 09:30-10:00, room D

Alet Roux (University of York)

Optimal investment and contingent claim valuation with disutility under proportional transaction costs

We consider the problem of an investor with a position in a contingent claim (represented by a payment stream), who is allowed to inject additional funds over the lifetime of the derivative to "top up" their position in the underlying assets. The performance of their overall investment can be measured by means of the expected total disutility of these injections over the lifetime of the derivative. This framework allows for the formulation of indifference bid and ask prices, which generalises the classical utility indifference pricing theory of European options.
In this talk we will present an efficient dynamic programming procedure for finding the optimal investment, and computing indifference prices of contingent claims, while working with exponential regret in a discrete time model. The dynamic programming procedure itself arises from a dual formulation of the indifference pricing problem, which involves the sum of the relative entropy of martingale measures with respect to the real-world probability. We also present some numerical examples.
This is joint work with my former PhD student Zhikang Xu.


Contributed talk: Tue, 11:40-12:10, room B

Philipp Schmocker (University of St. Gallen)

Deep stochastic portfolio theory

We propose a novel machine learning application within stochastic portfolio theory (SPT), a descriptive framework for analyzing stock market structure and portfolio behaviour. By using neural networks as portfolio generating functions, we try to solve the inverse problem of SPT: Given an investment objective, is it possible to learn a generating function, which generates the optimal portfolio with the desired investment characteristics? In numerical examples, we show that our machine learning approach can recover the most well-known generating functions of SPT, and apply our method to other examples to regain the desired portfolio.
Joint work with Christa Cuchiero and Josef Teichmann.
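As background, recall the classical generating-function formula of SPT (standard notation, quoted for context): a sufficiently smooth positive function G of the market weights \mu = (\mu_1,\dots,\mu_d) generates the portfolio with weights

\pi_i(\mu) = \Big( D_i \log G(\mu) + 1 - \sum_{j=1}^{d} \mu_j D_j \log G(\mu) \Big)\, \mu_i, \qquad i = 1,\dots,d.

In the approach described above, G (or the portfolio map itself) is parameterized by a neural network and trained towards the given investment objective.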


Contributed talk: Tue, 14:30-15:00, room B

Rafael Serrano (Universidad del Rosario)

ALM for insurers in a Lévy-type jump-diffusion model with multiple underwriting lines and nonlinear wealth frictions

We study a continuous-time asset-liability management problem for an insurance firm that backs up the liabilities raised by insurance contracts from multiple underwriting lines with the premium profits and the income resulting from investing in a Lévy-type jump-diffusion market model. The model allows us to control the underwriting volume (number of policies) for each type of contract. Using the martingale approach and convex duality techniques we characterize strategies that maximize expected utility from consumption and final wealth under CRRA preferences. We consider underwriting lines with both dependent and independent risk processes, and nonlinear wealth frictions such as a large-investor setting or differential rates for borrowing and lending.


Contributed talk: Tue, 14:30-15:00, room C

Carlo Sgarra (Politecnico di Milano)

A Gamma Ornstein-Uhlenbeck model driven by a Hawkes process

We propose an extension of the Gamma-OU Barndorff-Nielsen and Shephard model taking into account jump clustering phenomena. We assume that the intensity process of the Hawkes driver coincides, up to a constant, with the variance process. By applying the theory of continuous-state branching processes with immigration, we prove existence and uniqueness of strong solutions of the SDE governing the asset price dynamics. We introduce a measure change of Esscher type in order to describe the relation between the risk-neutral and the historical dynamics. By exploiting the affine features of the model we provide an explicit form for the Laplace transform of the asset log-return, for its quadratic variation and for the ergodic distribution of the variance process. We show that the proposed model exhibits larger flexibility than the Gamma-OU model, despite requiring the same number of parameters. In particular, we illustrate numerically that the left wing of the implied volatility smile can first be fitted by using the original Gamma-OU model, and the right wing can then be adjusted through the triggering of the intensity and variance processes. Moreover, the implied volatility of variance swap options is upward-sloping due to the self-exciting property of Hawkes processes.
Joint work with Guillaume Bernis and Simone Scotti.


Contributed talk: Tue, 16:30-17:00, room C

David Shkel (University of Hagen)

Model risk in a rough world

In derivative pricing, the risk that different models yield different prices for exotic derivatives, even though they are calibrated to the same plain vanilla options, is denoted as model risk. Throughout the literature, model risk is regarded as significant and as something that needs to be dealt with, with the focus lying on single-asset options. We add to this by analyzing model risk in the case of multi-asset options and, additionally, we provide an empirical study based on multi barrier reverse convertibles (MBRC), a prominent type of retail derivatives in Switzerland.
We calculated prices for multi-asset options with representatives of three model classes, namely, the general a-Variance Gamma model by Guillaume (2013), the multivariate Heston model by Dimitroff et al. (2011), and a multi-asset rough Bergomi model, which is an extension of the model proposed by Bayer et al. (2016). To quantify model risk, we apply a coherent model risk measure, which is defined as the range of model prices. Commonly, multi-asset models are calibrated to volatility surfaces and historical return correlations. We additionally calibrate the models to implied correlations and analyze an additional model risk component introduced by the uncertainty of future correlations. Option prices are calculated for a wide variety of options written on different combinations of two to four underlyings, which leads to a final set of nearly 30 million option prices.
The results reveal that model risk is substantial and increases with the number of underlyings for all kinds of analyzed options. This increase is not linear, since the change from three to four underlyings is a multiple of the change from two to three underlyings. Model risk based on historical correlations is about two percent relative to the average prices of plain vanilla options (e.g. worst-of puts) written on two underlyings, and it rises to over 13% for options written on four underlyings. In the case of barrier options (e.g. down-and-out puts), it rises from 5% to over 19%. The price divergence based on correlation estimates is also significant and increases with the option's time to maturity, parallel to an increase of the difference between historical and implied correlations. Regarding the MBRCs, model risk is significant as well, and we find some evidence that it is transferred to private investors via margins incorporated in market prices. Overall, model risk is statistically and economically significant and should be closely monitored.
Joint work with Rainer Baule.

References:
Bayer, C., P. Friz, and J. Gatheral (2016). Pricing under rough volatility. Quantitative Finance 16, 887-904.
Dimitroff, G., S. Lorenz, and A. Szimayer (2011). A parsimonious multi-asset Heston model: calibration and derivative pricing. International Journal of Theoretical and Applied Finance 14, 1299-1333.
Guillaume, F. (2013). The aVG model for multivariate asset pricing: calibration and extension. Review of Derivative Research 16, 25-52.


Contributed talk: Wed, 14:00-14:30, room B

Alexander Smirnow (University of Zurich)

Systemic intrinsic risk measures

In recent years, it has become clear that an isolated micro-prudential approach to capital adequacy requirements of individual institutions is insufficient. It can increase the homogeneity of the financial system and ultimately the cost to society. For this reason, the focus of the financial and mathematical literature has shifted towards the macro-prudential regulation of the financial network as a whole. In particular, systemic risk measures have been discussed as a risk mitigation tool. In this spirit, we adopt a general approach of multivariate, set-valued risk measures and combine it with the recently proposed notion of intrinsic risk measures. In the latter, instead of using external capital to define the risk of a financial position, we use internal capital, which is received when part of the currently held position is sold. We translate this into a systemic framework and show that the systemic intrinsic risk measures have desirable properties such as monotonicity and quasi-convexity. Furthermore, for convex acceptance sets we derive dual representations of the systemic intrinsic risk measures.
Joint work with Jana Hlavinova.


Contributed talk: Tue, 15:30-16:00, room E

Pawel Sobala (UNIQA Insurance Group)

Pricing Cyber-Insurance using a Copula-Based Actuarial Model

Over the last several decades, due to the development of new technologies and advanced computer networks, a completely new type of risk has emerged, namely cyber risk. It is any risk emanating from the use of electronic data and their transmission. Yet insurance companies do not offer full protection against this type of risk, due to the small number of mathematical models describing this phenomenon.
This presentation introduces two models that allow us to analyse the security of computer networks and therefore calculate the cyber insurance premium. The first is a theoretical model based on the alternating renewal process. Its aim is to analyse the security of computer networks. The key quantity of interest is obtained via the key renewal theorem.
The second model is a simulation algorithm based on the Monte Carlo method. Using this algorithm it is possible to calculate the cyber insurance premium. The influence of the network structure and of the assumed distributions of random variables is considered. The simulations indicate a significant impact of the above-mentioned factors on the price of the cyber insurance premium.
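A minimal Monte Carlo sketch in the spirit of the second model, under our own simplifying assumptions (Poisson attack arrivals per network node, lognormal losses, premium equal to the expected loss plus a safety loading; none of these choices are taken from the talk):

import numpy as np

rng = np.random.default_rng(0)

# toy assumptions (not from the talk)
n_nodes, lam = 20, 0.5           # nodes in the network, attacks per node per year
mu_loss, sigma_loss = 9.0, 1.2   # lognormal loss parameters per successful attack
horizon, loading = 1.0, 0.2      # contract length (years), safety loading
n_sim = 100_000

annual_losses = np.empty(n_sim)
for i in range(n_sim):
    n_attacks = rng.poisson(lam * horizon, size=n_nodes).sum()
    annual_losses[i] = rng.lognormal(mu_loss, sigma_loss, size=n_attacks).sum()

pure_premium = annual_losses.mean()
premium = (1 + loading) * pure_premium
print(f"pure premium: {pure_premium:,.0f}, loaded premium: {premium:,.0f}")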


Contributed talk: Mon, 14:10-14:40, room E

Max Souza (Universidade Federal Fluminense)

Pricing options with non-uniform Fourier transform

We discuss how to price a class of European options using Fourier transforms efficiently and accurately. This is achieved by providing very accurate approximations to some Fourier integral operators. These approximations can then be efficiently implemented using NUFFT.
Joint work with Leonardo Müller and Jorge Zubelli.
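For orientation, one standard Fourier-transform pricing representation that methods of this kind aim to evaluate efficiently is the Carr-Madan formula for a call with log-strike k (quoted from the general literature; the talk treats a class of such Fourier integral operators):

C_T(k) = \frac{e^{-\alpha k}}{\pi} \int_0^{\infty} \mathrm{Re}\!\left[ e^{-iuk}\, \frac{e^{-rT}\, \varphi_T\big(u-(\alpha+1)i\big)}{\alpha^2+\alpha-u^2+i(2\alpha+1)u} \right] du,

where \varphi_T is the characteristic function of the log-price and \alpha > 0 a damping parameter; a non-uniform FFT allows such integrals to be evaluated accurately on non-equidistant strike or frequency grids.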


Contributed talk: Mon, 14:40-15:10, room D

Moris Simon Strub (Southern University of Science and Technology)

Forward rank-dependent performance criteria: time-consistent investment under probability distortion

We introduce the concept of forward rank-dependent performance processes, extending the original notion to forward criteria that incorporate probability distortions. A fundamental challenge is how to reconcile the time-consistent nature of forward performance criteria with the time-inconsistency stemming from probability distortions. For this, we first propose two distinct definitions, one based on the preservation of performance value and the other on the time-consistency of policies and, in turn, establish their equivalence. We then fully characterize the viable class of probability distortion processes, providing a bifurcation-type result. Specifically, it is either the case that the probability distortions are degenerate in the sense that the investor would never invest in the risky assets, or the marginal probability distortion equals a normalized power of the quantile function of the pricing kernel. We also characterize the optimal wealth process, whose structure motivates the introduction of a new, distorted measure and a related market. We then build a striking correspondence between the forward rank-dependent criteria in the original market and forward criteria without probability distortions in the auxiliary market. This connection also provides a direct construction method for forward rank-dependent criteria. Byproducts of our work are some new results on the so-called dynamic utilities and on time-inconsistent problems in the classical (backward) setting.
Joint work with Xue Dong He and Thaleia Zariphopoulou.


Contributed talk: Tue, 11:40-12:10, room C

Martin Summer (Oesterreichische Nationalbank)

Systematic systemic stress tests

For a given set of banks, which economic and financial scenarios will lead to big losses? How big can losses in such scenarios possibly get? These are the two central questions of macro stress tests. Most current macro stress testing models have deficits in answering these questions. They select stress scenarios in a way which might leave aside many dangerous scenarios and thus create an illusion of safety, and they might consider highly implausible scenarios and thus trigger a false alarm. With respect to loss evaluation, most stress tests do not include tools to analyse systemic risk arising from the interactions of banks with each other and with the markets. We make a conceptual proposal for how these shortcomings may be addressed. We demonstrate the application of our concepts using publicly available data on European banks and capital markets, in particular the EBA 2016 stress test results.


Contributed talk: Wed, 12:30-13:00, room D

Sara Svaluto-Ferro (University of Vienna)

Infinite dimensional polynomial jump-diffusions

We introduce polynomial jump-diffusions taking values in an arbitrary Banach space via their infinitesimal generator. We obtain two representations of the (conditional) moments in terms of solutions of systems of ODEs. These representations generalize the well-known moment formulas for finite-dimensional polynomial jump-diffusions. We illustrate the practical relevance of these formulas by several applications. In particular, we consider (potentially rough) forward variance polynomial models and we illustrate how to use the moment formulas to compute prices of VIX options.
Joint work with Christa Cuchiero.


Contributed talk: Wed, 13:30-14:00, room E

Wayne Tarrant (Rose-Hulman Institute of Technology)

Financial contagion and self-organized criticality

After the disastrous 2008 crash, economists searched for answers as to why crashes keep recurring and how prediction might help prevent them. Most researchers in this area are currently concerned with locating contagious links, giving some measure of total systemic risk, or predicting how much loss will occur when the complete collapse comes. The aim here is a different one. Understanding the mechanism of systemic collapse in interbank lending, before trying to accomplish each of the above goals, leads to a more thorough ability to compute and to a better understanding of the nature of the risk.
As always it is difficult to find data on bank assets, as each bank considers this to be proprietary data. Thus this work utilizes simulation based upon previous research that shows that bank capitals are skewed right, that banks with larger capitals tend to have more links to other banks, and that loan sizes tend to correlate to bank capitals. From these previous empirical results, a matrix of initial bank capitals and loans is derived.
New loans of sizes that follow similarly right skewed distributions are added to the initial setup. At the addition of each new loan, a determination is made about whether this causes a bank to fail the Basel ratios. If the 8% ratio is breached, a default is set in motion. This wipes out loans that this bank owed to other banks and also reduces the capital of all banks to which the defaulting bank owes money. Sometimes this leaves the system with just the removal of this bank. Other times the avalanche of debt destruction causes a ripple effect, a cascading of banks defaulting.
Self-organized criticality is a property of a system that returns to similar critical states over wide arrays of initial parameters and different choices of dynamics for the system. It is characterized by Pareto laws in sizes of cascades, no matter what initial conditions and governing dynamics are in the system.
Over many different simulations for varying parameters, the sizes of avalanches were noted. For each simulation the maximum likelihood estimator was computed for a Zipf law, a discrete version of the Pareto law. Since this is a discrete situation, the MLE must be computed through the use of a Hurwitz zeta function, which is a generalization of the Riemann zeta function, and its derivative.
Using a chi-square goodness of fit test, we investigate whether sizes of avalanches follow Zipf's law. This would be evidence that the situation of systemic financial risk is well described by the model of self-organized criticality. This should lead to better understanding of the mechanisms of systemic collapse and to the ramifications of such collapse.
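The maximum likelihood step described above can be sketched in a few lines of Python (a toy illustration on synthetic avalanche sizes, not the study's data); the Zipf log-likelihood involves the zeta function, which scipy provides:

import numpy as np
from scipy.special import zeta
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)
sizes = rng.zipf(a=2.3, size=5000)        # synthetic avalanche sizes

# negative log-likelihood of the Zipf law P(X = x) = x^(-s) / zeta(s), x = 1, 2, ...
def neg_loglik(s, x=sizes):
    return s * np.log(x).sum() + len(x) * np.log(zeta(s, 1))

res = minimize_scalar(neg_loglik, bounds=(1.01, 6.0), method="bounded")
print(f"MLE of the Zipf exponent: {res.x:.3f}")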


Contributed talk: Wed, 11:30-12:00, room B

Kinga Tikosi (Central European University)

Optimizing threshold-type trading strategies with Kiefer-Wolfowitz algorithm

Algorithmic trading strategies are often based on some economic indicators reaching a target level. A natural question is how to choose the threshold parameters optimally. The functions describing these strategies in terms of the threshold parameters and the underlying stochastic process are not continuous (they have jumps when the target level is hit) and therefore classical recursive stochastic approximation schemes cannot be used to locate the optimal parameters algorithmically.
We generalize a decreasing-gain gradient-like recursive scheme proposed by Kiefer and Wolfowitz to functions which are not continuous; however, we require continuity in conditional mean. In the algorithm, finite differences of noisy measurements are used to estimate the gradient, as the objective function is assumed to be unknown. The underlying stochastic process is assumed to have a certain mixing property, which is satisfied by a large class of processes. Under appropriate assumptions we estimate the expected error of the scheme.
Joint work with Miklós Rásonyi.
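To fix ideas, the classical Kiefer-Wolfowitz recursion that the talk generalizes looks as follows on a one-dimensional toy problem (illustrative gain sequences a_n = a/n and c_n = c/n^(1/3), a smooth objective observed with noise; the talk's setting allows discontinuous objectives with continuous conditional mean):

import numpy as np

rng = np.random.default_rng(2)

def noisy_objective(theta):
    # toy objective: a smooth function observed with additive noise
    return (theta - 1.5) ** 2 + 0.1 * rng.standard_normal()

theta, a, c = 0.0, 0.5, 0.5
for n in range(1, 20001):
    a_n, c_n = a / n, c / n ** (1 / 3)
    # finite-difference estimate of the gradient from two noisy measurements
    grad = (noisy_objective(theta + c_n) - noisy_objective(theta - c_n)) / (2 * c_n)
    theta -= a_n * grad

print(f"estimated minimizer: {theta:.3f} (true value 1.5)")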


Contributed talk: Wed, 11:00-11:30, room C

Markus Ulze (University of Augsburg)

Determinants of implied volatility smiles – An empirical analysis using intraday DAX equity options

The non-constant behavior of implied volatility has been reported frequently; however, the role of market microstructure has been disregarded. By extending and reviewing the determinants of implied volatility in the context of high frequency (HF) trade-by-trade DAX equity options from the EUREX, a mean-reversion autocorrelation process is revealed, besides confirming low frequency results such as moneyness, time, liquidity, volume and underlying moment dependencies. We show, furthermore, that the mean-reversion process is present even if we control for fluctuating trades between bid and ask prices. It is induced by algorithmic market making and a market microstructure effect. We address the HF research gap in the market microstructure literature expressed by O'Hara (2015), who shows that markets and trading are radically different today, which consequently altered the basic constructs of market microstructure, and we give an additional explanation for the flickering quote hypothesis of Hasbrouck and Saar (2009).
Joint work with Andreas Rathgeber and Johannes Stadler.


Contributed talk: Tue, 15:00-15:30, room D

Nneka Ozioma Umeorah (North-West University)

Valuation of basket credit default swaps under stochastic default intensity models

Portfolio credit derivatives, including the basket credit default swaps, are designed to facilitate the transfer of credit risk amongst market participants. Investors consider them as cheap tools to hedge a portfolio of credits, instead of individual hedging of the credits.
In this work, we focus on nth-to-default swaps whereby the spreads depend on the nth default time. We formulate the default hazard rate process using one-factor stochastic interest rate models, both for homogeneous and for heterogeneous portfolios, and we estimate the joint survival probability distribution functions of the intensity models under the risk-neutral pricing measure. This work further employs the Monte Carlo simulation method under the one-factor Gaussian copula model to numerically approximate the distribution function of the default time, and thus make the numerical experiments for pricing the nth-to-default swaps viable.
Joint work with Matthias Ehrhardt and Hopolang Phillip Mashele.
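A stripped-down version of the simulation step described above (constant default intensities instead of the stochastic intensities used in the work, a one-factor Gaussian copula, and estimation of the probability that the nth default occurs before maturity; all numbers are placeholders):

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)

lam = np.array([0.02, 0.03, 0.04, 0.05, 0.06])   # flat hazard rates (assumption)
rho, T, n_th, n_sim = 0.3, 5.0, 2, 200_000       # correlation, maturity, nth default, paths

M = rng.standard_normal((n_sim, 1))              # common factor
Z = rng.standard_normal((n_sim, len(lam)))       # idiosyncratic factors
U = norm.cdf(np.sqrt(rho) * M + np.sqrt(1 - rho) * Z)
tau = -np.log(1 - U) / lam                       # default times via inverse survival function

nth_default = np.sort(tau, axis=1)[:, n_th - 1]
print(f"P({n_th}th default before T) ~ {np.mean(nth_default <= T):.4f}")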


Contributed talk: Mon, 11:50-12:20, room E

Janus Valberg-Madsen (Aalborg University)

A vine copula panel model for day-ahead electricity prices

Multivariate distributions that are not Gaussian or even elliptical are needed in many cases when modelling financial data, as these often exhibit heavy tails and asymmetry in their dependence structures. A common approach to handle such phenomena or patterns is to use copulas. In high dimensions it is now customary to use vine copulas, as these are able to model more complex dependency patterns by using separate pair-copulas as building blocks for constructing a full joint distribution.
In the present paper we consider a 24-dimensional vine copula model for day-ahead electricity prices at several European markets. We set up univariate time series models for each hour individually, and we tie those together into a joint model for all hours throughout the day using D-vine copulas.
To exemplify the proposed model, we consider the case of an energy management company that is exposed to different hours of the day, and we calculate risk metrics such as value-at-risk and expected shortfall.
Joint work with Esben Høg, Troels Ø. Christensen and Anca Pircalabu.
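Once the joint model delivers simulated scenarios, the two risk metrics mentioned above reduce to simple empirical quantile computations; a generic sketch on toy simulated losses (not the authors' code):

import numpy as np

def var_es(losses, alpha=0.95):
    # empirical value-at-risk and expected shortfall at level alpha (positive losses = bad)
    losses = np.sort(np.asarray(losses))
    var = np.quantile(losses, alpha)
    es = losses[losses >= var].mean()
    return var, es

rng = np.random.default_rng(4)
simulated_losses = rng.standard_t(df=4, size=100_000) * 1000  # toy heavy-tailed P&L
var, es = var_es(simulated_losses, alpha=0.95)
print(f"VaR(95%) = {var:.0f}, ES(95%) = {es:.0f}")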


Contributed talk: Wed, 09:30-10:00, room B

Michèle Vanmaele (Ghent University)

Utility maximization under time change

We consider the problem of maximizing expected utility from terminal wealth in a semimartingale setting, where the semimartingale is written as a sum of a time-changed Brownian motion and a finite variation process. To solve this problem we consider an initial enlargement of filtration and we derive a transformation rule for stochastic integrals w.r.t. time-changed Brownian motions.
The transformation rule allows us to shift the problem to a maximization problem under the enlarged filtration for models driven by a Brownian motion and a finite variation process. The latter can be solved by using martingale methods.
Then, applying the transformation rule again, we derive the optimal strategy for the original problem for a power utility under certain assumptions on the finite variation process of the semimartingale.
Joint work with Giulia Di Nunno, Hannes Haferkorn and Asma Khedher.


Contributed talk: Mon, 14:40-15:10, room E

Moritz Voss (University of California, Santa Barbara)

A two-player price impact game

We study the competition of two strategic agents for liquidity in the benchmark portfolio tracking setup of Bank, Soner, and Voss (2017) both facing common aggregated temporary and permanent price impact à la Almgren and Chriss. The resulting stochastic linear quadratic differential game with terminal state constraints allows for an explicitly available Nash equilibrium in feedback form. Our results reveal how the equilibrium strategies of the two players take into account the other agent's trading targets: either in an exploitatory intent or by providing liquidity to the competitor, depending on the ratio between temporary and permanent price impact. These insights complement existing studies in the literature on predatory trading models examined in the context of optimal portfolio liquidation problems.


Contributed talk: Tue, 12:10-12:40, room B

Hanna Wutte (ETH Zurich)

Randomized shallow neural networks and their use in understanding gradient descent

Today, various forms of neural networks are trained to perform approximation tasks in many fields (including Mathematical Finance). However, the solutions obtained are not wholly understood. Empirical results suggest that the training favors regularized solutions. Moreover, it has been questioned how much training really matters, in the sense that randomly choosing subsets of the network's weights and training only a few leads to an almost equally good performance.
These observations motivate us to analyze properties of the solutions found by the gradient descent algorithm frequently employed to perform the training task. In particular, we consider one-dimensional (shallow) neural networks in which weights are chosen randomly and only the last layer is trained. We show that the resulting solution converges to the smooth spline interpolation of the training data as the number of hidden nodes tends to infinity. This might give valuable insight on the properties of the solutions obtained using gradient descent methods in general settings.
Joint work with Jakob Heiss and Josef Teichmann.
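The "random weights, train only the last layer" setup described above is easy to reproduce; a minimal numpy sketch with one-dimensional inputs, ReLU features and a least-squares fit of the output layer (purely illustrative, not the authors' experiment):

import numpy as np

rng = np.random.default_rng(5)

# training data: noisy samples of an unknown function on [-1, 1]
x = np.linspace(-1, 1, 50)
y = np.sin(3 * x) + 0.05 * rng.standard_normal(x.size)

# hidden layer with random, untrained weights and biases (ReLU activation)
n_hidden = 2000
w = rng.standard_normal(n_hidden)
b = rng.uniform(-1, 1, n_hidden)
features = np.maximum(np.outer(x, w) + b, 0.0)        # shape (n_samples, n_hidden)

# train only the last layer: (minimum-norm) least-squares fit of the output weights
coef, *_ = np.linalg.lstsq(features, y, rcond=None)

x_test = np.linspace(-1, 1, 401)
y_hat = np.maximum(np.outer(x_test, w) + b, 0.0) @ coef
print("max training error:", round(np.abs(features @ coef - y).max(), 4))
print("max error vs. true function on a fine grid:", round(np.abs(y_hat - np.sin(3 * x_test)).max(), 4))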


Contributed talk: Mon, 14:10-14:40, room B

Antonino Zanette (INRIA)

Machine learning for pricing American options in high dimension

In this paper we propose an efficient method to compute the price of American basket options, based on Machine Learning and Monte Carlo simulations. Specifically, the options we consider are written on a basket of assets, each of them following a Black-Scholes dynamics. The method we propose is a backward dynamic programming algorithm which considers a finite number of uniformly distributed exercise dates. On these dates, the value of the option is computed as the maximum between the exercise value and the continuation value, which is approximated via Gaussian Process Regression. Specifically, we consider a finite number of points, each of them representing the values reached by the underlying at a certain time. First of all, we compute the continuation value only for these points by means of Monte Carlo simulations, and then we employ Gaussian Process Regression to approximate the whole continuation value function. Numerical tests show that the algorithm is fast and reliable, and it can also handle American options on very large baskets of assets, overcoming the problem of the curse of dimensionality.
Joint work with Ludovic Goudenege and Andrea Molent.
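A much simplified sketch in the spirit of the described algorithm: Gaussian Process Regression approximates the continuation value in a backward induction over simulated Black-Scholes paths (a Longstaff-Schwartz-style variant on a toy two-asset example, not the authors' implementation):

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(6)

# Bermudan put on a 2-asset basket under Black-Scholes (toy parameters)
d, S0, K, r, sigma, T, n_ex, n_paths = 2, 100.0, 100.0, 0.05, 0.2, 1.0, 10, 4000
dt = T / n_ex

# simulate independent geometric Brownian motion paths for each asset
Z = rng.standard_normal((n_paths, n_ex, d))
S = S0 * np.exp(np.cumsum((r - 0.5 * sigma ** 2) * dt + sigma * np.sqrt(dt) * Z, axis=1))

payoff = lambda s: np.maximum(K - s.mean(axis=-1), 0.0)

value = payoff(S[:, -1, :])                       # option value at maturity
for t in range(n_ex - 2, -1, -1):
    value *= np.exp(-r * dt)                      # discount one step back
    itm = payoff(S[:, t, :]) > 0                  # regress only on in-the-money paths
    idx = np.flatnonzero(itm)[:500]               # subsample to keep the GPR fit tractable
    gpr = GaussianProcessRegressor(alpha=1e-2, normalize_y=True)
    gpr.fit(S[idx, t, :], value[idx])
    cont = gpr.predict(S[itm, t, :])              # estimated continuation value
    exercise = payoff(S[itm, t, :]) > cont
    value[np.flatnonzero(itm)[exercise]] = payoff(S[itm, t, :])[exercise]

price = np.exp(-r * dt) * value.mean()
print(f"Bermudan basket put (GPR regression): {price:.3f}")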


 

 


Poster Presentations at the VCMF 2019 Conference


Posters will be presented from Monday lunchtime to Tuesday lunchtime.


Poster presentation:

Tereza Cristina Amorelli (Banco do Brasil)

Pricing non-traded assets using indifference pricing

Pricing contingent claims in incomplete markets arises naturally in many applications. We use an incomplete market model in discrete time, with a nested complete market of traded assets. In this model, continuous prices are allowed for the non-traded assets, which yields more flexibility while keeping a simple setting. We then implement an indifference pricing algorithm for this model, with an exponential utility, and present a couple of examples.
Joint work with Max Souza.


Poster presentation:

Alejandro Balbás (University Carlos III of Madrid/Spain)

Golden strategies in derivative markets

We will study portfolio selection problems in derivative markets by means of the maximization of the expected wealth and the simultaneous minimization of both scalar and vector risk functions.
In particular, we will extend and integrate in a single approach some of our former findings contained in the references below.
Static (or buy and hold), discrete time and continuous time dynamic approaches will be integrated in a unified setting, and both uncertainty-free and ambiguous frameworks will be addressed.
Several mathematical results will prove that the use of derivatives may allow traders to significantly outperform the (risk, return) couple of the underlying security, and this finding will be confirmed by both some numerical/computational experiments and some empirical tests affecting very important stock/commodity international indices.

References:
• Balbás, A., B. Balbás and R. Balbás, 2010. "CAPM and APT-like models with risk measures". Journal of Banking & Finance, 34, 1166–1174. DOI: 10.1016/j.jbankfin.2009.11.013
• Balbás, A., B. Balbás and R. Balbás, 2016. "Outperforming benchmarks with their derivatives: Theory and empirical evidence". Journal of Risk, 18, 4, 25-52. DOI: 10.21314/JOR.2016.328
• Balbás, A., B. Balbás and R. Balbás, 2016. "Good deals and benchmarks in robust portfolio selection". European Journal of Operational Research, 250, 666-678. DOI: 10.1016/j.ejor.2015.09.023
• Balbás, A., B. Balbás and R. Balbás, 2019. "Golden options in financial mathematics". Mathematics and Financial Economics, forthcoming. DOI: 10.1007/s11579-019-00240-2
• Balbás, A. and J.P. Charron, 2019. "VaR representation theorems in ambiguous frameworks". Applied Stochastic Models in Business and Industry, forthcoming. DOI: 10.1002/asmb.2425
• Balbás, A., J. Garrido and R. Ohkrati, 2019. "Good deal indices in asset pricing: Actuarial and financial implications". International Transactions in Operational Research, 26, 1475–1503. DOI: 10.1111/itor.12424


Poster presentation:

Erwinna Chendra (Parahyangan Catholic University)

Pricing employee stock options with a binomial method: a case study in Indonesia

Employee stock options (ESOs) are call options granted by companies to their employees on the stock of the companies. In addition to retaining employees who are highly motivated and have potential, ESOs can also be used as a means to align the employees' incentives with the interests of the company's shareholders and to motivate employees to work towards improving the company's earnings and management. As a form of non-cash compensation, ESOs are an efficient cost component for small companies competing with large companies. This paper discusses an ESO with a partial-average Asian style (the average is taken over a part of the option's life), which is prevalent in Indonesia. The price of the ESO with Asian style is determined by a binomial method that has been modified to meet the additional characteristics of that ESO. Numerical experiments are given to verify the robustness of the method and to analyze the sensitivity of the ESO price with respect to model parameters.
Joint work with Kuntjoro Adji Sidarto, Agus Sukmana and Chin Liem.
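For readers unfamiliar with the basic building block, a plain Cox-Ross-Rubinstein binomial tree for an American-style call, before any of the ESO-specific modifications such as the partial-average Asian feature discussed in the paper, can be sketched as follows (toy parameters):

import numpy as np

def crr_american_call(S0, K, r, sigma, T, n):
    # Cox-Ross-Rubinstein binomial price of an American call (toy building block)
    dt = T / n
    u = np.exp(sigma * np.sqrt(dt))
    d = 1 / u
    p = (np.exp(r * dt) - d) / (u - d)          # risk-neutral up probability
    disc = np.exp(-r * dt)

    # stock prices and payoffs at maturity
    S = S0 * u ** np.arange(n, -1, -1) * d ** np.arange(0, n + 1)
    V = np.maximum(S - K, 0.0)

    # backward induction with an early exercise check at every node
    for step in range(n, 0, -1):
        S = S[:step] * d                        # stock prices one time step earlier
        V = np.maximum(disc * (p * V[:step] + (1 - p) * V[1:step + 1]), S - K)
    return V[0]

print(f"CRR American call: {crr_american_call(100, 100, 0.05, 0.3, 2.0, 500):.4f}")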


Poster presentation:

Ewa Dziwok (University of Economics in Katowice)

Fund Transfer Pricing mechanism – different approaches to the reference yield’s construction

The paper investigates different approaches to the construction of a term structure of interest rates – the reference rates that are the basis of the Fund Transfer Pricing (FTP) mechanism. While many contributions in the literature focus on the FTP mechanism as a part of the asset liability management (ALM) process, without a closer look at the term structure construction, we identify features that allow measuring the behavior of the yield curve and detecting the consequences of the model choice. The results show that the arbitrarily chosen model of the reference yield has significant consequences for the risk management process of a financial institution.
The study provides a twofold contribution to the literature describing the FTP mechanism. First, it introduces a more complex approach to reference rate modeling inside the FTP mechanism and shows the consequences of the model choice for liquidity management. Moreover, it focuses on the construction of the reference curve itself and shows two different approaches, covering a parsimonious model as well as the Smith-Wilson one.
Joint work with Martin Wirth.


Poster presentation:

Alireza Fallahi (Amirkabir University of Technology)

Sufficient nonlinear forecasting using factor models

It is well known that in the forecasting of a target variable linear models reduce to single-index models, but it has been shown by Fan et al. that in nonlinear forecasting models it is possible to extract multiple indices from the factors. In this method, the sufficient factors are identified as the eigenvectors of the conditional covariance matrix of the factors given the target variable. The covariance matrix is estimated using a slicing method. The sufficient factors are those eigenvectors whose corresponding eigenvalues are significantly larger than zero. Several tests have been proposed in order to determine the optimal number L of sufficient factors. We propose a robust method based on resampling for determining L without imposing any assumption on the distribution of the noise.
Joint work with Erfan Salavati.


Poster presentation:

Pavel V. Gapeev (London School of Economics)

On the Fourier-Laplace transforms of first exit times for one-dimensional diffusions and their applications to models of stochastic volatility

We obtain the maximal values of the parameters of the Fourier-Laplace transforms of the first exit times of one-dimensional diffusions for which the values of the transforms are finite, depending on the parameters of the model and the distance between the given stopping levels. Our results are motivated by those of the paper by Friz, Gerhold, Gulisashvili, and Sturm (2011): "On refined volatility smile expansion in the Heston model", Quantitative Finance, 11:8, 1151-1164. We also consider applications of the obtained results to models of stochastic volatility.


Poster presentation:

Laura Garcia-Jorcano (Universidad de Castilla-La Mancha)

Traffic light system for systemic stress: TALIS-cube

For the purposes of financial stability, it is important to identify financial institutions that, when in distress, could have a large adverse impact on financial markets. This paper proposes a TrAffic LIght System for Systemic Stress (TALIS-cube) that provides a comprehensive color-based classification for grouping companies according to both the stress reaction level of the system when the company is in distress and the company's own level of stress. Our proposal builds on the Conditional Value-at-Risk (CoVaR) measure proposed by Girardi and Ergun (2013), extending it by introducing a Filtered Historical Simulation, preferred to the use of a specific parametric density for the innovations, and three different specifications for the evolution of the conditional covariance. In addition to the DCC with a GJR-GARCH specification for the marginal conditional variances, we use two other specifications: the BEKK and the Orthogonal GARCH model. TALIS-cube provides an evaluation of each company's risk, on a time-varying basis and conditional on the most recent financial company returns. TALIS-cube first evaluates two loss functions for each company, one at the system level, set to the system CoVaR, and one at the company level, set to the squared deviations between the returns of the financial company's equity and the corresponding Value-at-Risk. The two loss functions lead to the measurement of loss magnitudes, conditional on the stress states of the market and the company. If we consider company rankings based on loss magnitudes and compare them with the rankings provided by DeltaCoVaR, we find important differences, in particular for the insurance sector, suggesting that TALIS-cube provides some improvement in the identification of systemically important companies. When moving to TALIS-cube outcomes, we observe how our approach provides an intuitive way to identify systemically important companies, and how the risk level changes in a sensible way over time, showing a diffuse risk increase around crises and a risk decrease after stabilizing events. We also analyze the predictive power of the aggregated version of TALIS-cube, which is especially valuable for predicting the lower conditional quantiles of financial markets, suggesting the potential of the aggregate TALIS-cube to be used as an intuitive, effective and powerful early warning system for financial crises, which could be used by monetary authorities when designing their macroprudential policy strategies to achieve financial stability. TALIS-cube can be used to enhance the performance and robustness of current systemic risk measures. We provide an empirical analysis of the US market and several robustness checks evaluating different underlying models and different tuning parameters for the loss functions and company rankings.
Joint work with Massimiliano Caporin and Juan-Angel Jimenez-Martin.


Poster presentation:

Ivana Geček Tuđen (University of Zagreb)

Ruin probability for discrete risk processes

We study the discrete time risk process modeled by a skip-free random walk and derive results connected to the ruin probability and to crossing a fixed level for this type of process. We use a method relying on the classical ballot theorems to derive the results for crossing a fixed level and compare them to the results known for the continuous time version of the risk process. Further, we generalize this model by adding a perturbation and derive similar results using the skip-free structure of the process. Finally, we also derive a Pollaczek-Khinchine type formula for this generalized process, using the decomposition of the supremum of the dual process at some special instants of time.
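For comparison, the classical continuous-time Pollaczek-Khinchine formula that these discrete results parallel reads, in the Cramér-Lundberg model with premium rate c, claim intensity λ and claim size distribution F with mean μ (standard notation, quoted for context):

1 - \psi(u) = (1-\rho) \sum_{n=0}^{\infty} \rho^{n} F_I^{*n}(u), \qquad \rho = \frac{\lambda \mu}{c} < 1, \qquad F_I(x) = \frac{1}{\mu}\int_0^x \big(1-F(y)\big)\, dy,

where \psi(u) denotes the ruin probability for initial capital u and F_I^{*n} the n-fold convolution of the integrated-tail distribution.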


Poster presentation:

Darjus Hosszejni (WU Vienna)

Approaches toward the Bayesian estimation of the stochastic volatility model with leverage

The sampling efficiency of MCMC methods in Bayesian inference for stochastic volatility (SV) models is known to highly depend on the actual parameter values, and the effectiveness of samplers based on different parameterizations varies significantly. We derive novel algorithms for the centered and the non-centered parameterizations of the practically highly relevant SV model with leverage, where the return process and the innovations of the volatility process are allowed to correlate. Moreover, based on the idea of ancillarity-sufficiency interweaving (ASIS), we combine the resulting samplers in order to guarantee stable sampling efficiency irrespective of the baseline parameterization. We carry out an extensive comparison to existing sampling methods for this model using simulated as well as real world data.
Joint work with Gregor Kastner.


Poster presentation:

Verena Köck (WU Wien)

Option hedging in models with jumps

For pricing and hedging contingent claims, most of the literature is based on the assumption that the prices of the underlying assets are described by a diffusion process driven by Brownian motion. Various empirical studies show that such models are not realistic and might induce mispricing. Consequently, it is a natural approach to model stock prices as stochastic processes with discontinuous trajectories. One of the downsides of jump models is that market completeness is lost and therefore perfect hedges are not guaranteed. The consequence is that the most commonly used hedging methods, like delta or delta-gamma hedges, might not provide an acceptable performance. Therefore, quadratic approaches in the case where the underlying discounted price process is a local martingale were suggested by Cont, Tankov and Voltchkova (2005). Of particular interest is the question how these quadratic hedging results perform compared with methods used in practice, like delta or delta-gamma strategies, for different exotic options, e.g. barrier options.


Poster presentation:

Borys Koval (Vienna University of Economics and Business)

Estimating a time-varying parameter model with shrinkage for the Standard & Poor's 500 index

We use time-varying parameter (TVP) models to investigate in-sample and out-of-sample predictability for monthly returns of the Standard & Poor's 500 index (S&P 500). We consider an unrestricted TVP model with a discount factor for the variance process, similar to the model introduced by Dangl and Halling (2012). For the restricted TVP model, we follow the approach introduced by Bitto and Frühwirth-Schnatter (2019) to automatically shrink the time-varying coefficients to static ones. In addition, we differentiate between significant and insignificant coefficients if the model is overfitted. We achieve this by introducing shrinkage priors based on the hierarchical double gamma prior for the variance of the latent shocks driving the regression coefficients. Both models are tested using simulated data and real market data. Furthermore, we investigate the sensitivity of the estimation approach and the time span used to evaluate the model. To evaluate one-step-ahead predictive densities, Kalman mixture approximations are applied.
Joint work with Sylvia Frühwirth-Schnatter and Leopold Sögner.


Poster presentation:

Gleda Kutrolli (University of Milano-Bicocca)

Sensitivity of model uncertainty and its impact in derivative pricing

Recently, various case studies have indicated the importance of model risk in the derivative industry, and many other studies have emphasized the consequences of neglecting model uncertainty. Therefore, financial institutions have been proposing and developing new approaches to tackle and quantify it systematically and to use it as a decision aid for risk managers and regulators. Greek letters are used to represent how sensitive financial derivative prices are to changes in the parameters of the model chosen for pricing. In this paper we propose to study the sensitivity of model uncertainty, in order to understand better how changes in parameters affect it. We know that financial derivatives can be volatile and sensitive to factors such as changes in the price of the underlying asset. These attributes are components of risk that a trader needs to control if he or she is to manage the risk of a portfolio when switching the model. On the other hand, since the Greeks allow an investor to determine how much risk their portfolio is facing, we use this information to propose a new measure of model uncertainty that incorporates some information on model risk as a penalty function. This approach will enhance knowledge in the context of hedging strategies and enable investors to protect their investments from adverse changes within the market and the models. Some applications to standard short-dated FX derivatives are illustrated.


Poster presentation:

Djaffar Lessy (Université Cote d'Azur)

Markov chain model for microcredit leading to inclusion

We will present a new such model to analyse another important feature of microcredit, namely the mechanism that allows a beneficiary of several successive micro-loans to get access to regular credit, and thus be "included" (in regular banking). The Markov chain we consider as an example has four states: applicants A, two types of beneficiaries B and B+, and included I (in regular banking). We explain that if the production function of a loan is an increasing concave function of the amount of the loan, there are a minimal amount k and a maximal amount k+ between which production exceeds costs, and why the borrower may wish to successively be in the state A, then B (with a loan of k), then B+ (with a loan of k+), and finally the state I, where she gets a much better interest rate and thus, with production strictly larger than her costs, can generate profits. But the microfinance institution (MFI) offers a loan to an applicant only with probability γ (to avoid strategic default); a beneficiary B or B+ is able to repay her loan only with probability β or β+, respectively (and get a new, better loan), and otherwise she returns to the state of applicant (with probability 1−β or 1−β+), just as in the case where she does not repay her regular credit loan (with probability ε).
We compute the equilibrium distribution of this Markov chain, which gives an insight into the efficiency of microcredit as a way towards inclusion in regular banking. We also explain how this allows us to estimate the four parameters γ, β, β+ and ε from the actual distribution of the clients of the MFI. Then we compute the expected total (intertemporal) profit and show how this is related to the absence of strategic default.
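To illustrate the computation of the equilibrium distribution, here is a small Python sketch under one plausible reading of the transition mechanism described above (the exact transition matrix of the model may differ; the parameter values are placeholders):

import numpy as np

gamma, beta, beta_plus, eps = 0.6, 0.8, 0.85, 0.05  # placeholder parameters

# states: A (applicant), B, B+ (beneficiaries), I (included in regular banking)
# one plausible transition structure consistent with the description above
P = np.array([
    [1 - gamma,     gamma, 0.0,  0.0],        # A: receives a loan with probability gamma
    [1 - beta,      0.0,   beta, 0.0],        # B: repays (-> B+) or falls back to A
    [1 - beta_plus, 0.0,   0.0,  beta_plus],  # B+: repays (-> I) or falls back to A
    [eps,           0.0,   0.0,  1 - eps],    # I: defaults on regular credit with probability eps
])

# equilibrium (stationary) distribution: left eigenvector of P for eigenvalue 1
eigval, eigvec = np.linalg.eig(P.T)
pi = np.real(eigvec[:, np.argmin(np.abs(eigval - 1))])
pi /= pi.sum()
print(dict(zip(["A", "B", "B+", "I"], np.round(pi, 3))))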


Poster presentation:

Paul Felix Reiter (TU Dresden)

Feature engineering in univariate time series forecasting

Data are frequently sparse when forecasting univariate time series in economics and finance. Therefore it is mandatory to extract all information available. In this poster I will demonstrate the usefulness of feature engineering in time series forecasting. In particular, problems and possible solutions are discussed that may arise in this framework from the curse of dimensionality.


Poster presentation:

Anne Sumpf (Technische Universität Dresden)

Credit Risk with Credibility Theory: a distribution-free estimator for probability of default, value-at-risk and expected shortfall

Credibility theory is a distribution-free estimation technique from actuarial science. This paper gives an interpretation of credibility theory for credit risk and connects it with the Bernoulli mixture model; in this sense, credibility theory is a generalization of the Bernoulli mixture model. Based on credibility theory, we construct distribution-free estimators for the probability of default, expected loss, value-at-risk and expected shortfall. Finally, the estimators are illustrated by a numerical example.
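As background, the classical Bühlmann credibility estimator underlying such constructions has the form (standard actuarial notation, not specific to this paper):

\hat P = Z \bar X + (1-Z)\,\mu, \qquad Z = \frac{n}{n+k}, \qquad k = \frac{E[\mathrm{Var}(X\mid\Theta)]}{\mathrm{Var}(E[X\mid\Theta])},

where \bar X is the observed average over n periods, \mu the collective mean and \Theta the unobserved risk profile; the paper carries this distribution-free structure over to default indicators in a Bernoulli mixture setting in order to estimate the probability of default and the related risk measures.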


 

 


Gold Sponsors

Raiffeisen Bank International
BAWAG P.S.K.

Silver Sponsors

EAA-EnergieAllianz Austria
UNIQA Insurance Group AG
B&W Deloitte GmbH
Meyerthole Siems Kohlruss

Organisers

WU Vienna - Vienna University of Economics and Business
FAM @ TU Wien - Vienna University of Technology
Wolfgang Pauli Institute (WPI) Vienna
University of Vienna

Partners

Vienna Tourist Board / Wien Tourismus