Program
The final schedule will be online by the end of May at the latest.
Schedule Overview
Contributed Talks
More than 100 contributed talks have been accepted. Abstracts appear online once the registration fee has been paid:
Contributed talk:
Sühan Altay (TU Wien, Austria)
Momentum vs mean reversion: partial information approach to optimal investment strategies
This paper investigates a dynamic portfolio optimization problem that integrates short-term momentum and long-term mean reversion in stock returns. Extending the continuous-time framework of Koijen et al. (2009) to a partial information setting, we develop a model where investors cannot directly observe the mean-reversion component. Using Kalman filtering, we estimate the hidden mean-reversion factor, transforming the optimization problem into a complete information framework and deriving optimal investment strategies. A simulation study shows that the partial information strategy closely tracks the full information strategy. Applying the model to S&P 500 data (1991-2020), we find that filtered mean reversion is smoother than traditional proxies and that the out-of-sample portfolio (2015-2020) remains stable in normal markets while the value of information rises during crises.
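As an illustration of the filtering step described in this abstract, the sketch below runs a generic scalar Kalman filter for a latent AR(1) mean-reversion factor observed through noisy returns. The dynamics, parameter names (mu, phi, sigma_x, sigma_r) and values are placeholders chosen for the example, not the specification used in the talk.

```python
import numpy as np

def filter_mean_reversion(returns, mu, phi, sigma_x, sigma_r):
    """Scalar Kalman filter for a hidden AR(1) mean-reversion factor.

    Illustrative sketch only (model and names are placeholders):
      state:       x_t = phi * x_{t-1} + eps_t,  eps_t ~ N(0, sigma_x^2)
      observation: r_t = mu + x_t + eta_t,       eta_t ~ N(0, sigma_r^2)
    Returns filtered means and variances of x_t given r_1, ..., r_t.
    """
    x_hat = 0.0
    P = sigma_x**2 / (1.0 - phi**2)          # stationary prior variance
    means, variances = [], []
    for r in returns:
        # prediction step
        x_pred = phi * x_hat
        P_pred = phi**2 * P + sigma_x**2
        # update step with the newly observed return
        gain = P_pred / (P_pred + sigma_r**2)
        x_hat = x_pred + gain * (r - mu - x_pred)
        P = (1.0 - gain) * P_pred
        means.append(x_hat)
        variances.append(P)
    return np.array(means), np.array(variances)

# Example: filter simulated returns carrying a weak mean-reversion signal.
rng = np.random.default_rng(0)
x = np.zeros(250)
for t in range(1, 250):
    x[t] = 0.95 * x[t - 1] + 0.01 * rng.standard_normal()
r = 0.0003 + x + 0.01 * rng.standard_normal(250)
m, v = filter_mean_reversion(r, mu=0.0003, phi=0.95, sigma_x=0.01, sigma_r=0.01)
```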
Contributed talk:
Aleksandar Arandjelović (Vienna University of Economics and Business, Austria)
Algorithmic strategies in continuous-time hedging and stochastic integration
We develop a rigorous framework for continuous-time algorithmic trading strategies from the point of view of mathematical finance. To this end, we first establish a universal approximation theorem for neural networks on locally convex spaces with respect to topologies in Orlicz spaces. When the underlying sigma-algebra is generated by an (uncountable) family of random variables, we prove that neural networks - through functional representations - can approximate functions in these Orlicz spaces arbitrarily well. Our main result then represents algorithmic strategies as simple predictable processes to establish their approximation capabilities in spaces of stochastic (integral) processes. As applications, we prove that algorithmic strategies can approximate mean-variance optimal hedging strategies arbitrarily well, and we establish a no free lunch with vanishing risk condition for algorithmic strategies.
Joint work with Uwe Schmock.
Contributed talk:
Elie Attal (Centre de Mathématiques Appliquées, École Polytechnique, France)
From Hyper-Roughness to Jumps as H goes to -1/2
We investigate the weak limit of the hyper-rough square root process as the Hurst index H goes to -1/2. This limit corresponds to the fractional kernel losing integrability. We establish the joint convergence of the couple (X,M), where X is the hyper-rough process and M is the associated martingale, to a couple of Lévy jump processes of the Inverse Gaussian type. This unveils the existence of a continuum between continuous hyper-rough processes and jump processes, as a function of the Hurst index. Since we prove the convergence of continuous to discontinuous processes, the usual Skorokhod J1 topology is not suitable for our problem. Instead, we obtain the weak convergence in the Skorokhod M1 topology for X and in the non-Skorokhod S topology for M.
Joint work with Eduardo Abi Jaber and Mathieu Rosenbaum.
Contributed talk:
Esmaeil Babaei (Manchester Metropolitan University, United Kingdom)
Asset pricing and hedging in financial markets with fixed and proportional transaction costs
This paper extends the hedging and pricing principles based on von Neumann-Gale dynamical systems in several key directions. It incorporates both fixed and proportional transaction costs, allows for short selling under specific constraints, including margin requirements, and accounts for assets that pay dividends, with differing rates for long and short positions. The paper introduces general hedging criteria in terms of consistent discount factors and valuation systems, extending the concept of an equivalent martingale measure. These advancements provide a more comprehensive framework for asset pricing and risk management in dynamic markets.
Contributed talk:
Niccolò Bagnoli (Universitat Ramon Llull, ESADE Business School, Spain)
Recovering the physical measure from market data: A nonparametric approach with economic constraints
We propose a nonparametric approach to recovering the physical measure from market data without directly estimating the pricing kernel. Instead, we begin with the empirical risk-neutral measure, extracted from option prices, and refine it by projecting it onto a set of economically plausible densities, ensuring adherence to fundamental economic constraints. By leveraging optimal transport theory, precisely the Wasserstein metric, our method preserves the structural relationships between probability distributions, ensuring theoretical consistency with observed market data while offering a flexible reconstruction of the physical measure.
Joint work with Carlo Sala.
Contributed talk:
Laura Ballotta (Bayes Business School, City St George's, University of London, United Kingdom)
The term structure of implied correlations between S&P and VIX markets
We develop a joint model for the S&P500 and the VIX indices with the aim of extracting forward-looking information on the correlation between the two markets. We achieve this by building the model on time-changed Lévy processes, deriving closed analytical expressions for relevant quantities directly from the joint characteristic function, and exploiting the market quotes of options on both indices. We perform a piecewise joint calibration to the option prices to ensure the highest level of precision within the limits of the availability of quotes in the dataset and their liquidity. Using the calibrated parameters, we are able to quantify the leverage effect along the term structure of the VIX options and corresponding VIX futures. We illustrate the model using market data on S&P500 options and both futures and options on the VIX.
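For readers who want to see how a characteristic function feeds into option quotes, one standard and generic route is Fourier pricing in the style of Carr and Madan; the display below recalls that textbook formula with notation chosen here, and is not claimed to be the expressions derived in the paper.

$$C(k) \;=\; \frac{e^{-\alpha k}}{\pi}\int_0^{\infty}\mathrm{Re}\!\left[e^{-\mathrm{i}vk}\,\frac{e^{-rT}\,\phi_T\!\big(v-\mathrm{i}(\alpha+1)\big)}{\alpha^{2}+\alpha-v^{2}+\mathrm{i}(2\alpha+1)v}\right]\mathrm{d}v,$$

where $k$ is the log-strike, $T$ the maturity, $\phi_T$ the characteristic function of the log-price at $T$, and $\alpha>0$ a damping parameter.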
Joint work with Ernst Eberlein and Gregory Rayee.
Contributed talk:
Fabio Baschetti (University of Verona, Italy)
Deep joint SPX/VIX calibration of the 4-factor Markov PDV model
Joint calibration to SPX and VIX market data can be computationally burdensome, especially when the standard resort for pricing volatility derivatives is nested Monte Carlo simulation.
This is the case for the four-factor Markov Path-Dependent Volatility model of Guyon and Lekeufack [Volatility is (mostly) path-dependent. Quant. Finance, 2023]. A crucial boost to the joint calibration problem comes from Gazzani and Guyon [Pricing and calibration in the 4-factor path-dependent volatility model, (to appear in) Quant. Finance, 2025], who train a neural network to learn the model VIX as a random variable. The network replaces the inner simulation, and the pricing of VIX derivatives reduces to a standard Monte Carlo computation. This is a step in the right direction, with progress still unfolding. In this talk, we take a further step and train neural networks to learn SPX implied volatilities, VIX futures, and call option prices as functions of model parameters and contract specifications. As a result, pricing boils down to a (series of) matrix-vector products that run in real time, while calibration only takes a few seconds.
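To make concrete what "pricing boils down to a (series of) matrix-vector products" means in practice, here is a minimal sketch of evaluating a trained feed-forward network; the architecture, input layout and (random, untrained) weights are placeholders for illustration, not the networks trained by the authors.

```python
import numpy as np

def network_output(x, weights, biases):
    """Evaluate a feed-forward network as a chain of matrix-vector products.

    x collects model parameters and contract specifications (hypothetical
    layout); the output could be, e.g., an implied volatility or a futures
    price, depending on which trained network is being evaluated.
    """
    h = np.asarray(x, dtype=float)
    for W, b in zip(weights[:-1], biases[:-1]):
        h = np.maximum(W @ h + b, 0.0)        # hidden layer: affine map + ReLU
    return weights[-1] @ h + biases[-1]        # linear read-out layer

# Tiny example with random (untrained) weights, 8 inputs -> 1 output.
rng = np.random.default_rng(1)
sizes = [8, 32, 32, 1]
weights = [rng.standard_normal((m, n)) * 0.1 for n, m in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(m) for m in sizes[1:]]
print(network_output(rng.standard_normal(8), weights, biases))
```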
Joint work with Giacomo Bormetti and Pietro Rossi.
Contributed talk:
Dirk Becherer (Humboldt University Berlin, Germany)
Common Noise by Random Measures: Mean-Field Equilibria for competitive Investment and Hedging
We study mean-field games where common noise dynamics are described by integer-valued random measures, for instance Poisson random measures, in addition to Brownian motions. In such a framework, we describe Nash equilibria for mean-field portfolio games of both optimal investment and hedging under relative performance concerns with respect to exponential (CARA) utility preferences. Agents have independent individual risk aversions, competition weights and initial capital endowments, whereas their liabilities are described by contingent claims which can depend on both common and idiosyncratic risk factors. Liabilities may incorporate, e.g., compound Poisson-like jump risks and can only be hedged partially by trading in a common but incomplete financial market, in which prices of risky assets evolve as Itô processes. Mean-field equilibria are fully characterized by solutions to suitable McKean-Vlasov forward-backward SDEs with jumps, for which we prove existence and uniqueness of solutions without restricting competition weights to be small. We obtain an explicit decomposition of the mean-field equilibrium strategy into three components, related to the investment part, the hedging part and the mean-field interaction part. I will moreover show numerical examples and explain how related results for Nash equilibria among finitely many agents are obtained analogously by the same change-of-measure method.
Joint work with Stefanie Hesse (preprint arXiv:2408.01175).
Contributed talk:
Mohamed Ben Ghalleb (University of Twente, The Netherlands)
Portfolio selection: stochastic dominance efficiency
Stochastic dominance is a fundamental method in decision theory for comparing uncertain outcomes, particularly in the context of investment decision-making. It enables the comparison of random variables with differing probability distributions and yields results that align with the principles of maximum expected utility theory. Consequently, when an investor employs stochastic dominance, their choices remain consistent with investment behavior as defined by expected utility theory. These properties are particularly relevant in portfolio theory; however, classical portfolio selection methods do not always adhere to them (Ogryczak & Ruszczynski, 1999).
Despite its theoretical advantages, stochastic dominance remains relatively underutilized in portfolio optimization due to a significant limitation: it only allows for pairwise comparisons. In practice, portfolio managers seek to identify the optimal portfolios from the entire set of feasible options. Achieving this requires an extensive number of pairwise comparisons to determine the efficient set, the collection of portfolios that are optimal under stochastic dominance criteria. This approach, while direct, is computationally intensive and impractical for large portfolio sets.
To address this, we leverage the equivalence between stochastic dominance-based portfolio comparisons and risk assessments using Conditional Value at Risk and Entropic Value at Risk (Ben Ghalleb, Spierdijk, and Roorda, 2024) to characterize the efficient set for orders greater than two. Utilizing this characterization, we analyze key properties of the efficient set, such as closedness and connectivity, and leverage these insights to develop an algorithm that efficiently identifies all optimal portfolio compositions within the feasible set, outperforming the conventional direct search method.
However, identifying the efficient set, while necessary, is not a sufficient condition for making optimal investment choices. The reason is that the efficient set at a given order includes all efficient sets of higher orders. At very high orders, investors exhibit extreme risk aversion, leading to overly conservative decision-making (Eeckhoudt & Schlesinger, 2006). To refine investment choices, we introduce the concept of the efficient region, a subset of the efficient set that facilitates more balanced decision-making. By focusing on portfolios within the efficient region, investors can optimize their choices while avoiding excessively restrictive strategies.
Joint work with Laura Spierdijk and Berend Roorda.
Contributed talk:
Edoardo Berton (Università Cattolica del Sacro Cuore, Italy)
On consistency of optimal portfolio choice for state-dependent exponential utilities
In an arbitrage-free simple market, we demonstrate that for a class of state-dependent exponential utilities, there exists a unique prediction of the random risk aversion that ensures the consistency of optimal strategies across any time horizon. Our solution aligns with the theory of forward performances, with the added distinction of identifying, among the infinitely many possible solutions, the one for which the profile remains optimal at all times for the market-adjusted system of preferences adopted.
Joint work with Marzia De Donno and Marco Maggis.
Contributed talk:
Ole Cañadas (Dublin City University, Ireland)
Limit theorems for stochastic Volterra processes
In this talk, we investigate the long-time behaviour of Hilbert space-valued stochastic Volterra processes. For Markovian systems, such problems are typically analyzed using various methods that leverage the Markov property. However, solutions to stochastic Volterra equations are generally neither Markov processes nor semimartingales, making their asymptotic analysis both intriguing and challenging. To overcome the absence of the Markov property, we provide a flexible framework that allows us to implement Hilbert space-valued Markovian lifts. In particular, it encompasses fractional and non-fractional kernels commonly used in Mathematical Finance and Physics. As an application of our framework, we study the asymptotic behaviour of Volterra processes. To this end, we provide a full characterization of corresponding invariant measures, derive a law of large numbers, and show that in certain cases, a central limit theorem with the usual Gaussian domain of attraction holds.
Joint work with Luigi Amedeo Bianchi, Stefano Bonaccorsi and Martin Friesen.
Contributed talk:
Laurence Carassus (CentraleSupélec, France)
Quasi-sure essential supremum, Knightian uncertainty and Absence of instantaneous profit
When uncertainty is modelled by a non-dominated and non-compact set of probability measures, a notion of essential supremum for a family of real-valued functions is developed in terms of upper semi-analytic functions. We show how the properties postulated on the initial functions carry over to their quasi-sure essential supremum. We propose various applications to financial problems with frictions. We analyse super-replication and prove a bi-dual characterization of the super-hedging cost. We also study a weak no-arbitrage condition called absence of instantaneous profit (AIP) under which prices are finite. This requires new results on the aggregation of quasi-sure statements.
Contributed talk:
Ranu Castaneda Medina (University of Alberta, Canada)
A dynamic model for open banking
First introduced in the United Kingdom, open banking has been adopted or explored in various countries worldwide. Recent efforts in the USA, including proposed rules by the Consumer Financial Protection Bureau, signal a growing commitment to fostering competition and data portability in financial services. Open banking allows customers to share their financial data with third-party providers, such as fintech companies, thereby increasing competition and expanding access to financial services. While this shift promises benefits such as improved product offerings and expanded financial inclusion, it also introduces significant technical, social, and economic risks. We develop a continuous-time model using a search-and-match framework to analyze borrower heterogeneity and competition between banks and fintechs. Our preliminary analysis shows that open banking reshapes financial services by leveling the playing field between banks and fintechs. However, while it enhances competition, it could also over-empower fintechs, potentially reversing the benefits to borrowers.
Joint work with Christoph Frei.
Contributed talk:
Andreas Celary (Vienna University of Economics and Business, Austria)
Finite-dimensional linear Discount Models - from Consistency to Calibration
In this project we present the theory of the bond discount, first introduced in arXiv:2306.16871. We show that in an HJM setting, a model for the discount curve fulfilling no-arbitrage remains on a finite-dimensional manifold of curves for any tangential volatility parameter if and only if it is linear.
Assuming a finite-dimensional linear model, we make use of the theory of reproducing kernel Hilbert spaces and derive a rich family of kernels which generate consistent curve spaces suitable for fitting the model to financial data. Finally, using the powerful representer theorem for reproducing kernels, we reduce the infinite-dimensional curve optimisation problem to a finite-dimensional ridge regression and test our methodology on real-market treasury data. We verify our approach with a thorough statistical analysis.
Joint work with Zehra Eksi-Altay, Paul Eisenberg and Damir Filipovic.
Contributed talk:
Konstantinos Chatziandreou (University of Amsterdam, The Netherlands)
Semi-static hedging of Volumetric Risk in Energy Markets - An application to Green PPAs and Quanto Options
Power purchase agreements (PPAs) play a pivotal role in the green energy transition by locking in energy prices for uncertain future production from renewable plants, often for up to 10 or 15 years. The value of a PPA is largely determined by the joint distribution of future production rates and forward energy prices. In this talk, we present quantitative methods for pricing and hedging PPAs. For this, we develop a coupled model for forward electricity prices and renewable power production indices using an HJM formulation. The use of a Wishart-type stochastic covariance model allows us to capture the complex covariance structure between future production rates and forward energy prices. We derive semi-closed-form solutions for the (semi-static) variance-optimal price and hedge and analyse the effectiveness of this integrated approach in mitigating the volume and price risks intrinsic to PPAs, compared to a Delta-hedging strategy and a fully static one consisting of a portfolio of power and weather derivatives.
Joint work with Sven Karbach.
Contributed talk:
Minshuo Chen (Northwestern University, United States of America)
Diffusion models for high-dimensional return generation and factor recovery
Financial scenario simulation is crucial for risk management, portfolio optimization, and stress testing, yet it remains challenging under small-data constraints. Effective simulation methods must integrate economic structure with advanced generative learning techniques to ensure robustness and interpretability. We propose a principled approach to financial scenario simulation based on diffusion models that incorporate the factor structure of the data. Grounded in asset pricing theory yet novel in the generative diffusion model literature, our method addresses the challenges of the curse of dimensionality and data scarcity in financial markets. More specifically, by fully leveraging the data structure, we decompose the score function, a key component of diffusion models, using time-varying orthogonal projections. Incorporating this decomposition into the neural network architecture design, we establish non-asymptotic error bounds for score estimation at $\widetilde{\mathcal{O}}(d^{5/2}n^{-\frac{2}{k+5}})$ and for the generated distribution at $\widetilde{\mathcal{O}}(d^{5/4}n^{-\frac{1}{2(k+5)}})$, where $d$ is the number of assets and $k$ is the number of (unknown) factors, surpassing the dimension-dependent limits in the classic diffusion model literature. This work presents the first theoretical integration of factor models with diffusion models, bridging econometrics with generative AI for robust financial simulation. Numerical tests on synthetic data demonstrate that our method outperforms classical empirical and shrinkage techniques in small-data regimes.
Joint work with Renyuan Xu, Yumin Xu and Ruixun Zhang.
Contributed talk:
Tahir Choulli (University of Alberta, Canada)
Conditional essential supremum for informational models and super-hedging-pricing formulas for vulnerable claims
We consider the discrete-time setting and the market model described by the triplet (S,F,T). Herein F is the "public" flow of information which is available to all agents over time, S is the discounted price process of d tradable assets, and T is an arbitrary random time whose occurrence might not be observable via F. As a random time cannot be observed before its occurrence, the adequate larger flow G – which incorporates F and makes T an observable random time – is the progressive enlargement of F with T. This framework covers the credit risk theory setting where T represents the default time of a firm or client, the life insurance setting where T models the death time of an insured, and other areas of finance. Our first principal contribution consists of singling out the impacts of a change of prior and of the additional information represented by T on the conditional essential supremum operator. These novel mathematical results are vital tools to address the super-hedging pricing problem for vulnerable claims, which are claims that depend in some way on the occurrence or not of T. This constitutes our second major contribution. Herein, on the one hand, we address the existence of the super-hedging price as a price, which is intimately related to the absence of Immediate-Profit arbitrage (IP hereafter for short). On the other hand, we focus on elaborating an explicit super-hedging formula and show how it varies with the benefit policy. Furthermore, we show – with high precision – how the set of super-hedging prices expands under the stochasticity of T. Finally, we quantify the impact of the various risks borne in the stochasticity of T on the super-hedging price of any vulnerable claim. This is achieved by decomposing its dynamics and singling out explicitly the various informational risks in the dynamics of the price process. This latter point is vital when the insurance securitization process is adopted instead of the classical reinsurance process, in order to amortize mortality and/or longevity risks.
This talk is based on the following joint work with Emmanuel Lepinette (Paris-Dauphine, France):
T. Choulli and E. Lepinette (2024): Super-hedging-pricing formulas and Immediate-Profit arbitrage for market models under random horizon. A version of the paper is available at: arXiv:2401.05713.
Contributed talk:
Fabio Colpo (TU Wien, Austria)
Optimal control for an Ornstein-Uhlenbeck surplus
We consider an insurance company whose surplus follows an Ornstein-Uhlenbeck (OU) process driven by a standard Brownian motion. The company pays dividends to its shareholders and seeks to maximise the expected value of the future discounted dividends. Late dividend payments are penalised/rewarded not only through the usual discounting, but through an additional exponential factor.
We find the optimal strategy for the case of mean-reverting and non-mean-reverting OU processes and illustrate our findings by a numerical example.
Joint work with Julia Eisenberg.
Contributed talk:
Christoph Czichowsky (LSE, United Kingdom)
Duality theory for utility maximisation in Volterra kernel models for transient price impact
Incorporating the impact of past trades on future prices is crucial to understanding the profitability of trading strategies. Recently, Volterra kernels have been proposed to obtain tractable models that capture this transient feature of price impact in optimal execution problems. In this talk, we consider expected utility maximisation in such Volterra kernel propagator models for transient price impact. We solve this problem by convex duality, establishing the solution to a suitable dual problem. For this, we identify an appropriate class of dual variables and develop a novel super-replication theorem. Despite the infinite dimensionality of the state variables of the optimal control problem, our approach allows us to recover the tractability of linear-quadratic optimal execution in non-linear utility maximisation problems.
The talk is based on joint work with Jun Cheng.
Contributed talk:
James Luke Dalby (King's College London, United Kingdom)
Collective pensions in the presence of systematic longevity risk
There is growing interest in collective pension investment in the UK, and the first collective defined contribution fund was launched in October 2024. In this talk, we will discuss a theoretically optimal way to manage a collective fund by creating an internal insurance market which is required to clear when everyone behaves optimally. We determine the equilibrium price of these contracts and the benefits they may bring when an insurance market is created to hedge systematic longevity risk.
This talk is joint work with the Pension Policy Institute and is funded by Nuffield grant FR-000024058.
Joint work with John Armstrong.
Contributed talk:
Josha Dekker (University of Amsterdam, The Netherlands)
Optimal decision-making with randomly arriving decision moments
Decision-making problems with randomly arriving decision moments occur naturally. Financial situations in which these phenomena may emerge include, e.g., asset-liquidity spirals and optimal hedging in illiquid markets. We develop algorithms and methods to analyze such problems in continuous-time, finite-horizon settings, under mild conditions on the arrival process of decision moments.
Operating on the random timescale implied by the decision moments, we obtain a discrete-time, infinite-horizon problem. This problem may be solved directly or suitably truncated to a finite-horizon problem. We develop stochastic simulation-and-regression algorithms for both cases, covering both optimal stopping and optimal control. In this talk, we focus on optimal control problems, for which we provide a primal-dual algorithm that does not require knowledge of the transition probabilities, as these may not be readily available for such problems. To this end, we present a new martingale representation result.
We then apply our methods to study an optimal hedging question in an illiquid market, where we explore the effect of randomly arriving rebalancing moments on the optimal hedging positions.
During the talk, we also comment on the similarities and differences with the problem of optimal stopping, which we have studied in previous work.
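As a point of reference for the simulation-and-regression idea mentioned above, the sketch below shows one generic regression step: continuation values at a decision moment are approximated by projecting realized downstream values onto basis functions of the state. This is in the spirit of least-squares Monte Carlo under illustrative assumptions and is not the authors' primal-dual algorithm.

```python
import numpy as np

def regress_continuation(states, future_values, degree=2):
    """One generic simulation-and-regression step (least-squares Monte Carlo
    flavour, illustrative only): fit a polynomial in the state to realized
    downstream values and return a callable continuation-value estimate."""
    X = np.vander(states, N=degree + 1, increasing=True)   # columns 1, s, s^2, ...
    coeffs, *_ = np.linalg.lstsq(X, future_values, rcond=None)
    return lambda s: np.vander(np.atleast_1d(s), N=degree + 1,
                               increasing=True) @ coeffs

# Example: simulated states and values observed one decision moment ahead.
rng = np.random.default_rng(2)
s = rng.normal(size=10_000)
v = np.maximum(1.0 - s, 0.0) + 0.1 * rng.standard_normal(10_000)
cont = regress_continuation(s, v)
print(cont(0.0))   # estimated continuation value at state 0
```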
Joint work with Roger Laeven, Michel Vellekoop and John Schoenmakers.
Contributed talk:
Indira Dhar (Rheinland-Pfälzische Technische Universität Kaiserslautern (RPTU), Germany)
Stochastic control problems in the dynamic Nelson-Siegel framework
In this paper, we consider a stochastic model of the dynamic Nelson-Siegel yield curve. We then employ stochastic control theory to solve portfolio optimization problems under this yield framework and compare the results with [1]. Furthermore, we extend this framework to include the case of rolling horizon bonds.
Joint work with Ralf Korn.
References:
[1] Korn, R., and Kraft, H. (2002). A stochastic control approach to portfolio problems with stochastic interest rates. SIAM Journal on Control and Optimization 40(4), pp. 1250–1269.
Contributed talk:
Alessandro Doldi (Università degli Studi di Milano, Italy)
Collective arbitrage and dynamic risk measures
We extend the framework introduced in "Collective Arbitrage and the Value of Cooperation" by F. Biagini, A. Doldi, J.-P. Fouque, M. Frittelli, and T. Meyer-Brandis (Finance and Stochastics, forthcoming, 2025) in order to analyze collective dynamic risk measures. In segmented markets, we explore the implications of cooperation on dynamic risk measurement, focusing particularly on aggregation and time consistency.
Joint work with Marco Frittelli and Emanuela Rosazza Gianin.
Contributed talk:
David-Jacob Economides (University of Piraeus, Greece)
Perpetual American Options in a jump-diffusion model under Poisson observation times
We investigate the problem of pricing perpetual American put and call options when exercise opportunities occur at Poissonian inspection times. Employing an exponential Lévy model, we specifically examine a jump diffusion with two-sided jumps. We present key identities that express the option's payoff and exercise probability in terms of the undershoot/overshoot in the put/call scenarios. Additionally, we derive explicit formulas for the option's payoff, exercise probability, and optimal boundary under a double exponential jump-diffusion process. Valuation under the risk-neutral probability measure is provided, along with asymptotic formulas recovering well-established results from the continuous-time model. Our numerical examples illustrate the sensitivity of pricing outcomes to the inspection intensity.
Joint work with Michael V. Boutsikas.
Contributed talk:
Matteo Ferrari (University of Amsterdam, The Netherlands)
Measuring financial resilience using backward stochastic differential equations
We propose the resilience rate as a measure of financial resilience that captures the rate at which a dynamic risk evaluation recovers, i.e., bounces back, after the risk-acceptance set is breached. We develop the associated stochastic calculus by establishing representation theorems of a suitable time-derivative of solutions to backward stochastic differential equations (BSDEs) evaluated at stopping times. These results reveal that our resilience rate can be represented as an expectation of the generator of a BSDE. We also introduce resilience-acceptance sets and study their properties in relation to both the resilience rate and the risk measure. The definition of resilience rate is generalized to the case of dynamic risk measures induced by BSDEs with jumps. We illustrate our results in several examples.
Joint work with Roger J. A. Laeven, Emanuela Rosazza Gianin and Marco Zullino.
Contributed talk:
Marco Frittelli (Milano University, Italy)
Collective Free Lunch and the FTAP
This paper extends the discrete-time framework of collective arbitrage and super-replication, introduced by Biagini, Doldi, Fouque, Frittelli, and Meyer-Brandis in their forthcoming publication "Collective Arbitrage and the Value of Cooperation" (Finance and Stochastics, 2025). That work established a multi-agent market model where agents operate across submarkets and collaborate via risk exchange mechanisms. Building upon these foundations, we generalize the theory to a continuous-time setting with general semimartingale markets. We introduce the concept of a Collective Free Lunch and examine the implications of the No Collective Free Lunch assumption. Furthermore, we establish the corresponding Fundamental Theorem of Asset Pricing and pricing-hedging duality for these markets. We will also discuss developments concerning collective replication and market completeness within this collective framework.
Contributed talk:
Gianluca Fusai (Bayes Business School, City St George's, University of London, United Kingdom)
Monotonic transformation, implied stock price process and market consistent pricing of derivatives contracts
The paper expands on Fusai (2024) to identify a diffusion process for the stock price that aligns with observed option prices at specific maturities. This is achieved by modeling stock returns as a monotonic generalized transformation, termed g-normal, of an arithmetic Brownian motion (ABM). With the function g defined, deriving the dynamics of log-returns becomes straightforward using Itô's lemma, an improvement over the approach by Dupire (1994). This paper further develops this framework by demonstrating its full implementation in pricing popular exotic products and compares the proposed approach with other popular approaches in the literature, such as stochastic volatility processes.
Joint work with Giovanni Longo.
Contributed talk:
Pavel V. Gapeev (LSE, United Kingdom)
Pricing of pair trading strategies in models with maxima and minima of mean-reverting underlying risky asset prices
We derive closed-form solutions to the optimal double-stopping problems related to the problems of autonomous trading and optimal prediction in models with maxima and minima of mean-reverting geometric and arithmetic Ornstein-Uhlenbeck processes (Black-Karasinski and Vasicek models for the underlying risky asset prices). It is shown that the optimal trading times are the first hitting times by the asset price processes of either lower or upper stochastic boundaries depending on the running values of the associated running maximum and minimum price processes. The necessarily three-dimensional double-optimal stopping problems are reduced to the equivalent (double-parametrised) coupled ordinary free-boundary problems which characterise the optimal boundaries as the maximal and minimal solutions to the systems of first-order (nonlinear) ordinary differential equations. We also consider the case in which the asset prices are modelled by time-inhomogeneous diffusions being linear (Kalman) filtering estimates of unobservable Ornstein-Uhlenbeck processes. In that case, the time-inhomogeneous optimal double-stopping problems with payoffs representing linear functions of the asset prices are reduced to the equivalent parabolic-type free-boundary problems which characterise the optimal time-dependent boundaries as unique solutions to the (nonlinear) Fredholm-type integral equations.
Contributed talk:
Guido Gazzani (University of Verona, Italy)
Polynomial path-dependent volatility models
Recently, path-dependent volatility models have been receiving growing attention from both researchers and practitioners. Most of these models reproduce a wide range of stylized facts that are well known to underlie the volatility process. We build on the work of Guyon and Lekeufack (2023), where, after an empirical study comparing different models, a simple parametrization of the volatility process was introduced. The main features consist of exponentially weighted past returns and past squared returns. Although simple, the former model, which is homogeneous in volatility, comes with several computational challenges that can only be overcome using machine-learning techniques (see Gazzani and Guyon (2025)). Here, we take a step back in favor of mathematical tractability, dealing with a similar model that is homogeneous in variance. We call this class polynomial path-dependent volatility (PDV) models. We tackle existence, uniqueness, and absence of explosion of the SDE and we derive conditions for the non-negativity/positivity of the variance process. Calibration to real market data benefits from the fast computation of the VIX via the moment formula, and an empirical study demonstrates strong performance under the physical measure. Finally, we derive an explicit expression for the forward variance, a result that lays the foundation for studying the dynamical properties of polynomial PDV models.
Joint work with Fabio Baschetti and Julien Guyon.
Contributed talk:
Alessandro Gnoatto (University of Verona, Italy)
When defaults cannot be hedged: an actuarial approach to XVA calculations via local risk-minimization
We consider the pricing and hedging of counterparty credit risk and funding when there is no possibility to hedge the jump to default of either the bank or the counterparty. This represents the situation which is most often encountered in practice, due to the absence of quoted corporate bonds or CDS contracts written on the counterparty and the difficulty for the bank to buy/sell protection on her own default. We apply local risk-minimization to find the optimal strategy and compute it via a BSDE.
Joint work with Francesca Biagini and Katharina Oberpriller.
Contributed talk:
Luca Gonzato (University of Vienna, Austria)
Polynomial stochastic volatility models: quasi-Bayesian estimation and empirical performances
Polynomial processes are a class of Markov processes for which the calculation of (mixed) moments up to a fixed order only requires the computation of matrix exponentials. Although polynomial models have been considered for option pricing purposes, there are currently no efficient methods available for their econometric estimation using time series of derivatives data. In fact, the literature has devoted much more effort to affine models (which are a special case of polynomial models), since they are mathematically and computationally more tractable. In this paper, we address this gap by presenting an econometric estimation methodology for polynomial option pricing models. The methodology rests on four main building blocks. First, we demonstrate that the cumulants of the log-returns can be expressed as a polynomial function of latent states. Second, we use this fact, as well as the fact that risk-neutral cumulants can be easily calculated from quoted option prices, to cast polynomial option pricing models in state-space form, with market cumulants serving as the observed variable. Third, we use recent literature to calculate market-implied physical cumulants, resulting in an effective identification of the risk premia. Fourth, we use Sequential Monte Carlo samplers to sample from the quasi-posterior of the model parameters. The result is a fast quasi-Bayesian econometric estimation methodology that works for general polynomial option pricing models. Finally, we evaluate the econometric performance of polynomial option pricing models against well-established affine models. Our results show that polynomial stochastic volatility models largely outperform affine stochastic volatility models.
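For context, the moment property referred to at the start of this abstract is usually written as follows (generic notation chosen here, not necessarily the paper's): if the extended generator of the polynomial process maps polynomials of degree at most n to themselves, and $G_n$ denotes its matrix representation on a basis $H(x)=(h_1(x),\dots,h_N(x))$ of that space, then conditional moments follow from a single matrix exponential,

$$\mathbb{E}\big[\,p(X_T)\,\big|\,\mathcal{F}_t\,\big] \;=\; H(X_t)^{\top}\, e^{(T-t)\,G_n}\,\vec{p}, \qquad t\le T,$$

where $\vec{p}$ is the coordinate vector of the polynomial $p$ with respect to the basis $H$.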
Joint work with Riccardo Brignone.
Contributed talk:
Julien Guyon (École nationale des ponts et chaussées, Institut polytechnique de Paris & NYU, France)
Bergomi models with volatility memory
A family of stochastic volatility models with memory in the volatility process is presented as an extension of Bergomi models. The volatility feedback accounts for two important features of equity markets that the classical time-homogeneous Bergomi models fail to capture: (a) the time-asymmetry of large positive VIX spikes, and (b) the positive VIX skews. Inspired by the path-dependent volatility models of Chicheportiche and Bouchaud (2014) and Guyon and Lekeufack (2023) while aiming to keep the forward variance process as explicit as possible, we consider models where the instantaneous variance is a function of Ornstein-Uhlenbeck factors – which, like in Bergomi models, capture the market trend – plus a weighted average of past instantaneous variances, which captures the volatility feedback. Convex combinations of exponential feedback weights yield handy Markov models. An expansion in small volatility of volatility, along the lines of the Bergomi-Guyon expansion, provides an approximation of the smile of model implied volatilities. Extensive numerical experiments are conducted that illustrate the properties of the models, compared to classical Bergomi models, and assess the accuracy of the smile expansion for market-calibrated parameters.
Joint work with Jules Delemotte and Stefano De Marco.
Contributed talk:
Paul Peter Hager (University of Vienna, Austria)
Signatures in stochastic control: open loop and beyond
We explore recent advances in stochastic optimal control beyond the Markovian setting, using path signatures in several aspects. Beyond the recent open-loop parameterization approach from [Bayer et al., "Stochastic Control with Signatures", 2024] and its accompanying Monte Carlo method, we discuss recent progress in treating the closed-loop and path-dependent framework. In another direction, we explore a duality framework where signatures emerge through a functional Taylor expansion of the pathwise penalty term. Together, these methods provide new insights into both the primal and dual formulations of stochastic control, with applications in non-Markovian Mathematical Finance modeling.
Contributed talk:
Yevhen Havrylenko (Ulm University, Germany)
Equilibrium control theory for Kihlstrom–Mirman preferences in continuous time
In intertemporal settings, the multi-attribute utility theory of Kihlstrom and Mirman suggests the application of a concave transform of the lifetime utility index. This construction, while allowing time and risk attitudes to be separated, leads to dynamically inconsistent preferences. We address this issue in a game-theoretic sense by formalizing an equilibrium control theory for continuous-time Markov processes. In these terms, we describe the equilibrium strategy and value function as the solution of an extended Hamilton-Jacobi-Bellman system of partial differential equations. We verify that (the solution of) this system is a sufficient condition for an equilibrium and examine some of its novel features. A consumption-investment problem for an agent with CRRA-CES utility showcases our approach.
Joint work with Luca De Gennaro Aquino, Sascha Desmettre and Mogens Steffensen.
Contributed talk:
Joshua Hayes (EPFL, Switzerland)
Shadow-model extrapolation of the yield curve
The extrapolation of the yield curve presents a critical challenge for financial institutions holding portfolios of long-dated liabilities, especially when the maturities of these liabilities exceed those of any traded financial instruments. In many such cases, extrapolation is unavoidable, yet the choice of methodology is often ad hoc and lacks a rigorous theoretical foundation. Any decision-making about extrapolation, we argue, ought to be informed by explicit economic assumptions and guided by sound mathematical arguments. We address this challenge by introducing a general framework that solves the extrapolation problem, guided by the principle of arbitrage-free pricing.
We propose a shadow model - an arbitrage-free, stationary, term-structure model that is parsimonious and yet flexible enough to accommodate diverse market conditions and calibrated to reflect stationary economic conditions. Given a yield curve defined up to a last data point, our shadow-model framework defines and explains the long-term behaviour of the yield curve, ensuring long-term stability. The shadow model is constructed to ensure perfect replication of the observed yield curve up to the last data point, while providing theoretically sound extrapolation of longer maturities. Only mild technical assumptions are required, notably, that the instantaneous forward curve is absolutely continuous, leading to a very general framework.
Key to our approach is the introduction of the long forward rate, defined as the infinite-maturity limit of the forward curve (equivalently viewed as the infinite-maturity yield), and a speed-of-convergence parameter, both of which are intended to be set by expert judgement. We show how to derive closed-form expressions for the model-implied forward curve and yield curve under a two-factor Gaussian term-structure model. We then demonstrate how calibration of this specific shadow model can be reduced to an efficient constrained convex optimisation problem which is guaranteed a unique solution.
The stability of our shadow-model extrapolation is studied and compared to several well-known extrapolation approaches. Particular attention is devoted to methodology proposals in recent policy documents from the European Insurance and Occupational Pensions Authority (EIOPA). The extrapolation approach specified by EIOPA does not guarantee absence of arbitrage and is arguably ad hoc in nature. In comparison, our model fares much better from a theoretical and practical perspective, offering stability, guaranteed absence of arbitrage, and control over the speed of convergence to the long forward rate. Our work is particularly salient, with both our framework and our proposed shadow model being of interest to policy-makers and practitioners concerned with extrapolation.
Joint work with Damir Filipovic.
Contributed talk:
Alexander Herbertsson (University of Gothenburg, Sweden)
Optimal loss-absorbing resources for central counterparties
We study the optimal level of loss-absorbing resources, such as the default fund contribution and initial margin, for central counterparties (CCPs), together with other relevant quantities needed for CCP risk management. For a general homogeneous static credit portfolio modeling a network of defaultable clearing members with random exposures trading via a CCP, we derive compact and computationally tractable analytical tools. These tools enable us to determine, among other things, the optimal default fund contribution and initial margin, subject to the "cover 2" condition. Using numerical optimization routines applied to our developed framework, we find the optimal default fund contribution and initial margin as functions of the number of clearing members, individual default probabilities, default correlations, and the volatility of pairwise exposures. Our numerical results consistently indicate that the optimal default fund should be approximately 75%-95% of the optimal initial margin, regardless of the default model used. This finding contradicts current market practice, where the default fund contribution constitutes around 50% or less of the initial margin. Our numerical results on the relationship between the optimal default fund contribution and initial margin are robust with respect to the choice of default model, parameters, and clearing member size. We also provide analytical formulas for the first-order conditions, allowing us to find the optimal quantities as numerical solutions to a system of equations, thus avoiding more computationally demanding steepest-descent optimization algorithms.
Contributed talk:
Felix Hoefer (Princeton University, United States of America)
Synchronization games and applications
We propose a new mean field game model with two states to study synchronization phenomena, and we provide a comprehensive characterization of stationary and dynamic equilibria along with their stability properties. The game undergoes a phase transition with increasing interaction strength. In the subcritical regime, the uniform distribution, representing incoherence, is the unique and stable stationary equilibrium. Above the critical interaction threshold, the uniform equilibrium becomes unstable and there is a multiplicity of stationary equilibria that are self-organizing. Under a discounted cost, dynamic equilibria spiral around the uniform distribution before converging to the self-organizing equilibria. With an ergodic cost, however, unexpected periodic equilibria around the uniform distribution emerge. The second part of the talk considers an application of this model to Stackelberg games.
Joint work with Mete Soner.
Contributed talk:
Haejun Jeon (The University of Osaka, Japan)
Certainty equivalent and uncertainty premium of time-to-build
Time-to-build of an investment project induces a discrepancy between the timing of investment and that of revenue generation. Jeon (2024) showed that uncertainty in the time-to-build always accelerates investment and enhances pre-investment firm value, regardless of its distribution. This study examines the extent to which the uncertainty advances the timing of investment and improves firm value. Specifically, we show that there always exists a unique certainty equivalent of uncertain time-to-build and derive it in an analytic form. This enables us to derive the investment strategy with uncertain time-to-build in the form of the one that would have been adopted in the absence of such uncertainty. Even without full knowledge of the uncertainty, the firm can approximate the optimal investment strategy using only the mean and variance of time-to-build. We also clarify the positive impact of entropic risk measure of time-to-build on investment and derive the dual representation of the certainty equivalent of time-to-build based on relative entropy. Furthermore, we show that there always exists an uncertainty equivalent of fixed time-to-build. This implies that the firm can deduce the equivalent risk that its investment strategy, established without considering uncertainty in time-to-build, implicitly assumes. Lastly, we illustrate the practical application of our findings using some representative probability distributions and analyze the effects of the variance of time-to-build. In particular, we contrast the effects of uncertainty in demand with those of uncertainty in time-to-build, deriving the level of variance in time-to-build that offsets the negative impact of increased demand volatility on investment.
Joint work with Michi Nishihara.
Contributed talk:
Parun Juntanon (Walailak University, Thailand)
Analytical computation of conditional moments in the extended Cox–Ingersoll–Ross process with regime switching: Hybrid PDE system solutions with financial applications
In this work, we introduce a novel analytical approach for the computation of the nth conditional moments of an m-state regime-switching extended Cox–Ingersoll–Ross process driven by a continuous-time finite-state irreducible Markov chain. This approach is applicable for all integers n ≥ 1 and m ≥ 1, thereby ensuring wide-ranging utility. The key to our investigation is a complex hybrid system of interconnected PDEs, derived through a utilization of the Feynman–Kac formula for regime-switching diffusion processes. Our exploration into the solutions of this hybrid PDE system culminates in the derivation of exact closed-form formulas for the conditional moments for diverse values of n and m. Additionally, we study the asymptotic characteristics of the first conditional moments for the 2-state regime-switching Cox–Ingersoll–Ross process, particularly focusing on the effects of the symmetry inherent in the Markov chain’s intensity matrix and the implications of various parameter configurations. Highlighting the practicality of our methodology, we conduct Monte Carlo simulations to not only corroborate the accuracy and computational efficacy of our proposed approach but also to demonstrate its applicability to real-world applications in financial markets. A principal application highlighted in our study is the valuation of VIX futures and VIX options within a dynamic, mean-reverting, hybrid regime-switching framework. This exemplifies the potential of our analytical method to significantly impact contemporary financial modeling and derivative pricing.
Joint work with Sanae Rujivan, Boualem Djehiche and Nopporn Thamrongrat.
Contributed talk:
Martin Keller-Ressel (TU Dresden, Germany)
Shape and dynamics of the term structure in multi-factor interest rate models
We examine the shapes attainable by the forward- and yield-curve in several multi-factor interest rate models, in particular in the two-factor Vasicek model and the Svensson family of models. We provide a complete classification of all attainable shapes and partition the parameter space of each family according to these shapes. Building upon these results, we then examine the consistent dynamic evolution of the Svensson family under absence of arbitrage. Our analysis shows that consistent dynamics restrict the set of attainable shapes, and we demonstrate that certain complex shapes can no longer appear after a deterministic time horizon. As mathematical tools, the theory of total positivity and envelopes of plane curves are employed.
The talk is based on joint work with Felix Sachse.
Contributed talk:
Thomas Kirkegaard Kloster (Aarhus University, Denmark)
An ambit field framework for the full panel of day-ahead electricity prices
This paper considers the often overlooked fact that electricity spot prices in individual European generation zones evolve as a panel structure. A general continuous time framework is developed by formulating the panel as an ambit field indexed by a cylinder surface, where the cross sectional dimension is represented by a circle. This requires a treatment of ambit fields on manifolds, but the departure from Euclidean space allows for embedding intrinsic dependence structures into the index set in a flexible and parameter-free way, where the daily delivery periods have a canonical mapping onto the circle. The model is a natural space-time extension of volatility modulated Lévy-driven Volterra processes, which have previously been studied in the context of energy markets, and the pricing of derivatives turns out to be essentially as analytically tractable as in the null-spatial setting. The space-time framework further allows for considering derivatives written on individual delivery periods, where spreads between these constitute an interesting example that allows for the hedging of more heterogeneous delivery commitments than conventional futures contracts. As an application, an estimation experiment is carried out on German data, where a semi-parametric model specification is fitted to learn a complex correlation structure in both the temporal and cross sectional dimension.
Contributed talk:
Evgeny Kolosov (ETH Zürich, Switzerland)
On arbitrage-free prices of American options
One of the key questions in robust finance is the study of arbitrage-free models consistent with the observed prices of various derivatives. Most recent research focuses on the case where the observations are the prices of a certain set of European options. It is known that the knowledge of European option prices allows us to determine the marginal distributions of the underlying asset prices, and the condition for the existence of a compatible arbitrage-free model is equivalent to these distributions being in convex order. However, if the observations are the prices of American options, the problem becomes significantly more complex, as their prices do not provide complete marginal distributions. In this work, we investigate the conditions for the existence of arbitrage-free models consistent with the observed prices of American options. This leads to an extension of the concept of convex order, which we term "biased convex order". We also study the properties of this order and prove a Strassen-type result for it.
Joint work with Beatrice Acciaio, Mathias Beiglböck and Gudmund Pammer.
Contributed talk:
Jan Korbel (Complexity Science Hub, Austria)
Applications of fractional diffusion in option pricing
In this talk, I will review some recent applications of space-time fractional diffusion in option pricing. I will show how one can derive the option price under the assumption that the underlying process is driven by a space-time fractional diffusion, describe the interpretation of the fractional-order derivatives, and derive several representations of the option price, including the Mellin-Barnes representation, the subordinator representation and the residue-series representation. Finally, I will also discuss some issues of the model, such as the existence of a risk-neutral measure.
Contributed talk:
Gabriela Kovacova (University of California Los Angeles, United States of America)
Robust multi-objective stochastic control
Model uncertainty is relevant for various dynamic optimization problems within the field of financial mathematics. As one example, let us mention the portfolio selection problem -- an investor does not know the true distribution of asset returns on the market. There has been a significant body of work dedicated to the study of uncertain stochastic control problems in the literature. Various approaches to handling the uncertainty have been developed, among them the robust approach, which aims to optimize under the worst-case scenario.
While model uncertainty and robust optimization are relatively well understood for standard control problems with a single (scalar) objective, this is much less the case for problems with multiple objectives. In recent years, several (dynamic) problems of financial mathematics have been approached through methods of multi-objective and set optimization. The set-valued Bellman's principle, a version of the well-known Bellman's principle for problems with multiple or set-valued objectives, has been derived across different problems.
In this work we explore the robust approach to model uncertainty for multi-objective stochastic control problems. Robust multi-objective optimization has been explored in the static but not in the dynamic setting. We are particularly interested in the application of dynamic programming and the impact model uncertainty has on the set-valued Bellman's principle. We show how the set-valued Bellman's principle is replaced by certain set relations (or inclusions) under robustness and present assumptions under which equality can be obtained. These results are the first step to extending dynamic programming also to multi-objective problems in the context of model uncertainty.
Joint work with Igor Cialenco.
Contributed talk:
Anna P. Kwossek (University of Vienna, Austria)
A pathwise stability analysis of log-optimal portfolios
Classical approaches to optimal portfolio selection problems are based on probabilistic models for asset returns or prices. However, it is now widely recognized that the performance of optimal portfolios is highly sensitive to model misspecifications. To account for model risk, robust and model-free approaches have gained increasing importance in portfolio theory.
In this talk, we develop a pathwise approach that allows us to analyze the stability of well-known optimal portfolios in local volatility models under model uncertainty.
In particular, we study the pathwise stability of the classical log-optimal portfolio with respect to the model parameters and investigate the pathwise error resulting from trading with respect to a time-discretized version of the log-optimal portfolio.
This talk is based on joint work with A. L. Allan, C. Liu and D. J. Prömel.
Contributed talk:
Johannes Langner (Leibniz Universität Hannover, Germany)
Bipolar Theorems for Sets of non-negative Random Variables
We assume a robust, in general not dominated, probabilistic framework and provide necessary and sufficient conditions for a bipolar representation of subsets of the set of all quasi-sure equivalence classes of non-negative random variables, without any further conditions on the underlying measure space. This generalizes and unifies existing bipolar theorems proved under stronger assumptions on the robust framework. Applications are in areas of robust financial modeling.
Joint work with Gregor Svindland.
Contributed talk:
Nicolas Langrené (Beijing Normal-Hong Kong Baptist University, People's Republic of China)
Spectral Volterra processes to solve the fractional volatility puzzle
Stochastic volatility models driven by a fractional Brownian motion cannot account for both roughness of volatility paths and long memory, a problem known in the literature as the fractional volatility puzzle. This puzzle is a consequence of the self-similarity property of the fractional Brownian motion, which connects its fractal dimension D and its Hurst exponent H by the celebrated formula D=2-H.
Fortunately, there exist alternative Gaussian processes which do not obey the self-similarity property, allowing for both rough paths and long memory of volatility by calibrating D and H separately. One such process is the so-called Generalized Cauchy process, whose covariance follows a two-parameter inverse power function. In the past, the idea to use a Generalized Cauchy process as a stochastic volatility model for option pricing has been hampered by the lack of any simple stochastic integral representation of this process.
In this research, we establish an explicit stochastic integral representation of the Generalized Cauchy process, using known results about the spectral density of its covariance kernel. We then adapt this spectral representation formula to define an explicit Gaussian Volterra process, which we call Spectral Volterra process, whose stationary distribution coincides with any prescribed Gaussian process, for example the Generalized Cauchy process. Since option pricing expansion formulas are available for Gaussian Volterra volatility models, the spectral Volterra formulation provides a workable approach to overcome the fractional volatility puzzle in practice.
Joint work with Mingmei Xu and Peng Jin.
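For orientation, one common parametrization of the generalized Cauchy covariance (in the spirit of Gneiting and Schlather; the talk's normalization may differ) is
\[
C(\tau) = \bigl(1 + |\tau|^{\alpha}\bigr)^{-\beta/\alpha}, \qquad 0 < \alpha \le 2, \quad \beta > 0,
\]
for which the fractal (roughness) dimension D = 2 - α/2 and the long-memory Hurst parameter H = 1 - β/2 (for 0 < β < 1) are governed by separate parameters, in contrast to fractional Brownian motion, where D = 2 - H ties them together.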
Contributed talk:
Christian Laudagé (RPTU Kaiserslautern-Landau, Germany)
Multi-asset return risk measures
In this talk, we revisit the recently introduced concept of return risk measures (RRMs) and extend it by allowing risk management via multiple so-called eligible assets. The resulting new class of risk measures, termed multi-asset return risk measures (MARRMs), introduces a novel economic model for multiplicative risk sharing. We analyze typical properties of these risk measures. In particular, we prove that a positively homogeneous MARRM is quasi-convex if and only if it is convex. Furthermore, we state conditions to avoid a notion of arbitrage in our setup. Then, we point out the connection between MARRMs and the well-known concept of multi-asset risk measures (MARMs). This is used to obtain different dual representations of MARRMs. Moreover, we conduct a series of case studies, in which we use typical continuous-time financial markets and different notions of acceptability of losses to compare RRMs, MARMs, and MARRMs and draw conclusions about the cost of risk mitigation.
Joint work with Felix-Benedikt Liebrich and Jörn Sass.
Contributed talk:
Emmet Lawless (Dublin City University, Ireland)
A variational approach to portfolio choice
In this talk we propose a calculus of variations approach to an optimal consumption problem with isoelastic preferences over an infinite horizon. Specifically we consider a complete market with a single state variable on which all model coefficients can depend. Under some mild assumptions we characterise the value function as the unique solution to a convex variational problem which is far more amenable to numerical methods. This approach circumvents the need to solve the associated Hamilton-Jacobi-Bellman (HJB) equation. This is a desirable situation as explicitly solving the HJB equation is often intractable and even numerics cannot be readily employed due to the lack of boundary conditions. We illustrate the utility of this approach by providing examples of models which cannot be solved using existing methods in the literature but can be solved using our approach. Additionally we highlight how this approach may be extended to solve similar optimisation problems in incomplete markets.
Joint work with Paolo Guasoni and Ho Man Tai.
Contributed talk:
Haibo Liu (Purdue University, United States of America)
Why insurers price carbon low: An analysis of financed emissions and investment decisions
A recent study finds that the insurance sector has the lowest median internal carbon price (ICP) among thirteen sectors. In this paper, we rationalize the low ICP through an analysis of insurers' emissions, which are currently dominated by financed emissions. We develop and analytically solve an extended mean-variance model with a constraint on portfolio emissions. Our model identifies an emission-efficient investment frontier for environmentally conscious investors. Using emissions data from the London Stock Exchange Group, we empirically estimate the frontier. According to our model, to maximize the insurers' carbon-adjusted performance metric, the ICP should not exceed $1.4 per tonne, and a low ICP can reduce insurers' financed emissions substantially without compromising their investment performance. Our findings provide support for using low ICPs for financed emissions.
Joint work with Zhongyi Yuan.
Contributed talk:
Daniele Mancinelli (University of Rome Tor Vergata, Italy)
Design and hedging of unit-linked life insurance with environmental factors
We consider an insurance company that issues a unit-linked life insurance policy incorporating environmental factors in the investment selection process. Given the increasing demand for sustainable financial products, ESG-oriented endowment policies appeal to investors seeking financial returns and adherence to sustainability principles. With growing ESG regulations, financial institutions are encouraged to develop products that meet sustainability criteria, making this insurance policy more relevant. As recently documented in several studies, including Anquetin et al. (2022), Hartzmark and Sussman (2019) and Lagerkvist et al. (2020), stakeholders around the world have increasingly perceived climate change as a global threat. As a result, institutional investors have begun to care about sustainability and to integrate environmental, social and governance (ESG) criteria in constructing their portfolios, and consequently to evaluate the footprint of their investments. For example, as highlighted by Peng et al. (2024), the Government Pension Investment Fund has allocated 163 trillion yen to passive ESG index products, and the California Public Employees' Retirement System follows a social change investment approach with ESG guidelines. Although this footprint is multidimensional and encompasses all ESG pillars, this article focuses exclusively on the most challenging aspect, namely emissions reduction. Indeed, one of the main risks associated with the transition to a low-carbon economy is the so-called carbon risk, which includes regulatory, market and reputational risks. In this work, we address two issues. In the first step, we provide a criterion for building an investment fund sensitive to environmental factors. Following, e.g., Anquetin et al. (2022) and Hellmich and Kiesel (2021), we choose carbon intensity in particular. The main advantage of this criterion is that it provides an auto-selection of the assets to be included in the fund, which avoids the criticisms of a pre-selection based, for example, on ESG scores. In the second step, we address the issue of finding a hedging strategy for the unit-linked life insurance policy, which has the fund as the underlying. Such a contract, in fact, is non-redundant since it is subject to market risk, as well as to carbon risk and mortality risk. To determine the hedging strategy, we rely on a quadratic hedging criterion that minimizes the tracking error. We finally show the effectiveness of our strategies on market data.
Contributed talk:
Alexander Melnikov (University of Alberta, Canada)
Option pricing via market completion and machine learning
We consider diffusion and jump-diffusion markets with a reducible incompleteness, i.e. markets that can be embedded into an auxiliary complete market by adding new risky assets. We call such an embedding a market completion. Using the variety of all possible market completions, one can develop a dual theory of option pricing as well as of optimal investment in markets with reducible incompleteness. We give a dual characterization of upper and lower option prices via maximization/minimization of expectations of discounted payoffs over market completions instead of martingale measures. We also show how the method works for quantile hedging and for indifference option pricing. Finally, we demonstrate a combination of the market completion method and machine learning techniques in an incomplete jump-diffusion market model.
Contributed talk:
Vasily Melnikov (University of Alberta, Canada)
Risk measures on incomplete markets: a new non-solid paradigm
We study risk measures on vector spaces of random variables which a priori have no lattice structure, a blind spot of the existing risk measures literature. In particular, we address when there exists a tractable dual representation (one which does not contain non-sigma-additive signed measures), and whether one can extend it to a solid superspace. The existence of a tractable dual representation is shown to be equivalent, modulo certain technicalities, to a Fatou-like property, while extension theorems are established under the existence of a sufficiently regular lift, a potentially non-linear mechanism of assigning random variable extensions to certain linear functionals. Our motivation is broadening the theory of risk measures to spaces without a lattice structure, which are ubiquitous in financial economics, especially when markets are incomplete.
Contributed talk:
Andres Mauricio Molina Barreto (Keio University, Japan)
Remarks on a copula-based conditional value at risk for the portfolio problem
We deal with a multivariate conditional value at risk. Compared with the usual notion for a single random variable, a multivariate value at risk is concerned with several variables, and thus the relations between the risk factors should be considered. We here introduce a new definition of a copula-based conditional value at risk, which is real-valued and ready to be computed. Copulas are known to provide a flexible method for handling a possible nonlinear structure; therefore, copulas may be naturally involved in the theory of value at risk. We derive a formula for our copula-based conditional value at risk in the case of Archimedean copulas, whose effectiveness is shown by examples. Numerical studies are also carried out with real data, which can be verified with analytical results.
Joint work with Naoyuki Ishimura.
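For reference, the Archimedean family referred to above is the standard one: given a generator ψ (continuous, strictly decreasing, convex, with ψ(1) = 0),
\[
C(u_1,\dots,u_d) = \psi^{[-1]}\bigl(\psi(u_1) + \cdots + \psi(u_d)\bigr),
\]
with, for example, the Clayton generator ψ(t) = (t^{-θ} - 1)/θ, θ > 0, as a typical member of the family.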
Contributed talk:
Harold A. Moreno Franco (HSE University, Russian Federation)
An optimal multibarrier strategy for a singular stochastic control problem with a state-dependent reward
We consider a singular control problem that aims to maximize the expected cumulative rewards, where the instantaneous returns depend on the state of a controlled process. The contributions of this paper are twofold. Firstly, we establish sufficient conditions for the optimality of the one-barrier strategy when the uncontrolled process X follows a spectrally negative Lévy process with a Lévy measure defined by a completely monotone density. Secondly, we verify the optimality of the (2n+1)-barrier strategy when X is a Brownian motion with drift. Additionally, we provide an algorithm to compute the barrier values in the latter case.
Joint work with Mauricio Junca and Jose-Luis Pérez.
Contributed talk:
Edouard Motte (Catholic University of Louvain, Belgium)
The Volterra Stein-Stein model with stochastic interest rates
We introduce the Volterra Stein-Stein model with stochastic interest rates, where both volatility and interest rates are driven by correlated Gaussian Volterra processes. This framework unifies various well-known Markovian and non-Markovian models while preserving analytical tractability for pricing and hedging financial derivatives. We derive explicit formulas for pricing zero-coupon bonds and interest rate caps or floors, along with a semi-explicit expression for the characteristic function of the log-forward index using Fredholm resolvents and determinants. This allows for fast and efficient derivative pricing and calibration via Fourier methods. We calibrate our model to market data and observe that our framework is flexible enough to capture key empirical features such as the hump-shaped term structure of implied volatilities for cap options and the concave ATM skew (in a log-log plot) of S&P 500 options. Finally, we establish connections between our characteristic function formula and expressions that depend on infinite-dimensional Riccati equations, thereby making the link with conventional linear-quadratic models.
Joint work with Eduardo Abi Jaber and Donatien Hainaut.
Contributed talk:
Ludger Overbeck (Justus-Liebig-Universität Gießen, Germany)
Rough and infinite-dimensional affine models.
We extend recent results on affine Volterra processes to the inhomogeneous case. This includes moment bounds for solutions of Volterra equations driven by a Brownian motion with an inhomogeneous kernel and inhomogeneous drift and diffusion coefficients. In the case of affine drift and variance, we show how the conditional Fourier-Laplace functional can be represented by the solution of an inhomogeneous Riccati-Volterra integral equation. For a time-homogeneous kernel of convolution type, we establish existence of a solution to the stochastic inhomogeneous Volterra equation. If in addition the coefficients are affine, we prove that the conditional Fourier-Laplace functional is exponential-affine in the past path. Finally, we apply these results to an inhomogeneous extension of the rough Heston model used in mathematical finance.
Contributed talk:
Natalie Packham (Hochschule für Wirtschaft und Recht Berlin, Germany)
Jump risk premia in the presence of clustered jumps
This paper presents an option pricing model that incorporates clusters of jumps using a bivariate Hawkes process with exponential-decay memory kernels. The Hawkes process captures self- and cross-excitement of positive and negative jumps, allowing the model to effectively capture the volatile price dynamics observed in cryptocurrencies such as Bitcoin (BTC), while also fitting the implied volatility surface. The model can fit the dynamics of implied volatilities with changing preferences for skewness risk. As an example, we use BTC dynamics where the skewness can change from negative (stronger demand for puts) to positive (stronger demand for calls). We derive positive and negative jump risk premia, defined as the discrepancies in jump measures between the objective measure and the risk-neutral measure. Our findings reveal that these jump risk premia: (i) provide insights into how the BTC options market reacts to major events, such as the COVID-19 outbreak and the FTX scandal; (ii) possess significant predictive power for delta-hedged option returns; and (iii) are indicators in explaining the volatile cost-of-carry implied from BTC futures prices.
Joint work with Francis Liu and Artur Sepp.
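For orientation, a generic bivariate Hawkes specification with exponential kernels, of the kind referenced above, has conditional intensities of the form (the paper's exact parametrization may differ):
\[
\lambda^{\pm}_t = \mu^{\pm} + \sum_{j \in \{+,-\}} \alpha^{\pm j} \sum_{T^{j}_k < t} e^{-\beta^{\pm j}\,(t - T^{j}_k)},
\]
where T^{+}_k and T^{-}_k are the arrival times of positive and negative jumps, the diagonal terms capture self-excitement and the off-diagonal terms cross-excitement.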
Contributed talk:
Zbigniew Palmowski (Wroclaw University of Science and Technology, Poland)
Pricing of time-capped American options
In this talk we present a derivation of the explicit price for the perpetual American put option time-capped by the first drawdown epoch beyond a predetermined level. We consider a geometric spectrally negative Lévy market. We demonstrate that the optimal exercise strategy involves executing the option when the asset price first falls below a specified threshold. The proof relies on martingale arguments and the fluctuation theory of Lévy processes. We supplement the theoretical results by numerical analyses based on the Least Squares Monte Carlo Method.
The talk is based on joint work with P. Stępniak.
Contributed talk:
Gudmund Pammer (TU Graz, Austria)
Calibration of the Bass local volatility model
The Bass local volatility model, introduced by Backhoff-Beiglböck-Huesmann-Källblad, is a Markov model perfectly calibrated to vanilla options at finitely many maturities, approximating the Dupire local volatility model. Conze and Henry-Labordère proposed a fixed-point method for its calibration. We analyze this fixed-point iteration scheme and show a linear rate of convergence. Additionally, we introduce the geometric version of the Bass local volatility model, establish its intimate connection to the change of numéraire method of Campi-Laachir-Martini, and provide an efficient method for its computation.
The talk is based on joint work with Beatrice Acciaio, Julio Backhoff, Mathias Beiglböck, Antonio Marini, Lorenz Riess and Walter Schachermayer.
Contributed talk:
Alexandre Pannier (Université Paris Cité, France)
Kolmogorov equations for Volterra processes
We study a class of Stochastic Volterra Equations (SVEs) with multiplicative noise and convolution-type kernels. Our focus lies on rough volatility models and thus we allow for kernels that are singular at the origin. Working with carefully chosen Hilbert spaces, we rigorously establish a link between the solution of the SVE and the Markovian mild solution of an associated Stochastic Partial Differential Equation (SPDE). Our choice of a Hilbert space solution theory allows access to well-developed tools from stochastic calculus in infinite dimensions. In particular, we obtain an Itô formula for functionals of the solution to the SPDE and show that its law and (conditional) expectations solve infinite-dimensional Fokker-Planck and backward Kolmogorov equations respectively. This is a joint work with Ioannis Gasteratos.
Contributed talk:
Léo Parent (École nationale des ponts et chaussées, France)
The discrete-time 4-factor PDV model: calibration under P and Q
This article examines calibration approaches under both the risk-neutral measure Q and the historical measure P for a discrete-time version of the 4-factor path-dependent volatility (PDV) model introduced by Guyon and Lekeufack. First, the article demonstrates the model's ability to fit option data across multiple dimensions, including the VIX time series, the SPX volatility surface, and joint SPX/VIX smiles.
The article then focuses on estimating the historical measure model via maximum likelihood. It is demonstrated that the considered PDV model specification outperforms competing models in the academic literature in terms of explanatory power.
The paper subsequently evaluates the proximity between the probability measure implied by the P-estimated model and the risk-neutral measure. The obtained results reveal that these measures are fairly close, which both validates the model's consistency with market data and supports the hypothesis of high endogeneity in the formation process of the P and Q measures, along with their tight interconnection.
In light of this relationship, we propose a new model estimation approach combining P and Q information in order to enhance calibration robustness. The approach's effectiveness is then benchmarked against classical calibration methods.
This is joint work with Julien Guyon.
Contributed talk:
Dimosthenis Pasadakis (Università della Svizzera italiana (USI), Switzerland)
Graph-based anomaly detection in financial transactions
The accuracy of classification algorithms in detecting fraudulent financial activity is critical in assisting human analysts in the task of preventing financial crime. Graph-based anomaly detection methods represent the state-of-the-art in analyzing connectivity patterns within monetary transaction networks and identifying suspicious behaviors. This talk will present algorithms designed to detect anomalies using graph partitioning methods for highly imbalanced datasets, which offer fully interpretable results, as well as Graph Neural Networks (GNNs) that leverage the latest advancements in message-passing architectures. The effectiveness of these approaches is demonstrated in experiments on datasets that simulate real-world financial behaviour, and are infused with a variety of anomalous money laundering topologies.
Joint work with Madan Sathe and Olaf Schenk.
Contributed talk:
Ari-Pekka Perkkiö (LMU Munich, Germany)
Convexity and regularity in stochastic dynamic programming and control
We study existence and regularity of solutions to Bellman equations in discrete-time stochastic optimization and control with and without convexity. We establish the existence of solutions under general conditions that extend well-known criteria for Markov decision processes. Under convexity, existence is obtained under a purely algebraic condition that, in financial applications, coincides with the no-arbitrage criterion. Sufficient conditions are given for (Lipschitz) continuity of the solutions which is instrumental in approximation and discretization of the Bellman equations.
Joint work with Teemu Pennanen.
Contributed talk:
Eric Pilling (BTU Cottbus-Senftenberg, Germany)
Stochastic modeling and optimal control of an industrial energy system
We consider a power-to-heat energy system providing superheated steam for industrial processes. It consists of a high-temperature heat pump for heat supply, a wind turbine for power generation, a thermal energy storage to store excess heat and a steam generator. If the system's energy demand cannot be covered by electricity from the wind turbine, additional electricity must be purchased from the power grid. For this system we investigate the cost-optimal management aiming to minimize the cost of electricity from the grid by a suitable combination of the wind power and the system's thermal storage. This is a decision-making problem under uncertainty about the future prices for electricity from the grid and the future generation of wind power. The resulting stochastic optimal control problem is treated as a finite-horizon Markov Decision Process (MDP) for a multi-dimensional controlled state process. We first consider the classical backward recursion techniques for solving the associated dynamic programming equation for the value function and compute the optimal decision rule. Since that approach suffers from the curse of dimensionality, we also apply Q-learning techniques that are able to provide a good approximate solution to the MDP within a reasonable computational time.
Joint work with Ralf Wunderlich and Martin Bähr.
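To make the two solution approaches mentioned above concrete, here is a minimal sketch of a finite-horizon backward recursion and a tabular Q-learning approximation on a toy MDP; the state space, dynamics and all parameters are hypothetical and purely illustrative, not the authors' energy-system model.

```python
import numpy as np

# Toy finite-horizon MDP: nS states, nA actions, horizon T (all hypothetical).
rng = np.random.default_rng(0)
nS, nA, T = 10, 3, 5
P = rng.dirichlet(np.ones(nS), size=(nS, nA))   # P[s, a] = next-state distribution
R = rng.normal(size=(nS, nA))                   # immediate rewards (to be maximized)

# 1) Exact backward recursion (dynamic programming) for the value function.
V = np.zeros((T + 1, nS))
policy = np.zeros((T, nS), dtype=int)
for t in range(T - 1, -1, -1):
    Q = R + P @ V[t + 1]                        # Q[s, a] = R[s, a] + E[V_{t+1}(s')]
    V[t] = Q.max(axis=1)
    policy[t] = Q.argmax(axis=1)

# 2) Tabular Q-learning as a model-free approximation of the same recursion.
Qhat = np.zeros((T, nS, nA))
for episode in range(5000):
    s = rng.integers(nS)
    for t in range(T):
        # epsilon-greedy action choice
        a = rng.integers(nA) if rng.random() < 0.1 else int(Qhat[t, s].argmax())
        s_next = rng.choice(nS, p=P[s, a])
        target = R[s, a] + (Qhat[t + 1, s_next].max() if t + 1 < T else 0.0)
        Qhat[t, s, a] += 0.05 * (target - Qhat[t, s, a])
        s = s_next
```

The backward pass is exact but requires the full transition model and scales with the size of the state space, which is the curse of dimensionality alluded to above; the Q-learning loop only needs sampled transitions.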
Contributed talk:
Philipp Plank (Imperial College London, United Kingdom)
Policy gradient methods for continuous-time finite-horizon linear-quadratic graphon mean field games
We analyze the convergence of policy gradient methods for continuous-time finite-horizon linear-quadratic graphon mean field games, which model the large-population limit of competing agents interacting weakly through a weighted graph. Each agent's equilibrium policy is an affine function of the state variable, sharing a common slope function while having an agent-specific bias term.
We propose a policy gradient method that iteratively performs multiple policy updates for a fixed population distribution, followed by an update of the distribution using the latest policies. We prove that these policy iterates converge globally to the optimal policy at a linear rate. Our analysis leverages the optimization landscape over infinite-dimensional policy spaces and carefully controls error propagation across iterations. Numerical experiments across various graphon structures validate the convergence and robustness of our algorithm.
Joint work with Yufei Zhang.
Contributed talk:
Jean-Francois Renaud (UQAM, Canada)
Optimization of capital injections and absolutely continuous dividend strategies
We consider the problem of optimizing capital injections and absolutely continuous dividend payments with a dividend rate bounded by an increasing and concave function of the controlled process. We show that the optimal pair satisfies the following dichotomy: either dividends are paid according to a refracted mean-reverting strategy and capital injections are made each time the cash/surplus process reaches zero, so that the firm is never ruined; or dividends are paid according to another refracted mean-reverting strategy and no injection of capital is ever made until ruin. No in-between strategy is optimal.
Contributed talk:
A. Max Reppen (Boston University, United States of America)
Before the storm: firm policies and varying recession risk
Recession risk fluctuates "before the storm" of a recession. Incorporating this into a model of liquidity and investment allows firms to endogenously delay actions until recession risk increases. When small, a firm chooses to act early when risk is low to protect attractive investments as investment reduces cash and raises liquidation risk. A larger firm delays actions as it invests less and accumulates cash. But, an imminent recession shortens its saving time, necessitating quick precautionary measures. Thus, as recession risk rises, the larger firm responds more by issuing preemptively and cutting investments and payouts. We estimate and validate our model.
Joint work with Ali Kakhbod, Dmitry Livdan and Tarik Umar.
Contributed talk:
Sanae Rujivan (Walailak University, Thailand)
Analytically pricing volatility options and capped/floored volatility swaps with nonlinear payoffs in discrete observation case under the Merton jump-diffusion model driven by a nonhomogeneous Poisson process
In this work, we introduce novel analytical solutions for valuing volatility derivatives, including volatility options and capped/floored volatility swaps, employing discrete sampling within the framework of the Merton jump-diffusion model, which is driven by a nonhomogeneous Poisson process. The absence of a comprehensive understanding of the probability distribution characterizing the realized variance has historically impeded the development of a robust analytical valuation approach for such instruments. Through the application of the cumulative distribution function of the realized variance conditional on Poisson jumps, we derive explicit expectations for the derivative payoffs articulated as functions of the extremum values of the square root of the realized variance. We delineate precise pricing structures for an array of instruments, encompassing variance and volatility swaps, variance and volatility options, and their respective capped and floored variations, alongside establishing put-call parity and relationships for capped and floored positions. Complementing the theoretical advancements, we substantiate the practical efficacy and precision of our solutions via Monte Carlo simulations, articulated through multiple numerical examples. Finally, our analysis extends to the quantification of jump impacts on the fair strike prices of volatility derivatives with nonlinear payoffs, facilitated by our analytic pricing expressions.
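For context, a common market convention for the discretely sampled realized variance underlying such contracts (the paper's normalization may differ) is
\[
\mathrm{RV} = \frac{AF}{N} \sum_{i=1}^{N} \Bigl(\ln \frac{S_{t_i}}{S_{t_{i-1}}}\Bigr)^{2},
\]
with annualization factor AF (e.g. 252 for daily observations); a volatility swap then pays the square root of RV minus the volatility strike, while the capped/floored versions replace it by min or max against a prescribed cap or floor level.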
Contributed talk:
Daria Sakhanda (ETH Zürich, Switzerland)
Optimal Consumption Policy in a Carbon-Conscious Economy: a Machine Learning approach
Due to the significant carbon emissions generated by various sectors of the economy, fast economic growth can hinder efforts to combat climate change. We study this trade-off by considering an optimal control problem based on the single-good economy model of Borissov/Bretschger (2022) in discrete time. There, a social planner aims to determine an optimal consumption policy while ensuring simultaneously that the economy grows and overall emissions do not breach a given climate budget. We use a machine learning approach to find an approximate optimal solution to the social planner's control problem. In our main result, we show that the optimal consumption policies for the problem with finite time horizon converge pointwise to the optimal consumption policy with an infinite time horizon. The aforementioned framework is initially implemented in a single-economy setting and then extended to a multi-country model with heterogeneous characteristics.
Joint work with Josef Teichmann.
Contributed talk:
Simona Sanfelici (Politecnico di Milano, Italy)
Short rate models with stochastic discontinuities: a PDE approach.
In the ongoing reform of interest rate benchmarks, risk-free rates (RFRs), such as the Secured Overnight Financing Rate (SOFR) in the U.S. or the Euro Short-Term Rate (€STR) in Europe, play a pivotal role. An observed characteristic of RFRs is the occurrence of jumps and spikes at regular intervals, due to regulatory and liquidity constraints.
In this paper, we consider a general short-rate model featuring discontinuities at fixed times with random sizes. Within this framework, we introduce a PDE-based approach to price interest rate derivatives. For affine models, we also derive (quasi) closed-form solutions. Finally, we develop numerical methods to price interest rate derivatives in general cases.
Joint work with Marzia De Donno, Chiara Guardasoni and Simona Sanfelici.
Contributed talk:
Alexandros Saplaouras (ETH Zurich, Switzerland)
Stability of backward propagation of chaos
We first consider the asymptotic behavior of the solutions of a mean-field system of Backward Stochastic Differential Equations with Jumps (BSDEs), as the number of equations in the system grows to infinity, towards independent and identically distributed (IID) solutions of McKean-Vlasov BSDEs. This property is known in the literature as backward propagation of chaos. Afterwards, we provide a suitable framework for the stability of the aforementioned property to hold. In other words, assuming a sequence of mean-field systems of BSDEs which propagate chaos, their solutions, as the number of equations in the system grows to infinity, approximate an IID sequence of solutions of the limiting McKean-Vlasov BSDE. The generality of the framework allows us to incorporate either discrete-time or continuous-time approximating mean-field BSDE systems.
Joint work with Antonis Papapantoleon and Stefanos Theodorakopoulos.
Contributed talk:
Thorsten Schmidt (University of Freiburg, Germany)
Benchmark-Neutral Risk Minimization for insurance products and non-replicable claims
In this talk we will study the pricing and hedging of nonreplicable contingent claims, such as long-term insurance contracts like variable annuities. This problem is approached in the benchmark-neutral setting of Platen (2024). In contrast to the classical benchmark approach, the stock growth-optimal portfolio is employed as numéraire, which in typical settings leads to an equivalent martingale measure, the benchmark-neutral measure. The resulting prices can be significantly lower than the respective risk-neutral ones, which is particularly attractive for long-term investments. We derive associated risk-minimizing hedging strategies under the assumption that the contingent claim possesses a martingale decomposition. For a set of nonreplicable contingent claims, these strategies allow monitoring the working capital needed to generate their payoffs and assessing the emerging diversification effect. Finally, we propose an algorithmic refinancing strategy that allows the modeling of the working capital.
Joint work with Michael Schmutz and Eckhard Platen.
Contributed talk:
Stefan Schrott (University of Vienna, Austria)
The fundamental theorem of weak optimal transport
The fundamental theorem of classical optimal transport establishes strong duality and characterizes optimizers through a complementary slackness condition. Milestones such as Brenier's theorem and the Kantorovich-Rubinstein formula are direct consequences.
In this paper, we generalize this result to non-linear cost functions, thereby establishing a fundamental theorem for the weak optimal transport problem introduced by Gozlan, Roberto, Samson, and Tetali. As applications we provide concise derivations of the Brenier-Strassen theorem, the convex Kantorovich-Rubinstein formula and the structure theorem of entropic optimal transport.
We also extend Strassen's theorem in the direction of Gangbo-McCann's transport problem for convex costs.
Moreover, we determine the optimizers for a new family of transport problems which contains the Brenier-Strassen, the martingale Benamou-Brenier and the entropic martingale transport problem as extreme cases.
Joint work with Mathias Beiglböck, Gudmund Pammer and Lorenz Riess.
Contributed talk:
Carlo Sgarra (University of Bari "Aldo Moro", Italy)
Optimal Self-Protection via BSDEs for risk models with jump clusters
We investigate the optimal self-protection problem, from the point of view of an insurance buyer, when the loss process is described by a Cox-shot-noise process and a Hawkes process with an exponential memory kernel.
The insurance buyer chooses both the percentage of insured losses and the prevention effort in order to maximize the expected exponential utility of terminal wealth, in the presence of a terminal reimbursement. We show that an optimal solution exists by proving that this problem can be described in terms of a suitable backward stochastic differential equation (BSDE), which can be explicitly solved in the case of no terminal reimbursement. We extend in several directions the results obtained by Bensalem, Santibanez and Kazi Tani and compare our results with those presented therein.
Contributed talk:
Yonatan Shadmi (Imperial College, United Kingdom)
Fluid-Limits of Fragmented Limit-Order Markets
Maglaras, Moallemi, and Zheng (2021) have introduced a flexible queueing model for fragmented limit-order markets, whose fluid limit remains remarkably tractable. In the present study we prove that, in the limit of small and frequent orders, the discrete system indeed converges to the fluid limit, which is characterized by a system of coupled nonlinear ODEs with singular coefficients at the origin. Moreover, we establish that the fluid system is asymptotically stable for an arbitrary number of limit order books in that, over time, it converges to the stationary equilibrium state studied by Maglaras et al. (2021).
Joint work with Johannes Muhle-Karbe and Eyal Neuman.
Contributed talk:
Andrea Stanghellini (University of Verona, Italy)
A joint framework for SPX, VIX and VXX
This article introduces a novel framework that simultaneously accommodates three interconnected processes: the underlying S&P 500 index, the CBOE Volatility Index (VIX), and the iPath S&P 500 VIX Short-Term Futures ETN (VXX). Each of these assets possesses its own implied volatility surface in the options market, presenting a complex modeling challenge. The signature-based methodology employed in this study allows for a flexible and efficient representation of the joint dynamics, overcoming limitations of traditional modeling techniques.
We first consider a stochastic volatility model in which the dynamics of the VIX index is modeled by a linear combination of the signature of an underlying polynomial process. By exploiting the properties of the expected signature of a polynomial process, we derive the dynamics of the VXX index in closed form. Furthermore, using the analytical definition of the VIX index as the conditional expected value of the integrated volatility of the SPX asset over a 30-day time window, we are able to retrieve the dynamics of the volatility as a linear combination of the signature of the underlying polynomial process. Leveraging the flexibility of signature methods, we propose a groundbreaking approach that links the dynamics of these three assets, resulting in a comprehensive model capable of calibration across all three volatility surfaces concurrently. Our results demonstrate a more coherent representation of the volatility term structure across all three assets.
Joint work with Sara Svaluto-Ferro and Martino Grasselli.
Contributed talk:
Paweł Stępniak (Wrocław University of Science and Technology, Poland)
Pricing American Options Time-Capped by a Drawdown Event
In this talk, we explore the valuation of perpetual American put options in a setting where exercise is constrained by the first drawdown epoch beyond a predefined level. Initially, we derive an explicit pricing formula within the Black-Scholes framework and demonstrate that the optimal exercise strategy corresponds to the first passage of the asset price below a critical threshold. Our approach utilizes martingale techniques and fluctuation theory of Lévy processes. Extending these results, we consider a more general setup in which the asset follows a spectrally negative Lévy process, allowing for jumps and capturing a broader range of financial dynamics. We provide analytical solutions under this generalization and illustrate key results through numerical analysis.
Joint work with Zbigniew Palmowski.
Contributed talk:
Ibrahim Tahri (International Institute for Applied System Analysis, Austria)
Information cost and sustainable investment
Investment in green assets is crucial for a sustainable transition, yet underinvestment remains a challenge. This paper develops a continuous-time learning-investment model to analyze how information costs shape investors' decisions. Investors allocate attention to acquiring costly private signals about asset returns, influencing portfolio diversification between green and conventional assets.
The model reveals that high information costs discourage green investment, especially in highly uncertain environments. When risky assets are positively correlated, investors favor the asset with lower information costs, reinforcing familiarity bias toward conventional assets. In contrast, when assets are negatively correlated, the effect of information costs on portfolio allocation is diminished.
Through Bayesian learning and active information acquisition, the study finds that:
1. Green investment decreases with rising information costs, particularly when risk correlation is high.
2. Investors demand higher returns to justify costly information acquisition, leading to suboptimal green asset allocation.
3. Attention allocation follows a convex cost structure, meaning investors limit learning beyond a threshold where costs outweigh benefits.
These findings have strong policy implications. Lowering information barriers, through standardized disclosures, improved ESG transparency, and subsidies for green asset research, can reduce perceived risk and unlock sustainable capital flows.
By integrating financial economics and climate finance, this study highlights the critical role of information frictions in sustainable investment. Reducing these costs can facilitate a more efficient capital allocation toward net-zero objectives, reinforcing the urgency of policy-driven solutions.
Contributed talk:
Jonathan Tam (University of Verona, Italy)
Performance attribution of sustainable constraints for dynamic portfolios in continuous time
There is a recent debate on whether sustainable investing necessarily impacts portfolio performance negatively. We model the financial impact of portfolio constraints by attributing the performance of dynamic portfolios to contributions from individual constraints. We consider a mean-variance portfolio problem with unknown asset returns. Investors impose a dynamic constraint based on a firm characteristic that contains information about returns, such as the environmental, social, and governance (ESG) score. We characterize the optimal investment strategy through two stochastic Riccati equations. Using this framework, we demonstrate that, depending on the correlation between returns and firm characteristics, incorporating the constraint can, in certain cases, enhance portfolio performance compared to a passive benchmark that disregards the information embedded in these constraints. Our results shed light on the role of implicit information contained in constraints in determining the performance of a constrained portfolio.
Joint work with Ruixun Zhang and Yufei Zhang.
Contributed talk:
Jayen Tan (Cornell University, United States of America)
Pricing weather contracts with persistent temperature memory driven by a Fractional Ornstein-Uhlenbeck process
Despite the high persistence observed in temperature series, most prevailing statistical temperature models and weather contract valuation methodologies do not adequately account for long-range dependency, potentially leading to suboptimal temperature forecasts and mispricing of weather-related contracts.
In response, we propose the generalized fractional Ornstein-Uhlenbeck (gfOU) process, which captures the trends, seasonality, mean reversion, and long-range dependence inherent in temperature data. We further derive a simplified covariance formula for the stationary fractional Ornstein-Uhlenbeck process and evaluate the consequences of model misspecification when memory effects are erroneously omitted.
For the valuation of temperature-linked contracts, we derive closed-form expressions for both the payoff frequency and the expected payoff based on the gfOU temperature process. We demonstrate that our model yields more accurate payoff predictions and enhances insurer profitability within a Bertrand competitive game. Our empirical analysis further validates the presence of temperature persistence across the United States and reveals significant regional variations over recent decades. Additionally, we present evidence that the restricted gfOU model outperforms its benchmark gOU counterpart and the Burn approach in forecasting the payoffs of temperature-linked contracts.
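As background, the standard fractional Ornstein-Uhlenbeck building block underlying such specifications solves
\[
dX_t = \kappa\,(\theta - X_t)\,dt + \sigma\, dB^{H}_t,
\]
where B^H is a fractional Brownian motion with Hurst index H, and H > 1/2 corresponds to long-range dependence; my reading of the abstract is that the generalized (gfOU) version additionally layers deterministic trend and seasonality components on top of X, but that exact form is not reproduced here.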
Contributed talk:
Waleed Taoum (King's College London, United Kingdom)
Statistical modeling of SOFR term structure
The SOFR derivatives market is still illiquid and incomplete, so it is not amenable to classical risk-neutral term structure models, which are based on the assumption of perfect liquidity and completeness. This paper develops a statistical SOFR term structure model that is well-suited for risk management and derivatives pricing within the incomplete markets paradigm. The model incorporates relevant macroeconomic factors that drive central bank policy rates which, in turn, cause random jumps often observed in the SOFR rates. The model is easy to calibrate to historical data, current market quotes, and the user's views concerning the future development of the relevant macroeconomic factors. The model is illustrated by indifference pricing of SOFR derivatives.
This is joint work with Teemu Pennanen.
Contributed talk:
Stefan Thonhauser (Graz University of Technology, Austria)
A dynamic Reinsurance Game for classical Surplus Processes
We study a stochastic differential game in a ruin theoretic environment. Two insurers with access to reinsurance are linked through a performance functional, which one strives to maximize, while the other seeks to minimize it. The underlying surplus processes are given by classical risk models which are extended by dynamic reinsurance opportunities. We analyze the resulting game from the perspective of piecewise deterministic Markov processes and provide a numerical example.
Joint work with Lea Enzi.
Contributed talk:
Barbara Torti (Università degli Studi di Roma "Tor Vergata", Italy)
Thin-thick decomposition of a random time and martingale representations in progressive enlargement
In this talk, we address the problem of martingale representation in the progressive enlargement F^t of a reference filtration F with a random time t. This kind of problem arises in credit risk theory, when studying the completeness of markets with default.
Our approach is based on the thin-thick decomposition of t. It allows us to work without assuming either the avoidance condition or the existence of an equivalent decoupling measure for F and t. In this setting, we prove a martingale representation theorem on F^t valid across the entire time axis. Our result holds under weaker assumptions either when F is a Lévy filtration or when F is a continuous filtration.
As applications, we prove the completeness of well-known models of markets with default. Finally, we present the ongoing developments of this work.
This talk is based on a joint work with Antonella Calzolari.
Contributed talk:
Giacomo Toscano (University of Florence, Italy)
Asymptotic Normality and Finite-Sample Robustness of the Fourier Spot Volatility Estimator in the Presence of Microstructure Noise
We study the efficiency and robustness of the Fourier spot volatility estimator when high-frequency prices are contaminated by microstructure noise. First, we show that the estimator is consistent and asymptotically efficient in the presence of additive noise, establishing a Central Limit Theorem (CLT) with the optimal rate of convergence $n^{1/8}$. Additionally, we complete the asymptotic theory in the absence of noise, obtaining a CLT with the optimal rate of convergence $n^{1/4}$. Feasible CLTs with the optimal convergence rate are also obtained by proving the consistency of the Fourier estimators of the spot volatility of volatility and the quarticity in the presence of noise. Second, we introduce a feasible method for selecting the cutting frequencies of the estimator in the presence of noise, based on the optimization of the integrated asymptotic variance. Finally, we provide support to the accuracy and robustness of the method by means of a numerical study and an empirical exercise, which is conducted using tick-by-tick prices of three U.S. stocks with different liquidity.
Joint work with Maria Elvira Mancino and Tommaso Mariotti.
Contributed talk:
Theresa Traxler (Vienna University of Economics and Business, Austria)
Playing with fire? A mean field game analysis of fire sales and systemic risk under regulatory capital constraints
We study the impact of regulatory capital constraints on fire sales and financial stability in a large banking system using a mean field game model. In our model banks adjust their holdings of a risky asset via trading strategies with finite trading rate in order to maximize expected profits. Moreover, a bank is liquidated if it violates a stylized regulatory capital constraint. We assume that the drift of the asset value is affected by the average change in the position of the banks in the system. This creates strategic interaction between the trading behavior of banks and thus leads to a game. The equilibria of this game are characterized by a system of coupled PDEs. We solve this system explicitly for a test case without regulatory constraints and numerically for the regulated case. We find that capital constraints can lead to a systemic crisis where a substantial proportion of the banking system defaults simultaneously. Moreover, we discuss proposals from the literature on macroprudential regulation. In particular, we show that in our setup a systemic crisis does not arise if the banking system is sufficiently well capitalized or if improved mechanisms for the resolution of banks violating the risk capital constraints are in place.
Joint work with Rüdiger Frey.
Contributed talk:
Barbara Trivellato (Politecnico di Torino, Italy)
Pricing climate change risks: insights from CAPM with self-excited jumps
We develop a dynamic asset pricing framework with brown and green assets. Green assets are affected by rare natural disasters, linked to climate change, which take the form of rare macroeconomic events. Brown assets are also affected by transition risk, which is assumed to be related to physical risk. The novelty of the work is to assume that natural disasters are generated by self-excited jumps. Using analytical results and simulations, we show how these natural disasters impact portfolio composition, the risk-free rate, credit spreads and asset prices.
Joint work with Davide Radi and Marina Santacroce.
Contributed talk:
Ioannis Tzouanas (Bielefeld University, Germany)
Existence and approximation of optimal mean-field coarse correlated equilibrium
We examine the approximation of coarse correlated equilibria within the framework of continuous-time mean-field games. In this equilibrium concept, a regulator (correlation device) is able to recommend strategies to the players that are not advantageous to reject unilaterally. A natural question that arises is: "Can we approximate a coarse correlated equilibrium?" Upon introducing the concept of optimal coarse correlated equilibria, we provide an equivalent formalization of the problem using the linear programming approach. This relaxed framework enables us to demonstrate the existence of optimal mean-field coarse correlated equilibria under weak assumptions. Subsequently, we focus on the approximation of the optimal mean-field coarse correlated equilibria. In particular, we introduce the notion of external regret minimization for relaxed problems and establish that the optimal mean-field coarse correlated equilibria can be approximated through a regret minimization fictitious play algorithm, along with its convergence rate. Finally, we present several examples to illustrate the applicability of our results.
This is a joint work with Luciano Campi (University of Milan) and Federico Cannerozzi (Bielefeld University).
Contributed talk:
Thomas Wagenhofer (TU Berlin, Germany)
Weak error rates for local stochastic volatility models
Local stochastic volatility refers to a popular model class in applied mathematical finance that allows for "calibration-on-the-fly", typically via a particle method, derived from a formal McKean-Vlasov equation. Well-posedness of this limit is a well-known problem in the field with the general case still being largely open, despite recent progress in Markovian situations. Our approach is to start with a well-defined Euler approximation to the formal McKean-Vlasov equation, followed by a newly established "half-step"-scheme, allowing for good approximations of conditional expectations.
We show that this scheme converges with weak rate one regarding the step-size, plus error terms that account for the said approximation. Furthermore, the case of particle approximation is discussed in detail and the weak error rate, in dependence of all parameters used, is derived.
Joint work with Peter K. Friz, Benjamin Jourdain and Alexandre Zhou.
Contributed talk:
Kristof Wiedermann (TU Wien, Austria)
Small-time central limit theorems for stochastic Volterra integral equations and an application towards volatility derivatives
We study small-time central limit theorems for stochastic Volterra integral equations (SVIEs) with Hölder continuous coefficients and general locally square integrable Volterra kernels. In particular, we prove the convergence of the finite-dimensional distributions, a functional CLT, and limit theorems for smooth transformations of the process. We cover a large class of Volterra kernels that includes rough models based on Riemann-Liouville kernels with short- and long-range dependencies. To illustrate our results, we derive asymptotic pricing formulae for digital calls on the realized variance in three different regimes, providing a robust and model-independent pricing method for small maturities in rough volatility models. Finally, for the case of completely monotone kernels, we introduce a flexible framework of Hilbert space-valued Markovian lifts, for which we obtain analogous limit theorems.
Joint work with Martin Friesen and Stefan Gerhold.
Contributed talk:
Anke Wiese (Heriot-Watt University, United Kingdom)
A Chen-Fliess series representation for Lévy models
For deterministic differential equations and stochastic differential equations, a Chen-Fliess solution expansion is well-known to play a key role in the design of numerical integration schemes that preserve qualitative properties of the solution to the equation. In this talk, we will derive a Chen-Fliess series representation for the solution of Lévy-driven stochastic differential equations, that is, a series expansion of the logarithm of the flow map in terms of commutators of vector fields, and we will provide an explicit expression for the components in this series, generalising previous results for continuous deterministic and stochastic differential equations. We will illustrate the new Chen-Fliess series representation in the application to a multi-dimensional stochastic volatility model.
Contributed talk:
Johannes Wiesel (University of Copenhagen, Denmark)
The fast rate of the smooth adapted Wasserstein distance
Approximating a measure μ by its empirical distribution μ̂_n in the p-Wasserstein distance presents a significant challenge due to the curse of dimensionality. Convolving measures with Gaussian noise has proven effective in addressing this issue. In this paper, we extend this smoothing technique to the adapted p-Wasserstein distance and show that it achieves the fast rate of convergence for subgaussian measures. Our results improve upon existing dimension-free convergence rates for the smooth adapted p-Wasserstein distance when p>1.
Joint work with Martin Larsson and Jonghwa Park.
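For reference, the Gaussian smoothing device mentioned above replaces each measure by its convolution with an isotropic Gaussian (a standard construction; the talk applies it within the adapted Wasserstein distance):
\[
W_p^{(\sigma)}(\mu,\nu) := W_p\bigl(\mu * \mathcal{N}(0,\sigma^{2} I_d),\; \nu * \mathcal{N}(0,\sigma^{2} I_d)\bigr),
\]
and the quantity of interest is the rate, in the sample size n, at which the (adapted) smoothed distance between μ and the empirical measure μ̂_n decays.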
Contributed talk:
Michal Wronka (Wroclaw University of Science and Technology, Poland)
Pricing mortgage-backed securities using PDE methods
This talk explores the pricing of a selected Mortgage-Backed Security (MBS) using a Partial Differential Equation (PDE) grid approach. We begin by establishing the theoretical foundation, deriving the pricing PDE for a zero-coupon bond within the Hull-White framework, following a methodology similar to the Black-Scholes model in option pricing. Building on this, we introduce the unique structural components of MBS contracts and implement necessary modifications to the pricing PDE to account for key risk factors, including the prepayment factor (λ), current coupon, and Option-Adjusted Spread (OAS).
Further, we contextualize MBS pricing by incorporating volatility dynamics calibrated under the Hull-White model. A detailed examination of prepayment modeling follows, where we analyze consumer prepayment behavior, emphasizing the significance of the S-shaped prepayment curve, burnout effects, and turnover rates. The study culminates in the development of a comprehensive pricing framework that integrates all these factors, ultimately yielding a prepayment function that directly feeds into the PDE-based valuation model. By synthesizing theoretical derivations with practical risk considerations, this work provides a structured approach to MBS pricing, offering insights into the interplay between prepayment dynamics and PDE-based valuation techniques.
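For orientation, under the Hull-White short-rate dynamics dr_t = (θ(t) - a r_t) dt + σ dW_t referenced above, the zero-coupon bond price P(t, r) solves the standard term-structure PDE
\[
\frac{\partial P}{\partial t} + \bigl(\theta(t) - a r\bigr)\frac{\partial P}{\partial r} + \tfrac{1}{2}\,\sigma^{2}\frac{\partial^{2} P}{\partial r^{2}} - r\,P = 0, \qquad P(T,r) = 1.
\]
The MBS-specific modifications described in the abstract (prepayment intensity λ, current coupon, OAS) then enter as additional terms in the discounting and cash-flow structure, in a form not reproduced here.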
Contributed talk:
Marcus Wunsch (ZHAW, Switzerland)
The market microstructure of Constant Function Market Makers
A major innovation in Decentralized Finance (DeFi), Decentralized Exchanges (DEXs) facilitate the trading of digital assets while preserving user custody. The predominant DEX architecture, Constant Function Market Makers (CFMMs), relies on mathematical formulas, replacing the traditional central limit order book. Despite their simplicity, CFMMs exhibit a rich market microstructure, notably influenced by the presence of transaction fees. In general, liquidity providers - market makers, in DeFi's parlance - face adverse selection risk due to arbitrage activities necessary to align DEX prices with those on other venues. This phenomenon, known as Impermanent Loss, reflects how liquidity providers effectively sell volatility in the absence of transaction costs. However, when transaction costs are present, the burden of adverse selection risk borne by liquidity providers surprisingly aligns with the fees associated with constant-weighted portfolio management. Lastly, I will present a recent result that demonstrates in what sense the exchange mechanism of a Constant Product Market Maker with concentrated liquidity can be considered optimal.
This is based on joint research with Masaaki Fukasawa (Osaka University) and Basile Maire (Quantena AG).
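For readers unfamiliar with CFMMs, the sketch below implements the basic constant product swap rule with a proportional fee, which is the mechanism the talk builds on. The 0.3% fee and the reserve values are illustrative assumptions, and concentrated liquidity is not modelled.

def swap_x_for_y(x_reserve, y_reserve, dx, fee=0.003):
    """Return (dy received, new reserves) when dx of asset X is sold to the pool."""
    dx_after_fee = dx * (1 - fee)
    k = x_reserve * y_reserve
    new_x = x_reserve + dx_after_fee
    new_y = k / new_x                  # invariant preserved on the fee-adjusted input
    dy = y_reserve - new_y
    return dy, (x_reserve + dx, new_y)

dy, reserves = swap_x_for_y(1_000.0, 1_000.0, 10.0)
print(f"received {dy:.4f} Y; marginal price moved from 1.0 to {reserves[0]/reserves[1]:.4f}")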
Contributed talk:
Othmane Zarhali (Université Paris Dauphine -CNRS, France)
A multidimensional Log S-fBM model
The Log stationary fractional Brownian motion (Log S-fBM) model was introduced by Wu et al. Its defining feature is that the log-volatility process is a stationary fractional Brownian motion (S-fBM) process: a Gaussian process whose autocovariance function is parametrized by the intermittency parameter, the correlation limit, and the Hurst exponent H. In fact, when the Hurst exponent goes to zero, its multifractal random measure (volatility measure) tends to the multifractal random measure of the multifractal random walk introduced by Bacry et al. As a result, the Log S-fBM model interpolates between two formalisms: multifractal volatility and rough volatility (see, for instance, Gatheral et al.). Here, we are interested in a multidimensional version of the Log S-fBM model (m-Log S-fBM) in which each marginal is a Log S-fBM process and the dependencies between them are captured by the covariance structure of their log-volatility processes. Thus, the covariance matrix of the marginals is entirely determined by the co-intermittency matrix as well as the co-Hurst matrix. First, we define a multidimensional version of the S-fBM process (m-S-fBM). Then, we introduce the multidimensional Log S-fBM model. We follow up with the small-intermittency approximations of the original Log S-fBM model and derive their counterparts in the multidimensional setting. Finally, we propose a calibration procedure based on matching the model autocovariance curve with the one derived from observed data, together with numerical experiments on synthetic and market data.
Joint work with Emmanuel Bacry and Jean-François Muzy.
Contributed talk:
Rouyi Zhang (Humboldt University of Berlin, Germany)
The microstructure of rough volatility models driven by Poisson random measures
We consider a microstructural foundation for rough volatility models driven by Poisson random measures. In our model, volatility evolves through self-exciting arrivals of market orders as well as self-exciting arrivals of limit orders and cancellations. The impact of market orders on future order arrivals is modeled by a Hawkes kernel with power-law decay, leading to persistent effects, while the impact of limit orders remains temporary yet potentially long-lived. Under suitable scaling, the volatility process converges to a fractional Heston model driven by an additional Poisson random measure, which induces occasional spikes and clusters of spikes in volatility. Our results rely on novel existence and uniqueness results for stochastic path-dependent Volterra equations driven by Poisson random measures. Additionally, we study the simulation of the model and analyze its implied volatility surface, demonstrating how the presence of jumps and clustering effects influences market-observable quantities.
Joint work with Ulrich Horst and Wei Xu.
Contributed talk:
Brian Zi Qi Zhu (Columbia University, United States of America)
Optimal exiting for liquidity provision in constant function market makers
Providing liquidity to constant function market makers is often less profitable or favorable than simply holding assets, primarily because of impermanent loss. Using an optimal stopping approach, we show the existence of liquidity provision strategies that are profitable (excluding infrastructural fees) relative to holding, in which a liquidity provider (LP) exits the pool when the price ratio hits certain thresholds. We demonstrate that pricing functions can be designed to maximize the expected time an optimally-acting LP exits the pool and backtest our strategy on Uniswap v2 data.
Joint work with Agostino Capponi.
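The comparison between providing liquidity and simply holding that underlies this abstract can be illustrated with the classical constant-product relation between the LP position value and a buy-and-hold portfolio. The toy threshold-exit rule below is only indicative; the thresholds are arbitrary assumptions and not those derived in the paper.

import numpy as np

def lp_over_hold(r):
    """Value of a constant-product LP position relative to holding, for price ratio r = P_t / P_0."""
    return 2.0 * np.sqrt(r) / (1.0 + r)

lower, upper = 0.8, 1.25     # assumed exit thresholds on the price ratio
for r in (0.75, 0.9, 1.0, 1.1, 1.3):
    action = "exit" if (r <= lower or r >= upper) else "stay"
    print(f"r={r:.2f}  LP/hold={lp_over_hold(r):.4f}  -> {action}")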
Contributed talk:
Jing Zou (TU Dresden, ScaDS.AI, Germany)
Dynamic hierarchical graph neural networks for spatiotemporal prediction of flood-related claims
The aim of this paper is to develop a dynamic hierarchical Graph Neural Network (GNN) framework for spatiotemporal regression, see, e.g., [1], to predict flood-related insurance claims. Our model utilizes a global graph in which nodes, representing postal codes, contain local graph structures that capture policy-level information while addressing imbalances in property distributions over time. Incorporating geographical covariates, building and content characteristics, and vulnerability levels, we also integrate the excessive rainfall index and wind speed index at the postal code level to enhance predictive performance.
For interpretability, we leverage the contextual embedding-based GNN from [2] to visualize pairwise feature interactions by including an interaction network layer at the policy level. To address the high skewness of zeros in the response variable, we also incorporate a Zero-Inflated Negative Binomial (ZINB) module into our GNN framework, drawing inspiration from methods used in highly sparse single-cell RNA-sequencing analysis, see [3] and [4]. Furthermore, we implement a feature selection process using subnetworks to optimize the ZINB parameters for mean and dispersion.
We apply our GNN-ZINB framework to an empirical study using data from Athens, Greece, spanning 257 postal code areas from 2013 to 2022, demonstrating its potential for improving flood insurance design.
Joint work with George Tzougas and Ostap Okhrin.
Jing Zou and Ostap Okhrin gratefully acknowledge financial support from the Center for Scalable Data Analytics and Artificial Intelligence (ScaDS.AI), Dresden/Leipzig, Germany.
References:
[1] Ma, M., Xie, P., Teng, F., Wang, B., Ji, S., Zhang, J., Li, T. (2023). HiSTGNN: Hierarchical spatio-temporal graph neural network for weather forecasting. Information Sciences 648, 119580.
[2] Villaizan-Vallelado, M., Salvatori, M., Carro, B., Sanchez-Esguevillas, A. J. (2024). Graph neural network contextual embedding for deep learning on tabular data. Neural Networks 173, 106180.
[3] Risso, D., Perraudeau, F., Gribkova, S., Dudoit, S., Vert, J. P. (2018). A general and flexible method for signal extraction from single-cell RNA-seq data. Nature Communications 9(1), 284.
[4] Gan, Y., Huang, X., Zou, G., Zhou, S., Guan, J. (2022). Deep structural clustering for single-cell RNA-seq data jointly through autoencoder and graph neural network. Briefings in Bioinformatics 23(2), bbac018.
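As a point of reference for the ZINB output module described in this abstract, here is a minimal numpy/scipy sketch of a zero-inflated negative binomial negative log-likelihood. The parametrisation (mean mu, dispersion alpha, zero-inflation probability pi) and the toy data are assumptions; this is not the authors' implementation.

import numpy as np
from scipy.stats import nbinom

def zinb_nll(y, mu, alpha, pi, eps=1e-10):
    r = 1.0 / alpha                      # NB "number of successes" parameter
    p = r / (r + mu)                     # NB success probability, so that the mean is mu
    nb_logpmf = nbinom.logpmf(y, r, p)
    log_zero = np.log(pi + (1 - pi) * np.exp(nbinom.logpmf(0, r, p)) + eps)
    log_pos = np.log(1 - pi + eps) + nb_logpmf
    ll = np.where(y == 0, log_zero, log_pos)
    return -np.sum(ll)

y = np.array([0, 0, 0, 1, 3, 0, 7])      # toy claim counts with many zeros
print(zinb_nll(y, mu=1.2, alpha=0.8, pi=0.4))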
Around 15 further contributed talk abstracts t.b.a.
Poster Presentations
More than 40 poster presentations have been accepted - abstracts are online if the registration fee has already been paid:
Poster presentation:
Rahama Sani Abdullahi (Humboldt Universität zu Berlin, Germany)
A non-linear Skorokhod framework for multi-venue limit-order books
We investigate markets in which the same asset is traded on several venues and each incoming order may be executed locally or routed to another venue, but only as long as a cross-venue capacity limit is not exceeded. Examples include the SIDC electricity market, where the finite capacity of cross-border transmission lines limits how many trades can be routed between national markets. To describe the resulting queue dynamics, we build a new Skorokhod reflection framework in which the boundaries are moving and depend on the constraining process itself. We show that for any cadlag driving process, there is a unique solution to the Skorokhod problem, first in the one-dimensional case. Then, we extend the results to the multidimensional setting. The key tool is an extended Skorokhod problem with non-linear, state-dependent boundaries and oblique reflection directions. With existence, uniqueness, and stability in hand, we pass to a diffusion limit: the discrete queues converge to a class of multidimensional SDEs with oblique reflection.
Joint work with Dörte Kreher.
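As background for readers, the classical one-dimensional Skorokhod reflection map at a constant lower boundary is sketched below in discrete time: Y = X + L with L(t) = sup over s <= t of max(-X(s), 0). The poster's setting, with moving, state-dependent boundaries and oblique reflection, is substantially more general; the path and parameters here are illustrative.

import numpy as np

def skorokhod_reflect(x):
    """Reflect the path x at 0: return the reflected path and the regulator (local time)."""
    local_time = np.maximum.accumulate(np.maximum(-x, 0.0))
    return x + local_time, local_time

rng = np.random.default_rng(5)
x = np.cumsum(0.1 * rng.standard_normal(1_000)) - 0.5   # driving path dipping below 0
y, l = skorokhod_reflect(x)
print("minimum of the reflected path:", y.min())        # stays >= 0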
Poster presentation:
Esmaeil Babaei (Manchester Metropolitan University, United Kingdom)
On asset pricing in a binomial model with fixed and proportional transaction costs, portfolio constraints and dividends
This paper extends the classical binomial model proposed by Cox, Ross, and Rubinstein for derivative security pricing to include both fixed and proportional transaction costs, portfolio constraints including margin requirements, and dividend-paying assets. We focus on the problem of option hedging within this extended framework. First, we establish the existence of a hedging strategy under these conditions. Next, we derive the optimal hedging strategy and its associated initial cost by decomposing the problem into a series of sequential hedging subproblems. A numerical example within a 3-period binomial model is provided to demonstrate the applicability of the approach.
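For context, the frictionless Cox-Ross-Rubinstein backward induction that this paper extends can be sketched as follows. Transaction costs, portfolio constraints and dividends are deliberately omitted here, and all parameter values are illustrative assumptions.

import numpy as np

def crr_european_call(S0, K, r, sigma, T, n):
    dt = T / n
    u = np.exp(sigma * np.sqrt(dt))
    d = 1.0 / u
    q = (np.exp(r * dt) - d) / (u - d)        # risk-neutral up probability
    disc = np.exp(-r * dt)
    # terminal stock prices and payoffs
    S_T = S0 * u ** np.arange(n, -1, -1) * d ** np.arange(0, n + 1)
    V = np.maximum(S_T - K, 0.0)
    # backward induction through the tree
    for _ in range(n):
        V = disc * (q * V[:-1] + (1 - q) * V[1:])
    return V[0]

print(crr_european_call(S0=100, K=100, r=0.05, sigma=0.2, T=1.0, n=500))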
Poster presentation:
Bastien Baude (Université Paris-Saclay, CentraleSupélec, France)
Optimal risk-aware interest rates for decentralized lending protocols
Decentralized lending protocols within the decentralized finance ecosystem enable the lending and borrowing of crypto-assets without relying on traditional intermediaries. Interest rates in these protocols are set algorithmically and fluctuate according to the supply and demand for liquidity. In this study, we propose an agent-based model tailored to a decentralized lending protocol and determine the optimal interest rate model. When the responses of the agents are linear with respect to the interest rate, the optimal solution is derived from a system of Riccati-type ODEs. For nonlinear behaviors, we propose a Monte-Carlo estimator, coupled with deep learning techniques, to approximate the optimal solution. Finally, after calibrating the model using block-by-block data, we conduct a risk-adjusted profit and loss analysis of the liquidity pool under industry-standard interest rate models and benchmark them against the optimal interest rate model.
Joint work with Damien Challet and Ioane Muni Toke.
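As background, the piecewise-linear ("kinked") utilisation-based borrow rate that serves as the industry-standard benchmark in such studies can be sketched as follows. All parameter values are illustrative assumptions rather than those of any particular protocol or of the optimal model derived in the poster.

def borrow_rate(utilisation, r0=0.0, slope1=0.04, slope2=0.75, u_optimal=0.8):
    """Borrow rate as a function of pool utilisation, with a kink at u_optimal."""
    if utilisation <= u_optimal:
        return r0 + slope1 * utilisation / u_optimal
    return r0 + slope1 + slope2 * (utilisation - u_optimal) / (1.0 - u_optimal)

for u in (0.2, 0.5, 0.8, 0.95):
    print(f"utilisation {u:.0%}: borrow rate {borrow_rate(u):.2%}")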
Poster presentation:
Ruizhe Bu (Beijing Normal - Hong Kong Baptist University, People's Republic of China)
Numerical methods for option pricing under dividend barrier strategies
This paper presents a novel approach to pricing European options when the cumulative log-return of the underlying stock follows a double exponential jump diffusion process under a dividend barrier strategy. The dividend barrier strategy stipulates that dividends are not distributed unless the cumulative log-return exceeds a fixed barrier, ensuring that excess returns are adjusted to maintain the log-return below this threshold. By incorporating a double exponential jump diffusion model, this study effectively captures the leptokurtic nature of asset returns and addresses empirical anomalies. Under a risk-neutral probability measure, the joint probability density function of the jump diffusion process and its peak is employed to derive an analytical solution for pricing European call and put options. The proposed approach improves the theoretical framework for option pricing in scenarios that involve dividend barriers, offering financial professionals a new perspective.
Joint work with Zhengjun Jiang.
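A Monte Carlo simulation of the terminal log-return in the double exponential (Kou) jump diffusion is often useful as a numerical cross-check of analytical option prices. The hedged sketch below uses illustrative parameters and does not model the dividend-barrier mechanism of the paper.

import numpy as np

rng = np.random.default_rng(0)
S0, K, r, sigma, T = 100.0, 100.0, 0.03, 0.2, 1.0
lam, p_up, eta1, eta2 = 1.0, 0.4, 10.0, 5.0      # jump intensity, up-probability, rates
n_paths = 50_000

# martingale correction so that E[S_T] = S0 * exp(r*T); requires eta1 > 1
kappa = p_up * eta1 / (eta1 - 1.0) + (1.0 - p_up) * eta2 / (eta2 + 1.0) - 1.0

X = (r - 0.5 * sigma**2 - lam * kappa) * T + sigma * np.sqrt(T) * rng.standard_normal(n_paths)
n_jumps = rng.poisson(lam * T, n_paths)
for i in np.nonzero(n_jumps)[0]:
    signs = np.where(rng.random(n_jumps[i]) < p_up, 1.0, -1.0)
    rates = np.where(signs > 0, eta1, eta2)
    X[i] += np.sum(signs * rng.exponential(1.0 / rates))

print("MC European call:", np.exp(-r * T) * np.maximum(S0 * np.exp(X) - K, 0.0).mean())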
Poster presentation:
Tahir Choulli (University of Alberta, Canada)
Log-optimal portfolio under regime switching mechanisms
We consider a pair of families of processes, (X,Y), that models our regime-switching jump-diffusion model (RSJD for short hereafter). Precisely, X is a finite family of D "generalized" jump-diffusion processes representing D non-switching market models, and Y is a continuous-time Markov chain (CTMC for short hereafter) taking values in the finite state space {1,2,...,D}, which models the regime switching. In the resulting RSJD model, the regime-switching effects are numerous, are not restricted by any assumptions, and enter the model in many places. Thus, our RSJD model covers many switching models from the literature, including Norberg (2003), Naik (1993), and Siu and Elliott (2022), to cite a few.
For the resulting RSJD model, we study the numeraire and log-optimal portfolios in several ways and describe their computation in terms of the parameters of our RSJD model. In particular, we single out the types of risks, induced or triggered by the stochasticity of the regime switching Y, that genuinely affect the numeraire portfolio, and address the following questions and their intermediate challenges:
1) What are the conditions on Y (preferably in terms of information-theoretic concepts such as entropy) and/or on the non-switching model X that guarantee the existence of the log-optimal portfolio for (X,Y)?
2) What are the factors that fully determine the increment in maximum expected logarithmic utility from terminal wealth between the RSJD model (X,Y) and the non-switching model at state one, considered as the initial state of the regime-switching system? How can these factors be quantified, and what are their economic interpretations?
Joint work with Xunbai Yin.
Poster presentation:
Abhinav Das (Ulm University, Germany)
Adaptive probabilistic modeling with regime-switching neural processes for electricity price forecasting
Accurate forecasting of electricity prices presents a significant challenge due to the inherent volatility and complex dependencies on multiple exogenous factors within energy markets. Traditional forecasting models often struggle to capture sudden shifts in market dynamics and fail to model non-stationary behaviors, limiting their predictive accuracy. To address these limitations, we propose a regime-switching Neural Process (RS-NP) framework, which generalizes Gaussian Processes by learning distributions over functions through deep latent variable models. Our approach integrates a Hierarchical Dirichlet Process Hidden Markov Model (HDP-HMM) to identify latent market regimes, segmenting the price time series into structurally distinct patterns without prior knowledge of the number of regimes. For each detected regime, we train a Neural Process to approximate the conditional distribution of electricity prices, leveraging stochastic latent representations for function inference. Unlike traditional models, Neural Processes enable adaptive, data-driven prior learning, enhancing predictive flexibility while maintaining uncertainty quantification. A probabilistic aggregation mechanism then combines the regime-specific Neural Processes to generate uncertainty-aware forecasts. We validate our approach on real-world electricity price data, demonstrating superior predictive performance over other state-of-the-art forecasting models. Our results highlight the effectiveness of Neural Processes in capturing regime-dependent price dynamics, offering a computationally efficient and scalable alternative to traditional kernel-based methods for energy market modeling.
Joint work with Stephan Schlüter.
Poster presentation:
Pavel V. Gapeev (LSE, United Kingdom)
Perpetual American standard and lookback options in models with progressively enlarged filtrations
We derive closed-form solutions to optimal stopping problems related to the pricing of perpetual American standard and lookback put and call options in extensions of the Black-Merton-Scholes model under progressively enlarged filtrations. It is assumed that the information available from the market is modelled by Brownian filtrations progressively enlarged with the random times at which the underlying process attains its global maximum or minimum, that is, the last hitting times at which the underlying risky asset price reaches its running maximum or minimum over the infinite time interval; these times are assumed to be progressively observed by the holders of the contracts. We show that the optimal exercise times are the first times at which the asset price process reaches certain lower or upper stochastic boundaries depending on the current values of its running maximum or minimum and on whether the random times of the global maximum or minimum of the risky asset price process have occurred. The proof is based on the reduction of the original, necessarily three-dimensional, optimal stopping problems to the associated free-boundary problems and on their solution by means of the smooth-fit and either normal-reflection or normal-entrance conditions for the value functions at the optimal exercise boundaries and at the edges of the state spaces of the processes, respectively.
Joint work with Libo Li.
Poster presentation:
Rohan Hobbs (King's College London, United Kingdom)
A neural network approach to Collective Defined Contribution (CDC) pension schemes in the UK
This poster discusses the use of neural networks to learn optimal pension investment and consumption strategies under inhomogeneous utility, in the context of ongoing research into collective defined contribution (CDC) funds in the UK. We train a neural network to learn optimal strategies for a range of preference parameters, not only producing strategies that outperform the proposed CDC funds, but also allowing users to effectively pick the pension outcomes they prefer.
Poster presentation:
Wilfried Kenmoe Nzali (Weierstrass Institute for Applied Analysis and Stochastics (WIAS-Berlin), Germany)
Volatile electricity market and battery storage
The volatility of today’s electricity markets presents significant challenges, making battery storage systems increasingly valuable for reducing consumption costs. These systems enable energy to be stored during periods of low prices and utilized when prices rise, thereby offering a buffer against market fluctuations. In this work, we formulate an optimal control problem that integrates detailed battery models with stochastic electricity price models. Our objective is to identify the most effective charging and discharging strategies for battery storage systems, aiming to minimize electricity consumption costs while maximizing operational efficiency. This approach not only enhances cost-effectiveness but also contributes to greater energy sustainability.
Joint work with Dörte Kreher, Christian Bayer and Manuel Landstorfer.
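A deterministic toy version of the battery scheduling problem described above can be written as a linear program, which is a common starting point before adding stochastic prices and detailed battery dynamics. The sketch below is a hedged illustration only; prices, the power and capacity limits, and the one-sided efficiency are assumptions and do not reflect the poster's model.

import numpy as np
from scipy.optimize import linprog

prices = np.array([30, 25, 20, 40, 60, 55, 35, 30], dtype=float)   # EUR/MWh per hour
T = len(prices)
p_max, capacity, eff = 1.0, 2.0, 0.9       # MW power limit, MWh capacity, charging efficiency

# decision variables: x = [charge_0..charge_{T-1}, discharge_0..discharge_{T-1}]
c = np.concatenate([prices, -prices])       # pay when charging, earn when discharging
lower_tri = np.tril(np.ones((T, T)))
A_ub = np.block([[lower_tri * eff, -lower_tri],      # state of charge <= capacity
                 [-lower_tri * eff, lower_tri]])     # state of charge >= 0
b_ub = np.concatenate([np.full(T, capacity), np.zeros(T)])
bounds = [(0, p_max)] * (2 * T)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
charge, discharge = res.x[:T], res.x[T:]
print("optimal cost (negative means a profit):", res.fun)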
Poster presentation:
Lamia Lamrani (Université Paris-Saclay, CentraleSupélec, France)
Holdout method error and optimal split for large non-Gaussian covariance matrix estimation using Weingarten Calculus
Covariance matrix estimation is an important topic for financial applications such as risk management or portfolio selection using Markowitz optimization. Cross-validation, one of the most widely used methods for model selection and evaluation, can be used to improve large covariance matrix estimation. However, although its efficiency is recognized for financial applications, little is known about the theoretical behavior of its error.
In this talk, we derive the expected Frobenius error of the holdout method, a particular cross-validation procedure that involves a single train and test split, for a generic rotationally invariant multiplicative noise model, therefore extending a previously obtained result to non-Gaussian data distributions. Our approach involves using Weingarten calculus and the Ledoit-Péché formula to derive the oracle eigenvalues in the high dimension limit. When the population covariance matrix follows an inverse Wishart distribution, we find a closed form for the expected holdout error. Furthermore, we find that the optimal train-test split ratio is proportional to the square root of the order of the matrix to estimate. When we derive the estimation error for different distributions, such as stretched exponential, Student-t, and Pareto, we observe that a higher fourth-order noise moment sharpens the holdout error curve near the optimal split and lowers the ideal train-test ratio.
Joint work with Benoît Collins.
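A minimal sketch of the holdout (single train/test split) eigenvalue-cleaning scheme studied in this abstract is given below: eigenvectors come from the train covariance and eigenvalues are re-estimated on the test set. The synthetic data and the 50/50 split are illustrative assumptions; the paper derives the theoretically optimal split.

import numpy as np

rng = np.random.default_rng(1)
n_obs, dim = 2_000, 200
X = rng.standard_normal((n_obs, dim))             # stand-in for returns data (true covariance = identity)

n_train = n_obs // 2                               # illustrative split; not the optimal ratio
train, test = X[:n_train], X[n_train:]
C_train = np.cov(train, rowvar=False)
C_test = np.cov(test, rowvar=False)

eigval, eigvec = np.linalg.eigh(C_train)
# holdout eigenvalues: quadratic forms of the train eigenvectors in the test covariance
cleaned_eigval = np.einsum("ij,jk,ki->i", eigvec.T, C_test, eigvec)
C_holdout = (eigvec * cleaned_eigval) @ eigvec.T   # cleaned covariance estimate
print("Frobenius error vs identity:", np.linalg.norm(C_holdout - np.eye(dim)))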
Poster presentation:
Yanyan Lin (Shanghai Jiao Tong University, People's Republic of China)
Equity risk premium prediction: return decomposition and noise shrinkage
We propose a novel decomposition of stock returns into a fundamental component (FC) and an unexpected capital gain component (UC). The FC, driven by firms' valuation ratios, reflects long-term growth and exhibits high persistence, while the UC, influenced by market trading, reflects short-term fluctuations and is more random. To predict the UC, we use a predictive regression model with an L multiplier to shrink noise and mitigate estimation errors. Among the 41 monthly predictors examined by Goyal, Welch and Zafirov (2024), we find that 33 of them significantly outperform the historical average forecast, compared to only 5 with their method. Aggregating information across the predictors, we re-affirm the predictability of the equity risk premium. Furthermore, our analysis shows that the stock market remains predictable post-2008, even when accounting for publication bias.
Joint work with Chongfeng Wu, Guofu Zhou and Shunwei Zhu.
Poster presentation:
Yushan Liu (Ecole Polytechnique, France)
Meta-modelling paths of simple climate models using Neural Networks and Dirichlet polynomials: an application to DICE
Our study focuses on climate models extensively employed in climate science and economic-climate research, which project temperature outcomes from carbon emission trajectories.
Addressing the need for rapid evaluation in Integrated Assessment Models - critical tools for carbon emission mitigation policy analysis - we design a neural network (NN) meta-model as an efficient surrogate that maps, in an infinite-horizon setting, emission trajectories into temperature trajectories (usually modeled as coupled systems of differential equations). Our approach combines a projection on Generalized Dirichlet polynomials, whose coefficients are inputs of the NN, with a suitable time change for handling the infinite horizon: we prove that the quantity of interest is, under some assumptions, a smooth function of the inputs and is therefore amenable to accurate NN approximation.
After training on augmented Shared Socio-economic Pathways scenarios, the NN achieves high-fidelity approximations of the original climate model. Additionally, we establish theoretical accuracy guarantees for both the encoding and the neural network approximation. Our numerical experiments demonstrate the framework's computational efficiency and accuracy. For the full article, see https://hal.science/hal-04990321v1.
Joint work with Emmanuel Gobet and Gauthier Vermandel.
Poster presentation:
Antonios Marsellos (Hofstra University, United States of America)
A new R package "sima" for enhanced signal detection in high-noise time series
We present a new R package, "sima" (signal detection using moving average), designed to enhance the detection of stationary sinusoidal signals in noisy time series. Building upon advanced computational methods previously outlined (spectral differentiation, Kolmogorov-Zurbenko filtering, zero padding, and dataset repetition), the "sima" package consolidates these techniques into a streamlined workflow for improved frequency detection.
By transforming periodogram data through a fifth-power elevation minus the fourth-power, "sima" amplifies dominant peaks while attenuating noise-induced distortions. The incorporation of a minimal-window Kolmogorov-Zurbenko filter addresses short-term fluctuations without compromising signal integrity, and systematic zero padding refines spectral resolution by effectively increasing the number of frequency bins. Additionally, the repetition of datasets improves the visibility of underlying frequencies, mitigating the masking effect of noise.
Through extensive comparative analyses against ten existing CRAN packages, our integrated approach consistently outperforms standard methods in environments with noise levels up to twentyfold the original signal magnitude (SNR as low as 1/20). Notably, "sima" demonstrates robust behavior at both low and high noise intensities, revealing a linear and systematic offset in higher noise scenarios—where conventional spectral techniques often fail to detect any frequencies.
This work underscores the synergy between advanced filtering, spectral transformations, and signal enhancement strategies. By offering an accessible interface and reproducible workflow, the "sima" package provides a valuable resource for researchers and practitioners who need reliable frequency identification in complex, noise-prone time series. Future directions may include integrating wavelet-based methods to accommodate non-stationary signals and further broaden the applicability of this approach to real-world, dynamically changing datasets.
Joint work with Katerina Tsakiri.
Poster presentation:
Giovanni Masala (University of Cagliari, Italy)
Forecasting wind-photovoltaic energy production and income with traditional and ML techniques
Hybrid production plants harness diverse climatic sources for electricity generation, playing a crucial role in the transition to renewable energies. This study aims to forecast the profitability of a combined wind-photovoltaic energy system. Here, we develop a model that integrates predicted spot prices and electricity output forecasts, incorporating relevant climatic variables to enhance accuracy. The jointly modeled climatic variables and the spot price constitute one of the innovative aspects of this work. Regarding practical application, we considered a hypothetical wind-photovoltaic plant located in Italy and used the relevant climate series to determine the quantity of energy produced. We forecast the quantity of energy as well as income through machine learning techniques and more traditional statistical and econometric models. We evaluate the results by splitting the dataset into estimation windows and test windows, and using a backtesting technique. In particular, we found evidence that ML regression techniques outperform results obtained with traditional econometric models. Regarding the models used to achieve this goal, the objective is not to propose original models but to verify the effectiveness of the most recent machine learning models for this important application, and to compare them with more classic linear regression techniques.
Joint work with Amelie Schischke.
Poster presentation:
Youssef Ouazzani Chahdi (CentraleSupélec, France)
A theory of passive market impact
While the market impact of aggressive orders has been extensively studied, the impact of passive orders—those executed through limit orders—remains less understood. The goal of this paper is to investigate passive market impact by developing a microstructure model connecting liquidity dynamics and price moves. A key innovation of our approach is to replace the traditional assumption of constant information content for each trade by a function that depends on the available volume in the limit order book. Within this framework, we explore scaling limits and analyze the market impact of passive metaorders. Additionally, we derive useful approximations for the shape of market impact curves, leading to closed-form formulas that can be easily applied in practice.
Joint work with Mathieu Rosenbaum and Grégoire Szymanski.
Poster presentation:
Luna Lisa Rigby (WU Vienna University of Economics and Business, Austria)
Exercise policies for American options under model risk
We examine the impact of model misspecification on the optimal exercise strategy for American options. We work in a continuous-time economy, where the market maker prices options using the Heston model, which is assumed to be the true model. A smaller bank calibrates a misspecified model to European option prices stemming from the true model and then determines when to optimally exercise an American option. We first consider the case where the misspecified model is a Black-Scholes model and then move to the more general Dupire framework. Due to the absence of closed-form solutions for American options, we employ numerical simulations, where we make use of the Longstaff-Schwartz algorithm combined with a randomization procedure to estimate the optimal exercise boundaries. We compare the payoff distribution under the true optimal exercise rule to that of the misspecified rule.
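A compact sketch of the Longstaff-Schwartz least-squares Monte Carlo step that the abstract relies on is given below, under a plain Black-Scholes simulator for an American put. The parameters are illustrative; the talk's Heston and Dupire settings and the randomisation procedure are not reproduced here.

import numpy as np

rng = np.random.default_rng(3)
S0, K, r, sigma, T = 100.0, 100.0, 0.03, 0.2, 1.0
n_paths, n_steps = 50_000, 50
dt = T / n_steps
disc = np.exp(-r * dt)

# simulate GBM paths on the exercise dates dt, 2*dt, ..., T
Z = rng.standard_normal((n_paths, n_steps))
S = S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * Z, axis=1))

payoff = np.maximum(K - S[:, -1], 0.0)             # cashflow if never exercised early
for t in range(n_steps - 2, -1, -1):
    payoff *= disc                                  # discount continuation cashflows one step
    itm = K - S[:, t] > 0
    if itm.sum() == 0:
        continue
    x = S[itm, t]
    coeffs = np.polyfit(x, payoff[itm], deg=2)      # regress on (1, S, S^2)
    continuation = np.polyval(coeffs, x)
    exercise_now = (K - x) > continuation
    idx = np.where(itm)[0][exercise_now]
    payoff[idx] = K - S[idx, t]

price = np.mean(payoff * disc)                      # discount the first date back to time 0
print(f"LSM American put estimate: {price:.3f}")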
Poster presentation:
Eric Adrian Schauer (Vienna University of Economics and Business, Austria)
Extending deep hedging to illiquid markets with stochastic volatility
Pricing and hedging derivatives have long been central to financial mathematics, with the Black-Scholes model providing the first closed-form solution for European options. Despite its historical significance, the model's assumptions - frictionless markets and constant volatility - fall short when faced with real-world phenomena such as transaction costs, market impact, and the volatility smile observed in option prices.
Advancements in financial modeling have led to the development of more sophisticated frameworks. The Heston model, for example, incorporates stochastic volatility, offering a closer alignment with empirical observations. In parallel, research has extended traditional models by integrating market frictions. More recently, the deep hedging framework introduced by Bühler and colleagues has provided a model-free approach to derivative hedging. This innovative method leverages neural networks to estimate hedging strategies while naturally incorporating transaction costs, stochastic volatility, and other market imperfections, thereby avoiding some of the restrictive assumptions inherent in classical models.
Despite its potential, deep hedging presents challenges such as high computational demands, issues with interpretability, and the need for effective simulation of asset price paths. To address these challenges, recent literature has also focused on signature-based approaches for generating asset paths driven by underlying Brownian motion.
The objective of this paper is to evaluate the performance of deep hedging strategies in incomplete markets with transaction costs by benchmarking against the extended Black-Scholes economy and the Heston model. Through numerical examples, including hedging scenarios for vanilla put and down-and-out put options, we assess how well deep hedging can be deployed in environments with liquidity constraints and stochastic volatility. The paper contributes to the growing body of research on applying machine learning techniques in quantitative finance, highlighting both the potential benefits and inherent limitations of these modern approaches.
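A stripped-down sketch in the spirit of the deep hedging framework of Buehler et al. is given below: a small network maps (time, moneyness) to a hedge ratio, and training minimises a quadratic penalty on the terminal hedging error of a short call, including proportional transaction costs. The GBM simulator, the zero interest rate, and all parameter values are illustrative assumptions, not the paper's exact setting.

import torch
import torch.nn as nn

torch.manual_seed(0)
S0, K, sigma, T, n_steps, cost, n_paths = 100.0, 100.0, 0.2, 30 / 365, 30, 1e-3, 2048
dt = T / n_steps

net = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for epoch in range(100):
    # simulate GBM paths (risk-free rate assumed zero)
    z = torch.randn(n_paths, n_steps)
    log_ret = -0.5 * sigma**2 * dt + sigma * dt**0.5 * z
    S = torch.cat([torch.full((n_paths, 1), S0), S0 * torch.exp(torch.cumsum(log_ret, dim=1))], dim=1)
    pnl = torch.zeros(n_paths)
    delta_prev = torch.zeros(n_paths)
    for t in range(n_steps):
        state = torch.stack([torch.full((n_paths,), t * dt), S[:, t] / S0], dim=1)
        delta = net(state).squeeze(-1)
        # trading gain minus proportional transaction cost on the rebalancing trade
        pnl = pnl + delta * (S[:, t + 1] - S[:, t]) - cost * S[:, t] * (delta - delta_prev).abs()
        delta_prev = delta
    payoff = torch.clamp(S[:, -1] - K, min=0.0)   # liability from the short call
    loss = torch.mean((pnl - payoff) ** 2)        # quadratic hedging criterion
    opt.zero_grad()
    loss.backward()
    opt.step()

print("final training loss:", float(loss))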
Poster presentation:
Wayne Tarrant (Rose-Hulman Institute of Technology, United States of America)
Applying the tools of econophysics to systemic risk in banking
The term econophysics is purported to have first been used thirty years ago at a conference in Kolkata. Since it is such a new subject, researchers continue to find new applications of econophysics. We have applied its tools to the situation of bank failures, which often come in waves. Our work has shown the importance of the initial conditions in the network setup, yielding different results when beginning parameters are varied. This theoretical work highlights the importance of having good data when running calculations. Perhaps it will help other researchers make more compelling arguments when seeking to access proprietary data from financial firms.
Joint work with Curt Lemke.
Poster presentation:
Katerina Tsakiri (Rider University, United States of America)
Refined observations on enhanced sinusoidal signal detection in extremely noisy time series using R
This study revisits the challenge of identifying stationary sinusoidal signals submerged in high-noise time series—extending previous work by emphasizing the broader implications, potentialities, and interesting nuances uncovered during our research. Operating within the R programming environment, our methodology tackles noise magnitudes up to 20 times higher than the target signal (SNR as low as 1/20).
In contrast to standard approaches, we employ an integrated set of techniques—spectral differentiation, minimal-window Kolmogorov-Zurbenko filtering, dataset repetition, and zero padding—that consistently outperforms ten well-known CRAN packages under varying levels of noise intensity. Notably, the synergy among these methods proves pivotal: omitting any single component diminishes the effectiveness of the ensemble. Among our findings is the linear offset observed at high-noise conditions, which, although deviating from the exact frequency, remains consistent and thus offers a predictable pattern for further tuning.
We also discuss potential applications beyond strictly stationary signals, highlighting how future adaptations may integrate wavelet-based or adaptive filtering to address more complex, non-stationary time series. Further exploration of optimal parameterization - such as selecting fewer tapers in multitaper methods - reveals intriguing trade-offs between frequency resolution and leakage prevention. Our results underscore the importance of holistic strategies that unite multiple computational steps, ultimately paving the way for more reliable signal detection in data-intensive fields where noise significantly hinders traditional spectral analysis.
Joint work with Antonios Marsellos.
Poster presentation:
Bilgi Yilmaz (Kahramanmaras Sutcu Imam University, Turkiye)
Resilient housing portfolios for large investors: a worst-case scenario approach during market stress
This study develops a mathematical framework for building robust housing portfolios for large investors, specifically addressing the risk of major market crashes. Unlike traditional models that assume smooth price movements, this research incorporates the possibility of sudden, significant price drops, using a worst-case portfolio optimization strategy. Inspired by Korn and Wilmott (2002), this approach allows for the explicit consideration of extreme market downturns. By focusing on minimizing potential losses under these stressful conditions, the study offers a novel methodology for constructing resilient housing portfolios. Numerical examples demonstrate how investors can dynamically adjust their portfolios to mitigate downside risk during market crises, highlighting the importance of proactive risk management in volatile housing markets. The findings provide valuable guidance for large investors seeking to protect their capital and optimize their housing investments in the face of significant market uncertainty.
Poster presentation:
Brian Zi Qi Zhu (Columbia University, United States of America)
The paradox of just-in-time liquidity in decentralized exchanges: more providers can lead to less liquidity
Passive liquidity providers (PLPs) on automated market makers face an inevitable rent extraction from arbitrageurs. However, if earned trading fees outweigh their positional loss, PLPs may still find it profitable to supply liquidity. Can just-in-time (JIT) liquidity enhance this profitability? Our analysis shows that, if order flows do not sufficiently scale up with pool depth, the second-mover advantage of JIT LPs may result in them cream-skimming uninformed order flow and displacing PLPs. To prevent a liquidity freeze in passive liquidity, we propose a two-tiered fee structure where a portion of JIT LP's fee revenue is transferred to PLPs.
Joint work with Agostino Capponi and Ruizhe Jia.
Poster presentation:
Qinwen (Wendy) Zhu (Shanghai Lixin University of Accounting and Finance, People's Republic of China)
Volatility forecasting with regularity modifications
Promising empirical results based on high-frequency data show that the log-volatility behaves essentially as a fractional Brownian motion (fBm) with a Hurst exponent smaller than 0.5. Motivated by these findings, we propose the autoregressive rough volatility (ARRV) model, which combines the fractional Gaussian noise (fGn) process with time series models to forecast volatility. We apply this model to the VIX index by adopting the fBm approximation technique, and our results indicate that the ARRV model can significantly improve VIX out-of-sample forecast accuracy, particularly during turbulent times.
Joint work with Xundi Diao and Chongfeng Wu.
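The rough driver behind the ARRV model can be simulated exactly on a short grid by Cholesky factorisation of the fractional Gaussian noise autocovariance, as sketched below. The Hurst exponent and sample size are illustrative assumptions, and the paper's forecasting and fBm approximation steps are not reproduced here.

import numpy as np

def fgn_covariance(n, H):
    """Autocovariance matrix of fractional Gaussian noise with Hurst exponent H on n steps."""
    k = np.arange(n, dtype=float)
    gamma = 0.5 * (np.abs(k - 1)**(2*H) - 2 * np.abs(k)**(2*H) + np.abs(k + 1)**(2*H))
    idx = np.arange(n)
    return gamma[np.abs(idx[:, None] - idx[None, :])]

rng = np.random.default_rng(4)
n, H = 500, 0.1
L = np.linalg.cholesky(fgn_covariance(n, H) + 1e-12 * np.eye(n))
fgn = L @ rng.standard_normal(n)
# anti-persistence (negative lag-1 autocorrelation) is the signature of H < 1/2
print("lag-1 sample autocorrelation:", np.corrcoef(fgn[:-1], fgn[1:])[0, 1])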
Around 15 further poster abstracts t.b.a.