Schedule for Spring 2021
Seminars are on Thursdays
Time: 4:10pm – 5:25pm
Attention: All talks are available online, via Zoom. Select talks take place in hybrid mode. In-person participation is only available to Columbia affiliates with building access.
Meeting ID: 934 6125 7216
Organizers: Ioannis Karatzas, Marcel Nutz, Philip Protter, Xiaofei Shi, Johannes Wiesel
Xunyu Zhou (Columbia)
“Curse of optimality, and how do we break it”
We strive for optimality, but often find ourselves trapped in bad “optimal” solutions that are local optimizers, are too rigid to leave any room for error, or are simply based on wrong models or erroneously estimated parameters. A way to break this “curse of optimality” is to engage in exploration through randomization. Exploration broadens the search space, provides flexibility, and facilitates learning via trial and error. We review some of the latest developments in this exploratory approach in the stochastic control setting with continuous time and spaces.
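In the simplest discrete-action, one-period setting, exploration through randomization amounts to maximizing expected reward plus an entropy bonus, and the optimal randomized policy is a Gibbs/softmax distribution over actions. A minimal sketch (the action values and temperatures are made up for illustration):

```python
import numpy as np

def exploratory_policy(values, temperature):
    """Gibbs/softmax policy: maximizing E[value] + temperature * entropy
    over randomized policies yields pi(a) proportional to exp(value(a)/temperature)."""
    logits = np.asarray(values, dtype=float) / temperature
    logits -= logits.max()            # subtract max for numerical stability
    p = np.exp(logits)
    return p / p.sum()

values = np.array([1.0, 2.0, 0.5])    # hypothetical action values

# High temperature: near-uniform randomization (broad exploration).
print(exploratory_policy(values, temperature=10.0))

# Low temperature: mass concentrates on the greedy action (exploitation).
print(exploratory_policy(values, temperature=0.05))
```

As the temperature is lowered, the policy interpolates between uniform exploration and the deterministic argmax, which is the trade-off the exploratory formulation makes explicit.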
Johannes Wiesel (Columbia)
“Data driven robustness and sensitivity analysis”
Abstract: In this talk I consider the sensitivity of a generic stochastic optimization problem to model uncertainty, where I take a non-parametric approach and capture model uncertainty using Wasserstein balls around the postulated model. I provide explicit formulae for the first-order correction to both the value function and the optimizer, and further extend these results to optimization under linear constraints. I then present applications to statistics, machine learning, mathematical finance and uncertainty quantification. In particular, I prove that LASSO leads to parameter shrinkage, propose measures to quantify the robustness of neural networks to adversarial examples, and compute sensitivities of optimised certainty equivalents in finance. I also propose extensions of this framework to a multiperiod setting. This talk is based on joint work with Daniel Bartl, Samuel Drapeau and Jan Obloj.
*Start Time: 2:30pm
*End Time: 3:30pm
Joint with the Applied Probability and Risk seminar.
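The first-order correction described in the abstract can be illustrated numerically in the simplest case of a Wasserstein-2 ball around a Gaussian baseline, where the sensitivity of sup over the ball of E[f] is the L²-norm of the gradient of f under the baseline. A toy sketch (the objective f(x) = x² and all parameters are illustrative choices, not the talk's examples):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(200_000)     # samples from the baseline model mu = N(0,1)

f = lambda z: z**2                   # toy objective; grad f(z) = 2z
grad = lambda z: 2.0 * z

# First-order formula (p = q = 2): sensitivity = || grad f ||_{L^2(mu)}.
# For this f and mu the exact value is 2.
sensitivity = np.sqrt(np.mean(grad(x) ** 2))

# Cross-check against the first-order optimal perturbation
# T_delta(z) = z + delta * grad f(z) / ||grad f||, a feasible
# transport perturbation at Wasserstein distance at most delta.
delta = 0.01
x_pert = x + delta * grad(x) / sensitivity
finite_diff = (np.mean(f(x_pert)) - np.mean(f(x))) / delta

print(sensitivity, finite_diff)      # both close to 2
```

The finite-difference quotient along the perturbation matches the closed-form sensitivity up to O(delta) and Monte Carlo noise, which is what the first-order expansion predicts.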
Soumik Pal (University of Washington Seattle)
“A Gibbs measure perspective on Schrödinger bridges and entropy regularized optimal transport.”
Abstract: Consider the problem of matching two independent sets of N i.i.d. observations from two densities. Such matchings correspond to the set of permutations of N labels. For an arbitrary continuous cost function, the optimal assignment problem seeks the permutation that minimizes the total cost of matching each pair of atoms. The empirical distribution of the matched atoms is known to converge to the solution of the Monge-Kantorovich optimal transport problem.
Suppose instead we take a weighted convex combination of the empirical distributions of all matchings, with weights proportional to the exponential of their negative total cost. The resulting distribution converges to the solution of a variational problem, introduced by Föllmer, called entropy-regularized optimal transport. This weighted combination is a variant of the entropic regularization of discrete optimal transport introduced by Cuturi for faster computations. For this variant one can describe limiting Gaussian and non-Gaussian distributions that are useful in statistical estimation.
More broadly, we will discuss how discrete optimal transport problems can be analyzed with classical tools such as U-statistics, exchangeability and the combinatorics of symmetric functions. This avoids the analytical machinery on metric measure spaces that is frequently used for such problems with quadratic cost but is unavailable outside the Wasserstein spaces.
Joint with the Applied Probability and Risk seminar.
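The entropic regularization of discrete optimal transport mentioned in the abstract is typically computed with Sinkhorn's alternating scaling iterations. A minimal sketch on two small discrete distributions (the grid, cost and regularization strength are illustrative choices):

```python
import numpy as np

def sinkhorn(mu, nu, cost, eps, n_iter=500):
    """Entropy-regularized optimal transport via Sinkhorn iterations:
    minimize <P, cost> + eps * KL(P | mu x nu) over couplings P of (mu, nu)."""
    K = np.exp(-cost / eps)          # Gibbs kernel
    u = np.ones_like(mu)
    for _ in range(n_iter):
        v = nu / (K.T @ u)           # scale to match the column marginal nu
        u = mu / (K @ v)             # scale to match the row marginal mu
    return u[:, None] * K * v[None, :]

# Two uniform discrete distributions on the line, quadratic cost.
x = np.linspace(0.0, 1.0, 5)
y = np.linspace(0.0, 1.0, 5)
mu = np.full(5, 0.2)
nu = np.full(5, 0.2)
cost = (x[:, None] - y[None, :]) ** 2

P = sinkhorn(mu, nu, cost, eps=0.01)
print(P.round(3))
```

For small eps the coupling concentrates near the unregularized optimal matching (here the identity coupling), while larger eps spreads mass and smooths the problem.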
Julio Backhoff-Veraguas (Vienna)
“The mean field Schrödinger problem: large deviations and ergodic behaviour.”
Abstract: In the classical Schrödinger problem the aim is to minimize a relative entropy cost over the laws of processes with prescribed initial and terminal marginals. Via large deviations theory, a solution to the Schrödinger problem approximates the distribution of a large system of independent particles conditioned to have a prescribed initial and terminal configuration. In the first part of this talk I will explain what the Schrödinger problem looks like when instead of independent particles we allow for weakly dependent ones. Specializing the discussion to a diffusion model with mean field interactions, I will illustrate in the second part of this talk how the effect of conditioning at initial and terminal times is exponentially small at intermediate times under precise ergodicity assumptions.
Based on joint work with Conforti, Gentil and Leonard.
Guillaume Carlier (Dauphine)
“A mean field game model for the evolution of cities”
Abstract: In this talk, I will present a (toy) MFG model for the evolution of resident and firm densities, coupled both by labour market equilibrium conditions and by competition for land use (congestion). This results in a system of two Hamilton-Jacobi-Bellman and two Fokker-Planck equations with a new form of coupling related to optimal transport. This MFG has a convex potential which enables us to find weak solutions by a variational approach. In the case of quadratic Hamiltonians, the problem can be reformulated in Lagrangian terms and solved numerically by an IPFP/Sinkhorn-like scheme. I will present numerical results based on this approach; these simulations exhibit different behaviours, with either agglomeration or segregation dominating depending on the initial conditions and parameters. This is joint work with César Barilla and Jean-Michel Lasry.
Xiaofei Shi (Columbia)
“Equilibrium Asset Pricing with Liquidity Risk”
In a risk-sharing economy we study how the price dynamics of an asset depend on its “liquidity”. An equilibrium is achieved through a system of coupled forward-backward SDEs, whose solution turns out to be amenable to an asymptotic analysis in the practically relevant regime of large liquidity. These tractable approximation formulas make it feasible to calibrate the model to time series of prices and trading volume, and we also discuss how to leverage deep-learning techniques to obtain numerical solutions. (Based on joint works in progress with Agostino Capponi, Lukas Gonon, Johannes Muhle-Karbe and Chen Yang.)
Joint with the Applied Probability and Risk seminar.
Wilfrid Gangbo (UCLA)
“Global Wellposedness of Master Equations of Mean Field Games”
We propose a structural condition on Hamiltonians, which we term the displacement monotonicity condition, to study second order mean field games master equations. A rate of dissipation of a bilinear form is brought to bear on a global (in time) well-posedness theory, based on a priori uniform Lipschitz estimates on the solution in the measure variable. Displacement monotonicity, which is sometimes in dichotomy with the widely used Lasry-Lions monotonicity condition, allows us to handle non-separable Hamiltonians.
Moritz Voss (UCLA)
“Trading with the crowd”
Abstract: We formulate and solve a multi-player stochastic differential game between financial agents who seek to cost-efficiently liquidate their positions in a risky asset in the presence of jointly aggregated transient price impact on the risky asset’s execution price, while also taking into account a common general price-predicting signal. In contrast to an interaction of the agents through purely permanent price impact, as typically considered in the literature on multi-player price impact games, accrued transient price impact does not persist but decays over time. The unique Nash equilibrium strategies reveal how each agent’s liquidation policy adjusts the predictive trading signal for the accumulated transient price distortion induced by all other agents’ price impact, thus unfolding a direct and natural link in equilibrium between the trading signal and the agents’ trading activity. We also formulate and solve the corresponding mean field game in the limit of infinitely many agents and show how the latter provides an approximate Nash equilibrium for the finite-player game.
This is joint work in progress with Eyal Neuman (Imperial College London).
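The key feature of transient price impact, decay over time rather than persistence, can be seen in the standard exponential-decay (Obizhaeva-Wang-type) propagator dynamics dY = -beta*Y dt + lam*dQ. A minimal Euler sketch for a single trader (all parameters and the trading schedule are illustrative, not the model of the talk):

```python
import numpy as np

# Transient price distortion Y driven by a trading rate q:
#   dY_t = -beta * Y_t dt + lam * q_t dt,
# so impact builds up while trading and decays exponentially afterwards.
beta, lam, dt, T = 1.0, 0.5, 0.01, 2.0
n = int(T / dt)
t = np.arange(n) * dt

q = np.where(t < 1.0, 1.0, 0.0)   # trade at unit rate for one time unit, then stop

Y = np.zeros(n)
for k in range(1, n):
    Y[k] = Y[k - 1] + (-beta * Y[k - 1] + lam * q[k - 1]) * dt

# Peak distortion near lam/beta * (1 - exp(-beta)); it then decays
# geometrically instead of persisting, unlike permanent impact.
print(Y[int(1.0 / dt)], Y[-1])
```

Under purely permanent impact the distortion would stay at its peak after trading stops; here it relaxes back toward zero, which is exactly the mechanism the equilibrium strategies in the abstract must account for.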
Matteo Burzoni (Milano)
“Mean Field Games with absorption and a model of bank run.”
Abstract: We consider a MFG problem obtained as the limit of N-particle systems with an absorbing region. Once a particle hits that region, it leaves the game and the rest of the system continues to play with N-1 particles. We study the existence of equilibria for the limiting problem in a framework with common noise and establish the existence of epsilon-Nash equilibria for the N-particle problems. These results are applied to a novel model of bank run. This is work in progress with L. Campi (University of Milan).
Samuel Drapeau (Shanghai Jiao Tong)
“On Detecting Spoofing Strategies in High Frequency Trading.”
Abstract: The development of high frequency and algorithmic trading has considerably reduced the bid-ask spread by increasing liquidity in limit order books. Beyond the problem of optimal placement of market and limit orders, the possibility to cancel orders for free leaves room for price manipulations, one of which is spoofing. Detecting spoofing from a regulatory viewpoint is challenging due to the sheer number of orders and the difficulty of discriminating between legitimate and manipulative flows of orders. However, there is empirical evidence that volume imbalance, reflecting offer and demand on both sides of the limit order book, has an impact on subsequent price movements. Spoofers use this effect to artificially modify the imbalance by posting limit orders, then executing market orders at subsequently better prices while canceling their previous limit orders at high speed. In this work we set up a model to determine where a spoofer would place its limit orders to maximize its gains as a function of the imbalance impact on the price movement. We study the solution of this non-local optimization problem as a function of the imbalance. With this at hand, we calibrate on real data from TMX the imbalance impact (as a function of its depth) on the resulting price movement. Based on this calibration and theoretical results, we then provide methods and numerical results on how to detect possible spoofing behavior in real time based on Wasserstein distances.
Joint work with Tao Xuan (SJTU), Ling Lan (SJTU) and Andrew Day (Western University)
Arnulf Jentzen (Münster)
“Overcoming the curse of dimensionality: from nonlinear Monte Carlo to deep learning”
Partial differential equations (PDEs) are among the most universal tools used in modelling problems in nature and in man-made complex systems. For example, stochastic PDEs are a fundamental ingredient in models for nonlinear filtering problems in chemical engineering and weather forecasting, deterministic Schrödinger PDEs describe the wave function in a quantum physical system, deterministic Hamilton-Jacobi-Bellman PDEs are employed in operations research to describe optimal control problems in which companies aim to minimise their costs, and deterministic Black-Scholes-type PDEs are widely employed in portfolio optimization models as well as in state-of-the-art pricing and hedging models for financial derivatives. The PDEs appearing in such models are often high-dimensional, as the number of dimensions, roughly speaking, corresponds to the number of interacting substances, particles, resources, agents, or assets in the model. For instance, in the case of the above-mentioned financial engineering models, the dimensionality of the PDE often corresponds to the number of financial assets in the hedging portfolio. Such PDEs can typically not be solved explicitly, and it is one of the most challenging tasks in applied mathematics to develop approximation algorithms able to compute approximate solutions of high-dimensional PDEs. Nearly all approximation algorithms for PDEs in the literature suffer from the so-called “curse of dimensionality”: the number of computational operations required to achieve a given approximation accuracy grows exponentially in the dimension of the PDE. With such algorithms it is impossible to approximately compute solutions of high-dimensional PDEs even on the fastest currently available computers.
In the case of linear parabolic PDEs and approximations at a fixed space-time point, the curse of dimensionality can be overcome by means of Monte Carlo approximation algorithms and the Feynman-Kac formula. In this talk we prove that suitable deep neural network approximations do indeed overcome the curse of dimensionality for a general class of semilinear parabolic PDEs, thereby proving, for the first time, that a general semilinear parabolic PDE can be solved approximately without the curse of dimensionality.
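The linear case mentioned above can be made concrete with a plain Monte Carlo/Feynman-Kac computation for the d-dimensional heat equation; the terminal condition g(z) = |z|² is chosen here only because a closed form is available for comparison. The cost scales linearly, not exponentially, in the dimension:

```python
import numpy as np

# Monte Carlo evaluation of u(0, x) for the d-dimensional heat equation
#   u_t + (1/2) * Laplacian(u) = 0,   u(T, x) = g(x),
# via the Feynman-Kac representation u(0, x) = E[ g(x + W_T) ].
# Cost is O(n_samples * d) computational operations, independent of
# any spatial grid, which is why the curse of dimensionality is avoided
# for a single space-time point.

rng = np.random.default_rng(1)
d, T, n_samples = 100, 1.0, 100_000
x = np.zeros(d)                       # evaluation point
g = lambda z: np.sum(z * z, axis=-1)  # terminal condition g(z) = |z|^2

W = rng.standard_normal((n_samples, d)) * np.sqrt(T)   # Brownian increments
estimate = np.mean(g(x + W))

exact = np.sum(x * x) + d * T         # closed form for this g: |x|^2 + d*T
print(estimate, exact)
```

Even in dimension d = 100, where any grid-based scheme is hopeless, the estimator agrees with the exact value up to Monte Carlo error of order 1/sqrt(n_samples).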