Statistics Seminar Series

Schedule for Spring 2024

Seminars are on Mondays
Time: 4:10pm - 5:00pm

Location: Room 903 SSW, 1255 Amsterdam Avenue

1/22/24

 

Michael Celentano (Miller Fellow in the Statistics Department at the University of California, Berkeley)

Title: Debiasing in the inconsistency regime

Abstract: In this talk, I will discuss semi-parametric estimation when nuisance parameters cannot be estimated consistently, focusing in particular on the estimation of average treatment effects, conditional correlations, and linear effects under high-dimensional GLM specifications. In this challenging regime, even standard doubly-robust estimators can be inconsistent. I describe novel approaches which enjoy consistency guarantees for low-dimensional target parameters even though standard approaches fail. For some target parameters, these guarantees can also be used for inference. Finally, I will provide my perspective on the broader implications of this work for designing methods which are less sensitive to biases from high-dimensional prediction models.
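
As background for the abstract above (and not the speaker's new estimators), the "standard doubly-robust estimator" referred to is typically the augmented inverse-propensity-weighted (AIPW) estimator of the average treatment effect. A minimal sketch, with plain logistic/linear nuisance models and simulated data chosen purely for illustration:

```python
# Illustrative sketch of the standard AIPW (doubly robust) ATE estimator.
# Background for the abstract above, not the speaker's new methods.
import numpy as np
from sklearn.linear_model import LogisticRegression, LinearRegression

def aipw_ate(X, A, Y):
    """Doubly robust ATE estimate with simple plug-in nuisance models."""
    # Propensity model P(A=1 | X)
    e = LogisticRegression(max_iter=1000).fit(X, A).predict_proba(X)[:, 1]
    # Outcome models E[Y | X, A=a]
    mu1 = LinearRegression().fit(X[A == 1], Y[A == 1]).predict(X)
    mu0 = LinearRegression().fit(X[A == 0], Y[A == 0]).predict(X)
    # AIPW (influence-function-style) estimate
    psi = mu1 - mu0 + A * (Y - mu1) / e - (1 - A) * (Y - mu0) / (1 - e)
    return psi.mean()

rng = np.random.default_rng(0)
n, p = 2000, 5
X = rng.normal(size=(n, p))
A = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))
Y = 2.0 * A + X[:, 1] + rng.normal(size=n)   # true ATE = 2
print(aipw_ate(X, A, Y))
```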
 
Bio: Michael Celentano is a Miller Fellow in the Statistics Department at the University of California, Berkeley. He received his PhD in Statistics from Stanford University in 2021, where he was advised by Andrea Montanari. Most of his work focuses on the high-dimensional asymptotics of regression, classification, and matrix estimation problems.

Date: *Friday 1/26/24

Time: *12:30pm

Location: Room 903 SSW

Ying Jin (Stanford)

Title:  Model-free selective inference: from calibrated uncertainty to trusted decisions

Abstract: AI has shown great potential in accelerating decision-making and scientific discovery pipelines such as drug discovery, marketing, and healthcare. In many applications, predictions from black-box models are used to shortlist candidates whose unknown outcomes satisfy a desired property, e.g., drugs with high binding affinities to a disease target. To ensure the reliability of high-stakes decisions, uncertainty quantification tools such as conformal prediction have been increasingly adopted to understand the variability in black-box predictions. However, we find that the on-average guarantee of conformal prediction can be insufficient for its deployment in decision making, which usually has a selective nature. 

In this talk, I will introduce a model-free selective inference framework that allows one to select reliable decisions with the assistance of any black-box prediction model. Our framework identifies candidates whose unobserved outcomes exceed user-specified values while controlling the average proportion of falsely selected units (FDR), without any modeling assumptions. Leveraging a set of exchangeable training data, our method constructs conformal p-values that quantify the confidence in large outcomes; it then determines a data-dependent threshold for the p-values as a criterion for drawing confident decisions. In addition, I will discuss new ideas to further handle covariate shifts between training and new samples. We show that in several drug discovery tasks, our methods narrow down the drug candidates to a manageable number of promising ones while controlling the proportion of false discoveries. In a causal inference dataset, our methods identify students who benefit from an educational intervention, providing new insights into causal effects.
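
To make the general recipe concrete (conformal-style p-values followed by a Benjamini-Hochberg-style data-dependent threshold), here is a toy sketch. It is a deliberately simplified variant: the scores, thresholds, and rank-based p-value construction below are illustrative stand-ins, not the exact procedure from the talk, and the covariate-shift extensions are omitted.

```python
# Toy sketch of "conformal p-values + data-dependent threshold" selection.
# Simplified illustration only; not the exact procedure described in the talk.
import numpy as np

def toy_conformal_selection(scores_calib, y_calib, scores_test, c=0.0, fdr=0.1):
    """Select test units whose unknown outcome plausibly exceeds c, BH-style."""
    # Calibration units whose outcome fails the threshold serve as "null" examples.
    null_scores = scores_calib[y_calib <= c]
    # Rank-based p-value: how often a null calibration score beats the test score.
    pvals = np.array([(1 + np.sum(null_scores >= s)) / (1 + len(null_scores))
                      for s in scores_test])
    # Benjamini-Hochberg step-up rule supplies the data-dependent threshold.
    m = len(pvals)
    order = np.argsort(pvals)
    passed = np.where(pvals[order] <= fdr * np.arange(1, m + 1) / m)[0]
    selected = order[: passed.max() + 1] if passed.size else np.array([], dtype=int)
    return selected, pvals

rng = np.random.default_rng(1)
y_calib = rng.normal(size=500)
scores_calib = y_calib + rng.normal(scale=0.5, size=500)   # imperfect predictions
y_test = rng.normal(size=200)
scores_test = y_test + rng.normal(scale=0.5, size=200)
sel, _ = toy_conformal_selection(scores_calib, y_calib, scores_test, c=1.0)
print(len(sel), "candidates selected")
```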

1/29/24

Tianhao Wang (Yale)

Title: Algorithm Dynamics in Modern Statistical Learning: Universality and Implicit Regularization

Abstract: Modern statistical learning is characterized by high-dimensional data and over-parameterized models. In this regime, analyzing the dynamics of the algorithms used is challenging but crucial for understanding the performance of learned models. This talk will present recent results on the dynamics of two pivotal algorithms: Approximate Message Passing (AMP) and Stochastic Gradient Descent (SGD). AMP refers to a class of iterative algorithms for solving large-scale statistical problems, whose dynamics asymptotically admit a simple but exact description known as state evolution. We will demonstrate the universality of AMP's state evolution over large classes of random matrices and provide illustrative examples of applications of our universality results. For SGD, a workhorse for training deep neural networks, we will introduce a novel mathematical framework for analyzing its implicit regularization, which is essential to SGD's ability to find solutions with strong generalization performance, particularly under over-parameterization. Our framework offers a general method to characterize the implicit regularization induced by gradient noise. Finally, in the context of underdetermined linear regression, we will show that both AMP and SGD can provably achieve sparse recovery, yet they do so from markedly different perspectives.
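
For readers unfamiliar with AMP, a minimal sketch of the classical AMP iteration for sparse linear regression (soft-thresholding denoiser plus the Onsager correction term). The threshold schedule and problem sizes are illustrative choices; this is standard background, not the universality results of the talk.

```python
# Minimal AMP iteration for sparse linear regression y = A x0 + noise
# (soft-thresholding denoiser + Onsager correction); illustrative background only.
import numpy as np

def soft(u, t):
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

rng = np.random.default_rng(0)
n, p, k = 500, 1000, 50
A = rng.normal(size=(n, p)) / np.sqrt(n)         # iid N(0, 1/n) design
x0 = np.zeros(p)
x0[rng.choice(p, k, replace=False)] = rng.normal(size=k)
y = A @ x0 + 0.01 * rng.normal(size=n)

x, z = np.zeros(p), y.copy()
delta = n / p
for _ in range(30):
    r = x + A.T @ z                               # effective observation
    tau = np.sqrt(np.mean(z ** 2))                # estimate of effective noise level
    x_new = soft(r, 1.5 * tau)                    # denoising step
    onsager = np.mean(np.abs(x_new) > 0) / delta  # Onsager correction term
    z = y - A @ x_new + onsager * z
    x = x_new
print("relative error:", np.linalg.norm(x - x0) / np.linalg.norm(x0))
```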

Bio: Tianhao Wang is a final-year Ph.D. student in the Department of Statistics and Data Science at Yale University, advised by Prof. Zhou Fan. His research focuses on the mathematical foundations of statistics and machine learning.

Date: *Wednesday 1/31/24

Time: *12:30pm

Location: Room 903 SSW

Sifan Liu (Stanford)

Title: An Exact Sampler for Inference after Polyhedral Selection

Abstract: The exploratory and interactive nature of modern data analysis often introduces selection bias, posing challenges for traditional statistical inference methods. A common strategy to address this bias is by conditioning on the selection event. However, this often results in a conditional distribution that is intractable and requires Markov chain Monte Carlo (MCMC) sampling for inference. Notably, some of the most widely used selection algorithms yield selection events that can be characterized as polyhedra, such as the lasso for variable selection and the epsilon-greedy algorithm for multi-armed bandit problems. This talk will present a method that is tailored for conducting inference following polyhedral selection. The method transforms the variables constrained within a polyhedron into variables within a unit cube, allowing for exact sampling. Compared to MCMC, the proposed method offers superior speed and accuracy, providing a practical and efficient approach for conditional selective inference. Additionally, it facilitates the computation of the selection-adjusted maximum likelihood estimator, enabling MLE-based inference. Numerical results demonstrate the enhanced performance of the proposed method compared to alternative approaches for selective inference.
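
As background for this line of work, the classical "polyhedral lemma": conditional on a selection event of the form {A y <= b} with Gaussian y, inference on a linear functional eta'y reduces to a truncated normal with computable limits. The sketch below implements that standard result (Lee et al.-style truncation limits and a one-sided selective p-value), assuming a known isotropic covariance; it is background, not the exact sampler presented in the talk.

```python
# Truncation limits from the classical polyhedral lemma: conditional on {A y <= b},
# eta'y (given the component independent of it) is normal truncated to [vlo, vhi].
# Background sketch only -- not the exact sampler described in the talk.
import numpy as np
from scipy.stats import norm

def polyhedral_interval(A, b, eta, y, sigma2=1.0):
    c = sigma2 * eta / (sigma2 * eta @ eta)       # here Sigma = sigma2 * I
    z = y - c * (eta @ y)                         # component independent of eta'y
    Ac, Az = A @ c, A @ z
    with np.errstate(divide="ignore"):
        ratios = (b - Az) / Ac
    vlo = np.max(ratios[Ac < 0], initial=-np.inf)
    vhi = np.min(ratios[Ac > 0], initial=np.inf)
    return vlo, vhi

def truncated_p_value(A, b, eta, y, sigma2=1.0, null_value=0.0):
    """One-sided selective p-value for H0: eta'mu = null_value."""
    vlo, vhi = polyhedral_interval(A, b, eta, y, sigma2)
    sd = np.sqrt(sigma2 * eta @ eta)
    lo, hi = (vlo - null_value) / sd, (vhi - null_value) / sd
    obs = (eta @ y - null_value) / sd
    # Survival probability of the truncated normal at the observed value
    return (norm.cdf(hi) - norm.cdf(obs)) / (norm.cdf(hi) - norm.cdf(lo))
```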

Bio: Sifan Liu is a fifth-year Ph.D. student in the Department of Statistics at Stanford University. Her research interests are focused on selective inference and statistical computation.

2/5/24

Chris Harshaw (MIT)

Title: Algorithm Design for Randomized Experiments

Abstract:

Randomized experiments are one of the most reliable causal inference methods and are used in a variety of disciplines, from clinical medicine and public policy to economics and corporate A/B testing. Experiments in these disciplines provide the empirical evidence that drives some of the most important decisions in our society: Which drugs are prescribed? Which social programs are implemented? Which corporate strategies are adopted? Technological advances in measurement and intervention -- including high-dimensional data, network data, and mobile devices -- offer exciting opportunities to design new experiments to investigate a broader set of causal questions. In these more complex settings, standard experimental designs (e.g., independent assignment of treatment) are far from optimal. Designing experiments that yield the most precise estimates of causal effects in these complex settings is not only a statistical problem, but also an algorithmic one.

In this talk, I will present my recent work on designing algorithms for randomized experiments. I will begin by presenting Clip-OGD, a new algorithmic experimental design for adaptive sequential experiments. We show that under the Clip-OGD design, the variance of an adaptive version of the Horvitz-Thompson estimator converges to the optimal non-adaptive variance, resolving a 70-year-old problem posed by Robbins in 1952. Our results are facilitated by drawing connections to regret minimization in online convex optimization. Time permitting, I will describe a new unifying framework for investigating causal effects under interference, where treatment given to one subject can affect the outcomes of other subjects. Finally, I will conclude by highlighting open problems and reflecting on future work in these directions.
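
For context, the Horvitz-Thompson estimator that the adaptive design targets is textbook material; a minimal sketch under a simple non-adaptive Bernoulli design (not the Clip-OGD design itself, and the simulated effect is purely illustrative):

```python
# Textbook Horvitz-Thompson estimator of the average treatment effect under a
# Bernoulli(p) design -- background for the adaptive version discussed above.
import numpy as np

def horvitz_thompson_ate(y_obs, z, p):
    """y_obs: observed outcomes, z: 0/1 assignments, p: treatment probabilities."""
    return np.mean(z * y_obs / p - (1 - z) * y_obs / (1 - p))

rng = np.random.default_rng(0)
n = 1000
y0 = rng.normal(size=n)
y1 = y0 + 1.0                        # constant unit-level effect of 1
p = np.full(n, 0.5)
z = rng.binomial(1, p)
y_obs = np.where(z == 1, y1, y0)
print(horvitz_thompson_ate(y_obs, z, p))   # close to 1
```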

Bio:

Christopher Harshaw is a FODSI postdoc at MIT and UC Berkeley. He received his PhD from Yale University, where he was advised by Dan Spielman and Amin Karbasi. His research lies at the interface of causal inference, machine learning, and algorithm design, with a particular focus on the design and analysis of randomized experiments. His work has appeared in the Journal of the American Statistical Association, the Electronic Journal of Statistics, ICML, and NeurIPS, and won the Best Paper Award at the NeurIPS 2022 CML4Impact workshop.

Date: *Wednesday 2/7/24

Time: *12:00pm

Location: Room 903 SSW

Enric Boix-Adsera (MIT)

Title: Beyond the black box: characterizing and improving how neural networks learn

Abstract:
The predominant paradigm in deep learning practice treats neural networks as "black boxes". This leads to economic and environmental costs as brute-force scaling remains the performance driver, and to safety issues as robust reasoning and alignment remain challenging. My research opens up the neural network black box with mathematical and statistical analyses of how networks learn, and yields engineering insights that improve the efficiency and transparency of these models. In this talk I will present characterizations of (1) how large language models can learn to reason with abstract symbols, and (2) how hierarchical structure in data guides deep learning, and will conclude with (3) new tools to distill trained neural networks into lightweight and transparent models.


Bio: Enric Boix-Adsera is a PhD candidate at MIT, under the supervision of Guy Bresler and Philippe Rigollet. His PhD research has been supported by an NSF Graduate Research Fellowship, a Siebel Fellowship, and an Apple AI/ML fellowship.

2/12/24

Jonathan Niles-Weed (NYU)

Title: Learning Matchings, Maps, and Trajectories

Abstract: This talk will survey some recent advances in the statistical theory of optimal transport. Optimal transport considers the geometrical properties of transformations of probability distributions, making it a suitable framework for many applications in generative modeling, causal inference, and the sciences. We will study estimators for this problem, characterizing their finite-sample behavior and obtaining distributional limits suitable for practical inference. Additionally, we will explore structural assumptions that improve the statistical and computational performance of these estimators in high dimensions.
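
As a computational companion to the abstract (illustrative only; not the specific estimators analyzed in the talk), the entropically regularized optimal transport cost between two empirical point clouds can be computed with Sinkhorn iterations:

```python
# Entropic optimal transport between two point clouds via Sinkhorn iterations.
# Illustrative companion to the abstract; not the estimators studied in the talk.
import numpy as np

def sinkhorn(x, y, eps=0.1, n_iter=500):
    a = np.full(len(x), 1 / len(x))                      # uniform source weights
    b = np.full(len(y), 1 / len(y))                      # uniform target weights
    C = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)   # squared-distance cost
    K = np.exp(-C / eps)
    u, v = np.ones_like(a), np.ones_like(b)
    for _ in range(n_iter):                              # alternating scaling updates
        u = a / (K @ v)
        v = b / (K.T @ u)
    P = u[:, None] * K * v[None, :]                      # entropic transport plan
    return np.sum(P * C)                                 # regularized transport cost

rng = np.random.default_rng(0)
x = rng.normal(size=(200, 2))
y = rng.normal(loc=1.0, size=(200, 2))
print(sinkhorn(x, y))
```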
 
 

2/19/24

Speaker: Jiashun Jin (Carnegie Mellon University)

Title: The Statistics Triangle

Abstract: In his 1996 Fisher Lecture, Efron suggested that there is a philosophical triangle in statistics with "Bayesian", "Fisherian", and "Frequentist" as the three vertices, and that most statistical methods can be viewed as a convex combination of the three philosophies. We collected and cleaned a data set consisting of the citation and BibTeX (e.g., title, abstract, author information) data of 83,331 papers published in 36 journals in statistics and related fields, spanning 41 years. Using the data set, we constructed 21 co-citation networks, each for a time window between 1990 and 2015. We propose a dynamic Degree-Corrected Mixed-Membership (dynamic-DCMM) model, where we model the research interests of an author by a low-dimensional weight vector (called the network memberships) that evolves slowly over time. We propose dynamic-SCORE as a new approach to estimating the memberships. We discover a triangle in the spectral domain, which we call the Statistical Triangle, and use it to visualize the research trajectories of individual authors. We interpret the three vertices of the triangle as the three primary research areas in statistics: "Bayes", "Biostatistics", and "Nonparametrics". The Statistical Triangle further splits into 15 sub-regions, which we interpret as 15 representative sub-areas in statistics. These results provide useful insights into research trends and the behavior of statisticians.
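
The static SCORE idea underlying dynamic-SCORE can be sketched in a few lines: take the leading eigenvectors of the network's adjacency matrix, divide entrywise by the first eigenvector to remove degree heterogeneity, and read community structure off the resulting ratio coordinates. A hedged illustration only -- the hard k-means step below stands in for the mixed-membership vertex hunting actually used in this work:

```python
# Sketch of the static SCORE idea behind dynamic-SCORE: entrywise eigenvector
# ratios remove degree heterogeneity; simple k-means is used here in place of the
# mixed-membership vertex hunting from the paper.
import numpy as np
from scipy.sparse.linalg import eigsh
from sklearn.cluster import KMeans

def score_embedding(adj, K):
    vals, vecs = eigsh(adj.astype(float), k=K, which="LA")  # top-K eigenpairs
    vecs = vecs[:, np.argsort(-vals)]
    lead = vecs[:, [0]]
    lead[np.abs(lead) < 1e-12] = 1e-12            # guard against division by zero
    return vecs[:, 1:] / lead                     # n x (K-1) ratio coordinates

def score_cluster(adj, K):
    R = score_embedding(adj, K)
    return KMeans(n_clusters=K, n_init=10).fit_predict(R)
```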

Bio: Jiashun Jin is a Professor of Statistics and Data Science at Carnegie Mellon University. He is interested in statistical machine learning, social networks, genomics and genetics, and neuroscience. His primary research interest is analyzing big data with sparse and weak signals. He has been developing methods appropriate for such settings, including large-scale testing, classification, clustering, variable selection, and, more recently, network analysis and low-rank matrix recovery. Jiashun has received the NSF CAREER Award and the IMS Tweedie Award and is an elected IMS Fellow. He delivered an IMS Medallion Lecture (2015), the IMS Tweedie Lecture (2009), and other plenary or keynote lectures.

2/26/24

Speaker: Annie Qu (UC Irvine)

Title: A Model-Agnostic Graph Neural Network for Integrating Local and Global Information

Abstract: Graph neural networks (GNNs) have achieved promising performance in a variety of graph-focused tasks. Despite their success, existing GNNs suffer from two major limitations: a limited ability to learn representations of various orders, and a lack of interpretability for such deep learning-based black-box models. To tackle these issues, we propose a novel Model-agnostic Graph Neural Network (MaGNet) framework. The proposed framework is able to extract knowledge from high-order neighbors, sequentially integrate information of various orders, and offer explanations for the learned model by identifying influential compact graph structures. In particular, MaGNet consists of two components: an estimation model for the latent representation of complex relationships under graph topology, and an interpretation model that identifies influential nodes, edges, and important node features. Theoretically, we establish the generalization error bound for MaGNet via empirical Rademacher complexity and showcase its power to represent layer-wise neighborhood mixing. We conduct comprehensive numerical studies using both simulated data and a real-world case study on the neural mechanisms of the rat hippocampus, demonstrating that the performance of MaGNet is competitive with state-of-the-art methods.

Bio: Annie Qu is Chancellor's Professor in the Department of Statistics at the University of California, Irvine. She received her Ph.D. in Statistics from the Pennsylvania State University in 1998. Qu's research focuses on solving fundamental issues regarding structured and unstructured large-scale data, and on developing cutting-edge statistical methods and theory in machine learning and algorithms for personalized medicine, text mining, recommender systems, medical imaging data, and network data analyses for complex heterogeneous data. The newly developed methods can extract essential and relevant information from large volumes of intensively collected data, such as mobile health data. Her research impacts many fields, including biomedical studies, genomic research, public health research, and the social and political sciences. Before joining UC Irvine, Dr. Qu was a Data Science Founder Professor of Statistics and the Director of the Illinois Statistics Office at the University of Illinois at Urbana-Champaign. She was named a Brad and Karen Smith Professorial Scholar by the College of LAS at UIUC and was a recipient of the NSF CAREER Award from 2004 to 2009. She is a Fellow of the Institute of Mathematical Statistics (IMS), the American Statistical Association, and the American Association for the Advancement of Science, and is a recipient of an IMS Medallion Award and Lecture in 2024. She serves as Theory and Methods Co-Editor of the Journal of the American Statistical Association from 2023 to 2025 and as IMS Program Secretary from 2021 to 2027.

Qu Lab website: https://faculty.sites.uci.edu/qulab/

3/4/24

Speaker: Raaz Dwivedi (Cornell)

Title: Integrating Double Robustness into Causal Latent Factor Models

Abstract: There is a growing literature on latent factor models with panel data, where multiple measurements across various units under multiple treatments are available. These models are compatible with both observed and unobserved confounding (external variables affecting the treatment and the outcome simultaneously), making them a popular choice for estimating treatment effects. Standard approaches are based on outcome imputation, including nearest neighbors for individual treatment effects (ITE) and generic matrix completion for the average treatment effect (ATE). These rely, respectively, on unit similarities or low-rank structure in the outcome matrix, and are consequently known to perform poorly when units are diverse or the outcome matrix is not low-rank.

To tackle these challenges, we integrate double robustness principles with factor models, introducing estimators designed to overcome these limitations. First, we propose a doubly robust nearest neighbor approach for ITE, achieving consistent estimates in the presence of either similar measurements or similar units, and improved error rates and confidence intervals when both are present. Next, we introduce a doubly robust matrix completion strategy for ATE despite unobserved confounding, which ensures consistency when either the propensity matrix or the outcome matrix is low-rank, and offers superior error rates and confidence intervals when both matrices are low-rank.

Bio: Raaz Dwivedi joined the Department of Operations Research and Information Engineering and Cornell Tech at Cornell University as an Assistant Professor in January 2024. Prior to that, he visited Cornell ORIE in Fall 2023, spent two years as a FODSI postdoctoral fellow at Harvard and MIT LIDS, and spent a summer at Microsoft Research New England. He received his Ph.D. in EECS from UC Berkeley in 2021 and his bachelor's degree in EE from IIT Bombay in 2014. His research builds statistically and computationally efficient strategies for personalized decision-making, with theory and methods spanning causal inference, reinforcement learning, and distribution compression. He has won a best student paper award for work on optimal compression, teaching awards at Harvard and UC Berkeley, and the President of India Gold Medal at IIT Bombay.

3/11/24

NO SEMINAR

3/18/24

Speaker: Thomas Richardson (University of Washington)

Title: Statistical analysis for the discrete instrumental variable model

Abstract: We consider causal instrumental variable (IV) models containing an instrument (Z), a treatment (X) and a response (Y) in the case where X and Y are binary, while Z is categorical taking k levels. We assume that the instrument Z is randomized and has no direct effect on the outcome Y, except through X.

In the first part of the talk we consider the problem of characterizing those distributions over potential outcomes for Y that are compatible with a given observed distribution P(X,Y | Z). We show that this analysis of identification may be simplified by viewing the observed distribution as arising from a series of observational studies on the same population.   We also show that this approach naturally leads to the restrictions imposed on the observed distribution by the IV model. 

In the second part of the talk we consider statistical inference for this model. We first show that our characterization of the model for the observables leads to a 'transparent' approach to Bayesian inference under which identified and non-identified parameters are clearly distinguished. We contrast this with the 'direct' approach that puts priors directly on the distribution of potential outcomes.

Finally, time permitting, we will describe a frequentist approach to inference for the IV model via a new approach to constructing confidence regions for multinomial data with (non-asymptotic) coverage guarantees via a Chernoff-type tail bound.

[Joint work with Robin J. Evans (Oxford), F. Richard Guo (UW), and James M. Robins (Harvard).]
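
The first part of the talk concerns which potential-outcome distributions are compatible with P(X, Y | Z). For a binary instrument this is the setting of the classical Balke-Pearl bounds, which can be reproduced numerically with a small linear program over the 16 latent response types. The sketch below is background under the same randomization and exclusion assumptions, not the new results of the talk:

```python
# Partial-identification sketch for the binary IV model: bound the ATE by a linear
# program over the 16 latent response types (compliance type x potential outcomes).
# Classical Balke-Pearl-style background, assuming randomized Z, exclusion, and a
# p_obs that is exactly compatible with the IV model.
import itertools
import numpy as np
from scipy.optimize import linprog

# A latent type is (x0, x1, y0, y1): X(Z=0), X(Z=1), Y(X=0), Y(X=1).
types = list(itertools.product([0, 1], repeat=4))

def ate_bounds(p_obs):
    """p_obs[z][(x, y)] = P(X=x, Y=y | Z=z) for z in {0, 1}."""
    A_eq, b_eq = [], []
    for z in (0, 1):
        for x in (0, 1):
            for y in (0, 1):
                row = [1.0 if (t[z] == x and t[2 + x] == y) else 0.0 for t in types]
                A_eq.append(row)
                b_eq.append(p_obs[z][(x, y)])
    A_eq.append([1.0] * len(types))               # type probabilities sum to 1
    b_eq.append(1.0)
    ate_coef = np.array([t[3] - t[2] for t in types], float)   # Y(1) - Y(0)
    lo = linprog(ate_coef, A_eq=A_eq, b_eq=b_eq, bounds=(0, 1)).fun
    hi = -linprog(-ate_coef, A_eq=A_eq, b_eq=b_eq, bounds=(0, 1)).fun
    return lo, hi
```

With empirical frequencies plugged into p_obs, the interval [lo, hi] is the identified range for the ATE; the talk goes further, characterizing the full set of compatible potential-outcome distributions and providing Bayesian and frequentist inference.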

3/25/24

Speaker: Yuxin Chen (UPenn)

Title: Breaking the sample size barrier in multi-distribution learning and reinforcement learning

Abstract: Emerging statistical learning problems (e.g., reinforcement learning, multi-distribution learning) necessitate the design of sample-efficient solutions in order to accommodate the explosive growth of problem dimensionality. Despite a number of prior works tackling the statistical limits of these problems, a complete picture of the trade-offs between sample complexity and statistical accuracy is often unsettled. In particular, prior results often suffer from an enormous sample size barrier, in the sense that their claimed statistical guarantees hold only when the sample size exceeds a large threshold. In this talk, I will present some recent progress towards settling the sample complexity limits in two scenarios: (1) multi-distribution learning, and (2) online reinforcement learning. To the best of our knowledge, our results provide the first minimax-optimal guarantees for these scenarios that accommodate the most sample-hungry regimes. Our findings emphasize the rich interplay between high-dimensional statistics, online learning, and game theory, and resolve multiple open problems in the field.

This is based on joint work with Zihan Zhang, Wenhao Zhan, Simon Du, and Jason Lee. 

Paper 1: https://yuxinchen2020.github.io/publications/MDL.pdf

Paper 2: https://arxiv.org/abs/2307.13586

Bio: Yuxin Chen is currently an associate professor of statistics and data science and of electrical and systems engineering at the University of Pennsylvania. Before joining UPenn, he was an assistant professor of electrical and computer engineering at Princeton University. He completed his Ph.D. in Electrical Engineering at Stanford University and was also a postdoc scholar at Stanford Statistics. His current research interests include high-dimensional statistics, nonconvex optimization, and machine learning theory. He has received the Alfred P. Sloan Research Fellowship, the SIAM Activity Group on Imaging Science Best Paper Prize, the ICCM Best Paper Award (gold medal), and was selected as a finalist for the Best Paper Prize for Young Researchers in Continuous Optimization. He has also received the Princeton Graduate Mentoring Award. 

4/1/24

Speaker: Andre Wibisono (Yale)

Title: On Independent Samples along the Langevin Dynamics and Algorithm

Abstract: Sampling from a probability distribution is a fundamental algorithmic task, which can be done via running a Markov chain. The mixing time of a Markov chain characterizes how long we should run the Markov chain until the random variable converges to the stationary distribution. In this talk, we discuss the “independence time”, which is how long we should run a Markov chain until the initial and final random variables are approximately independent, in the sense that they have small mutual information. We study this question for two natural Markov chains: the Langevin dynamics in continuous time, and the Unadjusted Langevin Algorithm in discrete time. When the target distribution is strongly log-concave, we prove that the mutual information between the initial and final random variables decreases exponentially fast along both Markov chains. These convergence rates are tight, and lead to an estimate of the independence time which is similar to the mixing time guarantees of these Markov chains. We illustrate our proofs using the strong data processing inequality and the regularity properties of Langevin dynamics. Based on joint work with Jiaming Liang and Siddharth Mitra, https://arxiv.org/abs/2402.17067.
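
A minimal sketch of the Unadjusted Langevin Algorithm for a strongly log-concave (here Gaussian) target, tracking how quickly the current iterate decorrelates from the initial one. The correlation printed below is only a crude stand-in for the mutual information studied in the talk, and the target, step size, and horizon are illustrative:

```python
# Unadjusted Langevin Algorithm for a Gaussian target, tracking how quickly the
# current iterate decorrelates from the start (a crude stand-in for the mutual
# information analyzed in the talk).
import numpy as np

rng = np.random.default_rng(0)
d, eta, n_steps, n_chains = 5, 0.05, 200, 2000
grad_potential = lambda x: x                     # target N(0, I): V(x) = ||x||^2 / 2

x0 = rng.normal(size=(n_chains, d))              # start from the target itself
x = x0.copy()
for t in range(n_steps):
    x = x - eta * grad_potential(x) + np.sqrt(2 * eta) * rng.normal(size=x.shape)
    if (t + 1) % 50 == 0:
        corr = np.corrcoef(x0[:, 0], x[:, 0])[0, 1]
        print(f"step {t + 1}: corr(x0, xt) = {corr:.3f}")
```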

Bio: Andre Wibisono is an assistant professor in the Department of Computer Science at Yale University, with a secondary appointment in the Department of Statistics and Data Science. His research interests are in the design and analysis of algorithms for machine learning, in particular for problems in optimization, sampling, and game theory, using tools from dynamical systems, geometry, and information theory. He received his BS degrees in Mathematics and in Computer Science from MIT, his Master's degrees in Computer Science from MIT and in Statistics from UC Berkeley, and his PhD in Computer Science from UC Berkeley. Before joining Yale in 2021, he did postdoctoral research at UW-Madison and Georgia Tech.

4/8/24

Speaker: Sara Mostafavi (University of Washington)
 
Title: Genomics deep learning models for interpreting personal genomes
 
Abstract: Our genomes contain millions of cis-regulatory elements, whose differential activity determines cellular differentiation. The majority of disease-causing genetic variants also reside in these regulatory elements, impacting their regulatory function in a subtle and context-dependent manner. In this talk, I'll present recent work by us and others on applying sequence-based deep learning models for predicting and explaining regulatory function(s) from genomic DNA. I'll describe our efforts in adapting these models to study how natural genetic variation impacts cellular function, highlighting current challenges. Motivated by these results, I will describe our ongoing work on improving models' causal interpretation of non-coding genetic variation, which is required to accurately predict differential gene expression across individuals. In summary, our work shows that sequence-based deep learning approaches can uncover regulatory mechanisms while providing a powerful in-silico framework to mechanistically probe the relationship between regulatory sequence and its function.
 
Bio: Sara Mostafavi is an Associate Professor in the Paul G. Allen School of Computer Science and Engineering at the University of Washington (UW). She is also the co-founder of the Machine Learning for Computational Biology (MLCB) Conference. Before joining UW, she was an Assistant Professor in the Department of Statistics and the Department of Medical Genetics at the University of British Columbia (UBC), and a faculty member at the Vector Institute. Sara is the recipient of a Canada Research Chair (CRC II) in Computational Biology and a Canada CIFAR Chair in Artificial Intelligence. Sara did her postdoc at Stanford CS working with Daphne Koller, and received her PhD in Computer Science from the University of Toronto in 2011, working with Quaid Morris. Sara's research focuses on developing and applying machine learning and statistical methods for understanding genome biology and function.

4/15/24

Speaker: Venkat Chandrasekaran (California Institute of Technology)

Title: On False Positive Error

Abstract: Controlling the false positive error in model selection is a prominent paradigm for gathering evidence in data-driven science.  In model selection problems such as variable selection and graph estimation, models are characterized by an underlying Boolean structure such as presence or absence of a variable or an edge.  Therefore, false positive error or false negative error can be conveniently specified as the number of variables/edges that are incorrectly included or excluded in an estimated model.  However, the increasing complexity of modern datasets has been accompanied by the use of sophisticated modeling paradigms in which defining false positive error is a significant challenge.  For example, models specified by structures such as partitions (for clustering), permutations (for ranking), directed acyclic graphs (for causal inference), or subspaces (for principal components analysis) are not characterized by a simple Boolean logical structure, which leads to difficulties with formalizing and controlling false positive error.  We present a generic approach to endow a collection of models with partial order structure, which leads to systematic approaches for defining natural generalizations of false positive error and methodology for controlling this error.  (Joint work with Armeen Taeb, Mateo Diaz, Peter Bühlmann, Parikshit Shah)

Bio: Please see the end of http://users.cms.caltech.edu/~venkatc/

 

4/22/24

Speaker: Aad van der Vaart (TU Delft)

Title: Linear methods for nonlinear inverse problems

Abstract: We consider the recovery of an unknown function f from a noisy observation u_f of the solution to a partial differential equation that has f as a parameter or boundary function. The challenging, but realistic, case is that the forward map f -> u_f is nonlinear, making this into a nonlinear inverse problem. We follow a standard, nonparametric Bayesian approach, thus regularising the solution of the inverse problem through a prior and basing further inference on the posterior distribution. To gain computational and theoretical strength, we reformulate the problem as a combination of an embedded Bayesian linear problem and an analytic nonlinear problem, thus making it possible to obtain the posterior distribution using known and computationally efficient approaches for linear inverse problems in combination with numerical methods to map back to the original nonlinear problem. We consider several examples, including the Schrödinger and Darcy equations, and Gaussian process priors. After reviewing results for linear problems, we present contraction rates for the posterior distribution and coverage of credible sets for the nonlinear problems. We also discuss distributed posteriors to further alleviate the computational burden. [Joint work with Geerten Koers (TU Delft) and Botond Szabó (Bocconi, Milano).]
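
The embedded Bayesian linear step has a closed form when the prior is Gaussian: for y = A f + noise with a Gaussian prior on f, the posterior is Gaussian with explicit mean and covariance. A small sketch of that conjugate computation; the forward map A, noise level, and prior below are illustrative placeholders, not the PDE forward maps or GP priors of the talk:

```python
# Conjugate Gaussian posterior for a linear inverse problem y = A f + noise,
# the kind of linear building block referred to above (A, sigma, and the prior
# are illustrative placeholders, not the PDE setting of the talk).
import numpy as np

def gaussian_posterior(A, y, sigma, prior_cov):
    """Posterior mean and covariance of f given y = A f + N(0, sigma^2 I) noise."""
    prec = A.T @ A / sigma**2 + np.linalg.inv(prior_cov)
    cov = np.linalg.inv(prec)
    mean = cov @ (A.T @ y) / sigma**2
    return mean, cov

rng = np.random.default_rng(0)
p, n, sigma = 20, 50, 0.1
A = rng.normal(size=(n, p))                     # placeholder forward operator
f_true = rng.normal(size=p)
y = A @ f_true + sigma * rng.normal(size=n)
prior_cov = np.eye(p)                           # placeholder prior covariance
mean, cov = gaussian_posterior(A, y, sigma, prior_cov)
print("posterior mean error:", np.linalg.norm(mean - f_true))
```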

Bio: Aad van der Vaart is professor of statistics at the Institute of Applied Mathematics of TU Delft (the Netherlands). His main research topics are semiparametric models, empirical processes,  asymptotic statistics, and nonparametric Bayesian methods. He has also collaborated on topics in applied statistics (e.g. genomics, imaging, finance). Among his current interests are inverse problems and causal inference. His research output includes several books. He served in many administrative roles, including as head of two mathematical institutes, as president of the Netherlands Statistical Society, and (currently) as president of the International Society of Bayesian Analysis.  [ https://diamhomes.ewi.tudelft.nl/~avandervaart/  ]

 

4/29/24

Speaker: Alexandre Tsybakov (CREST-ENSAE Paris)

Title: Gradient-free stochastic optimization under adversarial noise

Abstract: We study the problem of estimating the minimizer or the minimal value of a smooth function by exploring its values under possibly adversarial noise. We consider active (sequential) and passive settings of the problem and several approximations of the gradient descent algorithm, where the gradient is estimated by procedures involving function evaluations at randomized points and a smoothing kernel based on ideas from nonparametric regression. The objective function is assumed to have Hölder smoothness index greater than or equal to 2, and possibly to satisfy additional assumptions such as strong convexity or the Polyak--Łojasiewicz condition. In all scenarios, we suggest polynomial-time algorithms achieving non-asymptotic minimax optimal or near minimax optimal rates of convergence under adversarial noise. Based on joint work with Arya Akhavan, Evgenii Chzhen, Davit Gogolashvili, and Massimiliano Pontil.
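
A hedged sketch of the kind of gradient-free update the abstract describes: a two-point randomized finite-difference gradient estimate plugged into a gradient-descent step. The kernel-based refinements used to exploit higher-order Hölder smoothness, and the step-size/perturbation schedules of the actual algorithms, are omitted; the choices below are illustrative:

```python
# Gradient-free optimization sketch: two-point randomized gradient estimate plus a
# gradient-descent step. The smoothing-kernel refinements for higher-order
# smoothness described in the talk are omitted here.
import numpy as np

def two_point_gradient(f, x, h, rng):
    u = rng.normal(size=x.shape)
    u /= np.linalg.norm(u)                       # random direction on the sphere
    return x.size * (f(x + h * u) - f(x - h * u)) / (2 * h) * u

rng = np.random.default_rng(0)
d = 10
f = lambda x: np.sum((x - 1.0) ** 2) + 0.01 * rng.normal()   # noisy evaluations
x = np.zeros(d)
for t in range(1, 2001):
    g = two_point_gradient(f, x, h=0.1 / np.sqrt(t), rng=rng)
    x -= 0.5 / t * g                             # decreasing step size
print("distance to minimizer:", np.linalg.norm(x - 1.0))
```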

Bio: Alexandre Tsybakov is a Professor at ENSAE Paris and at Sorbonne University, Paris. From 1993 to 2017 he was a Professor at University Pierre and Marie Curie (Paris 6), and from 2009 to 2015 a Professor at Ecole Polytechnique. He was a member of the Institute for Information Transmission Problems, Moscow, from 1980 to 2007. His research interests include high-dimensional statistics, nonparametric function estimation, statistical machine learning, stochastic optimization, and statistical inverse problems.

Prof. Tsybakov is the author of 3 books and more than 150 journal papers. He is a Fellow of the Institute of Mathematical Statistics, and he has been awarded the Le Cam Lecture by the French Statistical Society, a Miller Professorship by the University of California, Berkeley, a Medallion Lecture by the Institute of Mathematical Statistics, a Gay-Lussac-Humboldt Prize, an Invited Lecture at the International Congress of Mathematicians, and several other distinctions. He is a member of the editorial boards of several journals.

 
 
5/6/24