Wasserstein Distributional Learning via Majorization-Minimization

Abstract

Learning function-on-scalar predictive models for conditional densities and identifying factors that influence the entire probability distribution are vital tasks in many data-driven applications. We present Wasserstein Distributional Learning (WDL), an efficient Majorization-Minimization optimization algorithm that trains Semi-parametric Conditional Gaussian Mixture Models (SCGMMs) for conditional density functions and uses the Wasserstein distance W2 as a proper metric on the space of density outcomes. We further provide theoretical convergence guarantees and illustrate the algorithm using boosted machines. Experiments on synthetic data and real-world applications demonstrate the effectiveness of the proposed WDL algorithm.
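As a concrete illustration of using W2 as a metric on density outcomes, the short Python sketch below computes the 2-Wasserstein distance between univariate distributions. It is not the paper's implementation: the function names, the univariate setting, and the equal-sample-size assumption are choices made here for brevity. For univariate Gaussians W2 has a closed form, and for arbitrary univariate samples W2 equals the L2 distance between quantile functions.

import numpy as np

def w2_gaussian(m1, s1, m2, s2):
    # Closed-form 2-Wasserstein distance between univariate Gaussians:
    # W2(N(m1, s1^2), N(m2, s2^2))^2 = (m1 - m2)^2 + (s1 - s2)^2.
    return np.sqrt((m1 - m2) ** 2 + (s1 - s2) ** 2)

def w2_empirical(x, y):
    # In one dimension, W2^2 is the integrated squared difference of the
    # two quantile functions; with equal-size samples this reduces to the
    # mean squared difference between the sorted samples.
    x, y = np.sort(np.asarray(x)), np.sort(np.asarray(y))
    assert len(x) == len(y), "sketch assumes equal sample sizes"
    return np.sqrt(np.mean((x - y) ** 2))

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, 20_000)
y = rng.normal(2.0, 0.5, 20_000)
print(w2_gaussian(0.0, 1.0, 2.0, 0.5))  # exact: sqrt(4.25) ~ 2.0616
print(w2_empirical(x, y))               # sample estimate, close to 2.06

Closed-form expressions such as the Gaussian case above are part of what makes W2 attractive as a metric for mixture-based conditional density models.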

Tian Zheng
Professor of Statistics & Department Chair

Tian Zheng develops novel methods for exploring and understanding patterns in complex data across application domains. She is passionate about education and mentoring and their potential to drive convergence and broaden participation.