Adji Bousso Dieng

I am a Ph.D. student in the Department of Statistics at Columbia University, where I am jointly advised by David Blei and John Paisley. My work at Columbia combines probabilistic graphical modeling and deep learning to design better sequence models. I develop these models within the framework of variational inference, which enables efficient and scalable learning. My hope is that my research can be applied to many real-world applications, particularly natural language understanding.

Prior to joining Columbia, I worked as a Junior Professional Associate at the World Bank. I did my undergraduate training in France, where I attended Lycée Henri IV and Télécom ParisTech, part of France's Grandes Écoles system. I hold a Diplôme d'Ingénieur from Télécom ParisTech and spent the third year of its curriculum at Cornell University, where I earned a Master's in Statistics.

LinkedIn          GitHub          Curriculum Vitae          Google Scholar          Twitter


News

May 2018: I am excited to be interning at Facebook AI Research this summer.

May 2018: Our paper "Augment and Reduce: Stochastic Inference for Large Categorical Distributions" is at ICML.

May 2018: Our paper "Noisin: Unbiased Regularization for Recurrent Neural Networks" has been accepted at ICML.

Feb 2018: I will be part of the Women Techmakers 2018 Summit panel at Google, New York.

Feb 2018: I will be giving a spotlight talk at the NYAS ML Symposium.



Selected Publications

Noisin: Unbiased Regularization for Recurrent Neural Networks

Adji B. Dieng, Rajesh Ranganath, Jaan Altosaar, and David M. Blei

International Conference on Machine Learning, 2018

Paper         Slides

Augment and Reduce: Stochastic Inference for Large Categorical Distributions

Francisco J. R. Ruiz, Michalis Titsias, Adji B. Dieng, and David M. Blei

International Conference on Machine Learning, 2018

Paper         Slides

TopicRNN: A Recurrent Neural Network With Long-Range Semantic Dependency

Adji B. Dieng, Chong Wang, Jianfeng Gao, and John Paisley

International Conference on Learning Representations, 2017

Paper         Poster         Slides

Variational Inference via Chi Upper Bound Minimization

Adji B. Dieng, Dustin Tran, Rajesh Ranganath, John Paisley, and David M. Blei

Neural Information Processing Systems, 2017

Paper         Poster         Slides


Talks

Tufts University CS Colloquium, Medford, MA, April 2018

Harvard University NLP Group Meeting, Cambridge, MA, April 2018

Stanford University NLP Seminar, Stanford, CA, April 2018

New York Academy of Sciences ML Symposium, New York, NY, March 2018

Machine Learning and Friends Seminar, UMass, Amherst, MA, February 2018

Black in AI Workshop, Long Beach, CA, December 2017

MSR AI, Microsoft Research, Redmond, WA, August 2017

SSLI Lab, University of Washington, Seattle, WA, August 2017

DeepLoria, Loria Laboratory, Nancy, France, April 2017

AI With The Best, Online, April 2017

OpenAI, San Francisco, CA, January 2017

IBM T. J. Watson Research Center, Yorktown Heights, NY, December 2016

Microsoft Research, Redmond, WA, August 2016