Schedule for Fall 2015
Information for speakers: For information about the schedule, directions, equipment, reimbursement, and hotel, please click here.
Previous semesters’ schedules can be viewed here.
Swupnil Sahai, Columbia Statistics
“Data Science at Tesla Motors”
Tesla Motors produces the Model S, which has been rated the world’s best car by Consumer Reports for the past two years. One area in which the Model S is rated only at the industry average, however, is reliability. We therefore explore how data from component failures in the field and in the testing labs can be used to model the reliability of Model S components. In particular, we show how Stan can be used to model a variety of Tesla’s data sets, from censored Weibull failure data to data comprising multiple Weibull mixture components. We also show how stochastic simulations combined with matplotlib animations can be used to visualize survival analysis intuitively over time.
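The talk fits these models in Stan; as a rough illustration of the censored-Weibull idea, here is a minimal maximum-likelihood sketch in Python on synthetic data. The shape/scale values, sample size, and censoring cutoff below are illustrative assumptions, not Tesla's.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Synthetic component lifetimes (days): Weibull(shape=1.5, scale=1000)
shape_true, scale_true = 1.5, 1000.0
t = scale_true * rng.weibull(shape_true, size=2000)

# Right-censor units still running at the observation cutoff
cutoff = 800.0
observed = t <= cutoff
time = np.minimum(t, cutoff)

def neg_log_lik(params):
    k, lam = params  # Weibull shape and scale
    z = time / lam
    log_f = np.log(k / lam) + (k - 1) * np.log(z) - z**k  # failures
    log_S = -z**k                                          # censored units
    return -(log_f[observed].sum() + log_S[~observed].sum())

res = minimize(neg_log_lik, x0=[1.0, 500.0],
               bounds=[(0.01, 10.0), (1.0, 1e5)], method="L-BFGS-B")
k_hat, lam_hat = res.x
print(k_hat, lam_hat)  # estimates should land near (1.5, 1000)
```

A Bayesian fit (as in the talk) would additionally give full posterior uncertainty on the shape and scale parameters rather than point estimates.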
Sept 16, 2015
Tim Leung, Columbia IEOR
“Exchange-Traded Funds and Related Trading Strategies”
In this student seminar, I’ll discuss a number of static and dynamic portfolios related to exchange-traded funds (ETFs). Models for the price dynamics of equity-based and futures-based ETFs and leveraged ETFs are presented. This leads to the development of futures-based strategies for the objective of leverage replication, with applications to VIX and commodity (L)ETFs. Another class of trading strategies involves multiple leveraged ETFs, accounting for their leverage ratios, volatility decays, expense ratios, and tracking errors. The performance and risk characteristics of these portfolios are studied both analytically and empirically.
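The volatility decay of leveraged ETFs mentioned above can be seen in a toy simulation. This is illustrative only: i.i.d. zero-drift daily returns, a hypothetical 2x daily-rebalanced fund, and no fees, expense ratios, or tracking error.

```python
import numpy as np

rng = np.random.default_rng(1)

# Daily returns of the underlying index: zero drift, 2% daily volatility
n_days = 252
r = rng.normal(0.0, 0.02, size=n_days)

index_growth = np.prod(1 + r)      # buy-and-hold the index
letf_growth = np.prod(1 + 2 * r)   # 2x leveraged ETF, rebalanced daily

print(index_growth, letf_growth)
```

Since (1 + 2r) <= (1 + r)^2 term by term, the daily-rebalanced 2x growth factor can never exceed the square of the index growth factor; the gap between the two is the volatility decay.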
Summer Intern Workshop

Sept 30, 2015
Rob Lane, Columbia University High Performance Computing (HPC) team

Oct 07, 2015
Shang Li, Columbia EE
“Multi-Sensor Sequential Composite Hypothesis Testing Based on One-Bit Communication”
Our work investigates the generalized sequential probability ratio test (GSPRT) with multiple sensors. Focusing on the communication-constrained scenario, where sensors transmit one-bit messages to the fusion center, we propose a decentralized GSPRT based on a level-triggered sampling scheme (LTS-GSPRT). The proposed LTS-GSPRT amounts to an algorithm in which each sensor successively reports the decisions of local GSPRTs to the fusion center. Interestingly, at a significantly lower communication overhead, LTS-GSPRT provably preserves the asymptotic performance of the centralized GSPRT as the local and global thresholds grow large at different rates.
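The talk treats composite hypotheses; as a schematic illustration of the one-bit, level-triggered idea, here is a simple-vs-simple Gaussian variant in Python. Each sensor transmits a single bit whenever its local log-likelihood ratio moves by a level delta, and the fusion center accumulates the quantized messages until a global threshold is crossed. All parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simple-vs-simple Gaussian test: H0 mean 0 vs H1 mean 0.5, unit variance
mu0, mu1 = 0.0, 0.5
n_sensors, delta = 4, 1.0   # delta: local level-triggered sampling step
A = 10.0                    # symmetric global stopping threshold

true_mean = mu1             # generate data under H1
llr = np.zeros(n_sensors)   # running local log-likelihood ratios
last = np.zeros(n_sensors)  # LLR value at each sensor's last transmission
fusion = 0.0
decision = None

for t in range(10_000):
    x = rng.normal(true_mean, 1.0, size=n_sensors)
    llr += (mu1 - mu0) * x - 0.5 * (mu1**2 - mu0**2)
    # a sensor sends one bit (+/-) when its LLR moves by delta
    up = llr - last >= delta
    down = llr - last <= -delta
    fusion += delta * (up.sum() - down.sum())
    last = np.where(up | down, llr, last)
    if fusion >= A:
        decision = 1  # accept H1
        break
    if fusion <= -A:
        decision = 0  # accept H0
        break

print(decision)
```

The fusion center only ever sees coarse, delta-sized updates, yet under H1 its quantized statistic drifts upward and stops at the correct decision; the talk's asymptotic result makes this communication/performance trade-off precise for the composite case.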
Oct 14, 2015
Marcel Nutz, Columbia Statistics
“Martingale Optimal Transport and Beyond”
We study the optimal transport between two probability measures, where the transport plans are subject to a probabilistic constraint. For instance, in the martingale optimal transport problem, the transports are laws of one-step martingales. Interesting new couplings emerge as optimizers in such problems.
Constrained transport problems arise in the context of robust semi-static hedging in mathematical finance via linear programming duality. We formulate a complete duality theory for general reward (cost) functions, including the existence of optimal hedges. This duality leads to an analytic monotonicity principle which describes the geometry of optimal transports. Joint work with Mathias Beiglboeck, Florian Stebegg and Nizar Touzi.
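In the discrete case, the martingale optimal transport problem is itself a linear program: minimize expected cost over couplings matching both marginals, with the additional constraint that the conditional mean of the second coordinate equals the first. Below is a tiny sketch with illustrative marginals in convex order (the measures and cost are my own toy choices, not from the talk).

```python
import numpy as np
from scipy.optimize import linprog

# Marginals: mu on {-1, +1}, nu on {-2, 0, +2}, in convex order
x = np.array([-1.0, 1.0]);       mu = np.array([0.5, 0.5])
y = np.array([-2.0, 0.0, 2.0]);  nu = np.array([0.25, 0.5, 0.25])

# Variables p[i, j] >= 0, flattened; cost c(x, y) = |y - x|
cost = np.abs(y[None, :] - x[:, None]).ravel()

A_eq, b_eq = [], []
# first marginal: sum_j p[i, j] = mu[i]
for i in range(2):
    row = np.zeros((2, 3)); row[i, :] = 1
    A_eq.append(row.ravel()); b_eq.append(mu[i])
# second marginal: sum_i p[i, j] = nu[j]
for j in range(3):
    col = np.zeros((2, 3)); col[:, j] = 1
    A_eq.append(col.ravel()); b_eq.append(nu[j])
# martingale constraint: sum_j p[i, j] * (y[j] - x[i]) = 0
for i in range(2):
    m = np.zeros((2, 3)); m[i, :] = y - x[i]
    A_eq.append(m.ravel()); b_eq.append(0.0)

res = linprog(cost, A_eq=np.array(A_eq), b_eq=np.array(b_eq),
              bounds=[(0, None)] * 6)
print(res.fun)  # optimal expected cost
```

The dual of this LP is exactly the semi-static hedging problem the abstract alludes to: static positions in functions of x and y plus a dynamic (martingale) trading term.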
Oct 21, 2015
Yuan Zhong, Columbia IEOR
“Large-scale stochastic dynamic bin packing”
We present a new class of bin packing models, so-called stochastic dynamic bin packing, which are primarily motivated by the problem of virtual machine placement into physical servers in cloud computing clusters.
A key performance objective is to minimize the total number of occupied servers. In this talk, we describe several placement policies and establish their performance and scalability properties. In particular, we propose Greedy-Random (GRAND), a class of extremely simple policies, and show that versions of GRAND are asymptotically optimal, as the system scale goes to infinity. We then complement the theoretical results with simulation studies, and conclude with some open problems.
This talk is based on joint work with Sasha Stolyar of Lehigh University.
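One simple variant of a Greedy-Random placement rule can be sketched in a few lines (this is my own illustrative simulation with made-up rates, not necessarily the exact GRAND policies analyzed in the talk): place each arriving unit-size virtual machine into a uniformly random occupied server with spare capacity, opening a new server only when none has room.

```python
import numpy as np

rng = np.random.default_rng(3)

capacity = 10                  # unit-size VMs per server
lam, mu, dt = 200.0, 1.0, 0.01 # VM arrival rate, departure rate, time step
servers = []                   # occupancy of each currently occupied server

for _ in range(20_000):        # simulate toward (rough) steady state
    # each running VM departs independently at rate mu
    servers = [s - rng.binomial(s, mu * dt) for s in servers]
    servers = [s for s in servers if s > 0]
    # Greedy-Random placement of Poisson arrivals
    for _ in range(rng.poisson(lam * dt)):
        room = [i for i, s in enumerate(servers) if s < capacity]
        if room:
            servers[room[rng.integers(len(room))]] += 1
        else:
            servers.append(1)

occupied = len(servers)
print(occupied, sum(servers))
```

Comparing `occupied` with the lower bound `sum(servers) / capacity` gives a feel for the packing inefficiency that the asymptotic optimality results control as the system scale grows.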
Oct 28, 2015
Akshay Krishnamurthy, Microsoft Research
“Efficient Contextual Semibandits”
I will describe a variant of the contextual bandit problem, known as contextual semibandits, where in each round the learner receives a context, plays a sequence of actions, observes a feature for each of the played actions, and observes a reward that is linearly related to those features. This setting is motivated by problems in personalized search and recommendation, where many common performance metrics are linearly related to observable document-specific click information. I will describe two algorithms for this problem, one for the case where the linear transformation is known and one for the case where it is unknown. Both algorithms have low regret guarantees and can be efficiently implemented with an appropriate optimization oracle. I will also present some preliminary empirical findings on these algorithms.
This is joint work with Alekh Agarwal and Miro Dudik.
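In the known-transformation case, the core estimation step can be illustrated very simply: since the round reward is linear in the sum of the played actions' features, the unknown weight vector can be recovered by (ridge) regression on the summed features. The dimensions, noise level, and weights below are synthetic assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

d, L, n = 5, 3, 2000      # feature dim, actions per round, rounds
w = rng.normal(size=d)    # unknown linear map from features to reward

# each round: features of the L played actions, plus the total reward
X = rng.normal(size=(n, L, d))
rewards = X.sum(axis=1) @ w + rng.normal(0.0, 0.1, size=n)

# estimate w by ridge regression on the summed action features
Z = X.sum(axis=1)                                         # shape (n, d)
w_hat = np.linalg.solve(Z.T @ Z + 1e-3 * np.eye(d), Z.T @ rewards)
print(np.linalg.norm(w_hat - w))
```

The actual algorithms in the talk wrap an estimator like this inside an exploration strategy driven by an optimization oracle; the sketch only shows why semibandit feedback identifies the linear reward map.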
Nov 04, 2015
Chuanren Liu, Drexel University
“Temporal Correlation in Sequential Pattern Analysis”
Sequential pattern analysis aims at finding statistically relevant temporal structures in data delivered as sequences. This is a fundamental problem in data mining with diversified applications in many science and business fields. Given the overwhelming scale and the dynamic nature of sequential data, new visions and strategies for sequential pattern analysis are required to derive competitive advantages and unlock the power of big data. To this end, in this talk, we present novel approaches for sequential pattern analysis using temporal correlation. In particular, we will focus on “temporal skeletonization”, our approach to identifying the meaningful granularity for sequential pattern mining. We first show that a large number of symbols in a sequence can “dilute” useful patterns which themselves exist at a different level of granularity. This is the so-called “curse of cardinality”, which can impose significant challenges on the design of sequential analysis methods. To address this challenge, our key idea is to summarize the temporal correlations in an undirected graph, and use the “skeleton” of the graph as a higher level of granularity at which hidden temporal patterns are more likely to be identified. In the meantime, the embedding topology of the graph allows us to translate the rich temporal content into a metric space. This opens up new possibilities to explore, quantify, and visualize sequential data.
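A schematic version of the graph-based idea can be sketched as follows: count symbol co-occurrences within a sliding window, treat the counts as an undirected weighted graph, and use a spectral cut to group symbols into coarser units. The toy sequence, window size, and two-way spectral split are my own illustrative choices, not the method's actual pipeline.

```python
import numpy as np

# Toy event sequence: symbols a/b and c/d interleave within their own phases
seq = list("abababab") + list("cdcdcdcd") + list("abababab") + list("cdcdcdcd")
symbols = sorted(set(seq))
idx = {s: i for i, s in enumerate(symbols)}

# Temporal-correlation graph: co-occurrence counts within a sliding window
W = np.zeros((len(symbols), len(symbols)))
window = 2
for t in range(len(seq) - window):
    for u in range(t + 1, t + window + 1):
        i, j = idx[seq[t]], idx[seq[u]]
        if i != j:
            W[i, j] += 1
            W[j, i] += 1

# Spectral "skeleton": split symbols by the sign of the Fiedler vector
Lap = np.diag(W.sum(axis=1)) - W
eigvals, eigvecs = np.linalg.eigh(Lap)
labels = eigvecs[:, 1] > 0
print(dict(zip(symbols, labels)))
```

Here a/b land in one group and c/d in the other, so the sequence can be re-expressed over two coarse symbols, which is the granularity at which its alternating pattern becomes visible.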
Nov 11, 2015
Bodhisattva Sen, Columbia Statistics
“Adaptation in Shape Constrained Regression”
We consider nonparametric least squares estimation of a shape constrained (e.g., monotonic/convex) regression function, both with univariate and multivariate predictors. We discuss the characterization, computation and consistency of the least squares estimator (LSE) in these problems. An appealing property of the LSE is that it is completely tuning parameter-free.
To quantify the accuracy of the LSE we consider the behavior of its risk, under the squared error loss. We derive worst case risk (upper) bounds in these problems and highlight the adaptive behavior of the LSE. In particular, we show that the LSE automatically adapts to “sparsity” in the underlying true regression function. Another interesting feature of the LSE in the multi-dimensional examples is that it adapts to the intrinsic dimension of the problem.
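In the univariate monotone case, the least squares estimator can be computed exactly by the classical pool adjacent violators algorithm (PAVA); a small self-contained sketch is below. Note that, as the abstract emphasizes, there is no tuning parameter anywhere in the procedure.

```python
def isotonic_ls(y):
    """Least squares fit of y under a nondecreasing constraint (PAVA)."""
    means, weights = [], []
    for v in y:
        m, w = float(v), 1
        # pool adjacent violators: merge blocks while order is violated
        while means and means[-1] > m:
            m = (means[-1] * weights[-1] + m * w) / (weights[-1] + w)
            w += weights[-1]
            means.pop()
            weights.pop()
        means.append(m)
        weights.append(w)
    fit = []
    for m, w in zip(means, weights):
        fit.extend([m] * w)
    return fit

print(isotonic_ls([1.0, 3.0, 2.0, 4.0]))  # [1.0, 2.5, 2.5, 4.0]
```

The flat piece at 2.5 is a pooled block; the adaptation results in the talk quantify how the LSE automatically produces few blocks (and hence a faster rate) when the true function is piecewise constant.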
Nov 18, 2015
Peter Orbanz, Columbia Statistics
“Random graphs and random measures”
Suppose you observe a small subgraph of a very large graph or network. What can you learn about the large graph by statistical analysis of your observations? If you formulate a property of graphs as a statistic, does its value on the small graph approximate the value on the entire graph? If the large, unobserved graph is random, can we estimate expectations? I will explain how problems like this can save you from being overwhelmed by your lavish spare time. Patience of my audience permitting, I may also briefly sketch another line of work, on random measures and their applications in Bayesian nonparametrics.
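The first question in the abstract has a concrete toy version: for an Erdős–Rényi graph, the edge density of a uniformly sampled induced subgraph is already a good estimate of the global edge density. The graph model, sizes, and statistic below are my own illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(5)

# A large Erdos-Renyi graph G(n, p) standing in for the unobserved network
n, p = 2000, 0.1
A = rng.random((n, n)) < p
A = np.triu(A, 1)
A = A | A.T                     # symmetric adjacency, no self-loops

# Observe only the subgraph induced by a uniform sample of k vertices
k = 200
S = rng.choice(n, size=k, replace=False)
sub = A[np.ix_(S, S)]

# Edge density of the induced subgraph estimates the global edge density
est = sub.sum() / (k * (k - 1))
true = A.sum() / (n * (n - 1))
print(est, true)
```

For richer statistics and more realistic graph models the question of when such subgraph estimates converge is exactly the kind of problem the talk addresses.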
Nov 25, 2015
John Paisley, Columbia Electrical Engineering, Columbia Data Science Institute
“Big Models for Big Data: Structured Probabilistic Topic Models”
Advances in scalable machine learning have made it possible to learn highly structured models of large data sets. In this talk, I will discuss some of our recent work in this direction. I will first briefly review probabilistic topic modeling using latent Dirichlet allocation, followed by scalable extensions of the variational inference framework. I will then discuss two structured developments of the LDA model in the form of tree-structured topic models and graph-structured topic models. I will present our recent work in each of these areas.
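For readers unfamiliar with LDA itself, here is a minimal collapsed Gibbs sampler on a tiny synthetic corpus. This illustrates the basic model only; the talk's methods use scalable variational inference and structured extensions, not this sampler. The corpus, vocabulary, and hyperparameters are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(6)

# Tiny synthetic corpus: two topics with disjoint vocabularies {0,1,2}, {3,4,5}
docs = [[0, 1, 0, 1, 2, 0, 1, 2, 0, 1],
        [1, 0, 2, 0, 1, 1, 0, 2, 0, 1],
        [3, 4, 5, 4, 3, 3, 4, 5, 4, 3],
        [4, 5, 3, 5, 4, 4, 5, 3, 5, 4]]
V, K, alpha, beta = 6, 2, 0.1, 0.1

# count tables for collapsed Gibbs sampling
z = [[rng.integers(K) for _ in d] for d in docs]
ndk = np.zeros((len(docs), K))   # topic counts per document
nkw = np.zeros((K, V))           # word counts per topic
nk = np.zeros(K)                 # total words per topic
for d, doc in enumerate(docs):
    for i, w in enumerate(doc):
        k = z[d][i]
        ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1

for _ in range(200):             # Gibbs sweeps
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            k = z[d][i]
            ndk[d, k] -= 1; nkw[k, w] -= 1; nk[k] -= 1
            # collapsed conditional: p(z = k) ~ (ndk + a)(nkw + b)/(nk + Vb)
            probs = (ndk[d] + alpha) * (nkw[:, w] + beta) / (nk + V * beta)
            k = rng.choice(K, p=probs / probs.sum())
            z[d][i] = k
            ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1

print(ndk)  # documents 0/1 and 2/3 concentrate on different topics
```

Tree- and graph-structured topic models replace the flat set of K topics here with topics organized along a tree or a graph, which is what the second half of the talk develops.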
Dec 02, 2015
Qiwei He, Educational Testing Service (ETS)
“Exploring Process Data in Problem-Solving Items in Large Scale Assessments”
In computer-based assessments, test taker performance is recorded along with a variety of timing and process data. Log-file data hold the promise of providing new insights into behavioral processes in task completion that cannot be easily observed in paper-based assessments. This presentation draws on process data collected from problem-solving items in technology-rich environments in two large-scale assessments, the Programme for the International Assessment of Adult Competencies (PIAAC) and the Programme for International Student Assessment (PISA), to address how sequences of actions are related to task performance. By analyzing the process data produced by test takers in different performance groups, we were able to obtain insights into how these action sequences are associated with different ways of cognitive processing and to identify key actions that lead to success or failure.
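One simple way to relate action sequences to performance, in the spirit of the abstract, is to compare action n-gram frequencies across performance groups. The action names and logs below are entirely hypothetical, standing in for real PIAAC/PISA log-file data.

```python
from collections import Counter

# Hypothetical action logs for two performance groups
high = [["start", "plan", "check", "submit"],
        ["start", "plan", "check", "revise", "submit"]]
low = [["start", "submit"],
       ["start", "click", "click", "submit"]]

def bigram_freq(group):
    """Relative frequency of consecutive action pairs within a group."""
    counts = Counter()
    for actions in group:
        counts.update(zip(actions, actions[1:]))
    total = sum(counts.values())
    return {bg: c / total for bg, c in counts.items()}

hi, lo = bigram_freq(high), bigram_freq(low)
# action pairs over-represented among the higher-performing group
print({bg: f for bg, f in hi.items() if f > lo.get(bg, 0.0)})
```

In this toy example the "plan then check" transition only appears in the higher-performing group, the kind of key-action signal the presentation extracts from real process data with more careful sequence methods.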
Dec 09, 2015
Michael Catalano-Johnson, Quantitative Research at Susquehanna International Group (SIG)
“Fairness of Delayed Midquote and a Theta Function Identity”
We will present a simplified model for evaluating the unbiasedness of using a delayed midpoint as an estimate of the current fair value of a stock. The crux lies in the evaluation of a particular infinite series, which turns out to be a theta function. Along the way we will see how several elementary results from Fourier series and complex analysis play a role in evaluating the sum. Finally, a surprising conclusion is reached that serves to warn us about believing too much in numerical coincidences.
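The abstract does not say which series arises, so no attempt is made to reproduce it here; as a flavor of the kind of identity involved, the following sketch numerically checks a classical theta function identity, Jacobi's transformation formula (a consequence of Poisson summation), for the function theta(t) = sum over integers n of exp(-pi n^2 t).

```python
import math

def theta(t, terms=200):
    """theta(t) = sum_{n in Z} exp(-pi * n^2 * t), truncated."""
    return 1.0 + 2.0 * sum(math.exp(-math.pi * n * n * t)
                           for n in range(1, terms))

# Jacobi's transformation formula: theta(1/t) = sqrt(t) * theta(t)
t = 2.0
print(theta(1.0 / t), math.sqrt(t) * theta(t))
```

The two sides agree to machine precision, which is exactly the sort of striking numerical coincidence the talk's closing warning is about: here the agreement is a theorem, but a match this close is not by itself a proof.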