Statistics Seminar – Fall 2019

Schedule for Fall 2019

Seminars are on Mondays
Time: 4:10pm – 5:00pm
Location: Room 903, 1255 Amsterdam Avenue

Tea and coffee will be served before the seminar at 3:30 PM in the 10th Floor Lounge, SSW

A cheese and wine reception will follow the seminar at 5:10 PM in the 10th Floor Lounge, SSW

For an archive of past seminars, please click here.

9/9/19

Alberto Abadie (MIT, Department of Economics)

Title: Statistical Non-Significance in Empirical Economics

Abstract: Statistical significance is often interpreted as providing greater information than non-significance. In this article we show, however, that rejection of a point null often carries very little information, while failure to reject may be highly informative. This is particularly true in empirical contexts that are common in economics, where data sets are large and there are rarely reasons to put substantial prior probability on a point null. Our results challenge the usual practice of conferring on point null rejections a higher level of scientific significance than on non-rejections. We therefore advocate visible reporting and discussion of non-significant results.
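
As a back-of-the-envelope illustration of the argument (not taken from the paper; the sample size, effect size, and test below are all hypothetical), a large-sample z-test rejects a point null even when the true effect is negligible, whereas the interval that would accompany a non-rejection localizes the effect tightly around zero:

```python
# Hypothetical illustration: with a very large sample, a z-test rejects
# H0: mu = 0 even for an economically negligible true effect, so the
# rejection itself is nearly uninformative; a non-rejection, by contrast,
# would pin the effect very close to zero.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 1_000_000
true_effect = 0.005                      # tiny effect, in SD units
x = rng.normal(true_effect, 1.0, size=n)

z = x.mean() / (x.std(ddof=1) / np.sqrt(n))
p = 2 * stats.norm.sf(abs(z))
print(f"z = {z:.2f}, p = {p:.2g}")       # typically rejects H0

# Why non-rejection would be informative: the 95% CI half-width here is
# about 0.002, so failing to reject bounds the effect tightly near zero.
half_width = 1.96 * x.std(ddof=1) / np.sqrt(n)
print(f"95% CI half-width: {half_width:.4f}")
```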

9/16/19

Jason Klusowski (Rutgers)

Title: Path-Based Compression and Generalization for Deep ReLU Networks

Abstract: The ability of modern neural networks to generalize well despite having many more parameters than training samples has been a widely studied topic in the deep learning community. A recently proposed approach for improving generalization guarantees involves showing that a given network can be 'compressed' to a sparser network with fewer, discrete parameters. We study a path-based approach in which the compressed network is formed from empirical counts of paths drawn at random from a Markov distribution induced by the weights of the original network. This method leads to a generalization bound depending on the complexity of the path structure in the network. In addition, by exploiting certain invariance properties of neural networks, the generalization bound does not depend explicitly on the intermediate layer dimensions, allowing for very large networks. Finally, we study empirically the relationship between compression and generalization, and find that networks that generalize well can indeed be compressed more effectively than those that do not generalize.
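
A heavily simplified sketch of the path-sampling idea follows (assumptions: the ReLU gating is ignored so the path expansion is linear, and paths are drawn i.i.d. with probability proportional to the absolute product of their weights rather than from the paper's Markov distribution; this conveys the flavor, not the paper's algorithm or guarantees):

```python
# Toy path-based compression of a one-hidden-layer network.
import numpy as np

rng = np.random.default_rng(1)
d, h = 4, 8                            # input and hidden widths, scalar output
W1 = rng.normal(size=(h, d))           # input -> hidden weights
w2 = rng.normal(size=h)                # hidden -> output weights

# Path expansion (linearized): f(x) = sum over paths (i -> j -> out)
# of x_i * W1[j, i] * w2[j].
paths = [(i, j) for j in range(h) for i in range(d)]
v = np.array([W1[j, i] * w2[j] for i, j in paths])   # value of each path
Z = np.abs(v).sum()

# Draw k paths with probability proportional to |path value|; keep counts.
k = 500
draws = rng.choice(len(paths), size=k, p=np.abs(v) / Z)
counts = np.bincount(draws, minlength=len(paths))

# Compressed path values from empirical counts: unsampled paths drop out,
# and the surviving values are discrete multiples of Z / k.
v_hat = np.sign(v) * Z * counts / k

x = rng.normal(size=d)
f = sum(v[m] * x[i] for m, (i, j) in enumerate(paths))
f_hat = sum(v_hat[m] * x[i] for m, (i, j) in enumerate(paths))
print(f"original {f:.3f} vs compressed {f_hat:.3f}")
```

The surviving path values are discrete multiples of Z / k, which is one way to see how the compressed network ends up with fewer, discrete parameters.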

9/23/19

Subhashis Ghosal (North Carolina State University)

Title: Posterior Contraction and Credible Sets for Filaments of Regression Functions

Abstract: The filament of a smooth function f consists of the local maximizers of f when moving in a certain direction. The filament is an important geometric feature of the surface of the graph of a function, and it is also an important lower-dimensional summary in the analysis of multivariate data. There have been some recent theoretical studies of estimating the filaments of a density function using a nonparametric kernel density estimator. In this talk, we consider a Bayesian approach and concentrate on the nonparametric regression problem. We study posterior contraction rates for filaments using a finite random series of tensor products of B-splines as a prior on the regression function. Compared with the kernel method, this has the advantage that the bias can be better controlled when the function is smoother, which allows better rates to be obtained. Under an isotropic Hölder smoothness condition, we obtain the posterior contraction rate for the filament under two different metrics: a distance of separation along an integral curve, and the Hausdorff distance between sets. Moreover, we construct credible sets for the filament that have optimal size and sufficient frequentist coverage. We study the performance of the proposed method through a simulation study and apply it to a dataset on California earthquakes to assess the fault line of maximum local earthquake intensity.

Based on joint work with my former graduate student, Dr. Wei Li, Assistant Professor, Syracuse University, New York.
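
For readers unfamiliar with the object of study, one common formalization of a filament, borrowed from the ridge-estimation literature (the talk's exact definitions may differ), is:

```latex
% For a smooth f : \mathbb{R}^2 \to \mathbb{R}, let H(x) be the Hessian of f
% with eigenvalues \lambda_1(x) \ge \lambda_2(x), and let v_2(x) be the unit
% eigenvector associated with \lambda_2(x). The filament (ridge) of f is
\[
  R \;=\; \{\, x : v_2(x)^{\top} \nabla f(x) = 0,\ \lambda_2(x) < 0 \,\},
\]
% i.e., the set of points that are local maxima of f along the direction
% v_2(x) of strongest downward curvature.
```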

9/30/19

Ruobin Gong (Rutgers)

Title: Private Data + Approximate Computation = Exact Inference

Abstract: From data collection to model building to computation, statistical inference at every stage must reconcile with imperfections. I discuss a serendipitous result in which two apparently imperfect components mingle to produce “perfect” inference. Differentially private data protect individuals' confidential information by subjecting themselves to carefully designed noise mechanisms, trading off statistical efficiency for privacy. Approximate Bayesian computation (ABC) allows for sampling from approximate posteriors of complex models with intractable likelihoods, trading off exactness for computational efficiency. Finding the right alignment between the two tradeoffs liberates one from the other and salvages the exactness of inference in the process. A parallel result for maximum likelihood inference on private data using Monte Carlo Expectation-Maximization is also discussed.
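
A minimal sketch of the flavor of this result (the Beta-Bernoulli model, count statistic, and Laplace release below are hypothetical choices, not necessarily those of the talk): rejection ABC that uses the privacy mechanism's own density as its acceptance kernel yields draws from the exact posterior given the privatized release.

```python
# Rejection ABC with the Laplace privacy mechanism as the acceptance kernel.
import numpy as np

rng = np.random.default_rng(2)
n, eps = 100, 1.0                   # sample size, privacy budget
b = 1.0 / eps                       # Laplace scale (a count has sensitivity 1)

theta_true = 0.3
x = rng.binomial(1, theta_true, size=n)
s_private = x.sum() + rng.laplace(scale=b)   # the only released quantity

def laplace_pdf(u, scale):
    return np.exp(-abs(u) / scale) / (2 * scale)

M = laplace_pdf(0.0, b)             # sup of the kernel, for rejection sampling

draws = []
while len(draws) < 2000:
    theta = rng.beta(1, 1)                   # uniform prior on theta
    s_synth = rng.binomial(n, theta)         # synthetic statistic from the model
    # accept with probability eta(s_private | s_synth) / M, where eta is the
    # density of the privacy mechanism itself
    if rng.uniform() < laplace_pdf(s_private - s_synth, b) / M:
        draws.append(theta)

print(f"posterior mean given the private release: {np.mean(draws):.3f}")
```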

10/7/19

Jeff Leek (JHU)

10/14/19

Joshua Loftus (NYU)

10/21/19

Richard Nickl (Cambridge)

10/28/19

Yihong Wu (Yale)

11/4/19

University Holiday

11/11/19

Boaz Nadler (Weizmann)

11/18/19

Jun Liu (Harvard)

11/25/19

Jelena Bradic (University of California, San Diego)

12/2/19

12/9/19