A new look at state-space models for neural data
Liam Paninski, Yashar Ahmadian, Daniel Gil Ferreira, Shinsuke Koyama, Kamiar Rahnama Rad, Michael Vidne, Joshua Vogelstein, and Wei Wu
In press, Journal of Computational Neuroscience (special issue on statistical analysis of neural data)
State-space methods have proven indispensable in neural data
analysis. However, common methods for performing inference in
state-space models with non-Gaussian observations rely on certain
approximations which are not always accurate. Here we review direct
optimization methods that avoid these approximations, but that
nonetheless retain the computational efficiency of the approximate
methods. We discuss a variety of examples, applying these direct
optimization techniques to problems in spike train smoothing, stimulus
decoding, parameter estimation, and inference of synaptic
properties. Along the way, we point out connections to some related
standard statistical methods, including spline smoothing and isotonic
regression. Finally, we note that the computational methods reviewed
here do not in fact depend on the state-space setting at all; instead,
the key property we are exploiting involves the bandedness of certain
matrices. We close by discussing some applications of this more
general point of view, including Markov chain Monte Carlo methods for
neural decoding and efficient estimation of spatially-varying firing
rates.
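The closing point about bandedness can be illustrated with a small sketch (not from the paper, and with made-up matrix values): for a one-dimensional autoregressive state-space prior over T time bins, the Hessian of the log-posterior is tridiagonal, so each Newton step reduces to a banded linear solve costing O(T) time rather than the O(T^3) of a dense solver. A minimal example using SciPy:

```python
import numpy as np
from scipy.linalg import solve_banded

# Hypothetical tridiagonal precision matrix, as would arise from an
# AR(1) state-space prior plus a diagonal observation term.
# (Values here are illustrative, not taken from the paper.)
T = 1000
main = np.full(T, 2.1)       # main diagonal
off = np.full(T - 1, -1.0)   # sub- and super-diagonal

# Banded storage expected by solve_banded: rows hold the
# superdiagonal, main diagonal, and subdiagonal.
ab = np.zeros((3, T))
ab[0, 1:] = off     # superdiagonal (padded on the left)
ab[1, :] = main     # main diagonal
ab[2, :-1] = off    # subdiagonal (padded on the right)

b = np.random.default_rng(0).standard_normal(T)
x = solve_banded((1, 1), ab, b)  # O(T) banded solve

# Check against the equivalent dense system.
A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
assert np.allclose(A @ x, b)
```

The same banded solve is what makes each iteration of the direct optimization approach scale linearly in the number of time bins, whether or not the model is phrased in state-space terms.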