Convergence properties of three spike-triggered analysis techniques
Published as: Network: Computation in Neural Systems 14: 437-464 (2003)
We analyse the convergence properties of three spike-triggered data
analysis techniques. Our results are obtained in the setting of a
probabilistic linear-nonlinear (LN) cascade neural encoding model;
this model has recently become popular in the study of the neural
coding of natural signals. We start by giving exact
rate-of-convergence results for the common spike-triggered average
technique. Next, we analyse a spike-triggered covariance method,
variants of which have recently been exploited successfully by Bialek,
Simoncelli and colleagues. Unfortunately, the conditions that
guarantee that these two estimators will converge to the correct
parameters are typically not satisfied by natural signal
data. Therefore, we introduce an estimator for the LN model parameters
which is designed to converge under general conditions to the correct
model. We derive the rate of convergence of this estimator, provide an
algorithm for its computation and demonstrate its application to
simulated data as well as physiological data from the primary motor
cortex of awake behaving monkeys. We also give lower bounds on the
convergence rate of any possible LN estimator. Our results should
prove useful in the study of the neural coding of high-dimensional
natural signals.
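
For concreteness, the following is a minimal numpy sketch of the two classical estimators discussed in the abstract, computed on data simulated from a toy LN cascade. The particular filter, the exponential nonlinearity, and all variable names are illustrative assumptions for this sketch, not the paper's code or data.

    import numpy as np

    rng = np.random.default_rng(0)

    # --- Simulate a toy linear-nonlinear (LN) cascade model ---
    d, T = 20, 50000                              # stimulus dimension, number of time bins
    k_true = np.sin(np.linspace(0, np.pi, d))     # assumed linear filter (illustrative)
    k_true /= np.linalg.norm(k_true)

    X = rng.standard_normal((T, d))               # Gaussian white-noise stimulus
    rate = np.exp(X @ k_true - 1.0)               # assumed exponential nonlinearity
    spikes = rng.poisson(rate)                    # Poisson spike counts per bin

    # --- Spike-triggered average (STA) ---
    # Mean stimulus associated with a spike, weighted by the spike count in each bin.
    sta = (spikes @ X) / spikes.sum()

    # --- Spike-triggered covariance (STC) ---
    # Covariance of the spike-weighted stimulus ensemble about the STA; eigenvectors
    # whose eigenvalues deviate from those of the raw stimulus covariance point to
    # stimulus directions that modulate firing.
    X_centered = X - sta
    stc = (X_centered * spikes[:, None]).T @ X_centered / spikes.sum()
    eigvals, eigvecs = np.linalg.eigh(stc)

    print("correlation of normalized STA with true filter:",
          np.dot(sta / np.linalg.norm(sta), k_true))

Under the Gaussian white-noise stimulus simulated here, the STA recovers the filter direction; the abstract's point is that for non-Gaussian natural signal data the conditions guaranteeing this convergence typically fail, which is what motivates the alternative LN estimator introduced in the paper.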