Spike-Timing Error Backpropagation in Theta Neuron Networks
Abstract
The main contribution of this paper is the derivation of a steepest gradient descent learning rule for a multi-layer network of theta neurons, a one-dimensional nonlinear neuron model. Central to our model is the assumption that the intrinsic neuron dynamics are sufficient to achieve consistent time coding, with no need to involve the precise shape of postsynaptic currents; this assumption departs from related models such as SpikeProp and Tempotron learning. Our results show that complex computations can be performed by applying supervised learning techniques to the spike times and time response properties of nonlinear integrate-and-fire neurons. Networks trained with our multi-layer training rule exhibit generalization performance on spike-latency pattern classification comparable to that of Tempotron learning. The rule is also able to train networks to perform complex regression tasks that neither SpikeProp nor Tempotron learning appears capable of.
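To make the neuron model concrete: the theta neuron (the Ermentrout-Kopell canonical model) evolves a single phase variable according to dθ/dt = (1 − cos θ) + (1 + cos θ)·I(t), emitting a spike each time θ crosses π. The sketch below is an illustrative forward-Euler simulation of these dynamics, not the paper's learning rule; the function name and parameters are our own choices.

```python
import math

def theta_neuron_spikes(I, t_max, dt=1e-4, theta0=0.0):
    """Euler-integrate the theta neuron  dtheta/dt = (1 - cos theta) + (1 + cos theta) * I(t).

    I may be a constant or a callable of time. Returns the list of spike
    times, i.e. the times at which the phase crosses pi (the phase is then
    wrapped back by 2*pi, the model's equivalent of a reset).
    """
    theta = theta0
    spikes = []
    for k in range(int(t_max / dt)):
        i_t = I(k * dt) if callable(I) else I
        theta += dt * ((1.0 - math.cos(theta)) + (1.0 + math.cos(theta)) * i_t)
        if theta >= math.pi:          # phase crossed pi: record a spike, wrap phase
            spikes.append(k * dt)
            theta -= 2.0 * math.pi
    return spikes
```

For a constant suprathreshold drive I > 0 the model fires periodically with period π/√I, while for I < 0 the phase settles to a stable fixed point and no spikes occur; this intrinsic time-to-spike behavior is what the learning rule in the paper exploits.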