Asymptotic description of stochastic neural networks. II. Characterization of the limit law

Olivier Faugeras 1 James Maclaurin 1
1 NEUROMATHCOMP - Mathematical and Computational Neuroscience, CRISAM - Inria Sophia Antipolis - Méditerranée, JAD - Laboratoire Jean Alexandre Dieudonné: UMR6621
Abstract: We continue the development, started in [7], of the asymptotic description of certain stochastic neural networks. We use the Large Deviation Principle (LDP) and the good rate function H announced there to prove that H has a unique minimum μe, a stationary measure on the set of trajectories T^Z. We characterize this measure by its two marginals: at time 0, and from time 1 to T. The second marginal is a stationary Gaussian measure. With an eye on applications, we show that its mean and covariance operator can be computed inductively. Finally, we use the LDP to establish various convergence results, both averaged and quenched.
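The abstract states that the mean and covariance operator of the limiting stationary Gaussian measure can be computed inductively. As a generic illustration only — not the paper's actual equations, which concern a measure on the infinite trajectory space T^Z — the following sketch shows what such an inductive moment computation looks like for a scalar linear-Gaussian recursion; all names and parameters here are illustrative assumptions.

```python
# Generic sketch: inductive computation of the mean and variance of a
# Gaussian process defined by the linear recursion
#     X_{t+1} = a * X_t + xi_t,   xi_t ~ N(0, q),   |a| < 1.
# This is an illustrative stand-in, NOT the paper's mean/covariance
# operator equations.

def inductive_moments(a, q, m0, v0, steps):
    """Propagate the mean m_t and variance v_t forward in time."""
    m, v = m0, v0
    for _ in range(steps):
        m = a * m                # mean recursion: E[X_{t+1}] = a E[X_t]
        v = a * a * v + q        # variance recursion: Var[X_{t+1}] = a^2 Var[X_t] + q
    return m, v

# For |a| < 1 the iteration converges to the stationary values
# m* = 0 and v* = q / (1 - a^2), mirroring how a stationary Gaussian
# law can be reached by iterating its moment equations.
m, v = inductive_moments(a=0.5, q=1.0, m0=2.0, v0=0.0, steps=100)
print(m, v)  # m -> 0, v -> q / (1 - a^2) = 4/3
```

The fixed point of the variance recursion is the stationary variance; in the paper's setting the analogous object is a covariance operator on trajectories, computed by a similar induction over time.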

https://hal.inria.fr/hal-01074836
Contributor: Olivier Faugeras
Submitted on: Wednesday, October 15, 2014 - 3:50:20 PM
Last modification on: Thursday, February 7, 2019 - 5:16:09 PM
Document(s) archived on: Friday, April 14, 2017 - 4:48:18 PM

File: paper2.pdf (produced by the author(s))


Citation

Olivier Faugeras, James Maclaurin. Asymptotic description of stochastic neural networks. II. Characterization of the limit law. Comptes rendus de l'Académie des sciences. Série I, Mathématique, Elsevier, 2014, 352, pp.847 - 852. ⟨10.1016/j.crma.2014.08.017⟩. ⟨hal-01074836⟩
