IMAGINE/LIGM, Université Paris Est., Paris, France

NeuroMathComp team, INRIA, CNRS, ENS Paris, Paris, France

Abstract

In this paper, we consider neural field equations with space-dependent delays. Neural fields are continuous assemblies of mesoscopic models arising when modeling macroscopic parts of the brain. They are modeled by nonlinear integro-differential equations. We rigorously prove, for the first time to our knowledge, sufficient conditions for the stability of their stationary solutions. We use two methods: 1) the computation of the eigenvalues of the linear operator defined by the linearized equations and 2) the formulation of the problem as a fixed point problem. The first method involves tools of functional analysis and yields a new estimate of the semigroup of the previous linear operator using the eigenvalues of its infinitesimal generator. It yields a sufficient condition for stability which is independent of the characteristics of the delays. The second method allows us to find new sufficient conditions for the stability of stationary solutions which depend upon the values of the delays. These conditions are very easy to evaluate numerically. We illustrate the conservativeness of the bounds by comparing them with numerical simulations.

1 Introduction

Neural field equations first appeared as a spatially continuous extension of Hopfield networks in the seminal works of Wilson and Cowan, and Amari.

The purpose of this article is to propose a solid mathematical framework to characterize the dynamical properties of neural field systems with propagation delays and to show that it allows us to find sufficient delay-dependent bounds for the linear stability of the stationary states. This is a step toward answering the question of how large the delays in a neural field model can be without destabilizing it. As a consequence, one can in some cases infer, without much extra work, the changes caused by the finite propagation times of signals from the analysis of a neural field model without propagation delays. This framework also allows us to prove a linear stability principle for studying the bifurcations of the solutions when varying the nonlinear gain and the propagation times.

The paper is organized as follows: in Section 2 we describe our model of delayed neural fields, state our assumptions and prove that the resulting equations are well-posed and enjoy a unique bounded solution for all times. In Section 3 we give two different methods for expressing the linear stability of stationary cortical states, that is, of the time-independent solutions of these equations. The first one, Section 3.1, is computationally intensive but accurate. The second one, Section 3.2, is much lighter in terms of computation but unfortunately leads to somewhat coarse approximations. Readers not interested in the theoretical and analytical developments can go directly to the summary in Section 3.3. We illustrate these abstract results in Section 4 by applying them to a detailed study of a simple but illuminating example.

2 The model

We consider the following neural field equations defined over an open

We give an interpretation of the various parameters and functions that appear in (1).

Ω is a **r** and

The function

It describes the relation between the firing rate **V** the

The

The

The

The

The

Finally the

We also introduce the function

A difference with other studies is the intrinsic dynamics of the population given by the linear response of chemical synapses. In

For the sake of generality, the propagation delays are not assumed to be identical for all populations, hence they are described by a matrix **r**. The reason for this assumption is that it is still unclear from physiology whether propagation delays are independent of the populations. We assume for technical reasons that **τ** is continuous, that is,

In order to compute the right-hand side of (1), we need to know the voltage **V** on some interval

Hence we choose

2.1 The propagation-delay function

What are the possible choices for the propagation-delay function **r** is connected to another neuron located at

where
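A common concrete choice, given here only as an illustrative sketch (the conduction speed `c` and constant offset `tau_0` are assumed parameters, not values taken from this section), is a delay growing linearly with the distance between the two neurons:

```python
import numpy as np

def propagation_delay(r, r_prime, c=1.0, tau_0=0.0):
    """Delay for a signal travelling from position r_prime to position r.

    Assumed model: the delay grows linearly with the Euclidean distance
    at a finite conduction speed c, plus a constant offset tau_0
    (synaptic/dendritic processing time).  Both parameters are
    illustrative, not values taken from the text.
    """
    return tau_0 + np.linalg.norm(np.asarray(r) - np.asarray(r_prime)) / c
```

With `tau_0 = 0` this reduces to the pure axonal-propagation delay ‖r − r′‖/c; the function is symmetric and continuous in both arguments, as required by the continuity assumption on **τ** above.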

2.2 Mathematical framework

A convenient functional setting for the non-delayed neural field equations (see

To give a meaning to (1), we define the

where

is the linear continuous operator satisfying (the notation

We first recall the following proposition whose proof appears in

**Proposition 2.1**

1.

2.

3.

Notice that this result gives existence on

2.3 Boundedness of solutions

A valid model of neural networks should only feature bounded membrane potentials. We find a bounded attracting set in the spirit of our previous work with non-delayed neural mass equations. The proof is almost the same as in

**Theorem 2.2**

We note

Thus, if

Let us show that the open ball of

If

Because

Finally we consider the case

3 Stability results

When studying a dynamical system, a good starting point is to look for invariant sets. Theorem 2.2 provides such an invariant set but it is a very large one, not sufficient to convey a good understanding of the system. Other invariant sets (included in the previous one) are stationary points. Notice that delayed and non-delayed equations share exactly the same stationary solutions, also called persistent states. We can therefore make good use of the harvest of results that are available about these persistent states which we note
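Because the persistent states solve the delay-free equation, they can be computed numerically without any reference to the delays. The following is a minimal sketch on a discretized ring; the connectivity, sigmoid, gain and input are illustrative assumptions, not the paper's settings:

```python
import numpy as np

# Discretized ring: a persistent state V solves the delay-free equation
#   0 = -V(x) + J * (w * S(V))(x) + I(x),
# where * denotes circular convolution.  All parameters are assumptions.
n = 128
x = np.linspace(-np.pi, np.pi, n, endpoint=False)
dx = 2.0 * np.pi / n
w = np.cos(x)                                  # assumed ring connectivity
S = lambda v: 1.0 / (1.0 + np.exp(-v))         # sigmoid firing-rate function
J = 0.5                                        # assumed nonlinear gain
I_ext = 0.1 + 0.05 * np.cos(x)                 # assumed external input

def conv(kernel, f):
    """Circular convolution on the ring via FFT."""
    return dx * np.real(np.fft.ifft(np.fft.fft(kernel) * np.fft.fft(f)))

V = np.zeros(n)
for _ in range(500):                           # damped fixed-point iteration
    V = 0.5 * V + 0.5 * (J * conv(w, S(V)) + I_ext)

residual = np.max(np.abs(-V + J * conv(w, S(V)) + I_ext))
```

Delays do not enter the computation at all, which is exactly the point: the resulting `V` is the persistent state whose stability the delayed analysis then examines.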

From now on we note

We can identify at least three ways to do this:

1. to derive a Lyapunov functional,

2. to use a fixed point approach,

3. to determine the spectrum of the infinitesimal generator associated to the linearized equation.

Previous results concerning stability bounds in delayed neural mass equations are ‘absolute’ results that do not involve the delays: they provide a sufficient condition, independent of the delays, for the stability of the fixed point (see

These authors also provide a delay-dependent sufficient condition to guarantee that no oscillatory instabilities can appear, that is, they give a condition that forbids the existence of solutions of the form

We use the second method cited above, the fixed point method, to prove a more general result which takes into account the delay terms. We also use both the second and the third method above, the spectral method, to

We write the linearized version of (3) as follows. We choose a persistent state

where the linear operator

It is also convenient to define the following operator:

3.1 Principle of linear stability analysis via characteristic values

We derive the stability of the persistent state **C** (such an operator is said to be sectorial). The ‘principle of linear stability’ is the fact that the linear stability of **U** is inherited by the state

Following **A** its infinitesimal generator. By definition, if **U** is the solution of (4) we have **A** which ensures that

Such a ‘principle’ of linear stability was derived in **T** to the spectrum of **A**. This is not the case here (see Proposition 3.4).

When the spectrum of the infinitesimal generator does not only contain eigenvalues, we can use the result in **A**:

Thus, **U** is uniformly exponentially stable for (4) if and only if

We prove in Lemma 3.6 (see below) that **A**.

3.1.1 Computation of the spectrum of **A**

In this section we use

**Definition 3.1**

We now apply results from the theory of delay equations in Banach spaces (see

The spectrum

**Definition 3.2** (Characteristic values (CV))

**A**

It is easy to see that the CV are the eigenvalues of **A**.

There are various ways to compute the spectrum of an operator in infinite dimensions. They are related to how the spectrum is partitioned (for example, continuous spectrum, point spectrum…). In the case of operators which are compact perturbations of the identity such as Fredholm operators, which is the case here, there is no continuous spectrum. Hence the most convenient way for us is to compute the point spectrum and the essential spectrum (see Appendix A). This is what we achieve next.

**Remark 1**
**A**

Notice that most papers dealing with delayed neural field equations only compute the CV and

- numerically

We now show that we can link the spectral properties of **A** to the spectral properties of

**Lemma 3.3**

Let us now prove the lemma. We already know that

Suppose that

Suppose that

Lemma 3.3 is the key to obtain **L** and could be applied to other types of delays in neural field equations. We now prove the important following proposition.

**Proposition 3.4**
**A**

1.

2.

3.

4.

1.

Let us show that

Then

2. We apply

3. We apply again

4. Because

As an example, Figure

Plot of the first 200 eigenvalues of **A** in the scalar case (

Last but not least, we can prove that the CVs are almost all, that is, except for possibly a finite number of them, located in the left half of the complex plane. This indicates that the unstable manifold is always finite-dimensional for the models we are considering here.

**Corollary 3.5**

But

Hence, for

3.1.2 Stability results from the characteristic values

We start with a lemma stating regularity for

**Lemma 3.6**

Using the spectrum computed in Proposition 3.4, the previous lemma and the formula (5), we can state the asymptotic stability of the linear equation (4). Notice that because of Corollary 3.5, the

**Corollary 3.7** (Linear stability)

We conclude by showing that the computation of the characteristic values of **A** is enough to state the stability of the stationary solution

**Corollary 3.8**

**T** should act on non-continuous functions as shown by the formula

It is however possible (note that a regularity condition has to be verified but this is done easily in our case) to extend (see

where

Now we choose

Finally, we can use the CVs to derive a sufficient stability result.

**Proposition 3.9**

3.1.3 Generalization of the model

In the description of our model, we have pointed out a possible generalization. It concerns the linear response of the chemical synapses, that is, the left-hand side **J** is small, the network is stable. We obtain a diagonal matrix

Introducing the classical variable

where **P** and

This indicates that the essential spectrum

**Proposition 3.10**

Using the same proof as in

**Proposition 3.11**

3.2 Principle of linear stability analysis via fixed point theory

The idea behind this method (see

- the

In order to be able to derive our bounds we make the further assumption that there exists a

Note that the notation

**Remark 2**

We rewrite (4) in two different integral forms to which we apply the fixed point method. The first integral form is obtained by a straightforward use of the variation-of-parameters formula. It reads

The second integral form is less obvious. Let us define

Note the slight abuse of notation, namely

Lemma B.3 in Appendix B.2 yields the upper bound

Hence we propose the second integral form:

We have the following lemma.

**Lemma 3.12**

By the variation-of-parameters formula we have:

We then use an integration by parts:

which allows us to conclude. □

Using the two integral formulations of (4) we obtain sufficient conditions of stability, as stated in the following proposition:

**Proposition 3.13**

1.

2.

The problem (4) is equivalent to solving the fixed point equation

We define

For all

**1.**
**tends to zero at infinity.**

Choose

Using Corollary B.3, we have

Let

For the first term we write:

Similarly, for the second term we write

Now for a given

From (9), it follows that

Since

**2.**
**is contracting on**
**.**

Using (9) for all

We conclude from the Picard fixed point theorem that the operator

It remains to link this fixed point to the definition of stability and first show that

where

Let us choose

and

We already know that

Thus

As

The proof of the second property is straightforward. If 0 is asymptotically stable for (4), all the CVs have negative real parts and Corollary 3.8 indicates that

The second condition says that

The asymptotic stability follows using the same arguments as in the case of

We next simplify the first condition of the previous proposition to make it more amenable to numerics.

**Corollary 3.14**

Notice that

• If

• If

• If

**Remark 3**

•

•

To conclude, we have found an easy-to-compute formula for the stability of the persistent state

The conditions in Proposition 3.13 and Corollary 3.14 define a set of parameters for which

Condition 2 is not very useful as it is independent of the delays: no matter what they are, the stable point

3.3 Summary of the different bounds and conclusion

The next proposition summarizes the results we have obtained in Proposition 3.13 and Corollary 3.14 for the stability of a stationary solution.

**Proposition 3.15**

1.

2.

The only general results known so far for the stability of the stationary solutions are those of Atay and Hutt (see, for example, **J** and it was derived using the CVs in the same way as we did in the previous section. Thus our contribution with respect to condition 2 is that, once it is satisfied, the stationary solution is asymptotically stable: up until now this was numerically inferred on the basis of the CVs. We have

Condition 1 is of interest, because it allows one to find the minimal propagation delay that does not destabilize. Notice that this bound, though very easy to compute, overestimates the minimal speed. As mentioned above, the bounds in condition 1 are sufficient conditions for the stability of the stationary state

4 Numerical application: neural fields on a ring

In order to evaluate the conservativeness of the bounds derived above we compute the CVs in a numerical example. This can be done in two ways:

• Solve numerically the nonlinear equation satisfied by the CVs. This is possible when one has an explicit expression for the eigenvectors and periodic boundary conditions. It is the method used in

• Discretize the history space **A**: the CVs are approximated by the eigenvalues of
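The two approaches can be illustrated side by side on a toy scalar delay equation x′(t) = −x(t) + a·x(t − τ), whose CVs solve λ = −1 + a e^{−λτ}. This is only a hypothetical stand-in for the operator **A**, with assumed parameters:

```python
import numpy as np

# Toy scalar delay equation  x'(t) = -x(t) + a * x(t - tau); its CVs solve
#   lambda = -1 + a * exp(-lambda * tau).
# A hypothetical stand-in for the operator A, with assumed a and tau.
a, tau = 0.5, 1.0

# Method 1: Newton's iteration on the characteristic equation.
lam = 0.0
for _ in range(50):
    f = lam + 1.0 - a * np.exp(-lam * tau)
    df = 1.0 + a * tau * np.exp(-lam * tau)
    lam -= f / df

# Method 2: discretize the history segment [-tau, 0] on N intervals and
# approximate the infinitesimal generator by a matrix: the first row encodes
# the delay equation itself, the others upwind transport of the history.
N = 400
h = tau / N
A = np.zeros((N + 1, N + 1))
A[0, 0] = -1.0
A[0, N] = a
for i in range(1, N + 1):
    A[i, i - 1] = 1.0 / h
    A[i, i] = -1.0 / h
rightmost = np.max(np.linalg.eigvals(A).real)
```

Both methods agree on the rightmost CV (≈ −0.315 for these parameters), which, being negative, predicts stability; the discretization additionally returns approximations of the more heavily damped CVs further left in the complex plane.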

The

In order to make the computation of the eigenvectors very straightforward, we study a network on a ring, but notice that all the tools (analytical/numerical) presented here also apply to a generic cortex. We reduce our study to scalar neural fields

We therefore consider the scalar equation with axonal delays defined on

where the sigmoid
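A direct simulation of such a ring network can be sketched with an explicit Euler scheme and a circular buffer holding the recent history; every parameter value below (connectivity, conduction speed, gain, sigmoid) is an illustrative assumption rather than the setting used in the figures:

```python
import numpy as np

# Explicit-Euler simulation of a scalar neural field on a ring with
# distance-dependent axonal delays (all parameters are assumptions):
#   dV/dt(x,t) = -V(x,t) + J * sum_y dx * w(x-y) * S(V(y, t - d(x,y)/c))
n, c, J = 64, 1.0, 2.0
x = np.linspace(-np.pi, np.pi, n, endpoint=False)
dx = 2.0 * np.pi / n
dist = np.abs((x[:, None] - x[None, :] + np.pi) % (2.0 * np.pi) - np.pi)
w = np.cos(dist)                    # assumed ring connectivity
S = np.tanh                         # assumed odd sigmoid (S(0) = 0)
dt, T = 0.01, 20.0
delay_steps = np.rint(dist / (c * dt)).astype(int)   # per-pair delay in steps
max_delay = int(delay_steps.max())

rng = np.random.default_rng(0)
buf = list(0.01 * rng.standard_normal((max_delay + 1, n)))  # history, oldest first
cols = np.arange(n)[None, :]
for _ in range(int(T / dt)):
    past = np.stack(buf)            # (max_delay + 1, n); buf[-1] is V(., t)
    Vd = past[max_delay - delay_steps, cols]   # Vd[x, y] = V(y, t - tau(x, y))
    drive = dx * np.sum(w * S(Vd), axis=1)
    buf.append(buf[-1] + dt * (-buf[-1] + J * drive))
    buf.pop(0)
recent = np.stack(buf)              # trajectory over the last max_delay steps
```

Since S(0) = 0, the trivial state V ≡ 0 is a stationary solution; the small random history perturbs it, and for this (assumed) gain the perturbation grows into a saturated spatial pattern rather than decaying. The buffer stores the last `max_delay + 1` states so that, for each pair of positions, the delayed value V(y, t − τ(x, y)) can be read off by integer indexing.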

Remember that (13) has a Lyapunov functional when

We are looking at the local dynamics near the trivial solution

where **A** are given by the functions

The bifurcation diagram depends on the choice of the delay function

Left: Example of a periodic delay function, the saw-function. Right: plot of the CVs in the plane

The first bound gives the minimal velocity

In Figure

Plot of the solution of (13) for different parameters corresponding to the points shown as 1, 2 and 3 in the right-hand part of Figure

Notice that the graph of the CVs shown in the right-hand part of Figure

These numerical simulations reveal that the Lyapunov function derived in

Let us comment on the tightness of the delay-dependent bound: as shown in Proposition 3.13, this bound involves the maximum delay value

This suggests another way to attack the problem of the stability of fixed points: one could look for connectivity functions such that, for all delay functions **τ**, the linearized equation (4) does not possess ‘unstable solutions’.

5 Conclusion

We have developed a theoretical framework for the study of neural field equations with propagation delays. This has allowed us to prove the existence, uniqueness, and boundedness of the solutions to these equations under fairly general hypotheses.

We have then studied the stability of the stationary solutions of these equations. We have proved that the CVs are sufficient to characterize the linear stability of the stationary states. This was done using semigroup theory (see

By formulating the stability of the stationary solutions as a fixed point problem we have found delay-dependent sufficient conditions. These conditions involve all the parameters in the delayed neural field equations: the connectivity function, the nonlinear gain and the delay function. Albeit seemingly very conservative, they are useful for avoiding the numerically intensive computation of the CVs.

From the numerical viewpoint we have used two algorithms

By providing easy-to-compute sufficient conditions to quantify the impact of the delays on neural field equations, we hope that our work will improve the study of models of cortical areas in which the propagation delays have so far been somewhat neglected due to a partial lack of theory.

Appendix A: Operators and their spectra

We recall and gather in this appendix a number of definitions, results and hypotheses that are used in the body of the article, to make it more self-contained.

**Definition A.1**

**Definition A.2**

**Definition A.3**

**Definition A.4**

**Definition A.5**

**Definition A.6**

**Definition A.7**

**Remark 4**

Appendix B: The Cauchy problem

B.1 Boundedness of solutions

We prove Lemma B.2 which is used in the proof of the boundedness of the solutions to the delayed neural field equations (1) or (3).

**Lemma B.1**

• We first check that ^{2}. Furthermore

• We now show that

Noting that

and

□

**Lemma B.2**

B.2 Stability

In this section we prove Lemma B.3 which is central in establishing the first sufficient condition in Proposition 3.13.

**Lemma B.3**

and if we set

Again, from the Cauchy-Schwarz inequality applied to

Then, from the discrete Cauchy-Schwarz inequality:

which gives as stated:

and allows us to conclude. □

Competing interests

The authors declare that they have no competing interests.

Acknowledgements

We wish to thank Elias Jarlebring, who provided his program for computing the CVs.

This work was partially supported by the ERC grant 227747 - NERVI and the EC IP project #015879 - FACETS.