NeuroMathComp Laboratory, INRIA, Sophia Antipolis, CNRS, ENS Paris, Paris, France

Dept. of Mathematics, University of Nice Sophia-Antipolis, JAD Laboratory and CNRS, Parc Valrose, 06108, Nice Cedex 02, France

Abstract

We study the neural field equations introduced by Chossat and Faugeras to model the representation and the processing of image edges and textures in the hypercolumns of the cortical area V1. The key entity, the structure tensor, intrinsically lives in a non-Euclidean, in effect hyperbolic, space. Its spatio-temporal behavior is governed by nonlinear integro-differential equations defined on the Poincaré disc model of the two-dimensional hyperbolic space. Using methods from the theory of functional analysis we show the existence and uniqueness of a solution of these equations. In the case of stationary, that is, time independent, solutions we perform a stability analysis which yields important results on their behavior. We also present an original study, based on non-Euclidean, hyperbolic, analysis, of a spatially localised bump solution in a limiting case. We illustrate our theoretical results with numerical simulations.

**Mathematics Subject Classification:** 30F45, 33C05, 34A12, 34D20, 34D23, 34G20, 37M05, 43A85, 44A35, 45G10, 51M10, 92B20, 92C20.

1 Introduction

The selectivity of the responses of individual neurons to external features is often the basis of neuronal representations of the external world. For example, neurons in the primary visual cortex (V1) respond preferentially to visual stimuli that have a specific orientation. Each small patch, of the order of 1 mm^{2}, of cortical surface is assumed to consist of subgroups of inhibitory and excitatory neurons, each of which is tuned to a particular feature of an external stimulus. These subgroups are the so-called Hubel and Wiesel hypercolumns of V1. We have introduced in

Our present investigations were motivated by the work of Bressloff, Cowan, Golubitsky, Thomas and Wiener

The aim of this paper is to present a rigorous mathematical framework for the modeling of the representation of the structure tensor by neuronal populations in V1. We would also like to point out that the mathematical analysis we are developing here is general and could be applied to other integro-differential equations defined on the set of structure tensors, so that even if the structure tensor were found not to be represented in a hypercolumn of V1, our framework would still be relevant. We then concentrate on the occurrence of localized states, also called bumps. This is in contrast to the work of

2 The model

By definition, the structure tensor is based on the spatial derivatives of an image in a small area that can be thought of as part of a receptive field. These spatial derivatives are then summed nonlinearly over the receptive field. Let

The gradient is a two-dimensional vector, and the superscript ^{T} indicates the transpose of a vector. The set of

where we have set for example:

Since the computation of derivatives usually involves a stage of scale-space smoothing, the definition of the structure tensor requires two scale parameters. The first one, defined by

By construction,

where
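As a concrete illustration, the two-scale computation just described can be sketched in a few lines of Python. The function and parameter names (sigma1 for the derivative scale, sigma2 for the receptive-field averaging) are our own choices, not the paper's notation; this is a minimal sketch, not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def structure_tensor(image, sigma1=1.0, sigma2=2.0):
    """Sketch of a structure tensor field: smooth the image, differentiate,
    then average the outer product of the gradient over the receptive field.
    sigma1 and sigma2 play the role of the two scale parameters mentioned
    in the text (names are ours)."""
    smoothed = gaussian_filter(image, sigma1)
    Iy, Ix = np.gradient(smoothed)           # spatial derivatives
    Txx = gaussian_filter(Ix * Ix, sigma2)   # nonlinear (quadratic) summation
    Txy = gaussian_filter(Ix * Iy, sigma2)
    Tyy = gaussian_filter(Iy * Iy, sigma2)
    return Txx, Txy, Tyy                     # a symmetric 2x2 tensor per pixel
```

Because it is a positively weighted average of rank-one outer products, the resulting tensor is symmetric positive semidefinite at every pixel, which is what places it in the non-Euclidean space studied in the following sections.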

We assume that a hypercolumn of V1 can represent the structure tensor in the receptive field of its neurons as the average membrane potential values of some of its membrane populations. Let

The nonlinearity

where

The set

It is well-known

The isometries of

As shown in Proposition B.0.1 of Appendix B it is possible to express the volume element
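To make the hyperbolic setting concrete, the following Python sketch computes the Poincaré-disc distance and checks numerically that a Möbius transformation of the disc is an isometry. We use the arctanh normalization of the distance; this is an assumption on our part and may differ from the paper's convention by a constant factor.

```python
import numpy as np

def hyp_dist(z1, z2):
    """Hyperbolic distance between two points of the Poincare disc,
    in the arctanh normalization (other conventions carry a factor 2)."""
    return np.arctanh(abs(z1 - z2) / abs(1 - np.conj(z1) * z2))

def mobius(a, z):
    """Direct (orientation-preserving) isometry of the disc sending the
    point a (|a| < 1) to the origin."""
    return (z - a) / (1 - np.conj(a) * z)
```

A quick check: the distance from the origin to a point at Euclidean radius r is arctanh(r), and `mobius` leaves `hyp_dist` unchanged, as an isometry should.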

We note

We get rid of the constant

In

where

The space

where

where we normalize to 1 the volume element for the

Let now

In

This equation is richer than the ring model of orientation because it contains additional information on the contrast of the image in the direction orthogonal to the preferred orientation. If one wants to recover the ring model of orientation tuning in the visual cortex as it has been presented and studied by

and ii) to look at semi-homogeneous solutions of equation (4), that is, solutions which do not depend upon the variable

where:

It follows from the above discussion that the structure tensor contains, at a given scale, more information than the local image intensity gradient at the same scale and that it is possible to recover the ring model of orientations from the structure tensor model.

The aim of the following sections is to establish that (3) is well-defined and to give necessary and sufficient conditions on the different parameters in order to prove some results on the existence and uniqueness of a solution of (3).

3 The existence and uniqueness of a solution

In this section we provide theoretical and general results of existence and uniqueness of a solution of (2). In the first subsection (Section 3.1) we study the simpler case of the homogeneous solutions of (2), that is, of the solutions that are independent of the tensor variable

3.1 Homogeneous solutions

A homogeneous solution to (2) is a solution

where:

Hence necessary conditions for the existence of a homogeneous solution are that:

• the double integral (6) is convergent,

•

In the special case where

the second condition is automatically satisfied. The proof of this fact is given in Lemma D.0.2 of Appendix D. To summarize, the homogeneous solutions satisfy the differential equation:
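The resulting scalar dynamics are easy to simulate. The sketch below assumes the standard neural-mass form dV/dt = -alpha*V + Wbar*S(V) + I with a sigmoid S, which matches the structure of the equations in this paper up to notation; all parameter values are illustrative.

```python
import numpy as np

def S(v, mu=1.0):
    """Sigmoidal nonlinearity with slope parameter mu (illustrative)."""
    return 1.0 / (1.0 + np.exp(-mu * v))

def homogeneous_solution(alpha=1.0, Wbar=0.5, I=0.1, mu=1.0,
                         dt=0.01, T=50.0):
    """Explicit Euler integration of dV/dt = -alpha*V + Wbar*S(V) + I,
    the assumed form of the homogeneous equation (parameter names ours)."""
    v = 0.0
    for _ in range(int(T / dt)):
        v += dt * (-alpha * v + Wbar * S(v, mu) + I)
    return v
```

For these values the right-hand side is a contraction, so the trajectory converges to the unique homogeneous equilibrium, consistent with the existence and uniqueness results of this section.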

3.1.1 A first existence and uniqueness result

Equation (3) defines a Cauchy problem and we have the following theorem.

**Theorem 3.1.1**

It is clear that

where

Since,

□

We can extend this result to the whole time real line if

**Proposition 3.1.1**

Then

where

This implies that the maximal solution

3.1.2 Simplification of (6) in a special case

**Invariance** In the previous section, we have stated that in the special case where

**Lemma 3.1.1**

**Mexican hat connectivity** In this paragraph, we push further the computation of

In detail, we have:

where:

with

In this case we can obtain a very simple closed-form formula for

**Lemma 3.1.2**

where **erf** is the error function.

3.2 General solution

We now present the main result of this section about the existence and uniqueness of solutions of equation (2). We first introduce some hypotheses on the connectivity function

• (**H1**):

• (**H2**): **W** is defined as

• (**H3**):

Equivalently, we can express these hypotheses in

• (**H1**bis):

• (**H2**bis): **W** is defined as

• (**H3**bis):

3.2.1 Functional space setting

We introduce the following mapping

Our aim is to find a functional space

**Proposition 3.2.1**
(**H1**bis)-(**H3**bis)

□

3.2.2 The existence and uniqueness of a solution of (3)

We rewrite (3) as a Cauchy problem:

**Theorem 3.2.1**
(**H1**bis)-(**H3**bis),

and therefore

Because of condition (**H2**) we can choose

with

**Remark 3.2.1**

**Proposition 3.2.2**
(**H1**bis)-(**H3**bis)

then Theorem C.0.3 of Appendix C gives the conclusion. □

3.2.3 The intrinsic boundedness of a solution of (3)

In the same way as in the homogeneous case, we show a result on the boundedness of a solution of (3).

**Proposition 3.2.3**
(**H1**bis)-(**H3**bis)

Let us set:

where

The following upper bound holds:

We can rewrite (11) as:

If

and hence

□

The following corollary is a consequence of the previous proposition.

**Corollary 3.2.1**

3.3 Semi-homogeneous solutions

A semi-homogeneous solution of (3) is defined as a solution which does not depend upon the variable Δ. In other words, the population of neurons is not sensitive to the determinant of the structure tensor, that is, to the contrast of the image intensity. The neural mass equation is then equivalent to the neural mass equation for tensors of unit determinant. We point out that semi-homogeneous solutions were previously introduced in

where

We have implicitly made the assumption that

Let

• (**C1**):

• (**C2**):

• (**C3**):

Note that conditions (**C1**)-(**C2**) and Lemma 3.1.1 imply that for all

From now on,

**Theorem 3.3.1**
(**C1**)-(**C3**),

This solution, defined on the subinterval

**Proposition 3.3.1**
(**C1**)-(**C3**)

We can also state a result on the boundedness of a solution of (13):

**Proposition 3.3.2**

4 Stationary solutions

We look at the equilibrium states of the equation under hypotheses (**H1**bis)-(**H2**bis). We redefine for convenience the sigmoidal function to be:

so that a stationary solution (independent of time) satisfies:

We define the nonlinear operator from

Finally, (14) is equivalent to:

4.1 Study of the nonlinear operator

We recall that we have set for the Banach space

**Proposition 4.1.1**

•

•

□

We denote by

and

where

It is straightforward to show that both operators are well-defined on

**Proposition 4.1.2**

□

4.2 The convolution form of the operator

It is convenient to consider the functional space

where the nonlinear operator

We define the associated operators,

We rewrite the operator

for all functions of

We recall the notation

**Proposition 4.2.1**

and for all

□

Let

for a function

**Lemma 4.2.1**

We recall that for all

□

We now introduce two functions that enjoy some nice properties with respect to the hyperbolic Fourier transform and are eigenfunctions of the linear operator

**Proposition 4.2.2**

•

•

By rotation, we obtain the property for all

For the second property

□

A consequence of this proposition is the following lemma.

**Lemma 4.2.2**

Let

If we assume further that

If

4.3 The convolution form of the operator

We adapt the ideas presented in the previous section in order to deal with the general case. We recall that if

We recall that we have set by definition:

**Proposition 4.3.1**

□

We next assume further that the function **W** is separable in

**Proposition 4.3.2**

•

•

□

A straightforward consequence of this proposition is an extension of Lemma 4.2.2 to the general case:

**Lemma 4.3.1**

4.4 The set of the solutions of (14)

Let

We have the following proposition.

**Proposition 4.4.1**

**Remark 4.4.1**

4.5 Stability of the primary stationary solution

In this subsection we show that the condition

**Theorem 4.5.1**

where

If we set:

and

and the conclusion follows. □

5 Spatially localised bumps in the high gain limit

In many models of working memory, transient stimuli are encoded by feature-selective persistent neural activity. Such stimuli are thought to induce the formation of a spatially localised bump of persistent activity which coexists with a stable uniform state. As an example, Camperi and Wang

In order to construct exact bump solutions and to compare our results to previous studies

We have introduced a threshold

The theoretical study of equation (20) has been done in

5.1 Existence of hyperbolic radially symmetric bumps

From equation (20) a general stationary pulse satisfies the equation:

For convenience, we note

**Definition 5.1.1**

From symmetry arguments there exists a hyperbolic radially symmetric stationary-pulse solution

where

The existence of such a bump can then be established by finding solutions to (23). The function
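To give a feel for how such a condition can be checked numerically, the sketch below evaluates the total input received by a point on the boundary of a hyperbolic ball of geodesic radius omega filled with activity, for an exponential weight W = exp(-d/b). The arctanh distance normalization and the matching area element sinh(2r)/2 in geodesic polar coordinates are our assumptions; with a threshold kappa, a bump of radius omega then corresponds to a solution of boundary_input(omega) = kappa.

```python
import numpy as np

def hyp_dist(z1, z2):
    """Poincare-disc distance, arctanh normalization (assumption)."""
    return np.arctanh(abs(z1 - z2) / abs(1 - np.conj(z1) * z2))

def boundary_input(omega, b=0.2, n_r=60, n_t=60):
    """Rectangular-rule approximation of the integral of
    W(z_b, z') = exp(-d(z_b, z')/b) over the hyperbolic ball B(0, omega),
    evaluated at the boundary point z_b = tanh(omega) on the real axis."""
    zb = np.tanh(omega)
    dr, dt = omega / n_r, 2.0 * np.pi / n_t
    total = 0.0
    for i in range(n_r):
        r = (i + 0.5) * dr                 # geodesic radius of the point
        rho = np.tanh(r)                   # its Euclidean radius in the disc
        for j in range(n_t):
            z = rho * np.exp(1j * (j + 0.5) * dt)
            total += np.exp(-hyp_dist(zb, z) / b) * 0.5 * np.sinh(2.0 * r)
    return total * dr * dt
```

Since the weight here is purely excitatory, the boundary input grows with the bump radius before saturating, so a threshold crossing can be located by bisection.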


We end this subsection with the following useful technical formula.

**Theorem 5.1.1**

**Remark 5.1.1**

**Remark 5.1.2**

**Remark 5.1.3**

We now show that for a general monotonically decreasing weight function

**Proposition 5.1.1**

We have to compute

It is a result of elementary hyperbolic trigonometry that

we let

It follows that

and

We conclude that if

which implies

To see that it is also negative for

The following formula holds for the hypergeometric function (see Erdelyi in

It implies

Substituting in the previous equation giving

implying that:

Consequently,

As a consequence, for our particular choice of exponential weight function (21), the radially symmetric bump is monotonically decreasing in

5.2 Proof of Theorem 5.1.1

The proof of Theorem 5.1.1 proceeds in four steps. First we introduce some notation and recall some basic properties of the Fourier transform in the Poincaré disk. Second we prove two propositions. Third we state a technical lemma on hypergeometric functions, whose proof is given in Lemma F.0.4 of Appendix F. The last step is devoted to the conclusion of the proof.

5.2.1 First step

In order to calculate

**Proposition 5.2.1**

In **W**:

the last equality is a direct application of Lemma 4.2.1 and we can deduce that

Finally we have:

which is the desired formula. □

It appears that the study of

**Proposition 5.2.2**

for all

□

5.2.2 Second step

In this part, we prove two results:

• the mapping

• the following equality holds for

**Proposition 5.2.3**

which, as announced, is only a function of

We now give an explicit formula for the integral

**Proposition 5.2.4**

**Lemma 5.2.1** For all

It follows immediately that for all

We integrate this formula over the hyperbolic ball

and we exchange the order of integration:

We note that the integral

and indeed the integral does not depend upon the variable

Finally, we can write:

because

This completes the proof that:

□

5.2.3 Third step

We state a useful formula.

**Lemma 5.2.2**

5.2.4 The main result

At this point we have proved the following proposition thanks to Propositions 5.2.1 and 5.2.4.

**Proposition 5.2.5**

We are now in a position to obtain the analytic form for

Indeed, in hyperbolic polar coordinates, we have:

On the other hand:

This yields

and we use Lemma 5.2.2 to establish (24).

5.3 Linear stability analysis

We now analyse the evolution of small time-dependent perturbations of the hyperbolic stationary-pulse solution through linear stability analysis. We use classical tools already developed in

5.3.1 Spectral analysis of the linearized operator

Equation (20) is linearized about the stationary solution

This leads to the linear equation:

We separate variables by setting

Introducing the hyperbolic polar coordinates

we obtain:

Note that we have formally differentiated the Heaviside function, which is permissible since it arises inside a convolution. One could also develop the linear stability analysis by considering perturbations of the threshold crossing points along the lines of Amari

With a slight abuse of notation we are led to study the solutions of the integral equation:

where the following equality derives from the definition of the hyperbolic distance in equation (25):

**Essential spectrum** If the function

then equation (28) reduces to:

yielding the eigenvalue:

This part of the essential spectrum is negative and does not cause instability.

**Discrete spectrum** If we are not in the previous case we have to study the solutions of the integral equation (28).

This equation shows that

The solutions of this equation are exponential functions

By the requirement that

Hence,

We can state the following proposition:

**Proposition 5.3.1**

We now derive a reduced condition linking the parameters for the stability of the hyperbolic stationary pulse.

**Reduced condition** Since

Stability of the hyperbolic stationary pulse requires that for all

Using the fact that

where

From (22) we have:

where

We have previously established that

By substitution we obtain another form of the reduced stability condition:

We also have:

and

showing that the stability condition (29) is satisfied when

**Proposition 5.3.2** (Reduced condition)

6 Numerical results

The aim of this section is to numerically solve (13) for different values of the parameters. This implies developing a numerical scheme that approaches the solution of our equation, and proving that this scheme actually converges to the solution.

Since equation (13) is defined on

We have divided this section into three parts. The first part is dedicated to the study of the discretization scheme of equation (13). In the following two parts, we study the solutions for different connectivity functions: an exponential function, Section 6.2, and a difference of Gaussians, Section 6.3.

6.1 Numerical schemes

Let us consider the modified equation of (13):

We assume that the connectivity function satisfies the conditions (**C1**)-(**C2**). Moreover we express

We define

6.1.1 Discretization scheme

We discretize

and obtain the

which define the discretization of (30):

where

We end up with the following numerical scheme, where

with
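A minimal vectorized version of such a scheme (explicit Euler in time, rectangular rule in space) could look as follows. The connectivity matrix, quadrature weights, and parameter values are illustrative placeholders, not the paper's actual discretization.

```python
import numpy as np

def simulate_field(W, quad_w, I, alpha=1.0, mu=1.0, dt=0.01, steps=2000):
    """Explicit Euler for the discretized neural field
    dV/dt = -alpha*V + W @ (quad_w * S(V)) + I,
    where quad_w holds the rectangular-rule quadrature weights."""
    S = lambda v: 1.0 / (1.0 + np.exp(-mu * v))
    V = np.zeros(W.shape[0])
    for _ in range(steps):
        V += dt * (-alpha * V + W @ (quad_w * S(V)) + I)
    return V

# Toy setup: 50 grid points on [-1, 1], excitatory exponential connectivity.
n = 50
x = np.linspace(-1.0, 1.0, n)
W = 0.5 * np.exp(-np.abs(x[:, None] - x[None, :]))
quad_w = np.full(n, 2.0 / n)      # rectangular weights on [-1, 1]
I = 0.1 * np.ones(n)
V = simulate_field(W, quad_w, I)
```

With these values the discretized right-hand side is a contraction, so the iteration settles onto the unique stationary state of the discrete system.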

6.1.2 Discussion

We discuss the error induced by the rectangular rule for the quadrature. Let

For our numerical experiments we use Matlab's ode45 function, which is based on an explicit Runge-Kutta formula (see
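In open-source terms, SciPy's solve_ivp with method="RK45" implements the same explicit Dormand-Prince Runge-Kutta pair as Matlab's ode45. A toy reproduction of the setup, with placeholder connectivity and input of our own choosing, would be:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Placeholder discretization: 32 grid points, exponential connectivity
# with the quadrature weight folded into the matrix.
n = 32
x = np.linspace(-1.0, 1.0, n)
W = 0.5 * np.exp(-np.abs(x[:, None] - x[None, :])) * (2.0 / n)
I = 0.1 * np.ones(n)

def rhs(t, v):
    return -v + W @ (1.0 / (1.0 + np.exp(-v))) + I

sol = solve_ivp(rhs, (0.0, 20.0), np.zeros(n), method="RK45")
V_final = sol.y[:, -1]
```

By t = 20 the trajectory has essentially reached the stationary state, so the right-hand side evaluated at the final point is close to zero.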

We can also establish a proof of the convergence of the numerical scheme which is exactly the same as in

6.2 Purely excitatory exponential connectivity function

In this subsection, we give some numerical solutions of (13) in the case where the connectivity function is an exponential function,

6.2.1 Constant input

We fix the external input

In all experiments we set

We show in Figure

Plots of the solution of equation (13) at two different times.

6.2.2 Variable input

In this paragraph, we allow the external current to depend upon the time variable. We have:

where _{0} around the circle of radius

Plots of the solution of equation (13) in the case of an exponential connectivity function.

6.2.3 High gain limit

We consider the high gain limit

with

Plot of a bump solution of equation (22).

6.3 Excitatory and inhibitory connectivity function

We give some numerical solutions of (13) in the case where the connectivity function is a difference of Gaussians, which features an excitatory center and an inhibitory surround:
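Concretely, such a connectivity profile as a function of the (hyperbolic) distance d can be written as follows; the amplitudes and widths are illustrative values of our own, not the paper's.

```python
import numpy as np

def dog(d, a_exc=1.0, s_exc=0.3, a_inh=0.5, s_inh=0.6):
    """Difference of Gaussians in the distance d: a narrow, strong
    excitatory center minus a wide, weaker inhibitory surround.
    All four parameters are illustrative placeholders."""
    return (a_exc * np.exp(-d**2 / (2.0 * s_exc**2))
            - a_inh * np.exp(-d**2 / (2.0 * s_inh**2)))
```

The sign change with distance (positive at the center, negative in the surround, vanishing at infinity) is what produces the local-excitation / lateral-inhibition competition discussed in the text.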

We illustrate the behaviour of the solutions when increasing the slope

For small values of the slope

Plots of the solutions of equation (13) in the case where the connectivity function is the difference of two Gaussians.

7 Conclusion

In this paper, we have studied the existence and uniqueness of a solution of the evolution equation for a smooth neural mass model called the structure tensor model. This model is an approach to the representation and processing of textures and edges in the visual area V1 which contains as a special case the well-known ring model of orientations (see

We have completed our study by constructing and analyzing spatially localised bumps in the high-gain limit of the sigmoid function. It is true that networks with Heaviside nonlinearities are not very realistic from the neurobiological perspective and lead to difficult mathematical considerations. However, taking the high-gain limit is instructive since it allows the explicit construction of stationary solutions which is impossible with sigmoidal nonlinearities. We have constructed what we called a hyperbolic radially symmetric stationary-pulse and presented a linear stability analysis adapted from

Finally, we illustrated our theoretical results with numerical simulations based on rigorously defined numerical schemes. We hope that our numerical experiments will lead to new and exciting investigations such as a thorough study of the bifurcations of the solutions of our equations with respect to such parameters as the slope of the sigmoid and the width of the connectivity function.

Appendix A: Isometries of

We briefly describe the isometries of

an element of

Orientation reversing isometries of

Let us now describe the different kinds of direct isometries acting in

Note that

The group

The orbits of

The orbits of

A.1 Iwasawa decomposition

The following decomposition holds, see

This theorem allows us to decompose any isometry of

Appendix B: Volume element in structure tensor space

Let

Δ^{2} its determinant,

where

**Proposition B.0.1**

We note that

and the metric is given by:

The determinant

We then use the relations:

where

The determinant of the Jacobian of the transformation

Hence, the volume element in

□

Appendix C: Global existence of solutions

**Theorem C.0.1**

**Lemma C.0.1**

This lemma shows the existence of a larger interval

**Theorem C.0.2**

**Theorem C.0.3**

Appendix D: Proof of Lemma 3.1.1

**Lemma D.0.1**

The change of variable

This establishes that

where

We express

With the change of variable

The relation

with

Appendix E: Proof of Lemma 3.1.2

In this section we prove the following lemma.

**Lemma E.0.1**

where **erf** is the error function.

so that:

Since the variables are separable, we have:

One can easily see that:

We now give a simplified expression for

The change of variable

then we have a simplified expression for

□

Appendix F: Proof of Lemma 5.2.2

**Lemma F.0.1**

Because of the above definition of

In

with

Using some simple hyperbolic trigonometry formulae we obtain:

from which we deduce

Finally we use the equality shown in

In our case we have:

Since hypergeometric functions are symmetric with respect to the first two variables:

we write

which yields the announced formula

□

Competing interests

The authors declare that they have no competing interests.

Acknowledgements

This work was partially funded by the ERC advanced grant NerVi number 227747.