
RESEARCH Open Access

Analysis of a hyperbolic geometric model for visual texture perception

Grégory Faye1*, Pascal Chossat1,2 and Olivier Faugeras1

* Correspondence: [email protected]
Full list of author information is available at the end of the article

Abstract

We study the neural field equations introduced by Chossat and Faugeras to model the representation and the processing of image edges and textures in the hypercolumns of the cortical area V1. The key entity, the structure tensor, intrinsically lives in a non-Euclidean, in effect hyperbolic, space. Its spatio-temporal behaviour is governed by nonlinear integro-differential equations defined on the Poincaré disc model of the two-dimensional hyperbolic space. Using methods from the theory of functional analysis we show the existence and uniqueness of a solution of these equations. In the case of stationary, i.e. time independent, solutions we perform a stability analysis which yields important results on their behavior. We also present an original study, based on non-Euclidean, hyperbolic, analysis, of a spatially localised bump solution in a limiting case. We illustrate our theoretical results with numerical simulations.

AMS subject classifications: 30F45, 33C05, 34A12, 34D20, 34D23, 34G20, 37M05, 43A85, 44A35, 45G10, 51M10, 92B20, 92C20.

Keywords: Neural fields, nonlinear integro-differential equations, functional analysis, non-Euclidean analysis, stability analysis, hyperbolic geometry, hypergeometric functions, bumps

1 Introduction

The selectivity of the responses of individual neurons to external features is often the basis of neuronal representations of the external world. For example, neurons in the primary visual cortex (V1) respond preferentially to visual stimuli that have a specific orientation [1-3], spatial frequency [4], velocity and direction of motion [5], or color [6].

A local network in the primary visual cortex, roughly 1 mm² of cortical surface, is assumed to consist of subgroups of inhibitory and excitatory neurons, each of which is tuned to a particular feature of an external stimulus. These subgroups are the so-called Hubel and Wiesel hypercolumns of V1. We introduced in [7] a new approach to modeling the processing of image edges and textures in the hypercolumns of area V1 that is based on a nonlinear representation of the image first-order derivatives called the structure tensor [8,9]. We suggested that this structure tensor is represented by neuronal populations in the hypercolumns of V1, and that the time evolution of this representation is governed by equations similar to those proposed by Wilson and Cowan [10]. The question of whether some populations of neurons in V1 can represent the structure tensor is discussed in [7] but cannot be answered in a definite manner. Nevertheless, we hope that the predictions of the theory we are developing will help to decide this issue.

Faye et al. The Journal of Mathematical Neuroscience 2011, 1:4. http://www.mathematical-neuroscience.com/content/1/1/4
© 2011 Faye et al; licensee Springer. This is an Open Access article distributed under the terms of the Creative Commons Attribution License.

Our present investigations were motivated by the work of Bressloff, Cowan, Golubitsky, Thomas and Wiener [11,12] on the spontaneous occurrence of hallucinatory patterns under the influence of psychotropic drugs, and by its extension to the structure tensor model. A further motivation came from the studies of Bressloff and Cowan [13,14,4], who analyse a spatial extension of the ring model of orientation of Ben-Yishai et al. [1] and Hansel and Sompolinsky [2]. To achieve this goal, we first have to better understand the local model, that is, the model of a "texture" hypercolumn isolated from its neighbours.

The aim of this paper is to present a rigorous mathematical framework for the modeling of the representation of the structure tensor by neuronal populations in V1. We would also like to point out that the mathematical analysis we develop here is general and could be applied to other integro-differential equations defined on the set of structure tensors, so that even if the structure tensor were found not to be represented in a hypercolumn of V1, our framework would still be relevant. We then concentrate on the occurrence of localized states, also called bumps. This is in contrast to the work of [7] and [15], where "spatially" periodic solutions were considered. The structure of this paper is as follows. In section 2 we introduce the structure tensor model and the corresponding equations; we also link our model to the ring model of orientations. In section 3 we use classical tools of evolution equations in functional spaces to analyse the problem of the existence and uniqueness of the solutions of our equations. In section 4 we study stationary solutions, which are very important for the dynamics of the equation, by analysing a nonlinear convolution operator and making use of the Haar measure of our feature space. In section 5 we push further the study of stationary solutions in a special case and present a technical analysis, involving hypergeometric functions, of what we call a hyperbolic radially symmetric stationary pulse in the high gain limit. Finally, in section 6, we present some numerical simulations of the solutions to verify the theoretical findings.

2 The model

By definition, the structure tensor is based on the spatial derivatives of an image in a small area that can be thought of as part of a receptive field. These spatial derivatives are then summed nonlinearly over the receptive field. Let I(x, y) denote the original image intensity function, where x and y are two spatial coordinates. Let I^{σ1} denote the scale-space representation of I obtained by convolution with the Gaussian kernel

gσ(x, y) = (1/(2πσ²)) e^{−(x² + y²)/(2σ²)}:

I^{σ1} = I ⋆ gσ1

The gradient ∇I^{σ1} is a two-dimensional vector of coordinates I^{σ1}_x, I^{σ1}_y which emphasizes image edges. One then forms the 2 × 2 symmetric matrix of rank one T0 = ∇I^{σ1}(∇I^{σ1})^T, where T indicates the transpose of a vector. The set of 2 × 2 symmetric positive semidefinite matrices of rank one will be noted S+(1, 2) throughout the paper (see [16] for a complete study of the set S+(p, n) of n × n symmetric positive semidefinite matrices of fixed rank p < n). By convolving T0 componentwise with a Gaussian gσ2 we finally form the structure tensor as the symmetric matrix:

T = T0 ⋆ gσ2 = ( ⟨(I^{σ1}_x)²⟩σ2      ⟨I^{σ1}_x I^{σ1}_y⟩σ2
                ⟨I^{σ1}_x I^{σ1}_y⟩σ2   ⟨(I^{σ1}_y)²⟩σ2 )

where we have set, for example:

⟨(I^{σ1}_x)²⟩σ2 = (I^{σ1}_x)² ⋆ gσ2

Since the computation of derivatives usually involves a stage of scale-space smoothing, the definition of the structure tensor requires two scale parameters. The first one, σ1, is a local scale for smoothing prior to the computation of image derivatives; the structure tensor is insensitive to noise and details at scales smaller than σ1. The second one, σ2, is an integration scale for accumulating the nonlinear operations on the derivatives into an integrated image descriptor. It is related to the characteristic size of the texture to be represented, and to the size of the receptive fields of the neurons that may represent the structure tensor.

By construction, T is symmetric and nonnegative, since det(T) ≥ 0 by the Cauchy-Schwarz inequality. It therefore has two orthonormal eigenvectors e1, e2 and two nonnegative corresponding eigenvalues λ1 and λ2, which we can always assume to satisfy λ1 ≥ λ2 ≥ 0. Furthermore, the spatial averaging distributes the information of the image over a neighborhood, and therefore the two eigenvalues are always positive. Thus the set of structure tensors lives in the set of 2 × 2 symmetric positive definite matrices, noted SPD(2, ℝ) throughout the paper. The distribution of these eigenvalues in the (λ1, λ2) plane reflects the local organization of the image intensity variations. Indeed, each structure tensor can be written as the linear combination:

T = λ1 e1 e1^T + λ2 e2 e2^T = (λ1 − λ2) e1 e1^T + λ2 (e1 e1^T + e2 e2^T) = (λ1 − λ2) e1 e1^T + λ2 I2   (1)

where I2 is the identity matrix and e1 e1^T ∈ S+(1, 2). Some easy interpretations can be made for simple examples: constant areas are characterized by λ1 = λ2 ≈ 0; straight edges are such that λ1 ≫ λ2 ≈ 0, their orientation being that of e2; corners yield λ1 ≥ λ2 ≫ 0. The coherency c of the local image is measured by the ratio c = (λ1 − λ2)/(λ1 + λ2); a large coherency reveals anisotropy in the texture.
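The construction above (smooth, differentiate, form the outer product, smooth again, then read off the eigenvalue gap) is easy to implement. Below is a minimal numerical sketch using only numpy; the kernel radius, the zero-padded borders and the small ε guard in the coherency are implementation choices of this sketch, not prescriptions of the paper.

```python
import numpy as np

def gaussian_kernel1d(sigma):
    """Sampled 1-D Gaussian, normalized to sum to 1."""
    radius = int(4 * sigma + 0.5)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    return k / k.sum()

def blur(img, sigma):
    """Separable Gaussian smoothing (zero-padded borders)."""
    k = gaussian_kernel1d(sigma)
    img = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, img)

def structure_tensor(img, sigma1=1.0, sigma2=3.0):
    """Field of tensors T = g_sigma2 * (grad I^sigma1 (grad I^sigma1)^T),
    returned as the three independent entries (Txx, Txy, Tyy)."""
    Is = blur(img.astype(float), sigma1)
    Iy, Ix = np.gradient(Is)                  # axis 0 = rows (y), axis 1 = cols (x)
    Txx, Txy, Tyy = (blur(p, sigma2) for p in (Ix * Ix, Ix * Iy, Iy * Iy))
    return Txx, Txy, Tyy

def coherency(Txx, Txy, Tyy):
    """c = (lambda1 - lambda2)/(lambda1 + lambda2) for a symmetric 2x2 matrix:
    lambda1 - lambda2 = sqrt((Txx - Tyy)^2 + 4 Txy^2), lambda1 + lambda2 = trace."""
    lam_diff = np.sqrt((Txx - Tyy)**2 + 4.0 * Txy**2)
    lam_sum = Txx + Tyy
    return lam_diff / np.maximum(lam_sum, 1e-12)   # eps guards flat regions
```

On a vertical-stripe image the coherency is close to 1 (strongly oriented texture), while on a constant image it is 0, matching the interpretations given above.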

We assume that a hypercolumn of V1 can represent the structure tensor in the receptive field of its neurons as the average membrane potential values of some of its neuronal populations. Let T be a structure tensor. The time evolution of the average potential V(T, t) for a given column is governed by the following neural mass equation, adapted from [7], where we allow the connectivity function W to depend upon the time variable t and we integrate over the set of 2 × 2 symmetric positive definite matrices:

∂t V(T, t) = −α V(T, t) + ∫_{SPD(2)} W(T, T′, t) S(V(T′, t)) dT′ + Iext(T, t)   ∀t > 0
V(T, 0) = V0(T)   (2)

The nonlinearity S is a sigmoidal function which may be expressed as:

S(x) = 1/(1 + e^{−μx})

where μ describes the stiffness of the sigmoid. Iext is an external input.
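The bounds on S used repeatedly in section 3 are Sm = sup S = 1 and S′m = sup S′. For this sigmoid S′(x) = μ S(x)(1 − S(x)), so S′m = μ/4, attained at x = 0 (a standard fact about the logistic function, not stated explicitly in the text). A quick numerical check in plain Python:

```python
import math

def S(x, mu):
    """Sigmoid nonlinearity S(x) = 1 / (1 + exp(-mu * x))."""
    return 1.0 / (1.0 + math.exp(-mu * x))

def S_prime(x, mu):
    """Derivative S'(x) = mu * S(x) * (1 - S(x))."""
    s = S(x, mu)
    return mu * s * (1.0 - s)

mu = 3.0
xs = [i / 100.0 for i in range(-1000, 1001)]   # grid on [-10, 10], includes 0
sup_S = max(S(x, mu) for x in xs)              # approaches Sm = 1
sup_Sp = max(S_prime(x, mu) for x in xs)       # equals S'm = mu / 4 at x = 0
```

These two constants are exactly what enter the Lipschitz estimates in the existence proofs below.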

The set SPD(2) can be seen as a foliated manifold by way of the set of special symmetric positive definite matrices SSPD(2) = SPD(2) ∩ SL(2, ℝ). Indeed, SPD(2) is homeomorphic to SSPD(2) × ℝ+∗. Furthermore, SSPD(2) is isometric to D, where D is the Poincaré disk, see e.g. [7]. As a consequence we use the following foliation of SPD(2): SPD(2) ≃ D × ℝ+∗, which allows us to write, for all T ∈ SPD(2), T = (z, Δ) with (z, Δ) ∈ D × ℝ+∗. Here T, z and Δ are related by det(T) = Δ² and the fact that z is the representation in D of T̃ ∈ SSPD(2), where T = Δ T̃.

It is well known [17] that D (and hence SSPD(2)) is a two-dimensional Riemannian space of constant sectional curvature equal to −1 for the distance d2 defined by:

d2(z, z′) = arctanh( |z − z′| / |1 − z̄z′| )

The isometries of D, that is, the transformations that preserve the distance d2, are the elements of the unitary group U(1, 1); in appendix A we describe the basic structure of this group. It follows, e.g. [18,7], that SPD(2) is a three-dimensional Riemannian space of constant sectional curvature equal to −1 for the distance d0 defined by:

d0(T, T′) = √( 2 (log Δ − log Δ′)² + d2²(z, z′) )
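The invariance of d2 under the disk automorphisms is what makes the convolutional structure of the model possible, and it is easy to check numerically. The sketch below (plain Python, illustrative point values) implements d2 and verifies that it is unchanged by a Möbius automorphism of D:

```python
import math

def d2(z, zp):
    """Hyperbolic distance on the Poincare disk D:
    d2(z, z') = arctanh( |z - z'| / |1 - conj(z) z'| )."""
    return math.atanh(abs(z - zp) / abs(1 - z.conjugate() * zp))

def mobius(a, z):
    """Disk automorphism z -> (z - a) / (1 - conj(a) z), with |a| < 1.
    Together with rotations these generate the isometries of D."""
    return (z - a) / (1 - a.conjugate() * z)

z1, z2 = 0.3 + 0.1j, -0.2 + 0.5j   # arbitrary points of D
a = 0.4 - 0.3j                     # arbitrary automorphism parameter
before = d2(z1, z2)
after = d2(mobius(a, z1), mobius(a, z2))   # same value: d2 is invariant
```

Note also that d2(0, z) = arctanh |z|, which is the form used when evaluating integrals of radial functions over D.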

As shown in proposition B.0.1 of appendix B, it is possible to express the volume element dT in (z1, z2, Δ) coordinates, with z = z1 + iz2:

dT = 8√2 (dΔ/Δ) (dz1 dz2 / (1 − |z|²)²)

We note dm(z) = dz1 dz2 / (1 − |z|²)², and equation (2) can be written in (z, Δ) coordinates:

∂t V(z, Δ, t) = −α V(z, Δ, t) + 8√2 ∫₀^{+∞} ∫_D W(z, Δ, z′, Δ′, t) S(V(z′, Δ′, t)) (dΔ′/Δ′) dm(z′) + Iext(z, Δ, t)

We get rid of the constant 8√2 by redefining W as 8√2 W:

∂t V(z, Δ, t) = −α V(z, Δ, t) + ∫₀^{+∞} ∫_D W(z, Δ, z′, Δ′, t) S(V(z′, Δ′, t)) (dΔ′/Δ′) dm(z′) + Iext(z, Δ, t)   ∀t > 0
V(z, Δ, 0) = V0(z, Δ)   (3)

In [7] we assumed that the representation of the local image orientations and textures is richer than, and contains, the local image orientations model, which is conceptually equivalent to the direction of the local image intensity gradient. The richness of the structure tensor model has been expounded in [7]. The embedding of the ring model of orientation in the structure tensor model can be explained by the intrinsic relation that exists between the two sets of matrices SPD(2, ℝ) and S+(1, 2). First of all, when σ2 goes to zero, that is, when the characteristic size of the structure becomes very small, we have T0 ⋆ gσ2 → T0, which means that the tensor T ∈ SPD(2, ℝ) degenerates to a tensor T0 ∈ S+(1, 2); this can be interpreted as the loss of one dimension. We can write each T0 ∈ S+(1, 2) as T0 = x x^T = r² u u^T, where u = (cos θ, sin θ)^T and (r, θ) is the polar representation of x. Since x and −x correspond to the same T0, θ is identified with θ + kπ, k ∈ ℤ. Thus S+(1, 2) = ℝ+∗ × ℙ¹, where ℙ¹ is the real projective space of dimension 1 (lines of ℝ²). The integration scale σ2, at which the averages of the estimates of the image derivatives are computed, is then the link between the classical representation of the local image orientations by the gradient and the representation of the local image textures by the structure tensor. It is also possible to highlight this explanation by coming back to the interpretation of straight edges of the previous paragraph. When λ1 ≫ λ2 ≈ 0, then T ≈ (λ1 − λ2) e1 e1^T ∈ S+(1, 2) and the orientation is that of e2. We denote by P the projection of a 2 × 2 symmetric positive definite matrix on the set S+(1, 2), defined by:

P : SPD(2, ℝ) → S+(1, 2),   T ↦ τ = (λ1 − λ2) e1 e1^T

where T is as in equation (1). We can introduce a metric on the set S+(1, 2) which is derived from a well-chosen Riemannian quotient geometry (see [16]). The resulting Riemannian space has strong geometrical properties: it is geodesically complete and the metric is invariant with respect to all transformations that preserve angles (orthogonal transformations, scalings and pseudoinversions). Related to the decomposition S+(1, 2) = ℝ+∗ × ℙ¹, a metric on the space S+(1, 2) is given by:

ds² = 2 (dr/r)² + dθ²

The space S+(1, 2) endowed with this metric is a Riemannian manifold (see [16]). Finally, the distance associated to this metric is given by:

d²_{S+(1,2)}(τ1, τ2) = 2 log²(r1/r2) + |θ1 − θ2|²

where τ1 = x1 x1^T, τ2 = x2 x2^T and (ri, θi) denotes the polar coordinates of xi for i = 1, 2. The volume element in (r, θ) coordinates is:

dτ = (dr/r) (dθ/π)

where we normalize to 1 the volume element for the θ coordinate.

Let now τ = P(T) be a symmetric positive semidefinite matrix. The average potential V(τ, t) of the column has its time evolution governed by the following neural mass equation, which is just a projection of equation (2) on the subspace S+(1, 2):

∂t V(τ, t) = −α V(τ, t) + ∫_{S+(1,2)} W(τ, τ′, t) S(V(τ′, t)) dτ′ + Iext(τ, t)   ∀t > 0   (4)

In (r, θ) coordinates, (4) is rewritten as:

∂t V(r, θ, t) = −α V(r, θ, t) + ∫₀^{+∞} ∫₀^π W(r, θ, r′, θ′, t) S(V(r′, θ′, t)) (dθ′/π) (dr′/r′) + Iext(r, θ, t)

This equation is richer than the ring model of orientation, as it contains additional information on the contrast of the image in the direction orthogonal to the preferred orientation. If one wants to recover the ring model of orientation tuning in the visual cortex, as presented and studied in [1,2,19], it is sufficient (i) to assume that the connectivity function is time-independent and has a convolutional form:

W(τ, τ′, t) = w( d_{S+(1,2)}(τ, τ′) ) = w( √( 2 log²(r/r′) + |θ − θ′|² ) ),

and (ii) to look at semi-homogeneous solutions of equation (4), i.e., solutions which do not depend upon the variable r. We finally obtain:

∂t V(θ, t) = −α V(θ, t) + ∫₀^π wsh(θ − θ′) S(V(θ′, t)) (dθ′/π) + Iext(θ, t)   (5)

where:

wsh(θ) = ∫₀^{+∞} w( √( 2 log²(r) + θ² ) ) (dr/r)
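The change of variable u = √2 log r (so that dr/r = du/√2) turns the integral defining wsh into a one-dimensional Euclidean slice: wsh(θ) = (1/√2) ∫ℝ w(√(u² + θ²)) du. For a single Gaussian profile w(x) = e^{−x²/(2σ²)}/(σ√(2π)) (an illustrative choice of this sketch, not the Mexican hat used later) this evaluates in closed form to e^{−θ²/(2σ²)}/√2, which the code below checks numerically with a trapezoid rule:

```python
import math

def w(x, sigma=0.7):
    """Illustrative Gaussian profile w(x) = exp(-x^2/(2 sigma^2)) / (sigma sqrt(2 pi))."""
    return math.exp(-x * x / (2 * sigma * sigma)) / (sigma * math.sqrt(2 * math.pi))

def w_sh(theta, sigma=0.7, u_max=20.0, n=200001):
    """wsh(theta) = Int_0^inf w(sqrt(2 log(r)^2 + theta^2)) dr/r,
    computed with the substitution u = sqrt(2) log r (trapezoid rule on [-u_max, u_max])."""
    h = 2 * u_max / (n - 1)
    total = 0.0
    for i in range(n):
        u = -u_max + i * h
        val = w(math.hypot(u, theta), sigma)
        total += val if 0 < i < n - 1 else 0.5 * val
    return total * h / math.sqrt(2)

theta = 0.5
numeric = w_sh(theta)
closed = math.exp(-theta**2 / (2 * 0.7**2)) / math.sqrt(2)
```

The same substitution is what makes the semi-homogeneous kernel well defined whenever w is integrable against the measure dr/r.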

It follows from the above discussion that the structure tensor contains, at a given scale, more information than the local image intensity gradient at the same scale, and that it is possible to recover the ring model of orientations from the structure tensor model.

The aim of the following sections is to establish that (3) is well-defined and to give necessary and sufficient conditions on the different parameters in order to prove results on the existence and uniqueness of a solution of (3).

3 The existence and uniqueness of a solution

In this section we provide theoretical and general results of existence and uniqueness of a solution of (2). In the first subsection 3.1 we study the simpler case of the homogeneous solutions of (2), i.e. of the solutions that are independent of the tensor variable T. This simplified model allows us to introduce some notations for the general case and to establish the useful lemma 3.1.1. We then prove in 3.2 the main result of this section, that is, the existence and uniqueness of a solution of (2). Finally we develop the useful case of the semi-homogeneous solutions of (2), i.e. of solutions that depend on the tensor variable only through its z coordinate in D.

3.1 Homogeneous solutions

A homogeneous solution of (2) is a solution V that does not depend upon the tensor variable T, for a given homogeneous input Iext(t) and a constant initial condition V0. In (z, Δ) coordinates, a homogeneous solution of (3) satisfies:

V′(t) = −α V(t) + W̄(z, Δ, t) S(V(t)) + Iext(t)

where:

W̄(z, Δ, t) := ∫₀^{+∞} ∫_D W(z, Δ, z′, Δ′, t) (dΔ′/Δ′) (dz′1 dz′2 / (1 − |z′|²)²)   (6)

Hence necessary conditions for the existence of a homogeneous solution are that:

• the double integral (6) is convergent,
• W̄(z, Δ, t) does not depend upon the variable (z, Δ); in that case we write W̄(t) instead of W̄(z, Δ, t).

In the special case where W(z, Δ, z′, Δ′, t) is a function only of the distance d0 between (z, Δ) and (z′, Δ′),

W(z, Δ, z′, Δ′, t) ≡ w( √( 2 (log Δ − log Δ′)² + d2²(z, z′) ), t )

the second condition is automatically satisfied. The proof of this fact is given in lemma D.0.2 of appendix D. To summarize, the homogeneous solutions satisfy the differential equation:

V′(t) = −α V(t) + W̄(t) S(V(t)) + Iext(t),   t > 0
V(0) = V0   (7)
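A homogeneous solution can be integrated directly. The sketch below uses a classical fourth-order Runge-Kutta scheme (a solver choice of this sketch; the paper does not prescribe one) to integrate (7) with constant W̄ and Iext, and checks that the trajectory settles onto the fixed point V* solving α V* = W̄ S(V*) + Iext; with α = 1 and W̄ S′m = 0.125 < α this fixed point is unique and globally attracting:

```python
import math

def S(x, mu=1.0):
    """Sigmoid from the model: S(x) = 1 / (1 + exp(-mu x))."""
    return 1.0 / (1.0 + math.exp(-mu * x))

def rhs(v, alpha, w_bar, i_ext):
    """Right-hand side of (7): V' = -alpha V + W_bar S(V) + Iext."""
    return -alpha * v + w_bar * S(v) + i_ext

def integrate(v0, alpha=1.0, w_bar=0.5, i_ext=0.1, dt=0.01, t_end=40.0):
    """Classical RK4 integration of the scalar ODE (7)."""
    v = v0
    for _ in range(int(t_end / dt)):
        k1 = rhs(v, alpha, w_bar, i_ext)
        k2 = rhs(v + 0.5 * dt * k1, alpha, w_bar, i_ext)
        k3 = rhs(v + 0.5 * dt * k2, alpha, w_bar, i_ext)
        k4 = rhs(v + dt * k3, alpha, w_bar, i_ext)
        v += dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
    return v

v_inf = integrate(v0=-2.0)
# fixed-point residual for alpha V = W_bar S(V) + Iext
residual = abs(1.0 * v_inf - (0.5 * S(v_inf) + 0.1))
```

The parameter values (α = 1, W̄ = 0.5, Iext = 0.1, μ = 1) are arbitrary illustrative choices.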

3.1.1 A first existence and uniqueness result

Equation (7) defines a Cauchy problem and we have the following theorem.

Theorem 3.1.1. If the external input Iext(t) and the connectivity function W̄(t) are continuous on some closed interval J containing 0, then for all V0 in ℝ there exists a unique solution of (7) defined on a subinterval J0 of J containing 0 such that V(0) = V0.

Proof. It is a direct application of Cauchy's theorem on differential equations. We consider the mapping f : J × ℝ → ℝ defined by:

f(t, x) = −αx + W̄(t) S(x) + Iext(t)

It is clear that f is continuous from J × ℝ to ℝ. We have, for all x, y ∈ ℝ and t ∈ J:

|f(t, x) − f(t, y)| ≤ α|x − y| + |W̄(t)| S′m |x − y|

where S′m = sup_{x∈ℝ} |S′(x)|. Since W̄ is continuous on the compact interval J, it is bounded there by some C > 0 and:

|f(t, x) − f(t, y)| ≤ (α + C S′m) |x − y|   □

We can extend this result to the whole real time line if Iext and W̄ are continuous on ℝ.

Proposition 3.1.1. If Iext and W̄ are continuous on ℝ+, then for all V0 in ℝ there exists a unique solution of (7) defined on ℝ+ such that V(0) = V0.

Proof. We have already shown the inequality |f(t, x) − f(t, y)| ≤ α|x − y| + |W̄(t)| S′m |x − y|, hence f is locally Lipschitz with respect to its second argument. Let V be a maximal solution on J0 and denote by β the upper bound of J0; we suppose that β < +∞. Then we have, for all t ∈ [0, β]:

V(t) = e^{−αt} V0 + ∫₀^t e^{−α(t−u)} W̄(u) S(V(u)) du + ∫₀^t e^{−α(t−u)} Iext(u) du

⇒ |V(t)| ≤ |V0| + Sm ∫₀^β e^{αu} |W̄(u)| du + ∫₀^β e^{αu} |Iext(u)| du   ∀t ∈ [0, β]

where Sm = sup_{x∈ℝ} |S(x)|. This implies that the maximal solution V is bounded on [0, β], but theorem C.0.2 of appendix C ensures that this is impossible. It follows that necessarily β = +∞. □

3.1.2 Simplification of (6) in a special case

Invariance. In the previous section we stated that, in the special case where W is a function of the distance between two points of D × ℝ+∗, W̄(z, Δ, t) does not depend upon the variables (z, Δ). More precisely, the following result holds (see the proof of lemma D.0.2 of appendix D).

Lemma 3.1.1. Suppose that W is a function of d0(T, T′) only. Then W̄ does not depend upon the variable T.

Mexican hat connectivity. In this paragraph we push further the computation of W̄ in the special case where W does not depend upon the time variable t and takes the special form suggested by Amari in [20], commonly referred to as the "Mexican hat" connectivity. It features center excitation and surround inhibition, which is an effective model for a mixed population of interacting inhibitory and excitatory neurons with typical cortical connections. It is also only a function of d0(T, T′). In detail, we have:

W(z, Δ, z′, Δ′) = w( √( 2 (log Δ − log Δ′)² + d2²(z, z′) ) )

where:

w(x) = (1/√(2πσ1²)) e^{−x²/(2σ1²)} − (A/√(2πσ2²)) e^{−x²/(2σ2²)}

with 0 ≤ σ1 ≤ σ2 and 0 ≤ A ≤ 1.

In this case we can obtain a very simple closed-form formula for W̄, as shown in the following lemma.

Lemma 3.1.2. When W is the specific Mexican hat function just defined, then:

W̄ = (π^{3/2}/2) ( σ1 e^{2σ1²} erf(√2 σ1) − A σ2 e^{2σ2²} erf(√2 σ2) )   (8)

where erf is the error function defined as:

erf(x) = (2/√π) ∫₀^x e^{−u²} du

Proof. The proof is given in lemma E.0.3 of appendix E. □
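Formula (8) can be sanity-checked numerically from definition (6). With the substitution u = √2 log Δ and hyperbolic polar coordinates ρ = d2(0, z) on the disk, for which ∫_D f(d2(0, z)) dm(z) = π ∫₀^∞ f(ρ) sinh(2ρ) dρ, definition (6) reduces to W̄ = (π/√2) ∫ℝ ∫₀^∞ w(√(u² + ρ²)) sinh(2ρ) dρ du (this reduction is our rewriting of (6), not taken from appendix E). The sketch below checks one Gaussian term (A = 0) against (8) by a double trapezoid rule:

```python
import math

def w(x, sigma):
    """Single Gaussian term of the Mexican hat (A = 0):
    w(x) = exp(-x^2 / (2 sigma^2)) / sqrt(2 pi sigma^2)."""
    return math.exp(-x * x / (2 * sigma * sigma)) / (sigma * math.sqrt(2 * math.pi))

def w_bar_numeric(sigma, u_max=12.0, rho_max=12.0, n=1000):
    """Trapezoid evaluation of
    W_bar = (pi/sqrt(2)) * Int_{-u_max}^{u_max} Int_0^{rho_max}
            w(sqrt(u^2 + rho^2)) sinh(2 rho) d rho d u."""
    hu = 2 * u_max / n
    hr = rho_max / n
    total = 0.0
    for i in range(n + 1):
        u = -u_max + i * hu
        wu = 0.5 if i in (0, n) else 1.0
        row = 0.0
        for j in range(n + 1):
            rho = j * hr
            wr = 0.5 if j in (0, n) else 1.0
            row += wr * w(math.hypot(u, rho), sigma) * math.sinh(2 * rho)
        total += wu * row
    return math.pi / math.sqrt(2) * total * hu * hr

sigma = 0.8
closed = math.pi**1.5 / 2 * sigma * math.exp(2 * sigma**2) * math.erf(math.sqrt(2) * sigma)
numeric = w_bar_numeric(sigma)
```

The two values agree to the accuracy of the quadrature, which supports both the lemma and the reconstruction of the measure dΔ/Δ dm(z) in (6).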

3.2 General solution

We now present the main result of this section about the existence and uniqueness of solutions of equation (2). We first introduce some hypotheses on the connectivity function W. We present them in two ways: first on the set of structure tensors considered as the set SPD(2), then on the set of tensors seen as D × ℝ+∗. Let J be a subinterval of ℝ. We assume that:

• (H1): ∀(T, T′, t) ∈ SPD(2) × SPD(2) × J, W(T, T′, t) ≡ 𝒲(d0(T, T′), t),
• (H2): W ∈ C(J, L¹(SPD(2))), where W is defined by W(T, t) = 𝒲(d0(T, Id2), t) for all (T, t) ∈ SPD(2) × J, Id2 being the identity matrix of M2(ℝ),
• (H3): sup_{t∈J} ||W(t)||_{L¹} < +∞, where ||W(t)||_{L¹} := ∫_{SPD(2)} |𝒲(d0(T, Id2), t)| dT.

Equivalently, we can express these hypotheses in (z, Δ) coordinates:

• (H1bis): ∀(z, z′, Δ, Δ′, t) ∈ D² × (ℝ+∗)² × J, W(z, Δ, z′, Δ′, t) ≡ 𝒲(d2(z, z′), |log Δ − log Δ′|, t),
• (H2bis): W ∈ C(J, L¹(D × ℝ+∗)), where W is defined by W(z, Δ, t) = 𝒲(d2(z, 0), |log Δ|, t) for all (z, Δ, t) ∈ D × ℝ+∗ × J,
• (H3bis): sup_{t∈J} ||W(t)||_{L¹} < +∞, where ||W(t)||_{L¹} := ∫_{D×ℝ+∗} |𝒲(d2(z, 0), |log Δ|, t)| (dΔ/Δ) dm(z).

3.2.1 Functional space setting

We introduce the mapping f^g : (t, φ) → f^g(t, φ) such that:

f^g(t, φ)(z, Δ) = ∫_{D×ℝ+∗} 𝒲( d2(z, z′), |log(Δ/Δ′)|, t ) S(φ(z′, Δ′)) (dΔ′/Δ′) dm(z′)   (9)

Our aim is to find a functional space F where (3) is well-defined and the function f^g maps F to F for all t. A natural choice would be to take φ as an Lp(D × ℝ+∗)-integrable function of the space variable with 1 ≤ p < +∞. Unfortunately, the homogeneous solutions (constant with respect to (z, Δ)) do not belong to that space. Moreover, a valid model of neural networks should only produce bounded membrane potentials. That is why we focus our choice on the functional space F = L∞(D × ℝ+∗). As D × ℝ+∗ is an open set of ℝ³, F is a Banach space for the norm ||φ||_F = sup_{z∈D} sup_{Δ∈ℝ+∗} |φ(z, Δ)|.

Proposition 3.2.1. If Iext ∈ C(J, F) with sup_{t∈J} ||Iext(t)||_F < +∞ and W satisfies hypotheses (H1bis)-(H3bis), then f^g is well-defined and maps J × F to F.

Proof. For all (z, Δ, t) ∈ D × ℝ+∗ × J we have:

| ∫_{D×ℝ+∗} 𝒲( d2(z, z′), |log(Δ/Δ′)|, t ) S(φ(z′, Δ′)) (dΔ′/Δ′) dm(z′) | ≤ Sm sup_{t∈J} ||W(t)||_{L¹} < +∞   □

3.2.2 The existence and uniqueness of a solution of (3)

We rewrite (3) as a Cauchy problem:

∂t V(z, Δ, t) = −α V(z, Δ, t) + ∫_{D×ℝ+∗} 𝒲( d2(z, z′), |log(Δ/Δ′)|, t ) S(V(z′, Δ′, t)) (dΔ′/Δ′) dm(z′) + Iext(z, Δ, t)
V(z, Δ, 0) = V0(z, Δ)   (10)

Theorem 3.2.1. If the external current Iext belongs to C(J, F), with J an open interval containing 0, and W satisfies hypotheses (H1bis)-(H3bis), then for all V0 ∈ F there exists a unique solution of (10) defined on a subinterval J0 of J containing 0 such that V(z, Δ, 0) = V0(z, Δ) for all (z, Δ) ∈ D × ℝ+∗.

Proof. We prove that f^g is continuous on J × F. We have:

(f^g(t, φ) − f^g(s, ψ))(z, Δ) = ∫_{D×ℝ+∗} 𝒲( d2(z, z′), |log(Δ/Δ′)|, t ) ( S(φ(z′, Δ′)) − S(ψ(z′, Δ′)) ) (dΔ′/Δ′) dm(z′)
+ ∫_{D×ℝ+∗} ( 𝒲( d2(z, z′), |log(Δ/Δ′)|, t ) − 𝒲( d2(z, z′), |log(Δ/Δ′)|, s ) ) S(ψ(z′, Δ′)) (dΔ′/Δ′) dm(z′),

and therefore:

||f^g(t, φ) − f^g(s, ψ)||_F ≤ S′m sup_{t∈J} ||W(t)||_{L¹} ||φ − ψ||_F + Sm ||W(t) − W(s)||_{L¹}

Because of condition (H2) we can choose |t − s| small enough so that ||W(t) − W(s)||_{L¹} is arbitrarily small. This proves the continuity of f^g. Moreover, it follows from the previous inequality that:

||f^g(t, φ) − f^g(t, ψ)||_F ≤ S′m W^g_0 ||φ − ψ||_F

with W^g_0 = sup_{t∈J} ||W(t)||_{L¹}. This ensures the Lipschitz continuity of f^g with respect to its second argument, uniformly with respect to the first. The Cauchy-Lipschitz theorem on a Banach space yields the conclusion. □

Remark 3.2.1. Our result is quite similar to those obtained by Potthast and Graben in [21]. The main differences are that, first, we allow the connectivity function to depend upon the time variable t and, second, our feature space is no longer ℝn but a Riemannian manifold. In their article, Potthast and Graben also work with a different functional space, assuming more regularity for the connectivity function W and thereby obtaining more regularity for their solutions.

Proposition 3.2.2. If the external current Iext belongs to C(ℝ+, F) and W satisfies hypotheses (H1bis)-(H3bis) with J = ℝ+, then for all V0 ∈ F there exists a unique solution of (10) defined on ℝ+ such that V(z, Δ, 0) = V0(z, Δ) for all (z, Δ) ∈ D × ℝ+∗.

Proof. We have just seen in the previous proof that f^g is globally Lipschitz with respect to its second argument:

||f^g(t, φ) − f^g(t, ψ)||_F ≤ S′m W^g_0 ||φ − ψ||_F

Then theorem C.0.3 of appendix C gives the conclusion. □

3.2.3 The intrinsic boundedness of a solution of (3)

In the same way as in the homogeneous case, we show a result on the boundedness of a solution of (3).

Proposition 3.2.3. If the external current $I_{\mathrm{ext}}$ belongs to $C(\mathbb{R}^+,\mathcal{F})$ and is bounded in time, $\sup_{t\in\mathbb{R}^+}\|I_{\mathrm{ext}}(t)\|_{\mathcal{F}} < +\infty$, and if W satisfies hypotheses (H1bis)-(H3bis) with $J = \mathbb{R}^+$, then the solution of (10) is bounded for each initial condition $V_0 \in \mathcal{F}$.

Let us set:

$$\rho_g \overset{\text{def}}{=} \frac{2}{\alpha}\Big(S_m W^g_0 + \sup_{t\in\mathbb{R}^+}\|I_{\mathrm{ext}}(t)\|_{\mathcal{F}}\Big),$$

where $W^g_0 = \sup_{t\in\mathbb{R}^+}\|W(t)\|_{L^1}$.

Proof. Let V be a solution defined on $\mathbb{R}^+$. Then for all $t \in \mathbb{R}^+_*$:

$$V(z,\Delta,t) = e^{-\alpha t}V_0(z,\Delta) + \int_0^t e^{-\alpha(t-u)}\int_{\mathbb{D}\times\mathbb{R}^+_*} W\Big(d_2(z,z'), \Big|\log\frac{\Delta}{\Delta'}\Big|, u\Big)\,S(V(z',\Delta',u))\,\frac{d\Delta'}{\Delta'}\,dm(z')\,du + \int_0^t e^{-\alpha(t-u)}\,I_{\mathrm{ext}}(z,\Delta,u)\,du.$$

The following upper bound holds:

$$\|V(t)\|_{\mathcal{F}} \le e^{-\alpha t}\|V_0\|_{\mathcal{F}} + \frac{1}{\alpha}\Big(S_m W^g_0 + \sup_{t\in\mathbb{R}^+}\|I_{\mathrm{ext}}(t)\|_{\mathcal{F}}\Big)\big(1 - e^{-\alpha t}\big). \tag{11}$$

We can rewrite (11) as:

$$\|V(t)\|_{\mathcal{F}} \le e^{-\alpha t}\Big(\|V_0\|_{\mathcal{F}} - \frac{1}{\alpha}\Big(S_m W^g_0 + \sup_{t\in\mathbb{R}^+}\|I_{\mathrm{ext}}(t)\|_{\mathcal{F}}\Big)\Big) + \frac{1}{\alpha}\Big(S_m W^g_0 + \sup_{t\in\mathbb{R}^+}\|I_{\mathrm{ext}}(t)\|_{\mathcal{F}}\Big) = e^{-\alpha t}\Big(\|V_0\|_{\mathcal{F}} - \frac{\rho_g}{2}\Big) + \frac{\rho_g}{2}. \tag{12}$$

If $V_0 \in B_{\rho_g}$, this implies $\|V(t)\|_{\mathcal{F}} \le \frac{\rho_g}{2}(1 + e^{-\alpha t})$ for all t > 0, and hence $\|V(t)\|_{\mathcal{F}} < \rho_g$ for all t > 0, proving that $B_{\rho_g}$ is stable. Now assume that $\|V(t)\|_{\mathcal{F}} > \rho_g$ for all t ≥ 0.


The inequality (12) shows that for t large enough this yields a contradiction. Therefore there exists $t_0 > 0$ such that $\|V(t_0)\|_{\mathcal{F}} < \rho_g$. At this time instant we have

$$\rho_g \le e^{-\alpha t_0}\Big(\|V_0\|_{\mathcal{F}} - \frac{\rho_g}{2}\Big) + \frac{\rho_g}{2},$$

and hence

$$t_0 \le \frac{1}{\alpha}\log\Big(\frac{2\|V_0\|_{\mathcal{F}} - \rho_g}{\rho_g}\Big). \;\square$$

The following corollary is a consequence of the previous proposition.

Corollary 3.2.1. If $V_0 \notin B_{\rho_g}$ and $T_g = \inf\{t > 0 \text{ such that } V(t) \in B_{\rho_g}\}$, then:

$$T_g \le \frac{1}{\alpha}\log\Big(\frac{2\|V_0\|_{\mathcal{F}} - \rho_g}{\rho_g}\Big).$$

3.3 Semi-homogeneous solutions

A semi-homogeneous solution of (3) is defined as a solution which does not depend upon the variable Δ. In other words, the populations of neurons are not sensitive to the determinant of the structure tensor, that is, to the contrast of the image intensity. The neural mass equation is then equivalent to the neural mass equation for tensors of unit determinant. We point out that semi-homogeneous solutions were previously introduced in [7], where a bifurcation analysis of what the authors called H-planforms was performed. In this section, we define the framework in which their equations make sense, without giving any proofs of our results as they are a direct consequence of those proven in the general case. We rewrite equation (3) in the case of semi-homogeneous solutions:

$$\begin{cases} \partial_t V(z,t) = -\alpha V(z,t) + \displaystyle\int_{\mathbb{D}} W_{sh}(z,z',t)\,S(V(z',t))\,dm(z') + I_{\mathrm{ext}}(z,t), & t > 0,\\ V(z,0) = V_0(z), \end{cases} \tag{13}$$

where

$$W_{sh}(z,z',t) = \int_0^{+\infty} W(z,\Delta,z',\Delta',t)\,\frac{d\Delta'}{\Delta'}.$$

We have implicitly made the assumption that $W_{sh}$ does not depend on the coordinate Δ. Some conditions under which this assumption is satisfied are described below; they are the direct counterparts, in the context of semi-homogeneous solutions, of those of the general case.

Let J be an open interval of ℝ. We assume that:

• (C1): $\forall (z,z',t) \in \mathbb{D}^2\times J$, $W_{sh}(z,z',t) \equiv w_{sh}(d_2(z,z'),t)$,
• (C2): $\mathbf{W}_{sh} \in C(J, L^1(\mathbb{D}))$, where $\mathbf{W}_{sh}$ is defined as $\mathbf{W}_{sh}(z,t) = w_{sh}(d_2(z,0),t)$ for all $(z,t) \in \mathbb{D}\times J$,
• (C3): $\sup_{t\in J}\|\mathbf{W}_{sh}(t)\|_{L^1} < +\infty$, where $\|\mathbf{W}_{sh}(t)\|_{L^1} \overset{\text{def}}{=} \int_{\mathbb{D}} |w_{sh}(d_2(z,0),t)|\,dm(z)$.


Note that conditions (C1)-(C2) and lemma 3.1.1 imply that for all $z \in \mathbb{D}$, $\int_{\mathbb{D}} |W_{sh}(z,z',t)|\,dm(z') = \|\mathbf{W}_{sh}(t)\|_{L^1}$. Then, for all $z \in \mathbb{D}$, the mapping $z' \mapsto W_{sh}(z,z',t)$ is integrable on $\mathbb{D}$.

From now on, $\mathcal{F} = L^\infty(\mathbb{D})$, and the Riesz-Fischer theorem ensures that $L^\infty(\mathbb{D})$ is a Banach space for the norm $\|\psi\|_\infty = \inf\{C \ge 0 : |\psi(z)| \le C \text{ for almost every } z \in \mathbb{D}\}$.

Theorem 3.3.1. If the external current $I_{\mathrm{ext}}$ belongs to $C(J,\mathcal{F})$ with J an open interval containing 0, and $W_{sh}$ satisfies conditions (C1)-(C3), then for all $V_0 \in \mathcal{F}$, there exists a unique solution of (13) defined on a subinterval $J_0$ of J containing 0.

This solution, defined on the subinterval $J_0$ of ℝ, can in fact be extended to the whole real line, and we have the following proposition.

Proposition 3.3.1. If the external current $I_{\mathrm{ext}}$ belongs to $C(\mathbb{R}^+,\mathcal{F})$ and $W_{sh}$ satisfies conditions (C1)-(C3) with $J = \mathbb{R}^+$, then for all $V_0 \in \mathcal{F}$, there exists a unique solution of (13) defined on $\mathbb{R}^+$.

We can also state a result on the boundedness of a solution of (13):

Proposition 3.3.2. Let $\rho \overset{\text{def}}{=} \frac{2}{\alpha}\big(S_m W^{sh}_0 + \sup_{t\in\mathbb{R}^+}\|I(t)\|_{\mathcal{F}}\big)$, with $W^{sh}_0 = \sup_{t\in J}\|\mathbf{W}_{sh}(t)\|_{L^1}$. The open ball $B_\rho$ of $\mathcal{F}$ of center 0 and radius ρ is stable under the dynamics of equation (13). Moreover it is an attracting set for these dynamics, and if $V_0 \notin B_\rho$ and $T = \inf\{t > 0 \text{ such that } V(t) \in B_\rho\}$, then:

$$T \le \frac{1}{\alpha}\log\Big(\frac{2\|V_0\|_{\mathcal{F}} - \rho}{\rho}\Big).$$

4 Stationary solutions

We look at the equilibrium states, denoted $V^0_\mu$, of (3) when the external input I and the connectivity W do not depend upon the time. We assume that W satisfies hypotheses (H1bis)-(H2bis). We redefine for convenience the sigmoidal function to be:

$$S(x) = \frac{1}{1 + e^{-x}},$$

so that a stationary solution (independent of time) satisfies:

$$0 = -\alpha V^0_\mu(z,\Delta) + \int_{\mathbb{D}\times\mathbb{R}^+_*} W\Big(d_2(z,z'), \Big|\log\frac{\Delta}{\Delta'}\Big|\Big)\,S(\mu V^0_\mu(z',\Delta'))\,\frac{d\Delta'}{\Delta'}\,dm(z') + I_{\mathrm{ext}}(z,\Delta). \tag{14}$$

We define the nonlinear operator from $\mathcal{F}$ to $\mathcal{F}$, denoted $G_\mu$, by:

$$G_\mu(V)(z,\Delta) = \int_{\mathbb{D}\times\mathbb{R}^+_*} W\Big(d_2(z,z'), \Big|\log\frac{\Delta}{\Delta'}\Big|\Big)\,S(\mu V(z',\Delta'))\,\frac{d\Delta'}{\Delta'}\,dm(z'). \tag{15}$$

Finally, (14) is equivalent to:

$$\alpha V^0_\mu(z,\Delta) = G_\mu(V^0_\mu)(z,\Delta) + I_{\mathrm{ext}}(z,\Delta).$$

4.1 Study of the nonlinear operator $G_\mu$

We recall that we have set $\mathcal{F} = L^\infty(\mathbb{D}\times\mathbb{R}^+_*)$; proposition 3.2.1 shows that $G_\mu : \mathcal{F} \to \mathcal{F}$. We have the further properties:


Proposition 4.1.1. $G_\mu$ satisfies the following properties:

• $\|G_\mu(V_1) - G_\mu(V_2)\|_{\mathcal{F}} \le \mu W^g_0 S'_m \|V_1 - V_2\|_{\mathcal{F}}$ for all μ ≥ 0,
• $\mu \mapsto G_\mu$ is continuous on $\mathbb{R}^+$.

Proof. The first property was shown to be true in the proof of theorem 3.3.1. The second property follows from the inequality:

$$\|G_{\mu_1}(V) - G_{\mu_2}(V)\|_{\mathcal{F}} \le |\mu_1 - \mu_2|\,W^g_0 S'_m \|V\|_{\mathcal{F}}. \;\square$$

We denote by $G_l$ and $G_\infty$ the two operators from $\mathcal{F}$ to $\mathcal{F}$ defined as follows for all $V \in \mathcal{F}$ and all $(z,\Delta) \in \mathbb{D}\times\mathbb{R}^+_*$:

$$G_l(V)(z,\Delta) = \int_{\mathbb{D}\times\mathbb{R}^+_*} W\Big(d_2(z,z'), \Big|\log\frac{\Delta}{\Delta'}\Big|\Big)\,V(z',\Delta')\,\frac{d\Delta'}{\Delta'}\,dm(z'), \tag{16}$$

and

$$G_\infty(V)(z,\Delta) = \int_{\mathbb{D}\times\mathbb{R}^+_*} W\Big(d_2(z,z'), \Big|\log\frac{\Delta}{\Delta'}\Big|\Big)\,H(V(z',\Delta'))\,\frac{d\Delta'}{\Delta'}\,dm(z'),$$

where H is the Heaviside function.

It is straightforward to show that both operators are well-defined on $\mathcal{F}$ and map $\mathcal{F}$ to $\mathcal{F}$. Moreover the following proposition holds.

Proposition 4.1.2. We have $G_\mu \to G_\infty$ as $\mu \to \infty$.

Proof. It is a direct application of the dominated convergence theorem, using the fact that $S(\mu y) \to H(y)$ as $\mu \to \infty$ for almost every $y \in \mathbb{R}$. □
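The almost-everywhere convergence used in this proof is easy to observe numerically. The sketch below checks that $S(\mu y)$ is uniformly close to $H(y)$ away from $y = 0$ for a large illustrative gain μ (at $y = 0$ itself, $S(0) = 1/2 \neq H(0)$, which is why the statement is only almost everywhere).

```python
import math

def S(x):
    # numerically stable logistic sigmoid
    if x >= 0:
        return 1.0 / (1.0 + math.exp(-x))
    e = math.exp(x)
    return e / (1.0 + e)

H = lambda y: 1.0 if y >= 0 else 0.0

mu = 1000.0                                  # illustrative large gain
ys = [-5.0, -1.0, -0.1, 0.1, 1.0, 5.0]       # grid avoiding y = 0
err = max(abs(S(mu * y) - H(y)) for y in ys)
assert err < 1e-6
```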

4.2 The convolution form of the operator $G_\mu$ in the semi-homogeneous case

It is convenient to consider the functional space $\mathcal{F}_{sh} = L^\infty(\mathbb{D})$ to discuss semi-homogeneous solutions. A semi-homogeneous persistent state of (3) is deduced from (14) and satisfies:

$$\alpha V^0_\mu(z) = G^{sh}_\mu(V^0_\mu)(z) + I_{\mathrm{ext}}(z), \tag{17}$$

where the nonlinear operator $G^{sh}_\mu$ from $\mathcal{F}_{sh}$ to $\mathcal{F}_{sh}$ is defined for all $V \in \mathcal{F}_{sh}$ and $z \in \mathbb{D}$ by:

$$G^{sh}_\mu(V)(z) = \int_{\mathbb{D}} W_{sh}(d_2(z,z'))\,S(\mu V(z'))\,dm(z').$$

We define the associated operators $G^{sh}_l$, $G^{sh}_\infty$:

$$G^{sh}_l(V)(z) = \int_{\mathbb{D}} W_{sh}(d_2(z,z'))\,V(z')\,dm(z'),$$

$$G^{sh}_\infty(V)(z) = \int_{\mathbb{D}} W_{sh}(d_2(z,z'))\,H(V(z'))\,dm(z').$$


We rewrite the operator $G^{sh}_\mu$ in a convenient form by using the convolution in the hyperbolic disk. First, we define the convolution in such a space. Let O denote the center of the Poincaré disk, that is the point represented by z = 0, and let dg denote the Haar measure on the group G = SU(1,1) (see [22] and appendix A), normalized by:

$$\int_G f(g\cdot O)\,dg \overset{\text{def}}{=} \int_{\mathbb{D}} f(z)\,dm(z),$$

for all functions of $L^1(\mathbb{D})$. Given two functions $f_1, f_2$ in $L^1(\mathbb{D})$ we define the convolution ∗ by:

$$(f_1 * f_2)(z) = \int_G f_1(g\cdot O)\,f_2(g^{-1}\cdot z)\,dg.$$

We recall the notation $\mathbf{W}_{sh}(z) \overset{\text{def}}{=} W_{sh}(d_2(z,O))$.

Proposition 4.2.1. For all μ ≥ 0 and $V \in \mathcal{F}_{sh}$ we have:

$$G^{sh}_\mu(V) = \mathbf{W}_{sh} * S(\mu V), \quad G^{sh}_l(V) = \mathbf{W}_{sh} * V \quad \text{and} \quad G^{sh}_\infty(V) = \mathbf{W}_{sh} * H(V). \tag{18}$$

Proof. We only prove the result for $G^{sh}_\mu$. Let $z \in \mathbb{D}$; then:

$$G^{sh}_\mu(V)(z) = \int_{\mathbb{D}} W_{sh}(d_2(z,z'))\,S(\mu V(z'))\,dm(z') = \int_G W_{sh}(d_2(z, g\cdot O))\,S(\mu V(g\cdot O))\,dg = \int_G W_{sh}(d_2(gg^{-1}\cdot z, g\cdot O))\,S(\mu V(g\cdot O))\,dg,$$

and for all $g \in SU(1,1)$, $d_2(z,z') = d_2(g\cdot z, g\cdot z')$, so that:

$$G^{sh}_\mu(V)(z) = \int_G W_{sh}(d_2(g^{-1}\cdot z, O))\,S(\mu V(g\cdot O))\,dg = \mathbf{W}_{sh} * S(\mu V)(z).$$
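The proof hinges on the isometry property $d_2(z,z') = d_2(g\cdot z, g\cdot z')$ for $g \in SU(1,1)$. Below is a quick numerical sanity check, assuming the standard Möbius action $g\cdot z = (az+b)/(\bar b z + \bar a)$ with $|a|^2 - |b|^2 = 1$, and the distance $d_2(z,z') = \tanh^{-1}|(z-z')/(1-\bar z' z)|$ (normalised so that $d_2(0, \tanh r) = r$, consistent with the hyperbolic polar coordinates used in section 5).

```python
import cmath, math

def d2(z, w):
    # hyperbolic distance on the Poincaré disk, normalised so d2(0, tanh r) = r
    return math.atanh(abs((z - w) / (1 - w.conjugate() * z)))

# an element of SU(1,1): a = cosh(s) e^{i phi}, b = sinh(s) e^{i psi}
a = math.cosh(0.6) * cmath.exp(0.3j)
b = math.sinh(0.6) * cmath.exp(-0.5j)
assert abs(abs(a) ** 2 - abs(b) ** 2 - 1.0) < 1e-12

g = lambda z: (a * z + b) / (b.conjugate() * z + a.conjugate())

z, w = 0.2 + 0.3j, -0.4 + 0.1j
assert abs(g(z)) < 1 and abs(g(w)) < 1            # g preserves the disk
assert abs(d2(g(z), g(w)) - d2(z, w)) < 1e-12     # g is an isometry
assert abs(d2(0, math.tanh(0.7)) - 0.7) < 1e-12   # normalisation check
```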

□

Let b be a point on the circle $\partial\mathbb{D}$. For $z \in \mathbb{D}$, we define the "inner product" ⟨z, b⟩ to be the algebraic distance to the origin of the (unique) horocycle based at b through z (see [7]). Note that ⟨z, b⟩ does not depend on the position of z on the horocycle. The Fourier transform in $\mathbb{D}$ is defined as (see [22]):

$$\tilde h(\lambda, b) = \int_{\mathbb{D}} h(z)\,e^{(-i\lambda+1)\langle z,b\rangle}\,dm(z) \quad \forall (\lambda,b) \in \mathbb{R}\times\partial\mathbb{D},$$

for a function $h : \mathbb{D}\to\mathbb{C}$ such that this integral is well-defined.

Lemma 4.2.1. The Fourier transform in $\mathbb{D}$, $\widetilde{\mathbf{W}}_{sh}(\lambda,b)$, of $\mathbf{W}_{sh}$ does not depend upon the variable $b \in \partial\mathbb{D}$.

Proof. For all $\lambda \in \mathbb{R}$ and $b = e^{i\theta} \in \partial\mathbb{D}$,

$$\widetilde{\mathbf{W}}_{sh}(\lambda, b) = \int_{\mathbb{D}} \mathbf{W}_{sh}(z)\,e^{(-i\lambda+1)\langle z,b\rangle}\,dm(z).$$

We recall that for all $\varphi \in \mathbb{R}$, $r_\varphi$ is the rotation of angle φ, and we have $\mathbf{W}_{sh}(r_\varphi\cdot z) = \mathbf{W}_{sh}(z)$, $dm(z) = dm(r_\varphi\cdot z)$ and $\langle z,b\rangle = \langle r_\varphi\cdot z, r_\varphi\cdot b\rangle$; then:

$$\widetilde{\mathbf{W}}_{sh}(\lambda, b) = \int_{\mathbb{D}} \mathbf{W}_{sh}(r_{-\theta}\cdot z)\,e^{(-i\lambda+1)\langle r_{-\theta}\cdot z,1\rangle}\,dm(z) = \int_{\mathbb{D}} \mathbf{W}_{sh}(z)\,e^{(-i\lambda+1)\langle z,1\rangle}\,dm(z) \overset{\text{def}}{=} \widetilde{\mathbf{W}}_{sh}(\lambda). \;\square$$

We now introduce two functions that enjoy some nice properties with respect to the hyperbolic Fourier transform and are eigenfunctions of the linear operator $G^{sh}_l$.


Proposition 4.2.2. Let $e_{\lambda,b}(z) = e^{(-i\lambda+1)\langle z,b\rangle}$ and $\Phi_\lambda(z) = \int_{\partial\mathbb{D}} e^{(i\lambda+1)\langle z,b\rangle}\,db$; then:

• $G^{sh}_l(e_{\lambda,b}) = \widetilde{\mathbf{W}}_{sh}(\lambda)\,e_{\lambda,b}$,
• $G^{sh}_l(\Phi_\lambda) = \widetilde{\mathbf{W}}_{sh}(\lambda)\,\Phi_\lambda$.

Proof. We begin with $b = 1 \in \partial\mathbb{D}$ and use horocyclic coordinates, with the same changes of variables as in lemma 3.1.1:

$$G^{sh}_l(e_{\lambda,1})(n_s a_t\cdot O) = \int_{\mathbb{R}^2} \mathbf{W}_{sh}(d_2(n_s a_t\cdot O, n_{s'} a_{t'}\cdot O))\,e^{(-i\lambda-1)t'}\,dt'\,ds' = \int_{\mathbb{R}^2} \mathbf{W}_{sh}(d_2(n_{s-s'} a_t\cdot O, a_{t'}\cdot O))\,e^{(-i\lambda-1)t'}\,dt'\,ds'$$
$$= \int_{\mathbb{R}^2} \mathbf{W}_{sh}(d_2(a_t n_{-x}\cdot O, a_{t'}\cdot O))\,e^{(-i\lambda-1)t'+2t}\,dt'\,dx = \int_{\mathbb{R}^2} \mathbf{W}_{sh}(d_2(O, n_x a_{t'-t}\cdot O))\,e^{(-i\lambda-1)t'+2t}\,dt'\,dx$$
$$= \int_{\mathbb{R}^2} \mathbf{W}_{sh}(d_2(O, n_x a_y\cdot O))\,e^{(-i\lambda-1)(y+t)+2t}\,dy\,dx = e^{(-i\lambda+1)\langle n_s a_t\cdot O,1\rangle}\,\widetilde{\mathbf{W}}_{sh}(\lambda).$$

By rotation, we obtain the property for all $b \in \partial\mathbb{D}$.

For the second property, [[22], Lemma 4.7] shows that:

$$\mathbf{W}_{sh} * \Phi_\lambda(z) = \int_{\partial\mathbb{D}} e^{(i\lambda+1)\langle z,b\rangle}\,\widetilde{\mathbf{W}}_{sh}(\lambda)\,db = \Phi_\lambda(z)\,\widetilde{\mathbf{W}}_{sh}(\lambda). \;\square$$

A consequence of this proposition is the following lemma.

Lemma 4.2.2. The linear operator $G^{sh}_l$ is not compact and, for all μ ≥ 0, the nonlinear operator $G^{sh}_\mu$ is not compact.

Proof. The previous proposition 4.2.2 shows that $G^{sh}_l$ has a continuous spectrum, which implies that it is not a compact operator.

Let U be in $\mathcal{F}_{sh}$; for all $V \in \mathcal{F}_{sh}$ we differentiate $G^{sh}_\mu$ and compute its Fréchet derivative:

$$D(G^{sh}_\mu)_U(V)(z) = \int_{\mathbb{D}} W_{sh}(d_2(z,z'))\,\mu S'(\mu U(z'))\,V(z')\,dm(z').$$

If we assume further that U does not depend upon the space variable z, $U(z) = U_0$, we obtain:

$$D(G^{sh}_\mu)_{U_0}(V)(z) = \mu S'(\mu U_0)\,G^{sh}_l(V)(z).$$

If $G^{sh}_\mu$ were a compact operator, then its Fréchet derivative $D(G^{sh}_\mu)_{U_0}$ would also be a compact operator, which is impossible. As a consequence, $G^{sh}_\mu$ is not a compact operator. □

4.3 The convolution form of the operator $G_\mu$ in the general case

We adapt the ideas presented in the previous section in order to deal with the general case. We recall that if H is the group of positive real numbers with multiplication as operation, then the Haar measure dh is given by $\frac{dx}{x}$. For two functions $f_1, f_2$ in $L^1(\mathbb{D}\times\mathbb{R}^+_*)$ we define the convolution ⋆ by:

$$(f_1 \star f_2)(z,\Delta) \overset{\text{def}}{=} \int_G\int_H f_1(g\cdot O, h\cdot 1)\,f_2(g^{-1}\cdot z, h^{-1}\cdot\Delta)\,dg\,dh.$$

We recall that we have set by definition $\mathbf{W}(z,\Delta) = W(d_2(z,0), |\log\Delta|)$.


Proposition 4.3.1. For all μ ≥ 0 and $V \in \mathcal{F}$ we have:

$$G_\mu(V) = \mathbf{W} \star S(\mu V), \quad G_l(V) = \mathbf{W} \star V \quad \text{and} \quad G_\infty(V) = \mathbf{W} \star H(V). \tag{19}$$

Proof. Let (z, Δ) be in $\mathbb{D}\times\mathbb{R}^+_*$. We follow the same ideas as in proposition 4.2.1 and prove only the first result. We have

$$G_\mu(V)(z,\Delta) = \int_{\mathbb{D}\times\mathbb{R}^+_*} W\Big(d_2(z,z'), \Big|\log\frac{\Delta}{\Delta'}\Big|\Big)\,S(\mu V(z',\Delta'))\,\frac{d\Delta'}{\Delta'}\,dm(z')$$
$$= \int_G\int_{\mathbb{R}^+_*} W\Big(d_2(g^{-1}\cdot z, O), \Big|\log\frac{\Delta}{\Delta'}\Big|\Big)\,S(\mu V(g\cdot O, \Delta'))\,dg\,\frac{d\Delta'}{\Delta'}$$
$$= \int_G\int_H W\big(d_2(g^{-1}\cdot z, O), |\log(h^{-1}\cdot\Delta)|\big)\,S(\mu V(g\cdot O, h\cdot 1))\,dg\,dh = \mathbf{W} \star S(\mu V)(z,\Delta).$$

□

We next assume further that the function $\mathbf{W}$ is separable in z, Δ, more precisely that $\mathbf{W}(z,\Delta) = \mathbf{W}_1(z)\,\mathbf{W}_2(\log\Delta)$, where $\mathbf{W}_1(z) = W_1(d_2(z,0))$ and $\mathbf{W}_2(\log\Delta) = W_2(|\log\Delta|)$ for all $(z,\Delta) \in \mathbb{D}\times\mathbb{R}^+_*$. The following proposition echoes proposition 4.2.2.

Proposition 4.3.2. Let $e_{\lambda,b}(z) = e^{(-i\lambda+1)\langle z,b\rangle}$, $\Phi_\lambda(z) = \int_{\partial\mathbb{D}} e^{(i\lambda+1)\langle z,b\rangle}\,db$ and $h_\xi(\Delta) = e^{i\xi\log\Delta}$; then:

• $G_l(e_{\lambda,b}h_\xi) = \widetilde{\mathbf{W}}_1(\lambda)\,\widehat{\mathbf{W}}_2(\xi)\,e_{\lambda,b}h_\xi$,
• $G_l(\Phi_\lambda h_\xi) = \widetilde{\mathbf{W}}_1(\lambda)\,\widehat{\mathbf{W}}_2(\xi)\,\Phi_\lambda h_\xi$,

where $\widehat{\mathbf{W}}_2$ is the usual Fourier transform of $\mathbf{W}_2$.

Proof. The proof of this proposition is exactly the same as for proposition 4.2.2. Indeed:

$$G_l(e_{\lambda,b}h_\xi)(z,\Delta) = \mathbf{W}_1 * e_{\lambda,b}(z)\int_{\mathbb{R}^+_*} \mathbf{W}_2\Big(\log\frac{\Delta}{\Delta'}\Big)\,e^{i\xi\log\Delta'}\,\frac{d\Delta'}{\Delta'} = \mathbf{W}_1 * e_{\lambda,b}(z)\Big(\int_{\mathbb{R}} \mathbf{W}_2(y)\,e^{-i\xi y}\,dy\Big)\,e^{i\xi\log\Delta}. \;\square$$

A straightforward consequence of this proposition is an extension of lemma 4.2.2 to the general case:

Lemma 4.3.1. The linear operator $G_l$ is not compact and, for all μ ≥ 0, the nonlinear operator $G_\mu$ is not compact.
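In the proof of proposition 4.3.2, the Δ-integral reduces, after the substitution $y = \log(\Delta/\Delta')$, to an ordinary Fourier transform on ℝ. The sketch below checks this identity numerically for an illustrative kernel $W_2(y) = e^{-|y|}$, whose Fourier transform is $\widehat{W}_2(\xi) = 2/(1+\xi^2)$:

```python
import cmath, math

# ∫_0^∞ W2(log(Δ/Δ')) e^{iξ log Δ'} dΔ'/Δ'  =  Ŵ2(ξ) e^{iξ log Δ},
# computed on the logarithmic variable u = log Δ'.
W2 = lambda y: math.exp(-abs(y))          # illustrative even kernel
xi, Delta = 1.5, 2.0
c = math.log(Delta)

du, U = 1e-3, 30.0                        # trapezoidal rule on [-U, U]
us = [-U + i * du for i in range(int(2 * U / du) + 1)]
f = [W2(c - u) * cmath.exp(1j * xi * u) for u in us]
num = du * (sum(f) - 0.5 * (f[0] + f[-1]))
exact = (2.0 / (1.0 + xi * xi)) * cmath.exp(1j * xi * c)
assert abs(num - exact) < 1e-5
```

This is exactly why $h_\xi(\Delta) = e^{i\xi\log\Delta}$ is an eigenfunction of the Δ-part of $G_l$ with eigenvalue $\widehat{\mathbf{W}}_2(\xi)$.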

4.4 The set of the solutions of (14)

Let $B_\mu$ be the set of the solutions of (14) for a given slope parameter μ:

$$B_\mu = \{V \in \mathcal{F} \mid -\alpha V + G_\mu(V) + I_{\mathrm{ext}} = 0\}.$$

We have the following proposition.

Proposition 4.4.1. If the input current $I_{\mathrm{ext}}$ is equal to a constant $I^0_{\mathrm{ext}}$, i.e. does not depend upon the variables (z, Δ), then for all $\mu \in \mathbb{R}^+$, $B_\mu \neq \emptyset$. In the general case $I_{\mathrm{ext}} \in \mathcal{F}$, if the condition $\mu S'_m W^g_0 < \alpha$ is satisfied, then $\mathrm{Card}(B_\mu) = 1$.


Proof. Due to the properties of the sigmoid function, there always exists a constant solution in the case where $I_{\mathrm{ext}}$ is constant. In the general case where $I_{\mathrm{ext}} \in \mathcal{F}$, the statement is a direct application of the Banach fixed point theorem, as in [23]. □

Remark 4.4.1. If the external input does not depend upon the variables (z, Δ) and if the condition $\mu S'_m W^g_0 < \alpha$ is satisfied, then there exists a unique stationary solution by application of proposition 4.4.1. Moreover, this stationary solution does not depend upon the variables (z, Δ), because there always exists one constant stationary solution when the external input does not depend upon these variables. Indeed, equation (14) is then equivalent to:

$$0 = -\alpha V^0 + \beta S(\mu V^0) + I^0_{\mathrm{ext}}, \quad \text{where} \quad \beta = \int_{\mathbb{D}\times\mathbb{R}^+_*} W\Big(d_2(z,z'), \Big|\log\frac{\Delta}{\Delta'}\Big|\Big)\,\frac{d\Delta'}{\Delta'}\,dm(z'),$$

and β does not depend upon the variables (z, Δ) because of lemma 3.1.1. Because of the properties of the sigmoid function S, the equation $0 = -\alpha V^0 + \beta S(\mu V^0) + I^0_{\mathrm{ext}}$ always has one solution.

If on the other hand the input current does depend upon these variables but is invariant under the action of a subgroup of U(1,1), the group of the isometries of $\mathbb{D}$ (see appendix A), and the condition $\mu S'_m W^g_0 < \alpha$ is satisfied, then the unique stationary solution will also be invariant under the action of the same subgroup. We refer the interested reader to our work [15] on equivariant bifurcation of hyperbolic planforms for more on this subject.

When the condition $\mu S'_m W^g_0 < \alpha$ is satisfied, we call the unique solution in $B_\mu$ the primary stationary solution.

4.5 Stability of the primary stationary solution

In this subsection we show that the condition $\mu S'_m W^g_0 < \alpha$ guarantees the stability of the primary stationary solution of (3).

Theorem 4.5.1. We suppose that $I \in \mathcal{F}$ and that the condition $\mu S'_m W^g_0 < \alpha$ is satisfied; then the associated primary stationary solution of (3) is asymptotically stable.

Proof. Let $V^0_\mu$ be the primary stationary solution of (3), which exists since $\mu S'_m W^g_0 < \alpha$ is satisfied. Let also $V_\mu$ be the unique solution of the same equation with some initial condition $V_\mu(0) = \varphi \in \mathcal{F}$, see theorem 3.3.1. We introduce the new function $X = V_\mu - V^0_\mu$, which satisfies:

$$\begin{cases} \partial_t X(z,\Delta,t) = -\alpha X(z,\Delta,t) + \displaystyle\int_{\mathbb{D}\times\mathbb{R}^+_*} W_m\Big(d_2(z,z'), \Big|\log\frac{\Delta}{\Delta'}\Big|\Big)\,\Theta(X(z',\Delta',t))\,\frac{d\Delta'}{\Delta'}\,dm(z'),\\ X(z,\Delta,0) = \varphi(z,\Delta) - V^0_\mu(z,\Delta), \end{cases}$$

where $W_m\big(d_2(z,z'), |\log\frac{\Delta}{\Delta'}|\big) = S'_m W\big(d_2(z,z'), |\log\frac{\Delta}{\Delta'}|\big)$ and the function $\Theta(X(z,\Delta,t))$ is given by $\Theta(X(z,\Delta,t)) = S_-(\mu V_\mu(z,\Delta,t)) - S_-(\mu V^0_\mu(z,\Delta))$ with $S_- = (S'_m)^{-1}S$. We note that, because of the definition of Θ and the mean value theorem, $|\Theta(X(z,\Delta,t))| \le \mu|X(z,\Delta,t)|$; in particular, $|\Theta(r)| \le \mu|r|$ for all $r \in \mathbb{R}$.

$$\partial_t X(z,\Delta,t) = -\alpha X(z,\Delta,t) + \int_{\mathbb{D}\times\mathbb{R}^+_*} W_m\Big(d_2(z,z'), \Big|\log\frac{\Delta}{\Delta'}\Big|\Big)\,\Theta(X(z',\Delta',t))\,\frac{d\Delta'}{\Delta'}\,dm(z')$$
$$\Rightarrow\; \partial_t\big(e^{\alpha t}X(z,\Delta,t)\big) = e^{\alpha t}\int_{\mathbb{D}\times\mathbb{R}^+_*} W_m\Big(d_2(z,z'), \Big|\log\frac{\Delta}{\Delta'}\Big|\Big)\,\Theta(X(z',\Delta',t))\,\frac{d\Delta'}{\Delta'}\,dm(z')$$
$$\Rightarrow\; X(z,\Delta,t) = e^{-\alpha t}X(z,\Delta,0) + \int_0^t e^{-\alpha(t-u)}\int_{\mathbb{D}\times\mathbb{R}^+_*} W_m\Big(d_2(z,z'), \Big|\log\frac{\Delta}{\Delta'}\Big|\Big)\,\Theta(X(z',\Delta',u))\,\frac{d\Delta'}{\Delta'}\,dm(z')\,du$$
$$\Rightarrow\; |X(z,\Delta,t)| \le e^{-\alpha t}|X(z,\Delta,0)| + \mu\int_0^t e^{-\alpha(t-u)}\int_{\mathbb{D}\times\mathbb{R}^+_*} \Big|W_m\Big(d_2(z,z'), \Big|\log\frac{\Delta}{\Delta'}\Big|\Big)\Big|\,|X(z',\Delta',u)|\,\frac{d\Delta'}{\Delta'}\,dm(z')\,du$$
$$\Rightarrow\; \|X(t)\|_\infty \le e^{-\alpha t}\|X(0)\|_\infty + \mu W^g_0 S'_m\int_0^t e^{-\alpha(t-u)}\|X(u)\|_\infty\,du.$$


If we set $G(t) = e^{\alpha t}\|X(t)\|_\infty$, then we have:

$$G(t) \le G(0) + \mu W^g_0 S'_m\int_0^t G(u)\,du,$$

and G is continuous for all t ≥ 0. The Gronwall inequality implies that:

$$G(t) \le G(0)\,e^{\mu W^g_0 S'_m t} \;\Rightarrow\; \|X(t)\|_\infty \le e^{(\mu W^g_0 S'_m - \alpha)t}\,\|X(0)\|_\infty,$$

and the conclusion follows. □
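The exponential decay rate $\mu W^g_0 S'_m - \alpha$ produced by the Gronwall argument can be checked on a small discretised caricature of (3) (an illustrative 3-node network, not the hyperbolic model itself): the sup-norm distance between two solutions must stay below $e^{(\mu W_0 S'_m - \alpha)t}$ times its initial value.

```python
import math

# Caricature of theorem 4.5.1: V_i' = -alpha V_i + sum_j K_ij S(mu V_j) + I_i,
# with row abs-sums of K equal to W0.  For X = V - U the Gronwall bound reads
# ||X(t)||_inf <= exp((mu*W0*S'_m - alpha) t) ||X(0)||_inf, with S'_m = 1/4.
alpha, mu, W0 = 1.0, 0.1, 2.0
K = [[1.0, -0.6, 0.4], [-0.5, 0.9, 0.6], [0.3, 0.3, -1.4]]   # row abs-sums = 2
I = [0.2, -0.1, 0.3]
S = lambda x: 1.0 / (1.0 + math.exp(-x))
rate = mu * W0 * 0.25 - alpha                                 # = -0.95 < 0

def step(V, dt):
    return [V[i] + dt * (-alpha * V[i]
                         + sum(K[i][j] * S(mu * V[j]) for j in range(3)) + I[i])
            for i in range(3)]

V, U, dt = [1.0, -2.0, 0.5], [0.0, 0.0, 0.0], 1e-3
X0 = max(abs(V[i] - U[i]) for i in range(3))
ok = True
for n in range(1, 3001):                                      # integrate to t = 3
    V, U = step(V, dt), step(U, dt)
    Xn = max(abs(V[i] - U[i]) for i in range(3))
    ok = ok and Xn <= 1.01 * X0 * math.exp(rate * n * dt)     # 1% Euler slack
assert ok
```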

5 Spatially localised bumps in the high gain limit

In many models of working memory, transient stimuli are encoded by feature-selective persistent neural activity. Such stimuli are imagined to induce the formation of a spatially localised bump of persistent activity which coexists with a stable uniform state. As an example, Camperi and Wang [24] have proposed and studied a network model of visuo-spatial working memory in prefrontal cortex adapted from the ring model of orientation of Ben-Yishai and colleagues [1]. Many studies have emerged in the past decades to analyse these localised bumps of activity [25-29]; see the paper by Coombes [30] for a review of the domain. In [25,26,28], the authors have examined the existence and stability of bump and multi-bump solutions to an integro-differential equation describing neuronal activity along a single spatial domain. In [27,29] the study is focused on the two-dimensional model, and a method is developed to approximate the integro-differential equation by a partial differential equation, which makes possible the determination of the stability of circularly symmetric solutions. It is therefore natural to study the emergence of spatially localised bumps for the structure tensor model in a hypercolumn of V1. We only deal with the reduced case of equation (13), which means that the membrane activity does not depend upon the contrast of the image intensity, keeping the general case for future work.

In order to construct exact bump solutions and to compare our results to previous studies [25-29], we consider the high gain limit μ → ∞ of the sigmoid function. As above, we denote by H the Heaviside function defined by H(x) = 1 for x ≥ 0 and H(x) = 0 otherwise. Equation (13) is rewritten as:

$$\partial_t V(z,t) = -\alpha V(z,t) + \int_{\mathbb{D}} W(z,z')\,H(V(z',t) - \kappa)\,dm(z') + I(z,t) = -\alpha V(z,t) + \int_{\{z'\in\mathbb{D}\,\mid\,V(z',t)\ge\kappa\}} W(z,z')\,dm(z') + I(z,t). \tag{20}$$

We have introduced a threshold κ to shift the zero of the Heaviside function. We make the assumption that the system is spatially homogeneous, that is, the external input I does not depend upon the variable t, and the connectivity function depends only on the hyperbolic distance between two points of $\mathbb{D}$: $W(z,z') = W(d_2(z,z'))$. For illustrative purposes, we will use the exponential weight distribution as a specific example throughout this section:

$$W(z,z') = W(d_2(z,z')) = \exp\Big(-\frac{d_2(z,z')}{b}\Big). \tag{21}$$


The theoretical study of equation (20) has been carried out in [21], where the authors imposed strong regularity assumptions on the kernel function W, such as Hölder continuity, and used compactness arguments and integral equation techniques to obtain a global existence result for solutions of (20). Our approach is very different: we follow that of [25-29,31] by proceeding in a constructive fashion. In a first part, we define what we call a hyperbolic radially symmetric bump and present some preliminary results for the linear stability analysis of the last part. The second part is devoted to the proof of the technical theorem 5.1.1 stated in the first part. The proof uses results on the Fourier transform introduced in section 4, hyperbolic geometry and hypergeometric functions. Our results will be illustrated in the following section 6.

5.1 Existence of hyperbolic radially symmetric bumps

From equation (20), a general stationary pulse satisfies the equation:

$$\alpha V(z) = \int_{\{z'\in\mathbb{D}\,\mid\,V(z')\ge\kappa\}} W(z,z')\,dm(z') + I_{\mathrm{ext}}(z).$$

For convenience, we denote by $\mathcal{M}(z,K)$ the integral $\int_K W(z,z')\,dm(z')$, with $K = \{z\in\mathbb{D}\,\mid\,V(z)\ge\kappa\}$. The relation $V(z) = \kappa$ holds for all $z \in \partial K$.

Definition 5.1.1. V is called a hyperbolic radially symmetric stationary-pulse solution of (20) if V depends only upon the variable r and is such that:

$$V(r) > \kappa, \quad r \in [0,\omega),$$
$$V(\omega) = \kappa,$$
$$V(r) < \kappa, \quad r \in (\omega,\infty),$$
$$V(\infty) = 0,$$

and is a fixed point of equation (20):

$$\alpha V(r) = \mathcal{M}(r,\omega) + I_{\mathrm{ext}}(r), \tag{22}$$

where $I_{\mathrm{ext}}(r) = \mathcal{I}e^{-\frac{r^2}{2\sigma^2}}$ is a Gaussian input and $\mathcal{M}(r,\omega)$ is defined by the following equation:

$$\mathcal{M}(r,\omega) \overset{\text{def}}{=} \mathcal{M}(z, B_h(0,\omega)),$$

where $B_h(0,\omega)$ is a hyperbolic disk centered at the origin of hyperbolic radius ω.

From symmetry arguments there exists a hyperbolic radially symmetric stationary-pulse solution V(r) of (20); furthermore, the threshold κ and width ω are related according to the self-consistency condition

$$\alpha\kappa = \mathcal{M}(\omega) + I_{\mathrm{ext}}(\omega) \overset{\text{def}}{=} N(\omega), \tag{23}$$

where

$$\mathcal{M}(\omega) \overset{\text{def}}{=} \mathcal{M}(\omega,\omega).$$

The existence of such a bump can then be established by finding solutions of (23). The function N(ω) is plotted in Figure 1 for a range of values of the input amplitude $\mathcal{I}$. The horizontal dashed lines indicate different values of ακ; the points of intersection determine the existence of stationary pulse solutions. Qualitatively, for sufficiently large input amplitude $\mathcal{I}$ we have N'(0) < 0 and it is possible to find only one solution branch for large ακ. For small input amplitudes $\mathcal{I}$ we have N'(0) > 0 and there always exists one solution branch for ακ below a critical value ≈ 0.06. For intermediate values of the input amplitude $\mathcal{I}$, as ακ varies, we have the possibility of zero, one or two solutions. Anticipating the stability results of section 5.3, we obtain that when N'(ω) < 0 the corresponding solution is stable.
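N(ω) can be evaluated by direct numerical integration of $\mathcal{M}(r,\omega)$ in hyperbolic polar coordinates, using the area element $dm = \frac{1}{2}\sinh(2r')\,dr'\,d\theta$ and the distance in its Möbius form $d_2(z,z') = \tanh^{-1}|(z-z')/(1-\bar z' z)|$ (equivalent to formula (25) below). The sketch uses illustrative parameters ($b = 0.2$, $\sigma = 0.05$, $\mathcal{I} = 0.001$) and coarse midpoint rules; it checks that N grows once the bump is wider than the input, and that $\mathcal{M}(\cdot,\omega)$ decreases in r, as proposition 5.1.1 below asserts.

```python
import cmath, math

# M(r, w) = (1/2) ∫_0^w ∫_0^{2π} W(d2(tanh r, tanh r' e^{iθ})) sinh(2r') dθ dr'
# with W(d) = exp(-d/b), eq. (21); illustrative parameters, coarse grids.
b, sigma, I = 0.2, 0.05, 0.001

def d2(z, w):
    return math.atanh(abs((z - w) / (1 - w.conjugate() * z)))

def M(r, w, nr=120, nth=240):
    z, dr, dth, acc = math.tanh(r), w / nr, 2 * math.pi / nth, 0.0
    for i in range(nr):
        rp = (i + 0.5) * dr                    # midpoint rule in r'
        row = sum(math.exp(-d2(z, math.tanh(rp) * cmath.exp(1j * (j + 0.5) * dth)) / b)
                  for j in range(nth))
        acc += row * math.sinh(2 * rp) * dr * dth
    return 0.5 * acc

def N(w):
    return M(w, w) + I * math.exp(-w * w / (2 * sigma ** 2))

assert N(0.3) > N(0.0) + 0.01       # N grows once the bump is wider than the input
assert M(0.0, 0.3) > M(0.3, 0.3)    # M(., w) decreasing in r (prop. 5.1.1)
```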

We end this subsection with the following useful technical formula.

Theorem 5.1.1. For all $(r,\omega) \in \mathbb{R}^+\times\mathbb{R}^+$:

$$\mathcal{M}(r,\omega) = \frac{1}{4}\sinh(\omega)^2\cosh(\omega)^2\int_{\mathbb{R}} \widetilde{\mathbf{W}}(\lambda)\,\Phi^{(0,0)}_\lambda(r)\,\Phi^{(1,1)}_\lambda(\omega)\,\lambda\tanh\Big(\frac{\pi}{2}\lambda\Big)\,d\lambda, \tag{24}$$

where $\widetilde{\mathbf{W}}(\lambda)$ is the Fourier-Helgason transform of $\mathbf{W}(z) \overset{\text{def}}{=} W(d_2(z,0))$ and

$$\Phi^{(\alpha,\beta)}_\lambda(\omega) = F\Big(\frac{1}{2}(\rho + i\lambda), \frac{1}{2}(\rho - i\lambda);\, \alpha+1;\, -\sinh(\omega)^2\Big),$$

with $\alpha + \beta + 1 = \rho$ and F the hypergeometric function of the first kind.

Remark 5.1.1. We recall that F admits the integral representation [32]:

$$F(\alpha,\beta;\gamma;z) = \frac{\Gamma(\gamma)}{\Gamma(\beta)\Gamma(\gamma-\beta)}\int_0^1 t^{\beta-1}(1-t)^{\gamma-\beta-1}(1-tz)^{-\alpha}\,dt,$$

with $\Re(\gamma) > \Re(\beta) > 0$.
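The Euler integral representation can be checked against the defining Gauss series $F(\alpha,\beta;\gamma;z) = \sum_n \frac{(\alpha)_n(\beta)_n}{(\gamma)_n}\frac{z^n}{n!}$. The sketch below is self-contained (no special-function library); the parameters are illustrative and chosen with $\gamma > \beta > 1$ so the integrand is bounded on [0, 1]:

```python
import math

def hyp2f1(a, b, c, z, nterms=200):
    # Gauss series, convergent for |z| < 1
    s, term = 0.0, 1.0
    for n in range(nterms):
        s += term
        term *= (a + n) * (b + n) / ((c + n) * (n + 1)) * z
    return s

a, b, c, z = 0.8, 1.2, 2.5, -0.4
# Euler integral: F = Γ(c)/(Γ(b)Γ(c-b)) ∫_0^1 t^{b-1}(1-t)^{c-b-1}(1-tz)^{-a} dt
n = 200000
h = 1.0 / n
integral = h * sum(t ** (b - 1) * (1 - t) ** (c - b - 1) * (1 - t * z) ** (-a)
                   for t in ((i + 0.5) * h for i in range(n)))   # midpoint rule
pref = math.gamma(c) / (math.gamma(b) * math.gamma(c - b))
assert abs(pref * integral - hyp2f1(a, b, c, z)) < 1e-4
```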

Remark 5.1.2. In section 4 we introduced the function $\Phi_\lambda(z) = \int_{\partial\mathbb{D}} e^{(i\lambda+1)\langle z,b\rangle}\,db$. In [22], it is shown that:

$$\Phi^{(0,0)}_\lambda(r) = \Phi_\lambda(\tanh(r)) \quad \text{if } z = \tanh(r)e^{i\theta}.$$

Remark 5.1.3. Let us point out that this result can be linked to the work of Folias and Bressloff in [31], later used in [29]. They constructed a two-dimensional pulse for a general, radially symmetric synaptic weight function and obtained a similar formal representation of the integral of the connectivity function w over the disk B(O, a) centered at the origin O and of radius a. Using their notations,

$$\mathcal{M}(a,r) = \int_0^{2\pi}\int_0^a w(|\mathbf{r} - \mathbf{r}'|)\,r'\,dr'\,d\theta = 2\pi a\int_0^\infty \breve{w}(\rho)\,J_0(r\rho)\,J_1(a\rho)\,d\rho,$$

where $J_\nu(x)$ is the Bessel function of the first kind and $\breve{w}$ is the real Fourier transform of w. In our case, instead of the Bessel functions, we find $\Phi^{(\nu,\nu)}_\lambda(r)$, which is linked to the hypergeometric function of the first kind.

We now show that for a general monotonically decreasing weight function W, the function $\mathcal{M}(r,\omega)$ is necessarily a monotonically decreasing function of r. This will ensure that the hyperbolic radially symmetric stationary-pulse solution (22) is also a monotonically decreasing function of r in the case of a Gaussian input. The demonstration of this result directly uses theorem 5.1.1.

Proposition 5.1.1. V is a monotonically decreasing function of r for any monotonically decreasing synaptic weight function W.

Proof. Differentiating $\mathcal{M}$ with respect to r yields:

$$\frac{\partial\mathcal{M}}{\partial r}(r,\omega) = \frac{1}{2}\int_0^\omega\int_0^{2\pi} \frac{\partial}{\partial r}\Big(W\big(d_2(\tanh(r), \tanh(r')e^{i\theta})\big)\Big)\sinh(2r')\,dr'\,d\theta.$$

We have to compute

$$\frac{\partial}{\partial r}\Big(W\big(d_2(\tanh(r), \tanh(r')e^{i\theta})\big)\Big) = W'\big(d_2(\tanh(r), \tanh(r')e^{i\theta})\big)\,\frac{\partial}{\partial r}\Big(d_2\big(\tanh(r), \tanh(r')e^{i\theta}\big)\Big).$$

It is a result of elementary hyperbolic trigonometry that

$$d_2\big(\tanh(r), \tanh(r')e^{i\theta}\big) = \tanh^{-1}\left(\sqrt{\frac{\tanh(r)^2 + \tanh(r')^2 - 2\tanh(r)\tanh(r')\cos(\theta)}{1 + \tanh(r)^2\tanh(r')^2 - 2\tanh(r)\tanh(r')\cos(\theta)}}\right). \tag{25}$$

We let $\rho = \tanh(r)$, $\rho' = \tanh(r')$ and define

$$F_{\rho',\theta}(\rho) = \frac{\rho^2 + \rho'^2 - 2\rho\rho'\cos(\theta)}{1 + \rho^2\rho'^2 - 2\rho\rho'\cos(\theta)}.$$

It follows that

$$\frac{\partial}{\partial\rho}\tanh^{-1}\Big(\sqrt{F_{\rho',\theta}(\rho)}\Big) = \frac{\partial_\rho F_{\rho',\theta}(\rho)}{2\big(1 - F_{\rho',\theta}(\rho)\big)\sqrt{F_{\rho',\theta}(\rho)}},$$

and

$$\partial_\rho F_{\rho',\theta}(\rho) = \frac{2\big[(\rho - \rho'\cos(\theta)) + \rho\rho'(\rho' - \rho\cos(\theta))\big]\,(1-\rho'^2)}{\big(1 + \rho^2\rho'^2 - 2\rho\rho'\cos(\theta)\big)^2}.$$

We conclude that if $\rho > \tanh(\omega)$, i.e. r > ω, then for all $0 \le \rho' \le \tanh(\omega)$ and 0 ≤ θ ≤ 2π,

$$(\rho - \rho'\cos(\theta)) + \rho\rho'(\rho' - \rho\cos(\theta)) > 0,$$

which implies $\frac{\partial\mathcal{M}}{\partial r}(r,\omega) < 0$ for r > ω, since W' < 0.

To see that it is also negative for r < ω, we differentiate equation (24) with respect to r:

$$\frac{\partial\mathcal{M}}{\partial r}(r,\omega) = \frac{1}{4}\sinh(\omega)^2\cosh(\omega)^2\int_{\mathbb{R}} \widetilde{\mathbf{W}}(\lambda)\,\frac{\partial}{\partial r}\Phi^{(0,0)}_\lambda(r)\,\Phi^{(1,1)}_\lambda(\omega)\,\lambda\tanh\Big(\frac{\pi}{2}\lambda\Big)\,d\lambda.$$


The following formula holds for the hypergeometric function (see Erdelyi in [32]):

$$\frac{d}{dz}F(a,b;c;z) = \frac{ab}{c}\,F(a+1,b+1;c+1;z).$$

It implies

$$\frac{\partial}{\partial r}\Phi^{(0,0)}_\lambda(r) = -\frac{1}{2}\sinh(r)\cosh(r)\,(1+\lambda^2)\,\Phi^{(1,1)}_\lambda(r).$$

Substituting in the previous equation giving $\frac{\partial\mathcal{M}}{\partial r}$, we find:

$$\frac{\partial\mathcal{M}}{\partial r}(r,\omega) = -\frac{1}{64}\sinh(2\omega)^2\sinh(2r)\int_{\mathbb{R}} \widetilde{\mathbf{W}}(\lambda)\,(1+\lambda^2)\,\Phi^{(1,1)}_\lambda(r)\,\Phi^{(1,1)}_\lambda(\omega)\,\lambda\tanh\Big(\frac{\pi}{2}\lambda\Big)\,d\lambda,$$

implying that:

$$\mathrm{sgn}\Big(\frac{\partial\mathcal{M}}{\partial r}(r,\omega)\Big) = \mathrm{sgn}\Big(\frac{\partial\mathcal{M}}{\partial r}(\omega,r)\Big).$$

Consequently, $\frac{\partial\mathcal{M}}{\partial r}(r,\omega) < 0$ for r < ω, by the first part of the proof. Hence V is monotonically decreasing in r for any monotonically decreasing synaptic weight function W.

□

As a consequence, for our particular choice of exponential weight function (21), the radially symmetric bump is monotonically decreasing in r, as will be recovered in our numerical experiments in section 6.
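Formula (25) is the polar-coordinate form of the Möbius-invariant distance $d_2(z,z') = \tanh^{-1}|(z-z')/(1-\bar z' z)|$ (an identification consistent with $r = d_2(z, O)$ for $z = \tanh(r)e^{i\theta}$). A quick numerical check of the two expressions:

```python
import cmath, math

def d2_moebius(z, w):
    return math.atanh(abs((z - w) / (1 - w.conjugate() * z)))

def d2_polar(r, rp, theta):
    # formula (25)
    rho, rhop = math.tanh(r), math.tanh(rp)
    num = rho ** 2 + rhop ** 2 - 2 * rho * rhop * math.cos(theta)
    den = 1 + rho ** 2 * rhop ** 2 - 2 * rho * rhop * math.cos(theta)
    return math.atanh(math.sqrt(num / den))

for (r, rp, theta) in [(0.3, 0.7, 1.1), (1.2, 0.4, 2.8), (0.05, 0.05, 0.5)]:
    z, w = math.tanh(r), math.tanh(rp) * cmath.exp(1j * theta)
    assert abs(d2_polar(r, rp, theta) - d2_moebius(z, w)) < 1e-12
assert abs(d2_polar(0.8, 0.0, 0.0) - 0.8) < 1e-12   # d2(tanh r, 0) = r
```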

5.2 Proof of theorem 5.1.1

The proof of theorem 5.1.1 proceeds in four steps. First we introduce some notation and recall some basic properties of the Fourier transform in the Poincaré disk. Second we prove two propositions. Third we state a technical lemma on hypergeometric functions, the proof being given in lemma F.0.4 of appendix F. The last step is devoted to the conclusion of the proof.

5.2.1 First step

In order to calculate $\mathcal{M}(r,\omega)$, we use the Fourier transform in $\mathbb{D}$, which has already been introduced in section 4. First we rewrite $\mathcal{M}(r,\omega)$ as a convolution product:

Proposition 5.2.1. For all $(r,\omega) \in \mathbb{R}^+\times\mathbb{R}^+$:

$$\mathcal{M}(r,\omega) = \frac{1}{4\pi}\int_{\mathbb{R}} \widetilde{\mathbf{W}}(\lambda)\,\Phi_\lambda * 1_{B_h(0,\omega)}(z)\,\lambda\tanh\Big(\frac{\pi}{2}\lambda\Big)\,d\lambda. \tag{26}$$

Proof. We start with the definition of $\mathcal{M}(r,\omega)$ and use the convolutional form of the integral:

$$\mathcal{M}(r,\omega) = \mathcal{M}(z, B_h(0,\omega)) = \int_{B_h(0,\omega)} W(z,z')\,dm(z') = \int_{\mathbb{D}} W(z,z')\,1_{B_h(0,\omega)}(z')\,dm(z') = \mathbf{W} * 1_{B_h(0,\omega)}(z).$$

In [22], Helgason proves an inversion formula for the hyperbolic Fourier transform, and we apply this result to $\mathbf{W}$:

$$\mathbf{W}(z) = \frac{1}{4\pi}\int_{\mathbb{R}}\int_{\partial\mathbb{D}} \widetilde{\mathbf{W}}(\lambda,b)\,e^{(i\lambda+1)\langle z,b\rangle}\,\lambda\tanh\Big(\frac{\pi}{2}\lambda\Big)\,d\lambda\,db = \frac{1}{4\pi}\int_{\mathbb{R}} \widetilde{\mathbf{W}}(\lambda)\Big(\int_{\partial\mathbb{D}} e^{(i\lambda+1)\langle z,b\rangle}\,db\Big)\lambda\tanh\Big(\frac{\pi}{2}\lambda\Big)\,d\lambda;$$

the last equality is a direct application of lemma 4.2.1, and we can deduce that

$$\mathbf{W}(z) = \frac{1}{4\pi}\int_{\mathbb{R}} \widetilde{\mathbf{W}}(\lambda)\,\Phi_\lambda(z)\,\lambda\tanh\Big(\frac{\pi}{2}\lambda\Big)\,d\lambda. \tag{27}$$

Finally we have:

$$\mathcal{M}(r,\omega) = \mathbf{W} * 1_{B_h(0,\omega)}(z) = \frac{1}{4\pi}\int_{\mathbb{R}} \widetilde{\mathbf{W}}(\lambda)\,\Phi_\lambda * 1_{B_h(0,\omega)}(z)\,\lambda\tanh\Big(\frac{\pi}{2}\lambda\Big)\,d\lambda,$$

which is the desired formula. □

It appears that the study of $\mathcal{M}(r,\omega)$ reduces to calculating the convolution product $\Phi_\lambda * 1_{B_h(0,\omega)}(z)$.

Proposition 5.2.2. For all $z = k\cdot O$ with $k \in G = SU(1,1)$ we have:

$$\Phi_\lambda * 1_{B_h(0,\omega)}(z) = \int_{B_h(0,\omega)} \Phi_\lambda(k^{-1}\cdot z')\,dm(z').$$

Proof. Let $z = k\cdot O$ for $k \in G$; we have:

$$\Phi_\lambda * 1_{B_h(0,\omega)}(z) = \int_G 1_{B_h(0,\omega)}(g\cdot O)\,\Phi_\lambda(g^{-1}\cdot z)\,dg = \int_G 1_{B_h(0,\omega)}(g\cdot O)\,\Phi_\lambda(g^{-1}k\cdot O)\,dg.$$

For all $g, k \in G$, $\Phi_\lambda(g^{-1}k\cdot O) = \Phi_\lambda(k^{-1}g\cdot O)$, so that:

$$\Phi_\lambda * 1_{B_h(0,\omega)}(z) = \int_G 1_{B_h(0,\omega)}(g\cdot O)\,\Phi_\lambda(k^{-1}g\cdot O)\,dg = \int_{\mathbb{D}} 1_{B_h(0,\omega)}(z')\,\Phi_\lambda(k^{-1}\cdot z')\,dm(z') = \int_{B_h(0,\omega)} \Phi_\lambda(k^{-1}\cdot z')\,dm(z').$$

□

5.2.2 Second step

In this part, we prove two results:

• the mapping $z = k\cdot O = \tanh(r)e^{i\theta} \mapsto \Phi_\lambda * 1_{B_h(0,\omega)}(z)$ is a radial function, i.e. it depends only upon the variable r;
• the following equality holds for $z = \tanh(r)e^{i\theta}$:

$$\Phi_\lambda * 1_{B_h(0,\omega)}(z) = \Phi_\lambda(a_r\cdot O)\int_{B_h(0,\omega)} e^{(i\lambda+1)\langle z',1\rangle}\,dm(z').$$

Proposition 5.2.3. If $z = k\cdot O$ and z is written $\tanh(r)e^{i\theta}$ with $r = d_2(z,O)$ in hyperbolic polar coordinates, the function $\Phi_\lambda * 1_{B_h(0,\omega)}(z)$ depends only upon the variable r.

Proof. If $z = \tanh(r)e^{i\theta}$, then $z = \mathrm{rot}_\theta\,a_r\cdot O$ and $k^{-1} = a_{-r}\mathrm{rot}_{-\theta}$. Similarly $z' = \mathrm{rot}_{\theta'}\,a_{r'}\cdot O$. Thanks to the previous proposition 5.2.2, we can write:

$$\Phi_\lambda * 1_{B_h(0,\omega)}(z) = \int_{B_h(0,\omega)} \Phi_\lambda(k^{-1}\cdot z')\,dm(z') = \frac{1}{2}\int_0^\omega\int_0^{2\pi} \Phi_\lambda(a_{-r}\mathrm{rot}_{\theta'-\theta}\,a_{r'}\cdot O)\sinh(2r')\,dr'\,d\theta'$$
$$= \frac{1}{2}\int_0^\omega\int_0^{2\pi} \Phi_\lambda(a_{-r}\mathrm{rot}_\psi\,a_{r'}\cdot O)\sinh(2r')\,dr'\,d\psi = \int_{B_h(0,\omega)} \Phi_\lambda(a_{-r}\cdot z')\,dm(z'),$$

which, as announced, is only a function of r. □

We now give an explicit formula for the integral $\int_{B_h(0,\omega)}\Phi_\lambda(a_{-r}\cdot z')\,dm(z')$.


Proposition 5.2.4. For all $z=\tanh(r)e^{i\theta}$ we have:

$$\Phi_\lambda*\mathbf{1}_{B_h(0,\omega)}(z)=\Phi_\lambda(a_r\cdot O)\int_{B_h(0,\omega)}e^{(i\lambda+1)\langle z',1\rangle}\,dm(z')$$

Proof. We first recall a formula from [22].

Lemma 5.2.1. For all g ∈ G the following equation holds:

$$\Phi_\lambda(g^{-1}\cdot z)=\int_{\partial\mathbb{D}}e^{(-i\lambda+1)\langle g\cdot O,b\rangle}\,e^{(i\lambda+1)\langle z,b\rangle}\,db$$

Proof. See [22]. □

It follows immediately that for all $z\in\mathbb{D}$ and r ∈ ℝ we have:

$$\Phi_\lambda(a_{-r}\cdot z)=\int_{\partial\mathbb{D}}e^{(-i\lambda+1)\langle a_r\cdot O,b\rangle}\,e^{(i\lambda+1)\langle z,b\rangle}\,db$$

We integrate this formula over the hyperbolic ball B_h(0, ω), which gives:

$$\int_{B_h(0,\omega)}\Phi_\lambda(a_{-r}\cdot z')\,dm(z')=\int_{B_h(0,\omega)}\Big(\int_{\partial\mathbb{D}}e^{(-i\lambda+1)\langle a_r\cdot O,b\rangle}\,e^{(i\lambda+1)\langle z',b\rangle}\,db\Big)dm(z'),$$

and we exchange the order of integration:

$$=\int_{\partial\mathbb{D}}e^{(-i\lambda+1)\langle a_r\cdot O,b\rangle}\Big(\int_{B_h(0,\omega)}e^{(i\lambda+1)\langle z',b\rangle}\,dm(z')\Big)db.$$

We note that the integral $\int_{B_h(0,\omega)}e^{(i\lambda+1)\langle z',b\rangle}\,dm(z')$ does not depend upon the variable $b=e^{i\varphi}$. Indeed:

$$\int_{B_h(0,\omega)}e^{(i\lambda+1)\langle z',b\rangle}\,dm(z')=\frac{1}{2}\int_0^\omega\int_0^{2\pi}\left(\frac{1-\tanh(x)^2}{|\tanh(x)e^{i\theta}-e^{i\varphi}|^2}\right)^{\frac{i\lambda+1}{2}}\sinh(2x)\,dx\,d\theta$$

$$=\frac{1}{2}\int_0^\omega\int_0^{2\pi}\left(\frac{1-\tanh(x)^2}{|\tanh(x)e^{i(\theta-\varphi)}-1|^2}\right)^{\frac{i\lambda+1}{2}}\sinh(2x)\,dx\,d\theta=\frac{1}{2}\int_0^\omega\int_0^{2\pi}\left(\frac{1-\tanh(x)^2}{|\tanh(x)e^{i\theta'}-1|^2}\right)^{\frac{i\lambda+1}{2}}\sinh(2x)\,dx\,d\theta',$$

and indeed the integral does not depend upon the variable b:

$$\int_{B_h(0,\omega)}e^{(i\lambda+1)\langle z',b\rangle}\,dm(z')=\int_{B_h(0,\omega)}e^{(i\lambda+1)\langle z',1\rangle}\,dm(z').$$

Finally, we can write:

$$\int_{B_h(0,\omega)}\Phi_\lambda(a_{-r}\cdot z')\,dm(z')=\int_{\partial\mathbb{D}}e^{(-i\lambda+1)\langle a_r\cdot O,b\rangle}\,db\int_{B_h(0,\omega)}e^{(i\lambda+1)\langle z',1\rangle}\,dm(z')=\Phi_{-\lambda}(a_r\cdot O)\int_{B_h(0,\omega)}e^{(i\lambda+1)\langle z',1\rangle}\,dm(z')=\Phi_\lambda(a_r\cdot O)\int_{B_h(0,\omega)}e^{(i\lambda+1)\langle z',1\rangle}\,dm(z'),$$

because $\Phi_\lambda=\Phi_{-\lambda}$ (as solutions of the same equation).

This completes the proof that:

$$\Phi_\lambda*\mathbf{1}_{B_h(0,\omega)}(z)=\int_{B_h(0,\omega)}\Phi_\lambda(a_{-r}\cdot z')\,dm(z')=\Phi_\lambda(a_r\cdot O)\int_{B_h(0,\omega)}e^{(i\lambda+1)\langle z',1\rangle}\,dm(z').\ \square$$


5.2.3 Third step
We state a useful formula.

Lemma 5.2.2. For all ω > 0 the following formula holds:

$$\int_{B_h(0,\omega)}\Phi_\lambda(z)\,dm(z)=\pi\sinh(\omega)^2\cosh(\omega)^2\,\Phi_\lambda^{(1,1)}(\omega)$$

Proof. See lemma F.0.4 of appendix F. □

5.2.4 The main result
At this point we have proved the following proposition, thanks to propositions 5.2.1 and 5.2.4.

Proposition 5.2.5. If $z=\tanh(r)e^{i\theta}\in B_h(0,\omega)$, M(r, ω) is given by the following formula:

$$M(r,\omega)=\frac{1}{4\pi}\int_{\mathbb{R}}\widetilde{W}(\lambda)\,\Phi_\lambda(a_r\cdot O)\,\Gamma_\lambda(\omega)\,\lambda\tanh\Big(\frac{\pi}{2}\lambda\Big)\,d\lambda=\frac{1}{4\pi}\int_{\mathbb{R}}\widetilde{W}(\lambda)\,\Phi_\lambda^{(0,0)}(r)\,\Gamma_\lambda(\omega)\,\lambda\tanh\Big(\frac{\pi}{2}\lambda\Big)\,d\lambda$$

where

$$\Gamma_\lambda(\omega)\stackrel{\mathrm{def}}{=}\int_{B_h(0,\omega)}e^{(i\lambda+1)\langle z',1\rangle}\,dm(z')$$

We are now in a position to obtain the analytic form for M(r, ω) of theorem 5.1.1. We prove that

$$\Gamma_\lambda(\omega)=\int_{B_h(0,\omega)}\Phi_\lambda(z)\,dm(z).$$

Indeed, in hyperbolic polar coordinates, we have:

$$\Gamma_\lambda(\omega)=\frac{1}{2}\int_0^\omega\int_0^{2\pi}e^{(i\lambda+1)\langle\mathrm{rot}_\theta a_r\cdot O,1\rangle}\sinh(2r)\,dr\,d\theta=\frac{1}{2}\int_0^\omega\int_0^{2\pi}e^{(i\lambda+1)\langle a_r\cdot O,e^{-i\theta}\rangle}\sinh(2r)\,dr\,d\theta$$

$$=\pi\int_0^\omega\int_{\partial\mathbb{D}}e^{(i\lambda+1)\langle a_r\cdot O,b\rangle}\,db\,\sinh(2r)\,dr=\pi\int_0^\omega\Phi_\lambda(a_r\cdot O)\sinh(2r)\,dr$$

On the other hand:

$$\int_{B_h(0,\omega)}\Phi_\lambda(z)\,dm(z)=\frac{1}{2}\int_0^\omega\int_0^{2\pi}\Phi_\lambda(a_r\cdot O)\sinh(2r)\,dr\,d\theta=\pi\int_0^\omega\Phi_\lambda(a_r\cdot O)\sinh(2r)\,dr$$

This yields

$$\Gamma_\lambda(\omega)=\int_{B_h(0,\omega)}\Phi_\lambda(z)\,dm(z)=\pi\sinh(\omega)^2\cosh(\omega)^2\,\Phi_\lambda^{(1,1)}(\omega),$$

where the last equality follows from lemma 5.2.2; this establishes (24).


5.3 Linear stability analysis
We now analyse the evolution of small time-dependent perturbations of the hyperbolic stationary-pulse solution through linear stability analysis. We use classical tools already developed in [29,31].

5.3.1 Spectral analysis of the linearized operator
Equation (20) is linearized about the stationary solution V(r) by introducing the time-dependent perturbation:

$$v(z,t)=V(r)+\varphi(z,t)$$

This leads to the linear equation:

$$\partial_t\varphi(z,t)=-\alpha\varphi(z,t)+\int_{\mathbb{D}}W(d_2(z,z'))\,H'(V(r')-\kappa)\,\varphi(z',t)\,dm(z').$$

We separate variables by setting $\varphi(z,t)=\varphi(z)e^{\beta t}$ to obtain the equation:

$$(\beta+\alpha)\varphi(z)=\int_{\mathbb{D}}W(d_2(z,z'))\,H'(V(r')-\kappa)\,\varphi(z')\,dm(z')$$

Introducing the hyperbolic polar coordinates $z=\tanh(r)e^{i\theta}$ and using the result:

$$H'(V(r)-\kappa)=\delta(V(r)-\kappa)=\frac{\delta(r-\omega)}{|V'(\omega)|}$$

we obtain:

$$(\beta+\alpha)\varphi(z)=\frac{1}{2}\int_0^\omega\int_0^{2\pi}W\big(d_2(\tanh(r)e^{i\theta},\tanh(r')e^{i\theta'})\big)\,\frac{\delta(r'-\omega)}{|V'(\omega)|}\,\varphi(\tanh(r')e^{i\theta'})\sinh(2r')\,dr'\,d\theta'$$

$$=\frac{\sinh(2\omega)}{2|V'(\omega)|}\int_0^{2\pi}W\big(d_2(\tanh(r)e^{i\theta},\tanh(\omega)e^{i\theta'})\big)\,\varphi(\tanh(\omega)e^{i\theta'})\,d\theta'$$

Note that we have formally differentiated the Heaviside function, which is permissible since it arises inside a convolution. One could also develop the linear stability analysis by considering perturbations of the threshold crossing points along the lines of Amari [20]. Since we are linearizing about a stationary rather than a traveling pulse, we can analyze the spectrum of the linear operator without recourse to Evans functions. With a slight abuse of notation we are led to study the solutions of the integral equation:

$$(\beta+\alpha)\varphi(r,\theta)=\frac{\sinh(2\omega)}{2|V'(\omega)|}\int_0^{2\pi}W(r,\omega;\theta'-\theta)\,\varphi(\omega,\theta')\,d\theta' \tag{28}$$

where the following equality derives from the definition of the hyperbolic distance in equation (25):

$$W(r,\omega;\varphi)\stackrel{\mathrm{def}}{=}W\circ\tanh^{-1}\left(\sqrt{\frac{\tanh(r)^2+\tanh(\omega)^2-2\tanh(r)\tanh(\omega)\cos(\varphi)}{1+\tanh(r)^2\tanh(\omega)^2-2\tanh(r)\tanh(\omega)\cos(\varphi)}}\right)$$

Essential spectrum. If the function φ satisfies the condition

$$\int_0^{2\pi}W(r,\omega;\theta')\,\varphi(\omega,\theta-\theta')\,d\theta'=0\quad\forall r,$$


then equation (28) reduces to:

$$\beta+\alpha=0,$$

yielding the eigenvalue:

$$\beta=-\alpha<0.$$

This part of the essential spectrum is negative and does not cause instability.
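The kernel W(r, ω; φ) introduced above encodes the identity $d_2(\tanh(r)e^{i\theta},\tanh(\omega)e^{i\theta'})=\tanh^{-1}(\cdot)$ with φ = θ′ − θ. This identity is easy to confirm numerically; the following sketch uses the Poincaré-disc distance $d_2(z,z')=\tanh^{-1}(|z-z'|/|1-\bar z z'|)$, and the sample values of r, ω, θ, θ′ are arbitrary illustrative choices:

```python
import cmath
import math

def d2(z, zp):
    # Poincare-disc distance d_2(z, z') = atanh(|z - z'| / |1 - conj(z) z'|)
    return math.atanh(abs(z - zp) / abs(1 - z.conjugate() * zp))

def d2_closed_form(r, omega, phi):
    # the expression inside the definition of W(r, omega; phi)
    tr, tw = math.tanh(r), math.tanh(omega)
    num = tr ** 2 + tw ** 2 - 2 * tr * tw * math.cos(phi)
    den = 1 + tr ** 2 * tw ** 2 - 2 * tr * tw * math.cos(phi)
    return math.atanh(math.sqrt(num / den))

# arbitrary sample point: z = tanh(r) e^{i theta}, z' = tanh(omega) e^{i theta'}
r, omega, theta, thetap = 0.7, 0.18, 0.3, 1.1
lhs_val = d2(math.tanh(r) * cmath.exp(1j * theta),
             math.tanh(omega) * cmath.exp(1j * thetap))
rhs_val = d2_closed_form(r, omega, thetap - theta)
```

The two evaluations agree to machine precision, confirming that the kernel depends on the angles only through their difference.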

Discrete spectrum. If we are not in the previous case we have to study the solutions of the integral equation (28). This equation shows that φ(r, θ) is completely determined by its values φ(ω, θ) on the circle of equation r = ω. Hence we need only consider r = ω, yielding the integral equation:

$$(\beta+\alpha)\varphi(\omega,\theta)=\frac{\sinh(2\omega)}{2|V'(\omega)|}\int_0^{2\pi}W(\omega,\omega;\theta')\,\varphi(\omega,\theta-\theta')\,d\theta'$$

The solutions of this equation are exponential functions $e^{\gamma\theta}$, where γ satisfies:

$$(\beta+\alpha)=\frac{\sinh(2\omega)}{2|V'(\omega)|}\int_0^{2\pi}W(\omega,\omega;\theta')\,e^{-\gamma\theta'}\,d\theta'$$

By the requirement that φ is 2π-periodic in θ, it follows that γ = in, where n ∈ ℤ. Thus the integral operator with kernel W has a discrete spectrum given by:

$$\beta_n+\alpha=\frac{\sinh(2\omega)}{2|V'(\omega)|}\int_0^{2\pi}W(\omega,\omega;\theta')\,e^{-in\theta'}\,d\theta'=\frac{\sinh(2\omega)}{2|V'(\omega)|}\int_0^{2\pi}W\circ\tanh^{-1}\left(\sqrt{\frac{2\tanh(\omega)^2(1-\cos(\theta'))}{1+\tanh(\omega)^4-2\tanh(\omega)^2\cos(\theta')}}\right)e^{-in\theta'}\,d\theta'$$

$$=\frac{\sinh(2\omega)}{|V'(\omega)|}\int_0^{\pi}W\circ\tanh^{-1}\left(\frac{2\tanh(\omega)\sin(\theta')}{\sqrt{(1-\tanh(\omega)^2)^2+4\tanh(\omega)^2\sin(\theta')^2}}\right)e^{-2in\theta'}\,d\theta'$$

$\beta_n$ is real since:

$$\Im(\beta_n)=-\frac{\sinh(2\omega)}{|V'(\omega)|}\int_0^{\pi}W\circ\tanh^{-1}\left(\frac{2\tanh(\omega)\sin(\theta')}{\sqrt{(1-\tanh(\omega)^2)^2+4\tanh(\omega)^2\sin(\theta')^2}}\right)\sin(2n\theta')\,d\theta'=0.$$

Hence,

$$\beta_n=\Re(\beta_n)=-\alpha+\frac{\sinh(2\omega)}{|V'(\omega)|}\int_0^{\pi}W\circ\tanh^{-1}\left(\frac{2\tanh(\omega)\sin(\theta')}{\sqrt{(1-\tanh(\omega)^2)^2+4\tanh(\omega)^2\sin(\theta')^2}}\right)\cos(2n\theta')\,d\theta'$$

We can state the following proposition:

Proposition 5.3.1. If $\beta_n<0$ for all n ≥ 0, then the hyperbolic stationary pulse is stable.

We now derive a reduced condition linking the parameters for the stability of the hyperbolic stationary pulse.

Reduced condition. Since $W\circ\tanh^{-1}(r)$ is a positive function of r, it follows that $\beta_n\le\beta_0$ for all n ≥ 1. Stability of the hyperbolic stationary pulse requires that $\beta_n<0$ for all n ≥ 0. This can be rewritten as:

$$\frac{\sinh(2\omega)}{|V'(\omega)|}\int_0^{\pi}W\circ\tanh^{-1}\left(\frac{2\tanh(\omega)\sin(\theta')}{\sqrt{(1-\tanh(\omega)^2)^2+4\tanh(\omega)^2\sin(\theta')^2}}\right)\cos(2n\theta')\,d\theta'<\alpha,\quad n\ge 0.$$


Using the fact that $\beta_n\le\beta_0$ for all n ≥ 1, we obtain the reduced stability condition:

$$\frac{W_0(\omega)}{|V'(\omega)|}<\alpha$$

where

$$W_0(\omega)\stackrel{\mathrm{def}}{=}\sinh(2\omega)\int_0^{\pi}W\circ\tanh^{-1}\left(\frac{2\tanh(\omega)\sin(\theta')}{\sqrt{(1-\tanh(\omega)^2)^2+4\tanh(\omega)^2\sin(\theta')^2}}\right)d\theta'$$
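The positivity of the integrand is what bounds every Fourier coefficient by the n = 0 coefficient, and hence what makes the condition on $W_0(\omega)$ sufficient. This can be checked with a small quadrature sketch; the exponential connectivity of section 6.2 and all parameter values below are illustrative choices, not those fixed by the analysis:

```python
import math

b, omega = 0.2, 0.18          # illustrative connectivity width and pulse radius

def w(x):                      # exponential connectivity of section 6.2
    return math.exp(-abs(x) / b)

def kernel(tp):
    # W o tanh^{-1}(2 tanh(w) sin(t') / sqrt((1 - tanh(w)^2)^2 + 4 tanh(w)^2 sin(t')^2))
    t = math.tanh(omega)
    q = 2 * t * math.sin(tp) / math.sqrt((1 - t * t) ** 2 + 4 * t * t * math.sin(tp) ** 2)
    return w(math.atanh(q))

def fourier_coeff(n, m=20000):
    # int_0^pi kernel(t') cos(2 n t') dt' by the midpoint rule
    h = math.pi / m
    return h * sum(kernel((k + 0.5) * h) * math.cos(2 * n * (k + 0.5) * h) for k in range(m))
```

Evaluating `fourier_coeff` for n = 0, 1, 2, … shows that the n = 0 value dominates all the others, as the positivity argument predicts.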

From (22) we have:

$$V'(\omega)=\frac{1}{\alpha}\big(-M_r(\omega)+I'(\omega)\big)$$

where

$$M_r(\omega)\stackrel{\mathrm{def}}{=}-\frac{\partial M}{\partial r}(\omega,\omega)=\frac{1}{64}\sinh(2\omega)^3\int_{\mathbb{R}}\widetilde{W}(\lambda)(1+\lambda^2)\,\Phi_\lambda^{(1,1)}(\omega)^2\,\lambda\tanh\Big(\frac{\pi}{2}\lambda\Big)\,d\lambda$$

We have previously established that $M_r(\omega)>0$ and that I′(ω) is negative by definition. Hence, letting $D(\omega)=|I'(\omega)|$, we have:

$$|V'(\omega)|=\frac{1}{\alpha}\big(M_r(\omega)+D(\omega)\big).$$

By substitution we obtain another form of the reduced stability condition:

$$D(\omega)>W_0(\omega)-M_r(\omega) \tag{29}$$

We also have:

$$M'(\omega)=\frac{d}{d\omega}M(\omega,\omega)=\frac{\partial M}{\partial r}(\omega,\omega)+\frac{\partial M}{\partial\omega}(\omega,\omega)=W_0(\omega)-M_r(\omega),$$

and

$$N'(\omega)=M'(\omega)+I'(\omega)=W_0(\omega)-M_r(\omega)-D(\omega),$$

showing that the stability condition (29) is satisfied when N′(ω) > 0 and is not satisfied when N′(ω) < 0.

Proposition 5.3.2 (Reduced condition). If N′(ω) > 0, then $\beta_n<0$ for all n ≥ 0 and the hyperbolic stationary pulse is stable.

6 Numerical results
The aim of this section is to numerically solve (13) for different values of the parameters. This implies developing a numerical scheme that approximates the solution of our equation, and proving that this scheme effectively converges to the solution. Since equation (13) is defined on D, computing the solutions on the whole hyperbolic disk has the same complexity as computing the solutions of the usual Euclidean neural field equations defined on ℝ². Like most authors in the Euclidean case [26,27,29,31], we reduce the domain of integration to a compact region of the hyperbolic disk. In practice, we work in the Euclidean ball of radius a = 0.5 centered at 0. Note that a Euclidean ball centered at the origin is also a centered hyperbolic ball, their radii being different.

We have divided this section into several parts. The first part is dedicated to the study of the discretization scheme of equation (13). In the following two parts, we study the solutions for different connectivity functions: an exponential function (section 6.2) and a difference of Gaussians (section 6.3).

6.1 Numerical schemes
Let us consider the modified equation of (13):

$$\begin{cases}\partial_t V(z,t)=-\alpha V(z,t)+\displaystyle\int_{B(0,a)}W(z,z')\,S(V(z',t))\,dm(z')+I(z,t), & t\in J\\ V(z,0)=V_0(z)\end{cases} \tag{30}$$

We assume that the connectivity function satisfies the conditions (C1)-(C2). Moreover, we express z in (Euclidean) polar coordinates such that $z=re^{i\theta}$, $V(z,t)=V(r,\theta,t)$ and $W(z,z')=W(r,\theta,r',\theta')$. The integral in equation (30) is then:

$$\int_{B(0,a)}W(z,z')\,S(V(z',t))\,dm(z')=\int_0^a\int_0^{2\pi}W(r,\theta,r',\theta')\,S(V(r',\theta',t))\,\frac{r'\,dr'\,d\theta'}{(1-r'^2)^2}$$

We define R to be the rectangle $R\stackrel{\mathrm{def}}{=}[0,a]\times[0,2\pi]$.

6.1.1 Discretization scheme
We discretize R in order to turn (30) into a finite number of equations. For this purpose we introduce $h_1=\frac{a}{N}$, $N\in\mathbb{N}^*=\mathbb{N}\setminus\{0\}$, and $h_2=\frac{2\pi}{M}$, $M\in\mathbb{N}^*$,

$$\forall i\in[[1,N+1]]\quad r_i=(i-1)h_1,\qquad\forall j\in[[1,M+1]]\quad\theta_j=(j-1)h_2,$$

and obtain the (N + 1)(M + 1) equations:

$$\frac{dV}{dt}(r_i,\theta_j,t)=-\alpha V(r_i,\theta_j,t)+\int_R W(r_i,\theta_j,r',\theta')\,S(V(r',\theta',t))\,\frac{r'\,dr'\,d\theta'}{(1-r'^2)^2}+I(r_i,\theta_j,t)$$

which define the discretization of (30):

$$\begin{cases}\dfrac{d\mathbf{V}}{dt}(t)=-\alpha\mathbf{V}(t)+\mathbf{W}\cdot S(\mathbf{V})(t)+\mathbf{I}(t), & t\in J\\ \mathbf{V}(0)=\mathbf{V}_0\end{cases} \tag{31}$$

where $\mathbf{V}(t)\in\mathcal{M}_{N+1,M+1}(\mathbb{R})$, $\mathbf{V}(t)_{i,j}=V(r_i,\theta_j,t)$. Similar definitions apply to $\mathbf{I}$ and $\mathbf{V}_0$. Moreover:

$$\mathbf{W}\cdot S(\mathbf{V})(t)_{i,j}=\int_R W(r_i,\theta_j,r',\theta')\,S(V(r',\theta',t))\,\frac{r'\,dr'\,d\theta'}{(1-r'^2)^2}$$

$\mathcal{M}_{n,p}(\mathbb{R})$ is the space of the matrices of size n × p with real coefficients. It remains to discretize the integral term. For this, as in [33], we use the rectangular rule for the quadrature so that for all $(r,\theta)\in R$ we have:

$$\int_0^a\int_0^{2\pi}W(r,\theta,r',\theta')\,S(V(r',\theta',t))\,\frac{r'\,dr'\,d\theta'}{(1-r'^2)^2}\cong h_1h_2\sum_{k=1}^{N+1}\sum_{l=1}^{M+1}W(r,\theta,r_k,\theta_l)\,S(V(r_k,\theta_l,t))\,\frac{r_k}{(1-r_k^2)^2}$$

We end up with the following numerical scheme, where $\tilde V_{i,j}(t)$ (resp. $\tilde I_{i,j}(t)$) is an approximation of $V_{i,j}(t)$ (resp. $I_{i,j}(t)$), $\forall(i,j)\in[[1,N+1]]\times[[1,M+1]]$:

$$\frac{d\tilde V_{i,j}}{dt}(t)=-\alpha\tilde V_{i,j}(t)+h_1h_2\sum_{k=1}^{N+1}\sum_{l=1}^{M+1}W^{i,j}_{k,l}\,S(\tilde V_{k,l})(t)+\tilde I_{i,j}(t)$$

with

$$W^{i,j}_{k,l}\stackrel{\mathrm{def}}{=}W(r_i,\theta_j,r_k,\theta_l)\,\frac{r_k}{(1-r_k^2)^2}.$$
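The scheme above can be sketched in a few lines of code. The following fragment is a minimal illustration, not the paper's implementation: the paper's experiments use Matlab's ode45, whereas here a plain explicit Euler step replaces it, and the grid sizes, the exponential connectivity of section 6.2 and the input profile are all illustrative choices:

```python
import math

alpha, a, mu, b = 0.1, 0.5, 10.0, 0.1       # illustrative parameter values
N, M = 8, 16                                # radial / angular subdivisions
h1, h2 = a / N, 2 * math.pi / M
rs = [i * h1 for i in range(N + 1)]         # r_i = (i-1) h_1, 0-indexed here
ths = [j * h2 for j in range(M + 1)]        # theta_j = (j-1) h_2

def S(x):                                   # sigmoid with slope mu
    return 1.0 / (1.0 + math.exp(-mu * x))

def d2(r1, t1, r2, t2):                     # hyperbolic distance in the Poincare disc
    z = r1 * math.cos(t1) + 1j * r1 * math.sin(t1)
    zp = r2 * math.cos(t2) + 1j * r2 * math.sin(t2)
    return math.atanh(abs(z - zp) / abs(1 - z.conjugate() * zp))

def w(x):                                   # exponential connectivity of section 6.2
    return math.exp(-abs(x) / b)

def I(r1, t1):                              # sharp Gaussian input centred at 0
    return 0.1 * math.exp(-(d2(r1, t1, 0.0, 0.0) / 0.05) ** 2)

# quadrature weights W^{i,j}_{k,l} = W(r_i, th_j, r_k, th_l) r_k / (1 - r_k^2)^2
Wq = [[[[w(d2(rs[i], ths[j], rs[k], ths[l])) * rs[k] / (1 - rs[k] ** 2) ** 2
         for l in range(M + 1)] for k in range(N + 1)]
       for j in range(M + 1)] for i in range(N + 1)]

V = [[0.0] * (M + 1) for _ in range(N + 1)]
dt = 0.5
for _ in range(20):                         # explicit Euler integration of (31)
    SV = [[S(V[k][l]) for l in range(M + 1)] for k in range(N + 1)]
    V = [[V[i][j] + dt * (-alpha * V[i][j] + I(rs[i], ths[j])
          + h1 * h2 * sum(Wq[i][j][k][l] * SV[k][l]
                          for k in range(N + 1) for l in range(M + 1)))
          for j in range(M + 1)] for i in range(N + 1)]
```

With a purely excitatory connectivity and a positive input the discretized activity grows from zero toward a bounded positive state, in qualitative agreement with the constant-input experiments of section 6.2.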

6.1.2 Discussion
We discuss the error induced by the rectangular rule for the quadrature. Let f be a function which is C² on a rectangular domain [a, b] × [c, d]. If we denote by $E_f$ this error, then

$$|E_f|\le\frac{(b-a)^2(d-c)^2}{4mn}\,\|f\|_{C^2}$$

where m and n are the numbers of subintervals used and

$$\|f\|_{C^2}=\sum_{|\alpha|\le 2}\ \sup_{[a,b]\times[c,d]}|\partial^\alpha f|,$$

where, as usual, α is a multi-index. As a consequence, if we want to control the error, we have to impose that the solution is, at least, C² in space.
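As a sanity check of the quadrature itself, one can watch the rectangular-rule error decrease on a smooth test integrand as the grid is refined; the test function and the domain below are illustrative, not taken from the paper:

```python
import math

def rect_rule(f, a, b, c, d, m, n):
    # left-endpoint rectangular quadrature on [a, b] x [c, d] with m x n subintervals
    hx, hy = (b - a) / m, (d - c) / n
    return hx * hy * sum(f(a + i * hx, c + j * hy)
                         for i in range(m) for j in range(n))

# test integrand with a known integral over [0, 1]^2
exact = (1 - math.cos(1.0)) * (math.e - 1)
errors = [abs(rect_rule(lambda x, y: math.sin(x) * math.exp(y), 0, 1, 0, 1, m, m) - exact)
          for m in (10, 20, 40)]
```

The computed errors shrink monotonically as m grows, consistent with the error bound quoted above for C² integrands.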

For our numerical experiments we use the function ode45 of Matlab, which is based on an explicit Runge-Kutta (4,5) formula (see [34] for more details on Runge-Kutta methods). We can also establish a proof of the convergence of the numerical scheme which is exactly the same as in [33], except that we use the theorem of continuous dependence of the solution for ordinary differential equations.

6.2 Purely excitatory exponential connectivity function
In this subsection, we give some numerical solutions of (13) in the case where the connectivity function is an exponential function, $w(x)=e^{-\frac{|x|}{b}}$, with b a positive parameter. Only excitation is present in this case. In all the experiments we set α = 0.1 and

$$S(x)=\frac{1}{1+e^{-\mu x}}\quad\text{with }\mu=10.$$

Constant input. We fix the external input I(z) to be of the form:

$$I(z)=\mathcal{I}\,e^{-\frac{d_2(z,0)^2}{\sigma^2}}$$

In all experiments we set $\mathcal{I}=0.1$ and σ = 0.05; this means that the input has a sharp profile centered at 0.

We show in Figure 2 plots of the solution at time T = 2500 for three different values

of the width b of the exponential function. When b = 1, the whole network is highly

excited, whereas as b changes from 1 to 0.1 the amplitude of the solution decreases,

and the area of high excitation becomes concentrated around the external input.

Variable input. In this paragraph, we allow the external current to depend upon the time variable:

$$I(z,t)=\mathcal{I}\,e^{-\frac{d_2(z,z_0(t))^2}{\sigma^2}}$$

where $z_0(t)=r_0e^{i\Omega_0 t}$. This is a bump rotating with angular velocity $\Omega_0$ around the circle of radius $r_0$ centered at the origin. In our numerical experiments we set $r_0=0.4$, $\Omega_0=0.01$, $\mathcal{I}=0.1$ and σ = 0.05. We plot in Figure 3 the solution at the different times T = 100, 150, 200, 250.


High gain limit. We consider the high gain limit μ → ∞ of the sigmoid function and we propose to illustrate section 5 with a numerical simulation. We set α = 1, κ = 0.04, ω = 0.18. We fix the input to be of the form:

$$I(z)=\mathcal{I}\,e^{-\frac{d_2(z,0)^2}{\sigma^2}}$$

Figure 2: Plots of the solution of equation (13) at T = 2500 for the values μ = 10, α = 0.1 and for decreasing values of the width b of the connectivity, see text.

Figure 3: Plots of the solution of equation (13) in the case of an exponential connectivity function with b = 0.1 at different times with a time-dependent input, see text.


with $\mathcal{I}=0.04$ and σ = 0.05. Then the condition (23) for the existence of a stationary pulse is satisfied, see Figure 1. We plot a bump solution according to (23) in Figure 4.

6.3 Excitatory and inhibitory connectivity function
We give some numerical solutions of (13) in the case where the connectivity function is a difference of Gaussians, which features an excitatory center and an inhibitory surround:

$$w(x)=\frac{1}{\sqrt{2\pi\sigma_1^2}}e^{-\frac{x^2}{2\sigma_1^2}}-\frac{A}{\sqrt{2\pi\sigma_2^2}}e^{-\frac{x^2}{2\sigma_2^2}}\quad\text{with }\sigma_1=0.1,\ \sigma_2=0.2\ \text{and }A=1.$$

We illustrate the behaviour of the solutions when increasing the slope μ of the sigmoid. We set

$$S(x)=\frac{1}{1+e^{-\mu x}}-\frac{1}{2}$$

so that it is equal to 0 at the origin, and we choose the external input equal to zero, I(z, t) = 0. In this case the constant function equal to 0 is a solution of (13).

For small values of the slope μ, the dynamics of the solution is trivial: every solution asymptotically converges to the null solution, as shown in the top left-hand corner of Figure 5 for μ = 1. When μ increases, the stability bound found in subsection 4.5 is no longer satisfied and the null solution may no longer be stable. In effect, this solution may bifurcate to other, more interesting solutions. We plot in Figure 5 some solutions at T = 2500 for different values of μ (μ = 3, 5, 10, 20 and 30). We can see exotic patterns which feature some interesting symmetries. The formal study of these bifurcated solutions is left for future work.

Figure 4: Plot of a bump solution of equation (22) for the values α = 1, κ = 0.04, ω = 0.18 and b = 0.2 for the width of the connectivity, see text.


Figure 5: Plots of the solutions of equation (13) in the case where the connectivity function is the difference of two Gaussians at time T = 2500 for α = 0.1 and for increasing values of the slope μ of the sigmoid, see text.

7 Conclusion
In this paper, we have studied the existence and uniqueness of a solution of the evolution equation for a smooth neural mass model called the structure tensor model. This model is an approach to the representation and processing of textures and edges in the visual area V1 which contains as a special case the well-known ring model of orientations (see [1,2,19]). We have also given a rigorous functional framework for the study and computation of the stationary solutions to this nonlinear integro-differential equation. This work sets the basis for further studies beyond the spatially periodic case studied in [15], where the hypothesis of spatial periodicity allows one to replace the unbounded (hyperbolic) domain by a compact one, hence making the functional analysis much simpler.

We have completed our study by constructing and analyzing spatially localised bumps in the high-gain limit of the sigmoid function. It is true that networks with Heaviside nonlinearities are not very realistic from the neurobiological perspective and lead to difficult mathematical considerations. However, taking the high-gain limit is instructive since it allows the explicit construction of stationary solutions, which is impossible with sigmoidal nonlinearities. We have constructed what we called a hyperbolic radially symmetric stationary pulse and presented a linear stability analysis adapted from [31]. The study of stationary solutions is very important as it conveys information for models of V1 that is likely to be biologically relevant. Moreover, our study has to be thought of as the analog, in the case of the structure tensor model, of the analysis of the tuning curves of the ring model of orientations (see [1,2,19,35]). However, these solutions may be destabilized by adding lateral spatial connections in a spatially organized network of structure tensor models; this remains an area of future investigation. As far as we know, only Bressloff and coworkers have looked at this problem (see [3,11-14,4]).

Finally, we illustrated our theoretical results with numerical simulations based on rigorously defined numerical schemes. We hope that our numerical experiments will lead to new and exciting investigations such as a thorough study of the bifurcations of the solutions of our equations with respect to such parameters as the slope of the sigmoid and the width of the connectivity function.

A Isometries of D
We briefly describe the isometries of D, i.e. the transformations that preserve the distance $d_2$. We refer to the classical textbooks in hyperbolic geometry for details, e.g. [17]. The direct isometries (preserving the orientation) in D are the elements of the special unitary group, noted SU(1, 1), of 2 × 2 complex matrices with determinant equal to 1 that preserve the Hermitian form $|z_1|^2-|z_2|^2$. Given:

$$\gamma=\begin{pmatrix}\alpha&\beta\\ \bar\beta&\bar\alpha\end{pmatrix}\quad\text{such that }|\alpha|^2-|\beta|^2=1,$$

an element of SU(1, 1), the corresponding isometry γ in D is defined by:

$$\gamma\cdot z=\frac{\alpha z+\beta}{\bar\beta z+\bar\alpha},\quad z\in\mathbb{D} \tag{32}$$

Orientation-reversing isometries of D are obtained by composing any transformation (32) with the reflection $\kappa:z\to\bar z$. The full symmetry group of the Poincaré disc is therefore:

$$U(1,1)=SU(1,1)\cup\kappa\cdot SU(1,1)$$

Let us now describe the different kinds of direct isometries acting in D. We first define the following one-parameter subgroups of SU(1, 1):

$$K\stackrel{\mathrm{def}}{=}\left\{\mathrm{rot}_\varphi=\begin{pmatrix}e^{i\frac{\varphi}{2}}&0\\0&e^{-i\frac{\varphi}{2}}\end{pmatrix},\ \varphi\in S^1\right\},\quad A\stackrel{\mathrm{def}}{=}\left\{a_r=\begin{pmatrix}\cosh r&\sinh r\\ \sinh r&\cosh r\end{pmatrix},\ r\in\mathbb{R}\right\},\quad N\stackrel{\mathrm{def}}{=}\left\{n_s=\begin{pmatrix}1+is&-is\\ is&1-is\end{pmatrix},\ s\in\mathbb{R}\right\}$$

Note that $\mathrm{rot}_\varphi\cdot z=e^{i\varphi}z$ and $a_r\cdot O=\tanh r$, with O being the center of the Poincaré disk, that is the point represented by z = 0. The group K is the orthogonal group O(2). Its orbits are concentric circles. It is possible to express each point $z\in\mathbb{D}$ in hyperbolic polar coordinates: $z=\mathrm{rot}_\varphi\,a_r\cdot O=\tanh(r)e^{i\varphi}$ with $r=d_2(z,0)$.

The orbits of A converge, when r → ±∞, to the two limit points $b_{\pm1}=\pm1$ of the unit circle ∂D. They are circular arcs in D going through the points $b_1$ and $b_{-1}$. The orbits of N are the circles inside D tangent to the unit circle at $b_1$. These circles are called horocycles with base point $b_1$; N is called the horocyclic group. It is also possible to express each point $z\in\mathbb{D}$ in horocyclic coordinates: $z=n_s a_r\cdot O$, where $n_s$ are the transformations associated with the group N (s ∈ ℝ) and $a_r$ the transformations associated with the subgroup A (r ∈ ℝ).

Iwasawa decomposition. The following decomposition holds, see [36]:

$$SU(1,1)=KAN$$

This theorem allows us to decompose any isometry of D as the product of at most three elements in the groups K, A and N.
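These actions are easy to experiment with numerically. The following sketch represents an element of SU(1, 1) by the pair (α, β), implements the Möbius action (32), and lets one check that the action preserves the hyperbolic distance and that $\mathrm{rot}_\varphi\cdot z=e^{i\varphi}z$ and $a_r\cdot O=\tanh r$; all sample values are arbitrary:

```python
import cmath
import math

def d2(z, zp):
    # hyperbolic distance on the Poincare disc
    return math.atanh(abs(z - zp) / abs(1 - z.conjugate() * zp))

def act(alpha, beta, z):
    # Mobius action (32) of gamma = [[alpha, beta], [conj(beta), conj(alpha)]]
    return (alpha * z + beta) / (beta.conjugate() * z + alpha.conjugate())

def rot(phi):                      # element of the subgroup K
    return (cmath.exp(1j * phi / 2), 0j)

def a_r(r):                        # element of the subgroup A
    return (complex(math.cosh(r)), complex(math.sinh(r)))
```

For instance, `act(*a_r(0.5), 0j)` sends the center O to tanh(0.5), and composing `rot` and `a_r` pairs reproduces the hyperbolic polar coordinates above.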

B Volume element in structure tensor space
Let $\mathcal{T}$ be a structure tensor

$$\mathcal{T}=\begin{bmatrix}x_1&x_3\\x_3&x_2\end{bmatrix},$$

with determinant $\Delta^2$, Δ ≥ 0. $\mathcal{T}$ can be written

$$\mathcal{T}=\Delta\,T,$$

where T has determinant 1. Let $z=z_1+iz_2$ be the complex number representation of T in the Poincaré disk D. In this part of the appendix, we present a simple form for the volume element in the full structure tensor space, when parametrized as (Δ, z).

Proposition B.0.1. The volume element in (Δ, z_1, z_2) coordinates is

$$dV=8\sqrt{2}\,\frac{d\Delta}{\Delta}\,\frac{dz_1\,dz_2}{(1-|z|^2)^2} \tag{33}$$

Proof. In order to compute the volume element in (Δ, z_1, z_2) space, we need to express the metric $g_{\mathcal{T}}$ in these coordinates. This is obtained from the inner product in the tangent space $T_{\mathcal{T}}$ at point $\mathcal{T}$ of SDP(2). The tangent space is the set S(2) of symmetric matrices and the inner product is defined by:

$$g_{\mathcal{T}}(A,B)=\mathrm{tr}(\mathcal{T}^{-1}A\mathcal{T}^{-1}B),\quad A,B\in S(2).$$


We note that $g_{\mathcal{T}}(A,B)=g_T(A,B)/\Delta^2$. We write g instead of $g_T$. A basis of $T_{\mathcal{T}}$ (or $T_T$ for that matter) is given by:

$$\frac{\partial}{\partial x_1}=\begin{bmatrix}1&0\\0&0\end{bmatrix},\quad\frac{\partial}{\partial x_2}=\begin{bmatrix}0&0\\0&1\end{bmatrix},\quad\frac{\partial}{\partial x_3}=\begin{bmatrix}0&1\\1&0\end{bmatrix},$$

and the metric is given by:

$$g_{ij}=g_{\mathcal{T}}\Big(\frac{\partial}{\partial x_i},\frac{\partial}{\partial x_j}\Big),\quad i,j=1,2,3.$$

The determinant $G_{\mathcal{T}}$ of $g_{\mathcal{T}}$ is equal to $G/\Delta^6$, where G is the determinant of $g=g_T$. G is found to be equal to 2. The volume element is thus:

$$dV=\frac{\sqrt{2}}{\Delta^3}\,dx_1\,dx_2\,dx_3$$

We then use the relations:

$$x_1=\Delta\bar x_1,\quad x_2=\Delta\bar x_2,\quad x_3=\Delta\bar x_3,$$

where $\bar x_i$, i = 1, 2, 3, is given by:

$$\begin{cases}\bar x_1=\dfrac{(1+z_1)^2+z_2^2}{1-z_1^2-z_2^2}\\[3mm]\bar x_2=\dfrac{(1-z_1)^2+z_2^2}{1-z_1^2-z_2^2}\\[3mm]\bar x_3=\dfrac{2z_2}{1-z_1^2-z_2^2}\end{cases}$$

The determinant of the Jacobian of the transformation $(x_1,x_2,x_3)\to(\Delta,z_1,z_2)$ is found to be equal to:

$$-\frac{8\Delta^2}{(1-|z|^2)^2}$$

Hence, the volume element in (Δ, z_1, z_2) coordinates is

$$dV=8\sqrt{2}\,\frac{d\Delta}{\Delta}\,\frac{dz_1\,dz_2}{(1-|z|^2)^2}.\ \square$$
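The Jacobian determinant used in this proof can be checked by finite differences. The sketch below differentiates the map (Δ, z₁, z₂) → (x₁, x₂, x₃) built from the relations above and compares |det J| with $8\Delta^2/(1-|z|^2)^2$ at an arbitrary sample point:

```python
def x_of(delta, z1, z2):
    # the relations x_i = Delta * xbar_i(z1, z2) from the proof
    d = 1.0 - z1 * z1 - z2 * z2
    return (delta * ((1 + z1) ** 2 + z2 ** 2) / d,
            delta * ((1 - z1) ** 2 + z2 ** 2) / d,
            delta * 2 * z2 / d)

def jac_det(delta, z1, z2, h=1e-5):
    # central finite-difference Jacobian determinant of (Delta, z1, z2) -> (x1, x2, x3)
    p = [delta, z1, z2]
    cols = []
    for k in range(3):
        pp, pm = list(p), list(p)
        pp[k] += h
        pm[k] -= h
        fp, fm = x_of(*pp), x_of(*pm)
        cols.append([(fp[i] - fm[i]) / (2 * h) for i in range(3)])
    m = [[cols[j][i] for j in range(3)] for i in range(3)]
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
```

The numerical determinant is negative and matches the closed form in absolute value, as the proposition asserts.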

C Global existence of solutions
Theorem C.0.1. Let $\mathcal{O}$ be an open connected set of a real Banach space $\mathcal{F}$ and J be an open interval of ℝ. We consider the initial value problem:

$$\begin{cases}V'(t)=f(t,V(t))\\ V(t_0)=V_0\end{cases} \tag{34}$$

We suppose that $f\in C(J\times\mathcal{O},\mathcal{F})$ and is locally Lipschitz with respect to its second argument. Then for all $(t_0,V_0)\in J\times\mathcal{O}$, there exist τ > 0 and a unique solution $V\in C^1(]t_0-\tau,t_0+\tau[,\mathcal{O})$ of (34).


Lemma C.0.1. Under the hypotheses of theorem C.0.1, if $V_1\in C^1(J_1,\mathcal{O})$ and $V_2\in C^1(J_2,\mathcal{O})$ are two solutions and if there exists $t_0\in J_1\cap J_2$ such that $V_1(t_0)=V_2(t_0)$, then:

$$V_1(t)=V_2(t)\quad\text{for all }t\in J_1\cap J_2$$

This lemma shows the existence of a larger interval $J_0$ on which the initial value problem (34) has a unique solution. This solution is called the maximal solution.

Theorem C.0.2. Under the hypotheses of theorem C.0.1, let $V\in C^1(J_0,\mathcal{O})$ be a maximal solution. We denote by b the upper bound of J and by β the upper bound of $J_0$. Then either β = b, or for every compact set $K\subset\mathcal{O}$ there exists η < β such that:

$$V(t)\in\mathcal{O}\setminus K\quad\text{for all }t\ge\eta\text{ with }t\in J_0.$$

We have the same result with the lower bounds.

Theorem C.0.3. We suppose that $f\in C(J\times\mathcal{F},\mathcal{F})$ and is globally Lipschitz with respect to its second argument. Then for all $(t_0,V_0)\in J\times\mathcal{F}$, there exists a unique solution $V\in C^1(J,\mathcal{F})$ of (34).

D Proof of lemma 3.1.1
Lemma D.0.2. When W is only a function of $d_0(\mathcal{T},\mathcal{T}')$, then $\overline{W}$ does not depend upon the variable $\mathcal{T}$.

Proof. We work in (z, Δ) coordinates and we begin by rewriting the double integral (6) for all (z, Δ) with $z\in\mathbb{D}$ and $\Delta\in\mathbb{R}^*_+$:

$$\overline{W}(z,\Delta,t)=\int_0^{+\infty}\int_{\mathbb{D}}W\Big(\sqrt{2(\log\Delta-\log\Delta')^2+d_2^2(z,z')},\,t\Big)\,\frac{d\Delta'}{\Delta'}\,\frac{dz_1'\,dz_2'}{(1-|z'|^2)^2}$$

The change of variable $\Delta'\to\frac{\Delta'}{\Delta}$ yields:

$$\overline{W}(z,\Delta,t)=\int_0^{+\infty}\int_{\mathbb{D}}W\Big(\sqrt{2(\log\Delta')^2+d_2^2(z,z')},\,t\Big)\,\frac{d\Delta'}{\Delta'}\,\frac{dz_1'\,dz_2'}{(1-|z'|^2)^2},$$

which establishes that $\overline{W}$ does not depend upon the variable Δ. To finish the proof, we show that the following integral does not depend upon the variable $z\in\mathbb{D}$:

$$\Xi(z)=\int_{\mathbb{D}}f(d_2(z,z'))\,\frac{dz_1'\,dz_2'}{(1-|z'|^2)^2} \tag{35}$$

where f is a real-valued function such that Ξ(z) is well defined.

We express z in horocyclic coordinates: $z=n_s a_r\cdot O$ (see appendix A), and (35) becomes:

$$\Xi(z)=\int_{\mathbb{R}}\int_{\mathbb{R}}f(d_2(n_s a_r\cdot O,\,n_{s'}a_{r'}\cdot O))\,e^{-2r'}\,ds'\,dr'=\int_{\mathbb{R}}\int_{\mathbb{R}}f(d_2(n_{s-s'}a_r\cdot O,\,a_{r'}\cdot O))\,e^{-2r'}\,ds'\,dr'$$

With the change of variable $s-s'=-xe^{2r}$, this becomes:

$$\Xi(z)=\int_{\mathbb{R}}\int_{\mathbb{R}}f(d_2(n_{-xe^{2r}}a_r\cdot O,\,a_{r'}\cdot O))\,e^{-2(r'-r)}\,dx\,dr'$$


The relation $n_{-xe^{2r}}a_r\cdot O=a_r n_{-x}\cdot O$ (proved e.g. in [22]) yields:

$$\Xi(z)=\int_{\mathbb{R}}\int_{\mathbb{R}}f(d_2(a_r n_{-x}\cdot O,\,a_{r'}\cdot O))\,e^{-2(r'-r)}\,dx\,dr'=\int_{\mathbb{R}}\int_{\mathbb{R}}f(d_2(O,\,n_x a_{r'-r}\cdot O))\,e^{-2(r'-r)}\,dx\,dr'$$

$$=\int_{\mathbb{R}}\int_{\mathbb{R}}f(d_2(O,\,n_x a_y\cdot O))\,e^{-2y}\,dx\,dy=\int_{\mathbb{D}}f(d_2(O,z'))\,dm(z')$$

with $z'=z_1'+iz_2'$ and $dm(z')=\frac{dz_1'\,dz_2'}{(1-|z'|^2)^2}$, which shows that Ξ(z) does not depend upon the variable z, as announced. □
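The invariance of Ξ can also be illustrated numerically: with a rapidly decaying f, the integral (35) written in hyperbolic polar coordinates gives the same value for different base points z. The choice f(u) = e^{−u²} and all grid sizes below are illustrative:

```python
import cmath
import math

def d2(z, zp):
    # hyperbolic distance on the Poincare disc
    return math.atanh(abs(z - zp) / abs(1 - z.conjugate() * zp))

def f(u):                         # rapidly decaying test function
    return math.exp(-u * u)

def xi(z, X=6.0, nx=800, nphi=128):
    # Xi(z) = int_0^inf int_0^{2pi} f(d2(z, tanh(x) e^{i phi})) (1/2) sinh(2x) dx dphi,
    # obtained from (35) with the substitution |z'| = tanh(x); midpoint rule
    hx, hp = X / nx, 2 * math.pi / nphi
    total = 0.0
    for k in range(nx):
        x = (k + 0.5) * hx
        rho = math.tanh(x)
        wgt = 0.5 * math.sinh(2 * x) * hx * hp
        for l in range(nphi):
            zp = rho * cmath.exp(1j * (l + 0.5) * hp)
            total += f(d2(z, zp)) * wgt
    return total
```

Evaluating `xi` at the origin and at an off-center point returns the same value up to quadrature error, in agreement with the lemma.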

E Proof of lemma 3.1.2
In this section we prove the following lemma.

Lemma E.0.3. When W is the following Mexican hat function:

$$W(z,\Delta,z',\Delta')=w\Big(\sqrt{2(\log\Delta-\log\Delta')^2+d_2^2(z,z')}\Big)$$

where:

$$w(x)=\frac{1}{\sqrt{2\pi\sigma_1^2}}e^{-\frac{x^2}{2\sigma_1^2}}-\frac{A}{\sqrt{2\pi\sigma_2^2}}e^{-\frac{x^2}{2\sigma_2^2}}$$

with $0\le\sigma_1\le\sigma_2$ and $0\le A\le 1$, then:

$$\overline{W}=\frac{\pi^{\frac{3}{2}}}{2}\Big(\sigma_1 e^{2\sigma_1^2}\,\mathrm{erf}(\sqrt{2}\sigma_1)-A\,\sigma_2 e^{2\sigma_2^2}\,\mathrm{erf}(\sqrt{2}\sigma_2)\Big)$$

where erf is the error function defined as:

$$\mathrm{erf}(x)=\frac{2}{\sqrt{\pi}}\int_0^x e^{-u^2}\,du$$

Proof. We consider the following double integrals:

$$\Xi_i=\int_0^{+\infty}\int_{\mathbb{D}}\frac{1}{\sqrt{2\pi\sigma_i^2}}\,e^{-\frac{(\log\Delta-\log\Delta')^2}{\sigma_i^2}}\,e^{-\frac{d_2^2(z,z')}{2\sigma_i^2}}\,\frac{d\Delta'}{\Delta'}\,\frac{dz_1'\,dz_2'}{(1-|z'|^2)^2},\quad i=1,2, \tag{36}$$

so that:

$$\overline{W}=\Xi_1-A\,\Xi_2$$

Since the variables are separable, we have:

$$\Xi_i=\left(\int_0^{+\infty}\frac{1}{\sqrt{2\pi\sigma_i^2}}\,e^{-\frac{(\log\Delta-\log\Delta')^2}{\sigma_i^2}}\,\frac{d\Delta'}{\Delta'}\right)\left(\int_{\mathbb{D}}e^{-\frac{d_2^2(z,z')}{2\sigma_i^2}}\,\frac{dz_1'\,dz_2'}{(1-|z'|^2)^2}\right)$$


One can easily see that:

$$\int_0^{+\infty}\frac{1}{\sqrt{2\pi\sigma_i^2}}\,e^{-\frac{(\log\Delta-\log\Delta')^2}{\sigma_i^2}}\,\frac{d\Delta'}{\Delta'}=\frac{1}{\sqrt{2}}$$

We now give a simplified expression for $\Xi_i$. We set $f_i(x)=e^{-\frac{x^2}{2\sigma_i^2}}$ and then we have, because of lemma 3.1.1:

$$\Xi_i=\frac{1}{\sqrt{2}}\int_{\mathbb{D}}f_i(d_2(O,z'))\,dm(z')=\frac{1}{\sqrt{2}}\int_{\mathbb{D}}f_i(\mathrm{arctanh}(|z'|))\,dm(z')$$

$$=\frac{1}{\sqrt{2}}\int_0^1\int_0^{2\pi}f_i(\mathrm{arctanh}(r))\,\frac{r\,dr\,d\theta}{(1-r^2)^2}=\sqrt{2}\,\pi\int_0^1 f_i(\mathrm{arctanh}(r))\,\frac{r\,dr}{(1-r^2)^2}=\sqrt{2}\,\pi\int_0^1 e^{-\frac{\mathrm{arctanh}^2(r)}{2\sigma_i^2}}\,\frac{r\,dr}{(1-r^2)^2}$$

The change of variable $x=\mathrm{arctanh}(r)$, for which $dx=\frac{dr}{1-r^2}$, yields:

$$\Xi_i=\sqrt{2}\,\pi\int_0^{+\infty}e^{-\frac{x^2}{2\sigma_i^2}}\,\frac{\tanh(x)}{1-\tanh^2(x)}\,dx=\sqrt{2}\,\pi\int_0^{+\infty}e^{-\frac{x^2}{2\sigma_i^2}}\sinh(x)\cosh(x)\,dx$$

$$=\frac{\pi}{\sqrt{2}}\int_0^{+\infty}e^{-\frac{x^2}{2\sigma_i^2}}\sinh(2x)\,dx=\frac{\pi}{2\sqrt{2}}\left(\int_0^{+\infty}e^{-\frac{x^2}{2\sigma_i^2}+2x}\,dx-\int_0^{+\infty}e^{-\frac{x^2}{2\sigma_i^2}-2x}\,dx\right)$$

$$=\frac{\pi}{2\sqrt{2}}\,e^{2\sigma_i^2}\left(\int_0^{+\infty}e^{-\frac{(x-2\sigma_i^2)^2}{2\sigma_i^2}}\,dx-\int_0^{+\infty}e^{-\frac{(x+2\sigma_i^2)^2}{2\sigma_i^2}}\,dx\right)=\frac{\pi}{2}\,\sigma_i\,e^{2\sigma_i^2}\left(\int_{-\sqrt{2}\sigma_i}^{+\infty}e^{-u^2}\,du-\int_{\sqrt{2}\sigma_i}^{+\infty}e^{-u^2}\,du\right)$$

$$=\frac{\pi}{2}\,\sigma_i\,e^{2\sigma_i^2}\int_{-\sqrt{2}\sigma_i}^{\sqrt{2}\sigma_i}e^{-u^2}\,du,$$

and then we have a simplified expression for $\Xi_i$:

$$\Xi_i=\frac{\pi^{\frac{3}{2}}}{2}\,\sigma_i\,e^{2\sigma_i^2}\,\mathrm{erf}(\sqrt{2}\sigma_i).\ \square$$

F Proof of Lemma 5.2.2

Lemma F.0.4. For all $\omega > 0$ the following formula holds:
\[
\int_{B_h(0,\omega)} \Phi_\lambda(z)\, dm(z) = \pi \sinh^2(\omega) \cosh^2(\omega)\, \Phi_\lambda^{(1,1)}(\omega).
\]
Proof. We write $z$ in hyperbolic polar coordinates, $z = \tanh(r)\, e^{i\theta}$ (see Appendix A). We have:
\[
\int_{B_h(0,\omega)} \Phi_\lambda(z)\, dm(z) = \frac{1}{2} \int_0^\omega \int_0^{2\pi} \Phi_\lambda(\tanh(r)\, e^{i\theta}) \sinh(2r)\, dr\, d\theta.
\]


Because of the above definition of $\Phi_\lambda$, this reduces to:
\[
\pi \int_0^\omega \Phi_\lambda(\tanh(r)) \sinh(2r)\, dr.
\]
In [22] Helgason proved that:
\[
\Phi_\lambda(\tanh(r)) = F\left(\nu, 1 - \nu; 1; -\sinh^2(r)\right)
\]
with $\nu = \frac{1}{2}(1 + i\lambda)$. We then use the formula obtained by Erdelyi in [32]:
\[
F(\nu, 1 - \nu; 1; z) = \frac{d}{dz}\bigl( z\, F(\nu, 1 - \nu; 2; z) \bigr).
\]
Using some simple hyperbolic trigonometry formulae we obtain:
\[
\sinh(2r)\, F\left(\nu, 1 - \nu; 1; -\sinh^2(r)\right) = \frac{d}{dr}\bigl( \sinh^2(r)\, F\left(\nu, 1 - \nu; 2; -\sinh^2(r)\right) \bigr),
\]
from which we deduce:
\[
\int_{B_h(0,\omega)} \Phi_\lambda(z)\, dm(z) = \pi \sinh^2(\omega)\, F\left(\nu, 1 - \nu; 2; -\sinh^2(\omega)\right).
\]
Finally we use the equality shown in [32]:
\[
F(a, b; c; z) = (1 - z)^{c - a - b} F(c - a, c - b; c; z).
\]
In our case we have $a = \nu$, $b = 1 - \nu$, $c = 2$ and $z = -\sinh^2(\omega)$, so that $c - a - b = 1$, $(1 - z)^{c - a - b} = \cosh^2(\omega)$, $2 - \nu = \frac{1}{2}(3 - i\lambda)$ and $1 + \nu = \frac{1}{2}(3 + i\lambda)$. We obtain:
\[
\int_{B_h(0,\omega)} \Phi_\lambda(z)\, dm(z) = \pi \sinh^2(\omega) \cosh^2(\omega)\, F\left(\tfrac{1}{2}(3 - i\lambda), \tfrac{1}{2}(3 + i\lambda); 2; -\sinh^2(\omega)\right).
\]
Since hypergeometric functions are symmetric with respect to their first two parameters, $F(a, b; c; z) = F(b, a; c; z)$, we write:
\[
F\left(\tfrac{1}{2}(3 - i\lambda), \tfrac{1}{2}(3 + i\lambda); 2; -\sinh^2(\omega)\right) = F\left(\tfrac{1}{2}(3 + i\lambda), \tfrac{1}{2}(3 - i\lambda); 2; -\sinh^2(\omega)\right) = \Phi_\lambda^{(1,1)}(\omega),
\]
which yields the announced formula:
\[
\int_{B_h(0,\omega)} \Phi_\lambda(z)\, dm(z) = \pi \sinh^2(\omega) \cosh^2(\omega)\, \Phi_\lambda^{(1,1)}(\omega).
\]
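The chain of hypergeometric identities above can be verified numerically. The sketch below (ours, not from the paper) uses mpmath, whose `hyp2f1` accepts the complex parameters $\nu = \frac{1}{2}(1 + i\lambda)$ and arguments outside the unit disc, and compares the radial integral against the closed form:

```python
import mpmath as mp

def lhs(lam, omega):
    # pi * int_0^omega F(nu, 1-nu; 1; -sinh^2 r) sinh(2r) dr,
    # with Phi_lambda(tanh r) = F(nu, 1-nu; 1; -sinh^2 r)  (Helgason [22])
    nu = 0.5 * (1 + 1j * lam)
    f = lambda r: mp.hyp2f1(nu, 1 - nu, 1, -mp.sinh(r)**2) * mp.sinh(2 * r)
    return mp.pi * mp.quad(f, [0, omega])

def rhs(lam, omega):
    # pi sinh^2(omega) cosh^2(omega) F((3-i lam)/2, (3+i lam)/2; 2; -sinh^2 omega)
    a, b = 0.5 * (3 - 1j * lam), 0.5 * (3 + 1j * lam)
    F = mp.hyp2f1(a, b, 2, -mp.sinh(omega)**2)
    return mp.pi * mp.sinh(omega)**2 * mp.cosh(omega)**2 * F

for lam, omega in [(0.5, 1.0), (2.0, 0.8)]:
    assert abs(lhs(lam, omega) - rhs(lam, omega)) < 1e-8
```

Since the first two parameters of `hyp2f1` are complex conjugates and the argument is real, both sides are real up to round-off, consistent with $\Phi_\lambda^{(1,1)}(\omega)$ being real-valued.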

Acknowledgements
This work was partially funded by the ERC advanced grant NerVi number 227747.

Author details
1 NeuroMathComp Laboratory, INRIA, Sophia Antipolis, CNRS, ENS Paris, France. 2 Dept. of Mathematics, University of Nice Sophia-Antipolis, JAD Laboratory and CNRS, Parc Valrose, 06108 Nice Cedex 02, France.

Competing interests
The authors declare that they have no competing interests.

Received: 7 January 2011 Accepted: 6 June 2011 Published: 6 June 2011


References
1. Ben-Yishai R, Bar-Or R, Sompolinsky H: "Theory of orientation tuning in visual cortex". Proceedings of the National Academy of Sciences 1995, 92(9):3844-3848.
2. Hansel D, Sompolinsky H: "Modeling feature selectivity in local cortical circuits". Methods of Neuronal Modeling 1997, 499-567.
3. Bressloff P, Bressloff N, Cowan J: "Dynamical mechanism for sharp orientation tuning in an integrate-and-fire model of a cortical hypercolumn". Neural Computation 2000, 12(11):2473-2511.
4. Bressloff PC, Cowan JD: "A spherical model for orientation and spatial frequency tuning in a cortical hypercolumn". Philosophical Transactions of the Royal Society B 2003.
5. Orban G, Kennedy H, Bullier J: "Velocity sensitivity and direction selectivity of neurons in areas V1 and V2 of the monkey: influence of eccentricity". Journal of Neurophysiology 1986, 56:462-480.
6. Hubel D, Wiesel T: "Receptive fields and functional architecture of monkey striate cortex". The Journal of Physiology 1968, 195(1):215.
7. Chossat P, Faugeras O: "Hyperbolic planforms in relation to visual edges and textures perception". PLoS Comput Biol 2009, 5:e1000625.
8. Bigun J, Granlund G: "Optimal orientation detection of linear symmetry". Proc First Int'l Conf Comput Vision, IEEE Computer Society Press 1987, 433-438.
9. Knutsson H: "Representing local structure using tensors". Scandinavian Conference on Image Analysis 1989, 244-251.
10. Wilson H, Cowan J: "Excitatory and inhibitory interactions in localized populations of model neurons". Biophys J 1972, 12:1-24.
11. Bressloff P, Cowan J, Golubitsky M, Thomas P, Wiener M: "Geometric visual hallucinations, Euclidean symmetry and the functional architecture of striate cortex". Phil Trans R Soc Lond B 2001, 306:299-330.
12. Bressloff P, Cowan J, Golubitsky M, Thomas P, Wiener M: "What geometric visual hallucinations tell us about the visual cortex". Neural Computation 2002, 14(3):473-491.
13. Bressloff P, Cowan J: "The visual cortex as a crystal". Physica D: Nonlinear Phenomena 2002, 173:226-258.
14. Bressloff P, Cowan J: "SO(3) symmetry breaking mechanism for orientation and spatial frequency tuning in the visual cortex". Phys Rev Lett 2002, 88.
15. Chossat P, Faye G, Faugeras O: "Bifurcation of hyperbolic planforms". Journal of Nonlinear Science, JNLS-D-10-00105R1 2010.
16. Bonnabel S, Sepulchre R: "Riemannian metric and geometric mean for positive semidefinite matrices of fixed rank". SIAM Journal on Matrix Analysis and Applications 2009, 31:1055-1070.
17. Katok S: Fuchsian Groups. Chicago Lectures in Mathematics, The University of Chicago Press; 1992.
18. Moakher M: "A differential geometric approach to the geometric mean of symmetric positive-definite matrices". SIAM J Matrix Anal Appl 2005, 26:735-747.
19. Shriki O, Hansel D, Sompolinsky H: "Rate models for conductance-based cortical neuronal networks". Neural Computation 2003, 15(8):1809-1841.
20. Amari S-I: "Dynamics of pattern formation in lateral-inhibition type neural fields". Biological Cybernetics 1977, 27:77-87.
21. Potthast R, Graben PB: "Existence and properties of solutions for neural field equations". Mathematical Methods in the Applied Sciences 2010, 33:935-949.
22. Helgason S: Groups and Geometric Analysis. Mathematical Surveys and Monographs, Volume 83. American Mathematical Society; 2000.
23. Faugeras O, Veltz R, Grimbert F: "Persistent neural states: stationary localized activity patterns in nonlinear continuous n-population, q-dimensional neural networks". Neural Computation 2009, 21(1):147-187.
24. Camperi M, Wang X: "A model of visuospatial working memory in prefrontal cortex: Recurrent network and cellular bistability". Journal of Computational Neuroscience 1998, 5:383-405.
25. Pinto D, Ermentrout G: "Spatially structured activity in synaptically coupled neuronal networks: 1. Traveling fronts and pulses". SIAM J of Appl Math 2001, 62:206-225.
26. Laing C, Troy W, Gutkin B, Ermentrout G: "Multiple bumps in a neuronal model of working memory". SIAM J Appl Math 2002, 63(1):62-97.
27. Laing CR, Troy WC: "PDE methods for nonlocal models". SIAM Journal on Applied Dynamical Systems 2003, 2(3):487-516.
28. Laing C, Troy W: "Two-bump solutions of Amari-type models of neuronal pattern formation". Physica D 2003, 178:190-218.
29. Owen M, Laing C, Coombes S: "Bumps and rings in a two-dimensional neural field: splitting and rotational instabilities". New Journal of Physics 2007, 9(10):378-401.
30. Coombes S: "Waves, bumps, and patterns in neural field theories". Biological Cybernetics 2005, 93(2):91-108.
31. Folias SE, Bressloff PC: "Breathing pulses in an excitatory neural network". SIAM Journal on Applied Dynamical Systems 2004, 3(3):378-407.
32. Erdelyi A: Higher Transcendental Functions. Volume 1. Robert E. Krieger Publishing Company; 1985.
33. Faye G, Faugeras O: "Some theoretical and numerical results for delayed neural field equations". Physica D 2010, Special issue on Mathematical Neuroscience.
34. Bellen A, Zennaro M: Numerical Methods for Delay Differential Equations. Oxford University Press; 2003.
35. Veltz R, Faugeras O: "Local/global analysis of the stationary solutions of some neural field equations". SIAM Journal on Applied Dynamical Systems 2010, 9:954-998.
36. Iwaniec H: Spectral Methods of Automorphic Forms. AMS Graduate Series in Mathematics, Volume 53. AMS Bookstore; 2002.

doi:10.1186/2190-8567-1-4
Cite this article as: Faye et al.: Analysis of a hyperbolic geometric model for visual texture perception. The Journal of Mathematical Neuroscience 2011, 1:4.