
Network tuning by genetic algorithms

Hakon Enger, Tom Tetzlaff, and Gaute T. Einevoll

Dept. of Mathematical Sciences and Technology (IMT), Norwegian University of Life Sciences (UMB), Ås, Norway

Introduction

◮ Neuronal network parameters:

– network architecture
– dynamics of synapses and single cells

◮ Biologically realistic systems:

– high-dimensional parameter space
– constrained only to some extent

◮ Genetic algorithms provide a potential solution to this problem

◮ Questions:

– To what extent can network parameters be determined by fitting the population statistics of neural activity?
– What is a good fit strategy?
– What is the precision of the fit result?
– How important are individual parameters for the functional performance of the model?

Balanced Random Network Model

[Network schematic: an external population X drives the excitatory (E) and inhibitory (I) populations; excitatory synapses carry weight J, inhibitory synapses weight −gJ; η sets the external drive; both E and I neurons share threshold θ and membrane time constant τm.]

A Network model: Multi-population random network with fixed in-degrees (Brunel, 2000)

Populations:
– E (excitatory): LIF neurons (see B), size NE
– I (inhibitory): LIF neurons (see B), size NI
– X (external): Poisson point processes with rate ηνθ, size KE(NE + NI)

Connectivity:
– EE, IE: random convergent KE → 1, excitatory synapses
– EI, II: random convergent KI → 1, inhibitory synapses (see C)
– EX, IX: non-overlapping KE → 1, excitatory synapses (see C)

Parameters: population sizes N{E,I}, in-degrees K{E,I} = εN{E,I}, connectivity ε, relative external drive η

B Neuron model: Leaky integrate-and-fire (LIF) neuron (Lapicque, 1907; Tuckwell, 1988)

Spike emission: neuron k ∈ [1, NE + NI] fires at all times {t_jk | Vk(t_jk) = θ, j ∈ ℕ}

Subthreshold dynamics: τm dVk/dt = −Vk + R·Ik(t) if ∀j: t ∉ (t_jk, t_jk + τref],
with total synaptic input current Ik(t) = Σ_l Σ_j i_kl(t − t_jl) (see C)

Reset + refractoriness: Vk(t) = Vreset if ∃j: t ∈ (t_jk, t_jk + τref]

Parameters: membrane time constant τm, membrane resistance R, spike threshold θ, reset potential Vreset, refractory period τref
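The subthreshold dynamics, threshold, reset, and refractory rules above can be sketched with a simple forward-Euler integrator (an illustrative sketch, not the poster's NEST code; parameter values are placeholders):

```python
import numpy as np

def simulate_lif(I, dt=0.1, tau_m=20.0, R=1.0, theta=20.0,
                 V_reset=0.0, tau_ref=2.0):
    """Forward-Euler integration of a single LIF neuron.

    I: input current, one value per time step of length dt (ms).
    Returns the membrane trace V and a list of spike times (ms).
    Parameter names follow the poster (tau_m, R, theta, V_reset, tau_ref);
    the numerical values are illustrative only.
    """
    V = np.zeros(len(I))
    spikes = []
    ref_until = -np.inf              # end of the current refractory period
    for j in range(1, len(I)):
        t = j * dt
        if t <= ref_until:
            V[j] = V_reset           # clamp to reset during refractoriness
            continue
        # tau_m dV/dt = -V + R*I(t)
        V[j] = V[j - 1] + dt / tau_m * (-V[j - 1] + R * I[j])
        if V[j] >= theta:            # threshold crossing -> emit spike
            spikes.append(t)
            V[j] = V_reset
            ref_until = t + tau_ref
    return V, spikes
```

With a constant suprathreshold current (e.g. R·I = 25 > θ = 20) the neuron fires regularly at roughly τm·ln(RI/(RI − θ)) + τref intervals.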

C Synapse model: Static current synapse with α-function shaped PSC

PSC kernel:
i_kl(t + d) = J_kl · e · (t/τs) · e^(−t/τs)  if t > 0, and 0 otherwise

Synaptic weights:
J_kl = J if synapse kl exists and is excitatory; −gJ if synapse kl exists and is inhibitory; 0 otherwise

Parameters: excitatory synaptic weight J, relative strength g of inhibition, synaptic time constant τs, synaptic delay d

D Spike-train analysis

Spike trains: s_k(t) = Σ_j δ(t − t_jk)

Population-averaged firing rate: r0 = ⟨s_k(t)⟩_{k,t}

Coefficient of variation of the inter-spike interval:
CV = ⟨ √(⟨T_jk²⟩_j − ⟨T_jk⟩_j²) / ⟨T_jk⟩_j ⟩_k,  with T_jk = t_{j+1,k} − t_jk

Population-averaged spike-train coherence: κ(ω) = C(ω)/P(ω)
with cross-spectrum C(ω) = F_τ[⟨s_k(t) s_l(t + τ)⟩_{k,l≠k,t}](ω)
and power spectrum P(ω) = F_τ[⟨s_k(t) s_k(t + τ)⟩_{k,t}](ω)

Total coherence: κ0 = ∫_{−ωmax}^{+ωmax} dω κ(ω) ≈ Δω Σ_{ω=−ωmax}^{ωmax} κ(ω)

Description of the model and the spike-train analysis. Blue-marked parameters are varied during the optimisation.
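The rate and CV statistics of box D can be estimated from recorded spike trains along these lines (a minimal sketch; `rate_and_cv` is our own illustrative helper, not the poster's analysis code):

```python
import numpy as np

def rate_and_cv(spike_trains, T):
    """Population-averaged firing rate r0 and CV of inter-spike intervals.

    spike_trains: list of sorted spike-time arrays (s), one array per neuron.
    T: recording duration (s).
    Returns (r0 in Hz, population-averaged CV); neurons with fewer than
    three spikes contribute no CV estimate.
    """
    rates = [len(st) / T for st in spike_trains]
    cvs = []
    for st in spike_trains:
        isi = np.diff(st)                 # inter-spike intervals T_jk
        if len(isi) > 1:
            cvs.append(isi.std() / isi.mean())
    return np.mean(rates), (np.mean(cvs) if cvs else np.nan)
```

A perfectly regular train gives CV ≈ 0, a Poisson train CV ≈ 1, which matches the interpretation of the CV panel in the parameter-space figure below.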

[Figure: Average rate (Hz), CV, and total coherence (Hz) over the 2-dimensional (g, η) parameter space (g = 5–25, η = 0.5–4.0), other parameters kept constant. ⋆ marks the reference point g = 8, η = 1.25, Jpsp = 0.1 mV, θ = 20 mV, τs = 0.01 ms.]


Variability of measures

◮ Sources of variability in measured states:

– due to recording from only a selection of neurons in the network
– due to statistical fluctuations in network structure

The statistical fluctuations may be studied by varying the seed of the pseudorandom number generator used to construct the network.
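The subsampling contribution can be illustrated with surrogate data: draw per-neuron rates for many "network realizations" and measure how the population-rate estimate spreads when only a subset of neurons is observed. This is purely illustrative (Gaussian surrogate rates, `rate_std` is our own name), not the poster's simulation:

```python
import numpy as np

def rate_std(n_observed, n_seeds=50, n_neurons=10000,
             mean_rate=6.0, sd=2.0):
    """Std of the population-rate estimate across surrogate realizations.

    Each "realization" draws i.i.d. Gaussian per-neuron rates (an
    illustrative assumption); only n_observed neurons are sampled,
    mimicking a partial recording.
    """
    rng = np.random.default_rng(0)
    estimates = []
    for _ in range(n_seeds):
        rates = rng.normal(mean_rate, sd, n_neurons)   # one realization
        sample = rng.choice(rates, n_observed, replace=False)
        estimates.append(sample.mean())
    return float(np.std(estimates))
```

As in the figure below, the spread shrinks roughly as 1/√(neurons observed) until the realization-to-realization fluctuations dominate.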

[Figure: Standard deviation and relative standard deviation of rate (Hz), CV, and total coherence (Hz) as functions of the number of neurons observed (10²–10⁴), comparing varying random seed against varying the set of observed neurons.]

Alternative cost functions

The cost function used in the optimization algorithm should be dimensionless. There are two natural ways of achieving this:

– Scaling by target value: f(p) = Σ_i (s_i(p) − t_i)² / t_i²

– Scaling by natural target variability: f(p) = Σ_i (s_i(p) − t_i)² / Δt_i²

(p: parameter vector, s_i(p): measured state, t_i: target state, Δt_i: standard deviation of target state.)
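Both scalings are one-liners; a sketch with our own illustrative function names:

```python
import numpy as np

def cost_target_scaled(s, t):
    """f(p) = sum_i (s_i - t_i)^2 / t_i^2  (scaling by target value)."""
    s, t = np.asarray(s, float), np.asarray(t, float)
    return float(np.sum((s - t) ** 2 / t ** 2))

def cost_error_scaled(s, t, dt):
    """f(p) = sum_i (s_i - t_i)^2 / dt_i^2  (scaling by target variability)."""
    s = np.asarray(s, float)
    t = np.asarray(t, float)
    dt = np.asarray(dt, float)
    return float(np.sum((s - t) ** 2 / dt ** 2))
```

The first treats a 10 % miss of any observable equally; the second weights each observable by how precisely it can be measured, which matters when, as shown above, different observables have very different intrinsic variability.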

[Figure: Cost landscapes over the (g, η) plane for the cost function scaled by target value (left) and scaled by experimental error (right).]

Local cost landscapes

◮ Observation: shallow valley near the minimum.

– Indicates insensitivity to certain parameter combinations (Gutenkunst et al., 2007).

◮ Exploration of the full cost landscape is not practical for high-dimensional parameter spaces.

◮ Instead, find the local curvature of the cost landscape by studying the Hessian H_jk = ∂²f/∂p_j∂p_k of the cost function.

[Figure: Cost landscapes over the (g, η) plane, scaled by target value (left) and by experimental error (right). Ellipses show isocontour lines of the quadratic approximation of the cost function found by analysis of the Hessian (Gutenkunst et al., 2007).]

Because of the intrinsic statistical fluctuations of the measured quantities, the data used to calculate the Hessian are noisy. The noise is amplified when calculating derivatives, since this involves subtracting two data points of nearly equal magnitude. To obtain the Hessian used for the above plot, we averaged over 200 different network realizations at each point.
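A central-difference estimate of the Hessian, with repeated noise-averaged evaluations at every grid point, can be sketched as follows (our own illustrative implementation; the step size h is a placeholder):

```python
import numpy as np

def hessian(f, p, h=1e-3, n_avg=1):
    """Central-difference Hessian H_jk = d2f / dp_j dp_k at parameter vector p.

    n_avg repeated evaluations are averaged at every grid point to suppress
    the noise that the differencing amplifies (cf. the 200 network
    realizations averaged for the plot above). Sketch only.
    """
    p = np.asarray(p, float)
    n = len(p)

    def F(q):                      # noise-averaged cost
        return np.mean([f(q) for _ in range(n_avg)])

    H = np.empty((n, n))
    for j in range(n):
        for k in range(n):
            e_j = np.zeros(n); e_j[j] = h
            e_k = np.zeros(n); e_k[k] = h
            # standard 4-point central-difference stencil
            H[j, k] = (F(p + e_j + e_k) - F(p + e_j - e_k)
                       - F(p - e_j + e_k) + F(p - e_j - e_k)) / (4 * h * h)
    return H
```

For a noiseless quadratic cost the stencil is exact; with noisy costs, each of the four corner evaluations must be averaged (large `n_avg`) because the 1/h² factor magnifies any residual fluctuation.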

Acknowledgements

We acknowledge support by the NOTUR and eScience programs of the Research Council of Norway. All network simulations were carried out with the neural simulation tool NEST (see http://www.nest-initiative.org).


Genetic algorithm

◮ Evaluating the cost function is computationally very expensive.

◮ We must therefore use a minimal number of individuals per generation and a strategy which converges quickly to the minimum.

We use a genetic algorithm with 20 individuals in each generation, and generational replacement except for an elitist rule whereby the best individual from the previous generation is kept. We employ roulette-wheel selection with linear ranking and crossover mating. The mutation probability is 0.01 per bit and the selective pressure is 2.0.
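The scheme just described can be sketched as follows. The population size, elitism, linear-ranking roulette-wheel selection, per-bit mutation probability, and selective pressure are as stated on the poster; the binary encoding details (16 bits per parameter, one-point crossover) are our own illustrative choices:

```python
import numpy as np

def genetic_algorithm(cost, bounds, pop_size=20, n_gen=100,
                      n_bits=16, p_mut=0.01, pressure=2.0, seed=0):
    """Binary-encoded GA: roulette-wheel selection with linear ranking,
    one-point crossover, per-bit mutation, and an elitist rule copying the
    best individual into the next generation. Returns the best parameters.
    """
    rng = np.random.default_rng(seed)
    n_par = len(bounds)
    pop = rng.integers(0, 2, (pop_size, n_par * n_bits))

    def decode(bits):
        """Map a bit string to a parameter vector within the given bounds."""
        p = np.empty(n_par)
        for i, (lo, hi) in enumerate(bounds):
            chunk = bits[i * n_bits:(i + 1) * n_bits]
            x = chunk @ (2 ** np.arange(n_bits))
            p[i] = lo + (hi - lo) * x / (2 ** n_bits - 1)
        return p

    for _ in range(n_gen):
        costs = np.array([cost(decode(ind)) for ind in pop])
        order = np.argsort(costs)                     # best first
        # linear ranking: best gets weight `pressure`, worst 2 - pressure
        ranks = np.empty(pop_size)
        ranks[order] = np.arange(pop_size)
        w = pressure - (2 * pressure - 2) * ranks / (pop_size - 1)
        prob = w / w.sum()
        new_pop = [pop[order[0]].copy()]              # elitism
        while len(new_pop) < pop_size:
            a, b = rng.choice(pop_size, 2, p=prob)    # roulette wheel
            cut = rng.integers(1, pop.shape[1])       # one-point crossover
            child = np.concatenate([pop[a][:cut], pop[b][cut:]])
            child ^= rng.random(child.shape) < p_mut  # per-bit mutation
            new_pop.append(child)
        pop = np.array(new_pop)

    costs = np.array([cost(decode(ind)) for ind in pop])
    return decode(pop[np.argmin(costs)])
```

Elitism guarantees the best cost never worsens across generations, which is what makes a population as small as 20 workable when each evaluation is a full network simulation.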

◮ Results from repeated searches using different initial generations:

2-dimensional parameter space

[Figure: Cost landscapes over the (g, η) plane, scaled by target value (left) and by experimental error (right), showing the best result from 50 optimizations using the genetic algorithm.]

5-dimensional parameter space

Results and correlations:

Parameter  Result
g          14 ± 5
η          2.3 ± 0.6
Jpsp       0.2 ± 0.1 mV
θ          29 ± 8 mV
τs         0.5 ± 0.2 ms

[Figure: Matrix of correlation coefficients between pairs of the fitted parameters (colour scale from −1.0 to 1.0).]

Variations in measured quantities:

02040

N

0

10N

4 5 6 7 8Rate (Hz)

0

10N

02040

N

0

15

N0.5 0.6 0.7 0.8 0.9 1.0

CV

051015

N

0

10N

0

10N

1.0 1.5 2.0 2.5 3.0Total Coherence (Hz)

0

4

N

Blue: Target variability; Green: Best results from genetic algorithmusing cost function scaled by target value; Red: Best results usingcost function scaled by experimental error.

◮ Comparing network activity from the “best” and “worst” results of the optimization:

[Figure: Spike raster plots (neurons 0–2000, time 4000–5000 ms) for the best (left) and worst (right) results from the GA.]

Conclusion

◮ Intrinsic variability in the network state ⇒ inevitable variability in fitted parameters

◮ Different observables have different precision

◮ The cost landscape in the vicinity of minima is often shallow along certain directions, i.e. similar performance for different parameter combinations

◮ The choice of cost function affects the accuracy of the genetic algorithm

References

Brunel N (2000). Dynamics of sparsely connected networks of excitatory and inhibitory spiking neurons. J Comput Neurosci 8(3): 183–208

Gutenkunst RN, et al. (2007). Universally sloppy parameter sensitivities in systems biology models. PLoS Comput Biol 3(10): e189

Lapicque L (1907). Recherches quantitatives sur l'excitation électrique des nerfs traitée comme une polarisation. J Physiol Pathol Gen 9: 620–635

Tuckwell HC (1988). Introduction to Theoretical Neurobiology, vol. 1 (Cambridge University Press)
