
SPM 2002

[Figure: geometric view of the GLM. The data Y is projected onto the space of X, giving the fitted values Xb and the residual e; contrasts C1 and C2 correspond to projections LC1 and LC2, and projections onto subspaces spanned by subsets of the regressors (C1, C2, C3) are what the t and F tests compare.]

b = (X'X)+ X'Y

Jean-Baptiste Poline, Orsay SHFJ-CEA

A priori complicated ... a posteriori very simple !

SPM course - 2002: LINEAR MODELS and CONTRASTS

[Overview diagram: Hammering a Linear Model, used for Normalisation, for T and F tests (orthogonal projections), and for the RFT.]

Jean-Baptiste Poline, Orsay SHFJ-CEA, www.madic.org

[Figure: the SPM analysis pipeline. Images are realigned & coregistered, smoothed (spatial filter) and normalised (to an anatomical reference); the General Linear Model (design matrix, linear fit) yields the adjusted signal and, for your question (a contrast), a statistical map with uncorrected p-values; Random Field Theory then gives corrected p-values.]

Plan

• Make sure we all know about the estimation (fitting) part ...
• Make sure we understand the testing procedures: t and F tests
• A bad model ... and a better one
• Correlation in our model: do we mind?
• A (nearly) real example

One voxel = one test (t, F, ...)

[Figure: an fMRI temporal series; one voxel time course (amplitude against time) is fitted with the General Linear Model, and one test per voxel builds the statistical image (SPM).]

Regression example ...

[Figure: a voxel time series (values around 90-110) = a box-car reference function (scaled, roughly -10 to 10) + a mean value (around 90-110) + a residual term (about -2 to 2); fit the GLM.]

Regression example ...

[Figure: the same voxel time series = fitted box-car reference function + mean value + error (about -2 to 2).]

... revisited: matrix form

Ys = β1 f(ts) + β2 x 1 + εs (error)

Box car regression: design matrix ...

Y = X β + ε

(data vector (voxel time series) = design matrix x parameters + error vector)

Add more reference functions ...

Discrete cosine transform basis functions
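As an aside, such a cosine basis is easy to build by hand. A minimal numpy sketch (the function name and sizes are illustrative, not SPM's API):

```python
import numpy as np

def dct_basis(n_scans: int, n_basis: int) -> np.ndarray:
    """One column per discrete cosine function, for modelling slow drifts."""
    t = np.arange(n_scans)
    # k-th function: cos(pi * k * (2t + 1) / (2 * n_scans))  (DCT-II convention)
    return np.column_stack(
        [np.cos(np.pi * k * (2 * t + 1) / (2 * n_scans)) for k in range(1, n_basis + 1)]
    )

X_drift = dct_basis(n_scans=96, n_basis=7)   # 96 x 7 matrix of low frequencies
```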

... design matrix

Y = X β + ε

(data vector = design matrix x parameters + error vector; β = the betas, here β1 to β9)

Fitting the model = finding some estimate of the betas = minimising the sum of squares of the error, S2

[Figure: raw fMRI time series; the fitted box-car; the fitted "high-pass filter"; the series adjusted for low-Hz effects; the residuals.]

s2 = sum of the squared values of the residuals / (number of time points minus the number of estimated betas)

Summary ...

• We put in our model regressors (or covariates) that represent how we think the signal is varying (of interest and of no interest alike).
• Coefficients (= estimated parameters, or betas) are found such that the sum of the weighted regressors is as close as possible to our data.
• As close as possible = such that the sum of squares of the residuals is minimal.
• These estimated parameters (the "betas") depend on the scaling of the regressors: keep this in mind when comparing!
• The residuals, their sum of squares, and the tests (t, F) do not depend on the scaling of the regressors.
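A minimal numpy sketch of those two scaling facts, on simulated data (the box-car, its amplitude and the noise level are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64
box = np.tile(np.r_[np.zeros(8), np.ones(8)], 4)   # box-car reference function
X = np.column_stack([box, np.ones(n)])             # design matrix: box-car + mean
Y = 2.0 * box + 100.0 + rng.normal(0.0, 1.0, n)    # simulated voxel time series

b = np.linalg.pinv(X) @ Y                          # least-squares betas
e = Y - X @ b                                      # residuals

# Rescale the box-car regressor by 10: its beta is divided by 10 ...
X10 = X.copy()
X10[:, 0] *= 10.0
b10 = np.linalg.pinv(X10) @ Y
assert np.isclose(b10[0], b[0] / 10.0)
# ... but the residuals (hence their sum of squares, and t/F) do not change.
assert np.allclose(Y - X10 @ b10, e)
```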

Plan

• Make sure we all know about the estimation (fitting) part ...
• Make sure we understand the testing procedures: t and F tests
• A bad model ... and a better one
• Correlation in our model: do we mind?
• A (nearly) real example

SPM{t}

A contrast = a linear combination of parameters: c'β

c' = [ 1 0 0 0 0 0 0 0 ]

test H0: c'b > 0 ?

T test - one dimensional contrasts - SPM{t}

T = contrast of estimated parameters / variance estimate

T = c'b / sqrt( s2 c'(X'X)+ c )

box-car amplitude > 0 ?
= β1 > 0 ? (b1: estimation of β1)
= 1 x b1 + 0 x b2 + 0 x b3 + 0 x b4 + 0 x b5 + ... > 0 ?

How is this computed ? (t-test)

Y = X β + ε, with ε ~ N(0, σ2 I) (Y: the data at one voxel position)

b = (X'X)+ X'Y (b = estimation of β) -> the beta??? images

e = Y - Xb (e = estimation of ε)

s2 = e'e / (n - p) (s2 = estimation of σ2; n: time points, p: parameters) -> 1 image: ResMS

Estimation: [Y, X] -> [b, s]

Test: [b, s2, c] -> [c'b, t]

Var(c'b) = s2 c'(X'X)+ c (computed for each contrast c)

t = c'b / sqrt( s2 c'(X'X)+ c ) (c'b -> the spm_con??? images; the t images -> spm_t???)

under the null hypothesis H0: t ~ Student(df), df = n - p
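The whole recipe above fits in a few lines of numpy/scipy. A sketch in the slide's notation (the helper name is mine, and this is not SPM's actual implementation):

```python
import numpy as np
from scipy import stats

def t_contrast(Y: np.ndarray, X: np.ndarray, c: np.ndarray):
    """t = c'b / sqrt(s2 c'(X'X)+ c), df = n - p, for one voxel's data Y."""
    n = len(Y)
    XtX_pinv = np.linalg.pinv(X.T @ X)
    b = XtX_pinv @ X.T @ Y             # b = (X'X)+ X'Y
    e = Y - X @ b                      # e = estimate of the error
    p = np.linalg.matrix_rank(X)       # p: number of independent parameters
    s2 = (e @ e) / (n - p)             # s2 = e'e / (n - p), the ResMS value
    t = (c @ b) / np.sqrt(s2 * (c @ XtX_pinv @ c))
    return t, stats.t.sf(t, df=n - p)  # one-sided p-value: P(T > t) under H0
```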

F test (SPM{F}): a reduced model or ...

Tests multiple linear hypotheses: does X1 model anything?

This model (X = [X1 X0])? Or this one (X0 alone)?

H0: the true model is X0 (S02 > S2)

F ~ (S02 - S2) / S2 = additional variance accounted for by the tested effects / error variance estimate

H0: β3-9 = (0 0 0 0 ...)

c' = [ 0 0 1 0 0 0 0 0
       0 0 0 1 0 0 0 0
       0 0 0 0 1 0 0 0
       0 0 0 0 0 1 0 0
       0 0 0 0 0 0 1 0
       0 0 0 0 0 0 0 1 ]

(the one-dimensional t contrast, for comparison: c' = [ +1 0 0 0 0 0 0 0 ])

F test (SPM{F}): a reduced model or ... multi-dimensional contrasts?

SPM{F} tests multiple linear hypotheses. Ex: does the high-pass filter model anything?

test H0: c'b = 0 ?

This model (X = [X1 (β3-9) X0])? Or this one (X0 alone)?

H0: the true model is X0

How is this computed ? (F-test)

F = additional variance accounted for by the tested effects / error variance estimate

Full model: Y = X β + ε, ε ~ N(0, σ2 I)
Reduced model: Y = X0 β0 + ε0, ε0 ~ N(0, σ02 I) (X0: the reduced X)

b0 = (X0'X0)+ X0'Y
e0 = Y - X0 b0 (e0 = estimation of ε0)
s02 = e0'e0 / (n - p0) (s02 = estimation of σ02; n: time points, p0: parameters)

Estimation: [Y, X] -> [b, s]
Estimation: [Y, X0] -> [b0, s0] (not really computed like that)

Test: [b, s, c] -> [ess, F]

F = ( (e0'e0 - e'e) / (p - p0) ) / s2 -> image of (e0'e0 - e'e)/(p - p0): spm_ess???
-> image of F: spm_F???

under the null hypothesis: F ~ F(df1, df2), with df1 = p - p0 and df2 = n - p
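The F recipe as a numpy/scipy sketch, with an explicit reduced model for one voxel (a hypothetical helper, matching the slide's note that SPM does not literally refit the reduced model):

```python
import numpy as np
from scipy import stats

def f_reduced(Y: np.ndarray, X: np.ndarray, X0: np.ndarray):
    """F = ((e0'e0 - e'e) / (p - p0)) / (e'e / (n - p)), full X vs reduced X0."""
    n = len(Y)

    def rss(M):                        # residual sum of squares under model M
        e = Y - M @ (np.linalg.pinv(M) @ Y)
        return e @ e

    p = np.linalg.matrix_rank(X)
    p0 = np.linalg.matrix_rank(X0)
    ee, e0e0 = rss(X), rss(X0)
    F = ((e0e0 - ee) / (p - p0)) / (ee / (n - p))
    return F, stats.f.sf(F, p - p0, n - p)   # p-value under H0: true model is X0
```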

Plan

• Make sure we all know about the estimation (fitting) part ...
• Make sure we understand the testing procedures: t and F tests
• A bad model ... and a better one
• Correlation in our model: do we mind?
• A (nearly) real example

A bad model ...

[Figure: true signal and observed signal (---); model (green, peak at 6 s) and TRUE signal (blue, peak at 3 s); the fit (b1 = 0.2, mean = 0.11); the noise (which still contains some signal).]

=> Test of the green regressor: not significant

Y = X b + e, with b1 = 0.22, b2 = 0.11

A bad model ...

Residual variance = 0.3

P( b1 > 0 ) = 0.1 (t test)

P( b1 = 0 ) = 0.2 (F test)

A « better » model ...

[Figure: true signal + observed signal; model (green and red) and true signal (blue ---), where the red regressor is the temporal derivative of the green one; global fit (blue) and partial fits (green & red); adjusted and fitted signal; the noise (a smaller variance).]

=> Test of the green regressor: significant
=> F test: very significant
=> Test of the red regressor: very significant

A better model ...

Y = X b + e, with b1 = 0.22, b2 = 2.15, b3 = 0.11

Residual variance = 0.2

P( b1 > 0 ) = 0.07 (t test)

P( [b1 b2] ≠ [0 0] ) = 0.000001 (F test)
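The temporal-derivative trick is just a first-order Taylor expansion: f(t - d) ≈ f(t) - d f'(t), so a shifted response stays inside the span of {f, f'}. A toy numpy check (the sine regressor and the shift of 0.8 are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0.0, 10.0 * np.pi, 400)
green = np.sin(t)                        # the model's regressor
d_green = np.gradient(green, t)          # its temporal derivative (the red one)
Y = np.sin(t - 0.8) + rng.normal(0.0, 0.3, t.size)   # true response is shifted

for X in (np.column_stack([green, np.ones(t.size)]),            # bad model
          np.column_stack([green, d_green, np.ones(t.size)])):  # better model
    e = Y - X @ (np.linalg.pinv(X) @ Y)
    print(X.shape[1], "columns -> residual variance:", round(e @ e / len(e), 3))
# sin(t - 0.8) = cos(0.8) sin(t) - sin(0.8) cos(t): the pair (green, d_green)
# spans the shifted signal, so the residual variance drops to the noise level.
```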

Flexible models: Fourier Basis

Flexible models: Gamma Basis

Summary ... (2)

• We rather test flexible models when there is little a priori information, and precise ones when there is a lot of a priori information.
• The residuals should be looked at ... (non-random structure?)
• In general, use the F-tests to look for an overall effect, then look at the betas or the adjusted signal to characterise the origin of the signal.
• Interpreting the test on a single parameter (one function) can be very confusing: cf. the delay or magnitude situation.

Plan

• Make sure we all know about the estimation (fitting) part ...
• Make sure we understand the testing procedures: t and F tests
• A bad model ... and a better one
• Correlation in our model: do we mind?
• A (nearly) real example

Correlation between regressors

[Figure: true signal; model (green and red); fit (blue: global fit); noise.]

Y = X b + e, with b1 = 0.79, b2 = 0.85, b3 = 0.06

Correlation between regressors

Residual variance = 0.3

P( b1 > 0 ) = 0.08 (t test)

P( b2 > 0 ) = 0.07 (t test)

P( [b1 b2] ≠ 0 ) = 0.002 (F test)

Correlation between regressors - 2

[Figure: true signal; model (green and red), where the red regressor has been orthogonalised with respect to the green one = remove everything that can correlate with the green regressor; fit; noise.]

Y = X b + e, with b1 = 1.47, b2 = 0.85, b3 = 0.06

Correlation between regressors - 2

Residual variance = 0.3

P( b1 > 0 ) = 0.0003 (t test)

P( b2 > 0 ) = 0.07 (t test)

P( [b1 b2] ≠ 0 ) = 0.002 (F test)

(without orthogonalisation the betas were 0.79, 0.85, 0.06: only the beta of the unchanged regressor moves)

See « explore design »
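A numpy sketch of that behaviour (regressors, betas and noise are simulated; this mirrors the 0.79 -> 1.47 versus unchanged 0.85 pattern, not the exact numbers):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100
green = rng.normal(size=n); green -= green.mean()
red = 0.7 * green + 0.3 * rng.normal(size=n); red -= red.mean()  # correlated pair
Y = 1.0 * green + 1.0 * red + rng.normal(0.0, 0.5, n)

# Orthogonalise red with respect to green: remove what correlates with green.
red_orth = red - green * (green @ red) / (green @ green)

def betas(r1, r2):
    return np.linalg.pinv(np.column_stack([r1, r2, np.ones(n)])) @ Y

b, bo = betas(green, red), betas(green, red_orth)
assert np.isclose(b[1], bo[1])        # the orthogonalised regressor: same beta
assert not np.isclose(b[0], bo[0])    # green now absorbs the shared variance
```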

Design orthogonality: « explore design »

Black = completely correlated; white = completely orthogonal.

[Figure: orthogonality matrices (Corr(1,1), Corr(1,2), ...) for two example two-regressor designs.]

Beware: when there are more than 2 regressors (C1, C2, C3, ...), you may think that there is little correlation (light grey) between them, but C1 + C2 + C3 may still be correlated with C4 + C5.
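What « explore design » displays can be computed directly as the absolute cosine of the angle between design columns (a sketch; SPM's display details may differ):

```python
import numpy as np

def design_orthogonality(X: np.ndarray) -> np.ndarray:
    """abs(cosine) between columns: 1 = correlated (black), 0 = orthogonal (white)."""
    Xn = X / np.linalg.norm(X, axis=0)
    return np.abs(Xn.T @ Xn)

# The warning above, in numbers: modest pairwise values, strongly correlated sums.
rng = np.random.default_rng(3)
C = rng.normal(size=(200, 5))
C[:, 3] += 0.4 * (C[:, 0] + C[:, 1] + C[:, 2])   # C4 loads on C1 + C2 + C3
print(design_orthogonality(C).round(2))          # pairwise entries stay light grey
s = C[:, :3].sum(axis=1)                         # ... but C1+C2+C3 vs C4 is darker:
print(abs(s @ C[:, 3]) / (np.linalg.norm(s) * np.linalg.norm(C[:, 3])))
```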

Implicit or explicit (de)correlation (or orthogonalisation)

[Figure: projections in the space of X, spanned by the correlated regressors C1 and C2; Y projects onto Xb, leaving the residual e. LC2: test of C2 in the implicit model. LC1: test of C1 in the explicit model.]

See Andrade et al., NeuroImage, 1999

This GENERALISES when testing several regressors (F tests)

"Completely" correlated ...

Y = X b + e

X = [ 1 0 1
      1 0 1
      0 1 1
      0 1 1
      1 0 1
      1 0 1
      0 1 1
      0 1 1 ]

(columns: Cond 1, Cond 2, Mean; Mean = C1 + C2)

Parameters are not unique in general! Some contrasts have no meaning: NON ESTIMABLE.

Example here: c' = [1 0 0] is not estimable (= no specific information in the first regressor); c' = [1 -1 0] is estimable.
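Estimability is mechanical to check: c'β is estimable iff c lies in the row space of X, i.e. c'(X+ X) = c'. A sketch using the design above (the helper name is mine):

```python
import numpy as np

def is_estimable(c: np.ndarray, X: np.ndarray) -> bool:
    """c' beta is estimable iff c is in the row space of X, i.e. c'(X+ X) = c'."""
    return np.allclose(c @ (np.linalg.pinv(X) @ X), c, atol=1e-8)

# The rank-deficient design above: columns Cond1, Cond2, Mean = C1 + C2.
X = np.array([[1, 0, 1], [1, 0, 1], [0, 1, 1], [0, 1, 1],
              [1, 0, 1], [1, 0, 1], [0, 1, 1], [0, 1, 1]], dtype=float)
print(is_estimable(np.array([1.0, 0.0, 0.0]), X))    # False: non estimable
print(is_estimable(np.array([1.0, -1.0, 0.0]), X))   # True: estimable
```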

Summary ... (3)

• We are implicitly testing the additional effect only, so we may miss the signal if there is some correlation in the model, when using t tests.
• Orthogonalisation is not generally needed: the parameters of, and the test on, the changed regressor don't change.
• It is always simpler (when possible!) to have orthogonal (uncorrelated) regressors.
• In case of correlation, use F-tests to see the overall significance: there is generally no way to decide which regressor the « common » part shared by two regressors should be attributed to.
• If you do need to orthogonalise a part of the design matrix because of correlation, there is no need to re-fit a new model: only the contrast should change.

Plan

• Make sure we all know about the estimation (fitting) part ...
• Make sure we understand the testing procedures: t and F tests
• A bad model ... and a better one
• Correlation in our model: do we mind?
• A (nearly) real example

A real example (almost !)

Factorial design with 2 factors: modality and category. 2 levels for modality (e.g. Visual/Auditory), 3 levels for category (e.g. 3 categories of words).

Experimental Design -> Design Matrix

[Figure: design matrix with one column per level: V, A, C1, C2, C3.]

Asking ourselves some questions ... (V A C1 C2 C3)

• Design matrix not orthogonal
• Many contrasts are non estimable
• Interactions MxC are not modelled

Test C1 > C2: c = [ 0 0 1 -1 0 0 ]
Test V > A: c = [ 1 -1 0 0 0 0 ]
Test the modality factor: c = ? (use way 2 below)
Test the category factor: c = ? (use way 2 below)
Test the interaction MxC ?

2 ways:
1- write a contrast c and test c'b = 0
2- select columns of X for the model under the null hypothesis

The contrast manager allows that.

Modelling the interactions

Asking ourselves some questions ...

[Figure: design matrix with one column per cell (V A V A V A, for C1 C1 C2 C2 C3 C3) plus a constant.]

• Design matrix orthogonal
• All contrasts are estimable
• Interactions MxC modelled
• If there is no interaction ... ? Model too "big"

Test C1 > C2: c = [ 1 1 -1 -1 0 0 0 ]
Test V > A: c = [ 1 -1 1 -1 1 -1 0 ]

Test the differences between categories:
c = [ 1 1 -1 -1 0 0 0
      0 0 1 1 -1 -1 0 ]

Test the interaction MxC:
c = [ 1 -1 -1 1 0 0 0
      0 0 1 -1 -1 1 0
      1 -1 0 0 -1 1 0 ]

Test everything in the category factor, leaving out modality:
c = [ 1 1 0 0 0 0 0
      0 0 1 1 0 0 0
      0 0 0 0 1 1 0 ]
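For a multi-row contrast like the interaction, F can be computed from the contrast directly, equivalently to the reduced-model route. A numpy/scipy sketch (`f_contrast` is a hypothetical helper, not an SPM function; the column order follows the design above):

```python
import numpy as np
from scipy import stats

def f_contrast(Y: np.ndarray, X: np.ndarray, C: np.ndarray):
    """F test of H0: C b = 0 (each row of C is one linear constraint)."""
    n = len(Y)
    b = np.linalg.pinv(X) @ Y
    e = Y - X @ b
    p = np.linalg.matrix_rank(X)
    r = np.linalg.matrix_rank(C)              # redundant rows don't add df
    s2 = (e @ e) / (n - p)
    Cb = C @ b
    mid = np.linalg.pinv(C @ np.linalg.pinv(X.T @ X) @ C.T)
    F = (Cb @ mid @ Cb) / (r * s2)
    return F, stats.f.sf(F, r, n - p)

# Interaction MxC for the design above (columns V.C1 A.C1 V.C2 A.C2 V.C3 A.C3 mean):
C_inter = np.array([[1, -1, -1, 1, 0, 0, 0],
                    [0, 0, 1, -1, -1, 1, 0]], dtype=float)
```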

Asking ourselves some questions ... with a more flexible model

[Figure: design matrix with two basis functions per cell (V A V A V A, for C1 C1 C2 C2 C3 C3) plus a constant.]

Test C1 > C2 ? Test C1 different from C2 ?

from c = [ 1 1 -1 -1 0 0 0 ]
to c = [ 1 0 1 0 -1 0 -1 0 0 0 0 0 0
         0 1 0 1 0 -1 0 -1 0 0 0 0 0 ]
It becomes an F test!

Test V > A ? c = [ 1 0 -1 0 1 0 -1 0 1 0 -1 0 0 ] is possible, but is OK only if the regressors coding for the delay are all equal.

[Overview diagram, revisited: Hammering a Linear Model, used for Normalisation ("we don't use that one" here), for T and F tests (orthogonal projections), and for the RFT.]

Jean-Baptiste Poline, Orsay SHFJ-CEA, www.madic.org

SPM course - 2002: LINEAR MODELS and CONTRASTS
