
Statistical nonParametric Mapping Manual

 


SnPM is an SPM toolbox developed by Andrew Holmes & Tom Nichols


Toolbox overview

The Statistical nonParametric Mapping toolbox provides an extensible framework for non-parametric

permutation/randomisation tests using the General Linear Model and pseudo t-statistics for independent

observations. This manual page describes how to use the package.

Because the non-parametric approach is computationally intensive, involving the computation of a statistic

image for every possible relabelling of the data, the toolbox has been designed to permit batch mode

computation. An SnPM statistical analysis is broken into three stages:

i. design setup  (spm_snpm_ui)

ii. computation of permutation distributions  (spm_snpm)

iii. postprocessing & display of results  (spm_snpm_pp)

Each stage is handled by a separate function, callable either from the command line or the SnPM GUI, as

described below.

A non-parametric analysis requires the specification of the possible relabellings, and the generation of the

corresponding permutation matrix. This is design-specific, so the setup stage of SnPM adopts a PlugIn
architecture, utilising PlugIn M-files specific to each design that set up the permutations appropriately.

(The PlugIn architecture is documented in spm_snpm_ui.m.) With SnPM99, the following

experimental designs are supported:

Single subject simple activation  (2 conditions)


PlugIn spm_snpm_SSA2x

Single subject correlation  (single covariate of interest)

PlugIn spm_snpm_SSC

Multi subject simple activation  (2 conditions, randomisation of condition presentation order to even number

of subjects)

PlugIn spm_snpm_MSA2x

Interim communication between the stages is via MatLab *.mat files. The setup stage

(spm_snpm_ui) saves configuration information in SnPM_cfg.mat, written to the current working

directory. Note that the filenames of the scan data are coded into the SnPM_cfg.mat configuration file,

so don't move or delete the images between the setup and computation stages! The computation stage

(spm_snpm) asks you to locate a SnPM_cfg.mat file, and writes its results *.mat files alongside

the configuration file. Finally the postprocessing & results stage (spm_snpm_pp) asks you to locate the

directory containing the results and configuration files, and goes to work on them, writing any images and

printouts to the present working directory.

Note: Please also read the main SnPM page!

Getting started

First install the SnPM software alongside an SPM99 installation, ensuring both packages are on the MATLABPATH.
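For example, from the MatLab prompt (a minimal sketch; the installation paths below are hypothetical and should be adjusted to your site):

addpath /usr/local/spm99   % SPM99 distribution (hypothetical path)
addpath /usr/local/snpm    % SnPM toolbox (hypothetical path)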

GUI USAGE

SnPM runs on top of the SPM environment, and is launched from within MatLab by typing snpm in the command window. This

will start the SPM environment if not already launched (or switch an existing SPM session to the PET modality), bring up the

SnPM GUI, and display late breaking SnPM information in the SPM graphics window. SnPM99 prints status information to the

MatLab command line for long computations, so it's a good idea to keep the MatLab window visible whilst using SnPM.

COMMAND LINE & BATCH USAGE

SnPM can also be run in command-line mode. Start MatLab and type global CMDLINE; CMDLINE=1 in the MatLab

command window to switch to interactive command line use. The postprocessing and results function spm_snpm_pp will open

an SPM Graphics window when required. (If you rely on the default header parameters, then you should also initialise the SPM global

environment variables: Type spm('Defaults','PET') ) Thus, having setup a design, the SnPM computation engine

routine spm_snpm can be run without windows, in batch mode, by providing the directory containing the

appropriate SnPM_cfg.mat configuration file to use as an argument.
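For example, a complete batch session might look like the following sketch (the analysis directory path is hypothetical):

global CMDLINE; CMDLINE = 1;    % interactive command line use
spm('Defaults','PET')           % initialise SPM global defaults
spm_snpm('/data/study1/snpm')   % directory containing SnPM_cfg.mat (hypothetical path)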


SNPM GUI

The SnPM GUI has buttons for launching the three stages of an SnPM analysis, with corresponding help buttons. This online help

text is similar to that presented on this page, the "About SnPM" topic being the overview given above. Detailed instructions for the

three modules are given below.

Setting up the design & defining appropriate permutations

Derived from help text for spm_snpm_ui.m

spm_snpm_ui sets up the parameters for a non-parametric

permutation/randomisation analysis. The approach taken with SnPM

analyses differs from that of SPM. Instead of initiating the analysis

with one command, an analysis consists of 3 steps:

1. Configuration of Analysis --- interactive

2. Calculation of Raw Statistics --- noninteractive

3. Post Processing of Raw Statistics --- interactive

( In SPM, all of these steps are done with a click to the "Statistics" )

( button, though the 3rd step is often redone with "Results" or "SPM{Z}")

The first step is embodied in this function, spm_snpm_ui. spm_snpm_ui

configures the design matrix and calls "plug in" modules that specify

how the relabeling is to be done for particular designs.

The result of this function is a mat file, "SnPM_cfg.mat", written

Page 4: Statistical Non Parametric Mapping Manual

to the present working directory. This file contains all the

parameters needed to perform the second step, which is embodied in

spm_snpm. Design parameters are displayed in the SPM graphics window,

and are printed.

-----------------------------------------------------------------------

-The Prompts Explained

=======================================================================

'Select design type...': Choose from the available designs. Use the

'User Specified PlugIn' option to supply your own PlugIn function. Use

the 'keyboard' option to manually set the required PlugIn variables

(as defined below under "PlugIn Must Supply the following").

- At this point you will be prompted by the PlugIn file;

- see help for the PlugIn file you selected.

'FWHM(mm) for Variance smooth': Variance smoothing gives the
nonparametric approach more power over the parametric approach for
low-df analyses. If your design has fewer than 20 degrees of freedom,
variance smoothing is advised. 10 mm FWHM is a good starting point
for the size of the smoothing kernel. For non-isotropic smoothing,
enter three numbers: FWHM(x) FWHM(y) FWHM(z), all in millimeters.

If there are enough scans and there is no variance smoothing in the z

direction, you will be asked...

'# scans: Work volumetrically?': Volumetric means that the entire

data set is loaded into memory; while this is more efficient than

iterating over planes it is very memory intensive.

( Note: If you specify variance smoothing in the z-direction, SnPM )

( (in spm_snpm.m) has to work volumetrically. Thus, for moderate to )

( large numbers of scans there might not be enough memory to complete )

( the calculations. This shouldn't be too much of a problem because )

( variance smoothing is only necessary at low df, which usually )

( corresponds to a small number of scans. Alternatively, specify only )

( in-plane smoothing. )

'Collect Supra-Threshold stats?': In order to use the permutation

test on supra-threshold cluster size you have to collect a

Page 5: Statistical Non Parametric Mapping Manual

substantial amount of additional data for each permutation. If you

want to look at cluster size answer yes, but have lots of free space

(the more permutations, the more space needed). You can however

delete the SnPM_ST.mat file containing the supra-threshold cluster

data at a later date without affecting the ability to analyze at the

voxel level.

The remaining questions all parallel the standard parametric analysis

of SPM; namely

'Select global normalisation' - account for global flow confound

'Gray matter threshold ?' - threshold of image to determine gray matter

'Value for grand mean ?' - arbitrary value to scale grand mean to

PLUGINS

PlugIns are provided for three basic designs:

SINGLE SUBJECT SIMPLE (2 CONDITION) ACTIVATION

Derived from help text for spm_snpm_SSA2x PlugIn

spm_snpm_SSA2x is a PlugIn for the SnPM design set-up program,

creating design and permutation matrix appropriate for single

subject, two condition activation (with replication) experiments.

-Number of permutations

=======================================================================

There are nScan-choose-nRepl possible permutations, where

nScan is the number of scans and nRepl is the number of replications

(nScan = 2*nRepl). Matlab doesn't have a choose function but

you can use this expression

prod(1:nScan)/prod(1:nRepl)^2
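For example, for the 12-scan study analysed later in this manual (nRepl = 6):

nRepl = 6; nScan = 2*nRepl;
prod(1:nScan)/prod(1:nRepl)^2   % 12-choose-6 = 924 possible permutations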

-Prompts

=======================================================================

'# replications per condition': Here you specify how many times

each of the two conditions were repeated.


'Size of exchangability block': This is the number of adjacent

scans that you believe to be exchangeable. The most common cause of

non-exchangeability is a temporal confound. Exchangeability blocks

are required to be the same size, a divisor of the number of scans.

See snpm.man for more information and references.

'Select scans in time order': Enter the scans to be analyzed.

It is important to input the scans in time order so that temporal

effects can be accounted for.

'Enter conditions index: (B/A)': Using A's to indicate activation

scans and B's to indicate baseline, enter a sequence of 2*nRepl

letters, nRepl A's and nRepl B's, where nRepl is the number of

replications. Spaces are permitted.

SINGLE SUBJECT CORRELATION (SINGLE COVARIATE OF INTEREST)

Derived from help text for spm_snpm_SSC PlugIn

spm_snpm_SSC is a PlugIn for the SnPM design set-up program,

creating design and permutation matrix appropriate for a
single-subject correlation design.

-Number of permutations

=======================================================================

There are nScan! (nScan factorial) possible permutations, where nScan

is the number of scans of the subject. You can compute this using

the gamma function in Matlab: nScan! is gamma(nScan+1); or by direct

computation as prod(1:nScan)
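For example, with 12 scans:

nScan = 12;
prod(1:nScan)    % 12! = 479001600 possible permutations
gamma(nScan+1)   % the same value, via the gamma function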

-Prompts

=======================================================================

'Select scans in time order': Enter the scans to be analyzed.

It is important to input the scans in time order so that temporal

effects can be accounted for.

'Enter covariate values': These are the values of the experimental

covariate.

'Size of exchangability block': This is the number of adjacent

scans that you believe to be exchangeable. The most common cause

of nonexchangeability is a temporal confound (e.g. while 12 adjacent


scans might not be free from a temporal effect, 4 adjacent scans

could be regarded as having negligible temporal effect). See snpm.man

for more information and references.

'### Perms. Use approx. test?': If there are a large number of

permutations it may not be necessary (or possible!) to compute

all the permutations. A common guideline is that 1000 permutations

are sufficient to characterize the permutation distribution well.

More permutations will probably not change the results much.

If you answer yes you will be prompted for the number...

'# perms. to use?'

MULTI SUBJECT SIMPLE ACTIVATION (2 CONDITIONS, RANDOMISATION)

Derived from help text for spm_snpm_MSA2x PlugIn

spm_snpm_MSA2x is a PlugIn for the SnPM design set-up program,

creating design and permutation matrix appropriate for multi-
subject, two condition with replication design, where
the condition labels are permuted over subjects.

This PlugIn is only designed for studies where there are two

sets of labels, each the A/B complement of the other

(e.g. ABABAB and BABABA), where half of the subjects have one

labeling and the other half have the other labeling. Of course,

there must be an even number of subjects.

-Number of permutations

=======================================================================

There are (nSubj)-choose-(nSubj/2) possible permutations, where

nSubj is the number of subjects. Matlab doesn't have a choose

function but you can use this expression

prod(1:nSubj)/prod(1:nSubj/2)^2
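For example, for a hypothetical study with 8 subjects:

nSubj = 8;
prod(1:nSubj)/prod(1:nSubj/2)^2   % 8-choose-4 = 70 possible permutations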

-Prompts

=======================================================================

'# subjects': Number of subjects to analyze

'# replications per condition': Here you specify how many times

each of the two conditions were repeated.


For each subject you will be prompted:

'Subject #: Select scans in time order': Enter this subject's scans.

It is important to input the scans in time order so that temporal

effects can be accounted for.

'Enter conditions index: (B/A)': Using A's to indicate activation

scans and B's to indicate baseline, enter a sequence of 2*nRepl

letters, nRepl A's and nRepl B's, where nRepl is the number of

replications. Spaces are permitted.

There can only be two possible condition indices: that of the

first subject and the A<->B flip of the first subject.

'### Perms. Use approx. test?': If there are a large number of

permutations it may not be necessary (or possible!) to compute

all the permutations. A common guideline is that 1000 permutations

are sufficient to characterize the permutation distribution well.

More permutations will probably not change the results much.

If you answer yes you will be prompted for the number...

'# perms. to use?'


Computing the nonParametric permutation distributions

Derived from help text for spm_snpm.m

spm_snpm is the engine of the SnPM toolbox and implements the general

linear model for a set of design matrices, each design matrix

constituting one permutation. First the "correct" permutation

is calculated in its entirety, then all subsequent permutations are

calculated, possibly on a plane-by-plane basis.

The output of spm_snpm parallels spm_spm: for the correct permutation

.mat files containing parameter estimates, adjusted values,

statistic values, and F values are saved; the permutation

distribution of the statistic of interest and (optionally) suprathreshold

stats are also saved. All results are written to the directory

that CfgFile resides in. IMPORTANT: Existing results are overwritten

without prompting.


Unlike spm_spm, voxels are not discarded on the basis of the F statistic.

All gray matter voxels (as defined by the gray matter threshold) are

retained for analysis; note that this will increase the size of all .mat

files.

-----------------------------------------------------------------------

Output File Descriptions:

SPMF.mat contains a 1 x S vector of F values reflecting the omnibus

significance of effects [of interest] at each of the S voxels in brain

(gray matter) *for* the correct permutation.

XYZ.mat contains a 3 x N matrix of the x,y and z location of the

voxels in SPMF in mm (usually referring to the standard anatomical
space (Talairach and Tournoux 1988)). (0,0,0) corresponds to the

centre of the voxel specified by ORIGIN in the *.hdr of the original

and related data.

BETA.mat contains a p x S matrix of the p parameter estimates at

each of the S voxels for the correct permutation. These parameters

include all effects specified by the design matrix.

XA.mat contains a q x S matrix of adjusted activity values for the

correct permutation, where the effects of no interest have been removed

at each of the S voxels for all q scans.

SnPMt.mat contains a 1 x S matrix of the statistic of interest (either

t or pseudo-t if variance smoothing is used) supplied for all S voxels at

locations XYZ.

SnPM.mat contains a collection of strings and matrices that pertain

to the analysis. In contrast to spm_spm's SPM.mat, most of the essential

matrices are already stored in the CfgFile
and hence are not duplicated here. Included are the number of voxels

analyzed (S) and the image and voxel dimensions [V]. See below

for complete listing.

-----------------------------------------------------------------------


As an "engine", spm_snpm does not produce any graphics; if the SPM windows

are open, a progress thermometer bar will be displayed.

If out-of-memory problems are encountered, the first line of defense is to

run spm_snpm in a virgin matlab session without first starting SPM.


Examining the results

Derived from help text for spm_snpm_pp.m

spm_snpm_pp is the PostProcessing function for the SnPM nonParametric

statistical analysis. SnPM statistical analyses are split into three

stages; Setup, Compute & Assess. This is the third stage.

Nonparametric randomisation distributions are read in from MatLab

*.mat files, with which the observed statistic image is assessed

according to user defined parameters. It is the SnPM equivalent of

the "Results" section of SPM, albeit with reduced features.

Voxel level corrected p-values are computed from the permutation

distribution of the maximal statistic. If suprathreshold cluster

statistics were collected in the computation stage (and the large

SnPM_STC.mat file hasn't been deleted!), then assessment by

suprathreshold cluster size is also available, using a user-specified

primary threshold.

Instructions:

=======================================================================

You are prompted for the following:

(1) ResultsDir: If the results directory wasn't specified on the command

line, you are prompted to locate the SnPM results file SnPM.mat.

The directory in which this file resides is taken to be the

results directory, which must contain *all* the files listed

below ("SnPM files required").

Results (spm.ps & any requested image files) are written in the

present working directory, *not* the directory containing the

results of the SnPM computations.


----------------

(2) +/-: Having located and loaded the results files, you are asked to

choose between "Positive or negative effects?". SnPM, like SPM,

only implements single tailed tests. Choose "+ve" if you wish to

assess the statistic image for large values, indicating evidence

against the null hypothesis in favour of a positive alternative

(activation, or positive slope in a covariate analysis).

Choose "-ve" to assess the negative contrast, i.e. to look for

evidence against the null hypothesis in favour of a negative

alternative (de-activation, or a negative slope in a covariate

analysis). The "-ve" option negates the statistic image and

contrast, acting as if the negative of the actual contrast was

entered.

A two-sided test may be constructed by doing two separate

analyses, one for each tail, at half the chosen significance

level, doubling the resulting p-values.

( Strictly speaking, this is not equivalent to a rigorous two-sided )

( non-parametric test using the permutation distribution of the )

( absolute maximum statistic, but it'll do! )

----------------

(3) WriteFiles: After a short pause while the statistic image SnPMt.mat

is read in and processed, you have the option of writing out the

complete statistic image as an Analyze file. If the statistic is

a t-statistic (as opposed to a pseudo-t computed with a smoothed

variance estimator) you are given the option of "Gaussianising"

the t-statistics prior to writing, replacing each t-statistic

with an equivalently extreme standard Gaussian ordinate. (This

t->z conversion does *not* result in "Z-scores" a la Cohen, as is

commonly thought in the SPM community.)
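The conversion simply matches tail probabilities. A minimal sketch, assuming the MatLab Statistics Toolbox functions tcdf and norminv are available:

t = 3.2; df = 9;           % hypothetical t value and degrees of freedom
z = norminv(tcdf(t, df))   % standard Gaussian ordinate of equal tail probability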

The image is written to the present working directory, as

SPMt.{hdr,img} or SPMt_neg.{hdr,img}, "_neg" being appended when

assessing the -ve contrast. (So SPMt_neg.{hdr,img} is basically

the inverse of SPMt.{hdr,img}!) These images are 16bit, scaled by

a factor of 1000. All voxels surviving the "Grey Matter

threshold" are written, remaining image pixels being given zero


value.

Similarly you are given the option of writing the complete

single-step adjusted p-value image. This image has voxel values

that are the non-parametric corrected p-value for that voxel (the

proportion of the permutation distribution for the maximal

statistic which exceeds the statistic image at that voxel). Since

a small p indicates strong evidence, 1-p is written out. So,

large values in the 1-p image indicate strong evidence against

the null hypothesis. This image is 8bit, with p=1 corresponding

to voxel value 0, p=0 to 255. The image is written to the

present working directory, as SnPMp_SSadj.{img,hdr} or

SnPMp_SSadj_neg.{img,hdr}. This image is computed on the fly, so

there may be a slight delay...
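The 8-bit encoding just described amounts to the following (a sketch; the p-value is hypothetical):

p = 0.03;                         % hypothetical corrected p-value
v = uint8(round((1 - p) * 255))   % p=1 maps to 0, p=0 maps to 255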

Note that voxel level permutation distributions are not

collected, so "uncorrected" p-values cannot be obtained.

----------------

Next come parameters for the assessment of the statistic image...

(4) alpha: (Corrected p-value for filtering)

Next you enter the \alpha level, the statistical significance

level at which you wish to assess the evidence against the null

hypothesis. In SPM this is called "filtering by corrected

p-value". SnPM will only show you voxels (& suprathreshold

regions if you choose) that are significant (accounting for

multiple comparisons) at level \alpha. I.e. only voxels (&

regions) with corrected p-value less than \alpha are shown to

you.

Setting \alpha to 1 will show you all voxels with a positive statistic.

(5) SpatEx: If you collected supra-threshold cluster statistics during

the SnPM computation phase, you are offered the option to assess

the statistic image by supra-threshold cluster size (spatial

extent). Note that SnPM99 assesses suprathreshold clusters by

their size, whilst SPM99 uses a bivariate test based on size &

height (Poline et al., 1996), the framing of which cannot be

exactly mimicked in a non-parametric manner.


5a) ST_Ut: If you chose to assess spatial extent, you are now prompted

for the primary threshold. This is the threshold applied to the

statistic image for the identification of supra-threshold

clusters.

The acceptable range is limited. SnPM has to collect

suprathreshold information for every relabelling. Rather than

pre-specify the primary threshold, information is recorded for

each voxel exceeding a low threshold (set in spm_snpm) for every

permutation. From this, suprathreshold cluster statistics can be

generated for any threshold higher than the low recording

threshold. This imposes a lower limit on the possible primary

threshold.

The upper limit (if specified) corresponds to the statistic value

at which voxels become individually significant at the chosen

level (\alpha). There is little point pursuing a suprathreshold

cluster analysis at a threshold at which the voxels are

individually significant.

If the statistics are t-statistics, then you can also specify the

threshold via the upper tail probability of the t-distribution.

(NB: For the moment, \alpha=1 precludes suprathreshold analysis, )

( since all voxels are significant at \alpha=1. )

That's it. SnPM will now compute the appropriate significances,

reporting its progress in the MatLab command window. Note that

computing suprathreshold cluster size probabilities can take a long

time, particularly for low thresholds or large numbers of

relabellings. Eventually, the Graphics window will come up and the

results displayed.

- Results

========================================================================

The format of the results page is similar to that of SPM:

A Maximum Intensity Projection (MIP) of the statistic image is shown

top left: Only voxels significant (corrected) at the chosen level


\alpha are shown. (If suprathreshold cluster size is being assessed,

then clusters are shown if they have significant size *or* if they

contain voxels themselves significant at the voxel level.) The MIP is

labelled SnPM{t} or SnPM{Pseudo-t}, the latter indicating that

variance smoothing was carried out.

On the top right a graphical representation of the Design matrix is

shown, with the contrast illustrated above.

The lower half of the output contains the table of p-values and

statistics, and the footnote of analysis parameters. As with SPM, the

MIP is tabulated by clusters of voxels, showing the maximum voxel

within each cluster, along with at most three other local maxima

within the cluster. The table has the following columns:

* region: The ID number of the suprathreshold cluster. Number 1 is assigned to
the cluster with the largest maximum.

* size{k}: The size (in voxels) of the cluster.

* P(Kmax>=k): P-value (corrected) for the suprathreshold cluster size.

This is the probability (conditional on the data) of the experiment

giving a suprathreshold cluster of size as or more extreme anywhere

in the statistic image. This is the proportion of the permutation

distribution of the maximal suprathreshold cluster size exceeding

(or equalling) the observed size of the current cluster. This

field is only shown when assessing "spatial extent".

* t / Pseudo-t: The statistic value.

* P(Tmax>=t): P-value (corrected) for the voxel statistic.

This is the probability of the experiment giving a voxel statistic

this extreme anywhere in the statistic image. This is the
proportion of the permutation distribution of the maximal
statistic exceeding (or equalling) the observed statistic value.

* (uncorrected): If the statistic is a t-statistic (i.e. variance smoothing
was *not* carried out), then this field is computed by comparing the
t-statistic against Student's t-distribution of appropriate degrees

of freedom. Thus, this is a parametric uncorrected p-value. If

using Pseudo-t statistics, then this field is not shown, since the


data necessary for computing non-parametric uncorrected p-values is

not computed by SnPM.

* {x,y,z} mm: Locations of local maxima.

The SnPM parameters footnote contains the following information:

* Primary threshold: If assessing "spatial extent", the primary

threshold used for identification of suprathreshold clusters is

printed. If using t-statistics (as opposed to Pseudo-t's), the

corresponding upper tail probability is also given.

* Critical STCS: The critical suprathreshold cluster size. This is the
size above which suprathreshold clusters have significant size at
level \alpha. It is computed as the 100(1-alpha)%-ile of the

permutation distribution of the maximal suprathreshold cluster

size. Only shown when assessing "spatial extent".

* alpha: The test level specified.

* Critical threshold: The critical statistic level. This is the value

above which voxels are significant (corrected) at level \alpha. It

is computed as the 100(1-alpha)%-ile of the permutation

distribution of the maximal statistic.

* df: The degrees of freedom of the t-statistic. This is printed even if
variance smoothing is used, as a guide.

* Volume & voxel dimensions:

* Design: Description of the design

* Perms: Description of the exchangeability and permutations used.

SnPM example

In this section we analyze a simple motor activation experiment with the SnPM software. The aim of this example is three-fold:

i. Demonstrate the steps of an SnPM analysis

ii. Explain and illustrate the key role of exchangeability

iii. Provide a benchmark analysis for validation of an SnPM installation


Please read the existing publications; in particular, Nichols & Holmes (2001) provides an accessible presentation of the theory and
thoughtful use of SnPM. This example is also a way to understand key concepts and practicalities of the SnPM toolbox.

 

The Example Data

This example will use data from a simple primary motor activation experiment. The motor stimulus was

the simple finger opposition task. For the activation state subjects were instructed to touch their thumb to

their index finger, then to their middle finger, to their ring finger, to their pinky, then repeat; they were to

do this at a rate of 2 Hz, as guided by a visual cue. For baseline, there was no finger movement, but the

visual cue was still present. There was no randomization and the task labeling used was

A B A B A B A B A B A B

You can download the data from ftp://www.fil.ion.ucl.ac.uk/spm/data/PET_motor.tar.gz or, in North

America, from ftp://rowdy.pet.upmc.edu/pub/outgoing/PET_motor.tar.gz. We are indebted to Paul Kinahan

and Doug Noll for sharing this data. See this reference for details: Noll D, Kinahan et al. (1996)

"Comparison of activation response using functional PET and MRI" NeuroImage3(3):S34.

Currently this data is normalized with SPM94/95 templates, so the activation site will not map correctly to

ICBM reference images.

 

Before you touch the computer

The most important consideration when starting an analysis is the choice of exchangeability block size

and the impact of that choice on the number of possible permutations. We don't assume that the reader

is familiar with either exchangeability or permutation tests, so we'll attempt to motivate the permutation

test through exchangeability, then address these central considerations.

Exchangeability

First we need some definitions.

 

Labels & Labelings   A designed experiment entails repeatedly collecting data under conditions

that are as similar as possible except for changes in an experimental variable. We use the

term labels to refer to individual values of the experimental variable, and labeling to refer to a

particular assignment of these values to the data. In a randomized experiment the labeling used

in the experiment comes from a random permutation of the labels; in a non-randomized

experiment the labeling is manually chosen by the experimenter.

 


Null Hypothesis   The bulk of statistical inference is built upon what happens when the

experimental variable has no effect. The formal statement of the "no effect" condition is the null

hypothesis.

 

Statistic   We will need to use the term statistic in its most general sense: A statistic is a

function of observed data and a labeling, usually serving to summarize or describe some attribute

of the data. Sample mean difference (for discrete labels) and the sample correlation (for

continuous labels) are two examples of statistics.

We can now make a concise statement of exchangeability. Observations are said to be exchangeable if
their labels can be permuted without changing the expected value of any statistic. We will always

consider the exchangeability of observations under the null hypothesis.

We'll make this concrete by defining these terms with our data, considering just one voxel (i.e. 12 values).

Our labels are 6 A's and 6 B's. A reasonable null hypothesis is ``The A observations have the same

distribution as the B observations''. For a simple statistic we'll use the difference between the sample

means of the A & B observations. If we say that all 12 scans are exchangeable under the null hypothesis

we are asserting that for any permutation of A's and B's applied to the data the expected value of the

difference between A's & B's would be zero.

This should seem reasonable: If there is no experimental effect the labels A and B are arbitrary, and we

should be able to shuffle them without changing the expected outcome of a statistic.
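As a concrete sketch with made-up numbers, here is the mean-difference statistic for one random relabelling of a single voxel's 12 values:

y      = randn(12,1);             % hypothetical voxel values (null case: no effect)
labels = ['AAAAAA' 'BBBBBB'];     % 6 A's and 6 B's
labels = labels(randperm(12));    % one random relabelling
stat   = mean(y(labels=='A')) - mean(y(labels=='B'))   % expectation 0 under the null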

But now consider a confound. The most ubiquitous confound is time. Our example data took over two

hours to collect, hence it is reasonable to suspect that the subject's mental state changed over that time.

In particular we would have reason to think that the difference between the sample means of the A's and B's

for the labeling

A A A A A A B B B B B B

would not be zero under the null because this labeling will be sensitive to early versus late effects. We

have just argued, then, that in the presence of a temporal confound all 12 scans are not exchangeable.

Exchangeability Blocks

The permutation approach requires exchangeability under the null hypothesis. If all scans are not

exchangeable we are not defeated; rather, we can define exchangeability blocks (EBs), groups of scans

which can be regarded as exchangeable, then only permute within EB.

We've made a case for the non-exchangeability of all 12 scans, but what if we considered groups of 4

scans? While the temporal confound may not be eliminated, its magnitude within the 4 scans will be

smaller simply because less time elapses during those 4 scans. Hence if we only permute labels within

blocks of 4 scans we can protect ourselves from temporal confounds. In fact, the most temporally

confounded labeling possible with an EB size of 4 is

A A B B A A B B A A B B


Number of Permutations

This brings us to the impact of EB size on the number of permutations. The table below shows how EB size

affects the number of permutations for our 12 scan, 2 condition activation study. As the EB size gets

smaller we have fewer possible permutations.

EB size   Num EBs   Num Permutations
   12        1      12C6 = 924
    6        2      (6C3)^2 = 400
    4        3      (4C2)^3 = 216
    2        6      (2C1)^6 = 64
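These counts can be checked with the formula given in the PlugIn help text (a quick sketch):

nScan = 12;
for EBsize = [12 6 4 2]
  nEB   = nScan/EBsize;                       % number of EBs
  perEB = prod(1:EBsize)/prod(1:EBsize/2)^2;  % EBsize-choose-(EBsize/2)
  fprintf('EB size %2d: %4d permutations\n', EBsize, perEB^nEB);
end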

This is important because the crux of the permutation approach is calculating a statistic for lots of

labelings, creating a permutation distribution. The permutation distribution is used to calculate

significance: the p-value of the experiment is the proportion of permutations with statistic values greater

than or equal to that of the correct labeling. But if there are only, say, 20 possible relabelings, the
most significant result possible will be 1/20=0.05 (which would occur if the correctly labeled data yielded

the largest statistic).

Hence we have to make a trade off. We want small EBs to ensure exchangeability within block, but very

small EBs yield insufficient numbers of permutations to describe the permutation distribution well, and

hence assign significance finely. We will usually use the smallest EB that allows for at least hundreds of

permutations (unless, of course, we were untroubled by temporal effects).


 

Design Setup

It is intended that you are actually sitting at a computer and are going through these steps with Matlab.

We assume that you either have the sample data on hand or a similar, single subject 2 condition with

replications data set.

First, if you have a choice, choose a machine with lots of memory. We found that this example causes the

Matlab process to grow to at least 90MB.

Create a new directory where the results from this analysis will go. Either start Matlab in this directory, or

cd to this directory in an existing Matlab session.

Start SnPM by typing

snpm

which will bring up the SnPM control panel (and the three SPM windows if you haven't started SPM

already). Click on


Setup

A popup menu will appear. Select the appropriate design type from the menu. Our data conforms to

Single Subject: 2 Conditions, replications

It then asks for

# replications per condition

We have 6.

Now we come to

Size of exchangeability block

From the discussion above we know we don't want a 12-scan EB, so we will use an EB size of 4, since this

gives over 200 permutations yet is a small enough EB size to protect against severe temporal confounds.

The help text for each SnPM PlugIn file gives a formula to calculate the number of possible

permutations given the design of your data. Use the formula when deciding what size EB you

should use.

It will now prompt you to select the image data files. In the dialog box, you need to enter the

correct file directory, and then click on the image data files (.img) one by one. It is important that you

enter them in time order. Or you can click on the All button if you want to choose all the files. After you finish

choosing all of the files, click on "Done".

Next you need to enter the ``conditions index.'' This is a sequence of A's and B's (A's for activation, B's

for baseline) that describe the labeling used in the experiment. Since this experiment was not randomized

we have a nice neat arrangement:

A B A B A B A B A B A B

Exchangeability Business

Next you are asked about variance smoothing. If there are fewer than 20 degrees of freedom available to

estimate the variance, variance smoothing is a good idea. If you have around 20 degrees of freedom you

might look at the variance from an SPM run (soon we'll give a way to look at the variance images from
any SPM run). This data has 12-2-1=9 degrees of freedom at each voxel, so we definitely want to smooth

the variance.

It is our experience that the size of the variance smoothing is not critical, so we suggest 10 mm FWHM

variance smoothing. Values smaller than 4 mm won't do much smoothing, and such minimal smoothing probably won't
buy you anything yet will take more time. Specify 0 for no smoothing.

The next question is "Collect Supra-Threshold stats?" The default statistic is the maximum intensity of

the t-statistic image, or max pseudo-t if variance smoothing is used. If you would like to use the maximum

supra-threshold cluster size statistic you have to collect extra data at the time of the analysis. Beware,

this can take up a tremendous amount of disk space; the more permutations the more disk space


required. This example generates a 70MB suprathreshold statistics mat file. Answer 'yes' to collect these

stats. Or click on 'no' to save some disk space.

The remaining questions are the standard SPM questions. You can choose global normalization (we
choose 3, Ancova), choose 'global calculation' (we choose 2, mean voxel value), then choose 'Threshold
masking' (we choose proportion), keeping 'Prop'nal threshold' at its default of 0.8 (the gray matter
threshold). Choose 'grand mean scaling' (we choose 1, scaling of overall grand mean) and keep
'scale overall grand mean', the value for the grand mean, at its default of 50.

Now SnPM will run for a short while while it builds a configuration file that will completely specify the

analysis. When finished it will display a page (or pages) with file names and design information. You are then
ready to run the SnPM engine.

 

Computing the nonParametric permutation distributions

In the SnPM window click on

Compute

You will be asked to find the configuration file SnPM has just created (It should be in the directory where

you run matlab); it's called

SnPM_cfg.mat

Some text messages will be displayed and the thermometer progress bar will also indicate progress.

On fast new machines, like Sun Sparc Ultras or a Hewlett Packard C180, the computation of permutation

should only take about 5 minutes.

One of the reasons that SnPM is divided into 3 discrete operations (Configure, Compute, Results) is to

allow the Compute operation to be run at a later time or in the background. To this end, the 'Compute'

function does not need any of the SPM windows and can be run without initializing SPM (though the
MATLABPATH environment variable must be set). This may be useful to remember if you have trouble with
running out of memory.

To maximize the memory available for the 'Compute' step, and to see how to run it in batch mode, follow

these steps.

1. If running, quit matlab

2. In the directory with the SnPM_cfg.mat file, start matlab

3. At the matlab prompt type

snpm_cp .


This will 'Compute' just as before but there will be no progress bar. When it is finished you could type

'spm PET' to start SPM99, but since Matlab is not known for its brilliant memory management it is best to

quit, then restart matlab and SnPM.

On a Sun UltraSPARC 167 MHz this took under 6 minutes.


 

Results

In the SnPM window click on

Results

You will be asked to find the results file SnPM has just created; it's called

SnPM.mat

Next it will prompt for positive or negative effects. Positive corresponds to ``A-B'' and negative to ``B-A''.

If you are interested in a two-tailed test, repeat this whole procedure twice but halve your p-value
threshold in the next entry.

Then, you will be asked questions such as 'Write filtered statistic img?' and 'Write FWE-corrected p-value

img?'. You can choose 'no' for both of them. It will save time and won't change the final result.

Next it will ask for a corrected p value for filtering. The uncorrected and FWE-corrected p-values are

exact, meaning if the null hypothesis is true exactly 5% of the time you'll find a P-value 0.05 or smaller

(assuming 0.05 is a possible nonparametric P-value, as the permutation distribution is discrete and all p-

values are multiples of 1/nPerm). FDR p-values are valid based on an assumption of positive dependence

between voxels; this seems to be a reasonable assumption for image data. Note that SPM's corrected p

values derived from the Gaussian random field theory are only approximate.

Next, if you collected supra-threshold stats, it will ask if you want to assess spatial extent. For now, let's

not assess spatial extent.

You will be given the opportunity to write out the statistic image and the p-value image. Examining the

location of activation on an atlas image or coregistered anatomical data is one of the best ways to

understand your data.

Shortly the Results screen will first show the permutation distributions.


You need to hit ENTER in the Matlab main window to get the second page. The screen will
show the maximum intensity projection (MIP) image, the design matrix, and a summary of the

significantly activated areas.

 


The figure is titled ``SnPM{Pseudo-t}'' to remind you that the variance has been smoothed and hence the

intensity values listed don't follow a t distribution. The tabular listing indicates that there are 68 voxels

significant at the 0.05 level; the maximum pseudo-t is 6.61 and it occurs at (38, -28, 48).

 

The information at the bottom of the page documents the parameters of the analysis. The ``bhPerms=1''

is noting that only half of the permutations were calculated; this is because this simple A-B paradigm

gives you two labelings for every calculation. For example, the maximum pseudo-t of the

A A B B B B A A A A B B

labeling is the minimum pseudo-t of the

B B A A A A B B B B A A

labeling.


Now click again on

          Results

proceeding as before but now answer Yes when it asks to assess spatial extent. Now you have to decide

on a threshold. This is a perplexing issue which we don't have good suggestions for right now. Since we

are working with the pseudo-t, we can't relate a threshold to a p-value, or we would suggest a threshold

corresponding to, say, 0.01.

When SnPM saves supra-threshold stats, it saves all pseudo-t values above a given threshold

for all permutations. The lower limit shown (it is 1.23 for the motor data) is this ``data collection''

threshold. The upper threshold is the pseudo-t value that corresponds to the corrected p-value threshold

(4.98 for our data); there is no sense entering a threshold above this value since any voxels above it are

already significant by intensity alone.

Trying a couple of different thresholds, we found 2.5 to be a good threshold. This, though, is a problem. The

inference is strictly only valid when the threshold is specified a priori. If this were a parametric t image

(i.e. we had not smoothed the variance) we could specify a univariate p-value which would translate into
a t threshold; since we are using a pseudo-t, we have no parametric distributional results with

which to convert a p-value into a pseudo t. The only strictly valid approach is to determine a threshold

from one dataset (by fishing with as many thresholds as desired) and then applying that threshold to a

different dataset. We are working to come up with guidelines which will assist in the threshold selection

process.

 


Now we see that we have identified one 562 voxel cluster as being significant at 0.005 (all significances

must be multiples of 1 over the number of permutations, so this significance is really 1/216=0.0046). This

means that when the pseudo-t images from all the permutations were thresholded at 2.5, no other permutation
had a maximum cluster size of 562 or more.

Overview

The purpose of this section is to supply the steps necessary to carry out a second level SnPM analysis using the results from an

analysis carried out using the FMRIB Software Library (FSL). Specifically, instructions are given that will show the user how to select

the contrasts of parameter estimates (copes) from FSL using SnPM. The individual copes can either be obtained from the individual

analyses or from a group analysis that used all individuals of interest.

If only the single subject analyses have been performed, the copes from all subjects must be registered to the same atlas space before

reading into SnPM.

If a group analysis has already been performed, then the individual copes have been registered to the same atlas space. FSL 3.0 and

3.1, SPM2, and SPM99 read Analyze format. By default SPM2, though, only allows a single volume per Analyze file, while FSL

routinely uses multiple volume files. This becomes a problem only if a SnPM analysis is applied to a 2nd level FEAT directory, as

the multiple subjects' copes are all stored in a single multivolume image file. If the copes are coming from separate single-subject

first level FEAT analyses, then there is no need to read multivolume Analyze files. The details of how to deal with multivolume files


are discussed below.

Note that FSL3.2beta creates NIFTI files, as does SPM5. Currently SnPM has no support for NIFTI files.

 

Using Single-Subject FEAT directories

If you would like to run your SnPM analysis using the original single-subject FEAT directories, you first

need to create copies of the copes and varcopes that are in the standard atlas space. This can be done

using featregapply on each single-subject FEAT directory.

featregapply <feat-directory-name> creates a reg_standard directory in the feat directory.

In reg_standard is a stats subdirectory that contains copies of the copes and varcopes in the standard

atlas space, one for each contrast.

Reading in the data

Once you've run featregapply on all .feat directories of interest you are ready to run SnPM! You can

follow the steps for an fMRI analysis and when selecting cope images go to the *.feat/reg_standard/stats

directory for each FEAT analysis and you will find the transformed copes and varcopes, as shown below.


Using Group FEAT directories

After a group FEAT analysis is run a .gfeat directory is created which, among other things, includes a copy

of the first-level copes for each subject in the form of a multivolume analyze volume.

Each cope is located directly in the .gfeat directory and is labeled as cope#.img, where # is the number

of the cope. The default settings of SPM are not set to recognize multiple volumes and hence if the

cope#.img file is read into SPM without changing the defaults it will mistake the .img file as one subject

instead of multiple subjects. The SPM defaults can either be changed for a single session or permanently

so that multivolume analyze volumes are recognized.

Changing SPM settings to allow for multivolume analyze volumes

CHANGE FOR SINGLE SESSION

To change the defaults for a single session, you must select 'defaults' at the bottom of the main SPM2

gui.

You will then be asked which defaults area you are interested in; select 'Miscellaneous

Defaults'. Select the first two choices ('Log to file?' and 'Command Line input?') however you like. The

third choice is 'Allow multi-volume Analyze files?' and for this you should select 'yes'. There is one more

item to select after this and choose whatever value you like.


Note, this will only work for the current session of SPM. If you close and reopen SPM, you will need to reset

the default to read multi-volume analyze files.

CHANGE PERMANENTLY

If you would rather change the SPM defaults so that it will always recognize multi-volume analyze files,

you will need to alter the spm_defaults.m file directly. Within this file, find the 'File format specific' section

(shown below) and change defaults.analyze.multivol=0 to defaults.analyze.multivol=1.

% File format specific

%=======================================================================

defaults.analyze.multivol = 0;

Reading in the data

SPM will now recognize multiple volume data and you can follow the steps for an fMRI analysis. When you
want to select the images, look in the .gfeat directory and you will see that each cope has multiple files,
similar to what is shown below. If your cope#.img file is not followed by a comma and a number, first check
that you did step 2 correctly and then check that your FSL analysis included all of the subjects you wanted.

You may get an error message similar to the following, which may be ignored.

Warning: Assuming a scalefactor of 1 for "*/GroupRight.gfeat/cope1.img".

A Worked fMRI Example 



New SnPM example

In this section, we analyze multi-subject event-related fMRI data with the SnPM software. The aim of this

example is:

i. Give another demonstration of the steps of an SnPM analysis, by analyzing fMRI data.

The same set of data has also been analyzed with SPM. The details can be found at the SPM website ("fMRI:

multi-subject (random effects) analyses - Canonical" data set).

The reference is: Henson, R.N.A., Shallice, T., Gorno-Tempini, M.-L. & Dolan, R.J. (2002). Face repetition

effects in implicit and explicit memory tests as measured by fMRI. Cerebral Cortex, 12, 178-186.

We will give two standard nonparametric methods to analyze the data:

i. Without smoothed variance t

ii. With smoothed variance t

 

The Example Data

The data are from a study on face repetition effects in implicit and explicit memory tests (Henson et al.

2002; see above).

In this study, twelve volunteers (six male; aged 22-42 years, median 29 years) participated in the

experiment. Faces of famous and nonfamous people were presented to the subjects for 500 ms, and


replaced by a baseline of an oval chequerboard throughout the interstimulus interval. Each subject was

scanned during the experiment and his or her fMRI images were obtained.

Each subject's data were analyzed, creating a difference image between the face and chequerboard
(baseline) conditions. So each image here is the contrast image for each subject.

Under the null hypothesis we can permute the labels of the effects of interest. One way of implementing

this with contrast images is to randomly change the sign of each subject's contrast. This sign-flipping

approach can be justified by a symmetric distribution for each voxel's data under the null hypothesis.

While symmetry may sound like a strong assumption, it is weaker than Normality, and can be justified by

a subtraction of two sample means with the same (arbitrary) distribution.

Hence the null hypothesis here is:

H0: The symmetric distribution of (the voxel values of the) subjects' contrast images has zero mean.
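A minimal sketch of the sign-flipping idea at a single voxel, with hypothetical contrast values (SnPM applies this image-wide, tracking the maximal statistic over all voxels):

con   = randn(12,1) + 0.5;                % hypothetical contrast values, 12 subjects
nSubj = numel(con);
t0    = mean(con)/(std(con)/sqrt(nSubj)); % observed one-sample t
nPerm = 2^nSubj;                          % 4096 sign-flippings
tPerm = zeros(nPerm,1);
for i = 0:nPerm-1
  flips      = 1 - 2*(dec2bin(i,nSubj)=='1')';   % a +/-1 sign vector
  y          = con .* flips;
  tPerm(i+1) = mean(y)/(std(y)/sqrt(nSubj));
end
p = mean(tPerm >= t0)   % proportion as or more extreme (includes the unflipped labeling)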

Exchangeability of Second Level fMRI Data

fMRI data presents a special challenge for nonparametric methods. Because fMRI data exhibits temporal

autocorrelation, an assumption of exchangeability of scans within subject is not tenable. However, to

analyze a group of subjects for population inference, we only need to assume exchangeability of subjects.

The conventional assumption of independent subjects implies exchangeability, and hence a single

exchangeability block (EB) consisting of all subjects.

(On a technical note, the assumption of exchangeability can actually be relaxed for the one-sample case

considered here. A sufficient assumption for the contrast data to have a symmetric distribution is for

each subject's contrast data to have a symmetric but possibly different distribution. Such differences

between subjects violate exchangeability of all the data; however, since the null distribution of the

statistic of interest is invariant with respect to sign-flipping, the test is valid.)

Nonparametric Analysis

(Without smoothed variance t)

You can implement a nonparametric random effects analysis using the SnPM software which you can

download from http://www.fil.ion.ucl.ac.uk/spm/snpm/.

First follow the instructions on the above web page to download and install SnPM (don't forget the
patches!).

Then, in Matlab (in a new directory!) type

snpm

SnPM is split up into three components: (1) Setup, (2) Compute and (3) Results.


First click on

Setup

Then type in the following options (your responses are in square brackets).
Select design type [Multisub: One sample T test on differences; 1 condition]
Select all scan files from the corresponding directory in a window as below [con_0006.img -> con_0017.img]


Number of confounding covariates [0]
4096 Perms. Use approx test ? [No]
(typically, with fewer than 5000 Perms your computer should be quick enough to use an exact test - i.e. to
go through all permutations)

FWHM(mm) for Variance smooth [0]

See below (and http://www.fil.ion.ucl.ac.uk/spm/snpm/) for more info on the above option.

Collect Supra-Threshold stats [Yes]

Define the thresh now? [No]

Collecting suprathreshold statistics is optional because the file created is huge; essentially the
"mountain tops" of the statistic image of every permutation are saved. Say "No" if you want to save disk

space and time.

Select Global Normalisation [No Global Normalization]

Grand Mean Scaling [No Grand Mean Scaling]

The above option doesn't matter because no normalisation will be done (this is specified in the next step)

Threshold masking [None]

Note, there's no need to use threshold masking since the data are already implicitly masked with NaN's.

Finally, the Setup Menu is as below:

SnPM will now create the file SnPMcfg.mat and show the Design Matrix in the Graphics window.

Now click on

Compute

Select the file (SnPMcfg.mat) as below.

The computation should take between 5 and 10 minutes depending on your computer. One of the SnPM windows will show the percentage complete, and the MATLAB window will list the permutation step currently being performed.

Note that it shows how many minutes and seconds are spent on each permutation. The number in parentheses is the percentage of time spent on variance smoothing; since we chose no variance smoothing, this is 0%.

Finally click on

Results

Select the SnPM.mat file in the corresponding directory, as below.

In the menu, choose the following options:

Positive or negative effects?: (+ve)

Write filtered statistic img?: (yes)

Filename?: SnPMt_filtered

Results for which img? (T)

Voxelwise: Use Corrected thresh (FWE)

FWE-Corrected p value threshold: (0.05)

Finally, the SnPM PostProcess menu will be as below.

SnPM will then show the distribution of the maximum t-statistic.

A small dialog box will appear, asking you to review the permutation distributions; choose either 'Print & Continue' (to print the histogram to the spm_date.ps file and then continue) or just 'Continue'. Click on one of the two buttons.

On the next page, SnPM will show the permutation distribution of the uncorrected P values, together with an FDR plot.

Using the small dialog box, choose either to print the page to the spm_date.ps file and then continue, or to continue directly.

On the next page, SnPM will plot a MIP of the voxels surviving the SnPM critical threshold (this value is displayed at the bottom of the image; for this data set it should be 7.92).

You can then use this value in SPM (in the Results section, say 'No' to corrected height threshold and type in 7.9248 for the threshold) and take advantage of SPM's rendering routines (not available in SnPM).

Note that the SnPM threshold is lower than the SPM threshold (9.07). Consequently, SnPM shows more

active voxels.
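
If you want to inspect the surviving statistic values yourself, the following is a minimal sketch using the standard SPM image I/O routines spm_vol and spm_read_vols; the filename is the one chosen in the Results step above, and the .img extension is an assumption about the output format.

V = spm_vol('SnPMt_filtered.img');    % filtered statistic image written above
Y = spm_read_vols(V);                 % read voxel values into an array
min(Y(isfinite(Y) & Y > 0))           % smallest surviving value, ~ the critical threshold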

Nonparametric Analysis

(With smoothed variance t, Pseudo-t)

Note that the result just obtained looks "jaggedy": while the image data are smooth (check the con* images), the t statistic image is rough. A t statistic is an estimate divided by the square root of the variance of that estimate, and this roughness is due to the uncertainty of the variance estimate; the uncertainty is especially bad when the degrees of freedom are low (here, 11). By smoothing the variance before creating the t ratio we can eliminate this roughness and effectively increase our degrees of freedom, increasing our power.
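
To illustrate the idea, here is a minimal MATLAB sketch of a pseudo-t computed on simulated 2D data. This is only a sketch of the principle, not SnPM's implementation; the 8 mm FWHM, 2 mm voxel size, and image dimensions are assumptions for the example.

n = 12;                                      % subjects (df = 11, as above)
Y = randn(64,64,n) + 0.3;                    % simulated contrast "images"
m = mean(Y,3);                               % effect estimate at each voxel
v = var(Y,0,3);                              % sample variance at each voxel

sigma = (8/2)/sqrt(8*log(2));                % 8 mm FWHM at 2 mm voxels -> sd in voxels
x = -ceil(3*sigma):ceil(3*sigma);
g = exp(-x.^2/(2*sigma^2));  g = g/sum(g);   % 1D Gaussian kernel
vs = conv2(g, g, v, 'same');                 % smooth the variance image

t       = m ./ sqrt(v /n);                   % ordinary t image: rough
pseudoT = m ./ sqrt(vs/n);                   % pseudo-t image: much smoother

Because the smoothed variance is no longer chi-squared distributed, the pseudo-t has no known null distribution, which is exactly why the permutation approach is needed to assess it.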

Create a new directory for the smoothed variance results.

First click on

Setup

Then type in the following options (your responses are in square brackets).

Select design type [Multisub: One sample T test on differences; 1 condition]

Select all scans [con_0006.img -> con_0017.img]

Number of confounding covariates [0]

4096 Perms. Use approx test? [No]

FWHM(mm) for Variance smooth [8]

A rule of thumb for variance smoothing is to use the same FWHM that was applied to the data (which is what we've used here), though as little as 2 x VoxelSize may be sufficient.

Collect Supra-Threshold stats [Yes]

Define the thresh now? [No]

Select Global Normalisation [No Global Normalization]

Grand Mean Scaling [No Grand Mean Scaling]

Again, this doesn't matter because no normalisation will be done.

Threshold masking [None]

The final Setup Menu will be as below.

SnPM will now create the file SnPMcfg.mat. In the Graphics window, the design matrix will be shown.

Now click on

Compute

Select the file (SnPMcfg.mat)

In the MATLAB window, the permutation step currently being performed will be listed. Note that it shows how many minutes and seconds are spent on each permutation; the number in parentheses is the percentage of time spent on variance smoothing. Compare this with the "without smoothed variance t" run (0% in that case).

The computation should take between 10 and 25 minutes depending on your computer.

Finally click on

Results

Select the SnPM.mat file

Make the following choices:

Positive or negative effects?: (+ve)

Write filtered statistic img?: (yes)

Filename?: SnPMt_filtered

Results for which img? (T)

Write FWE-corrected p-value img?: (yes)

Use corrected threshold?: (FWE)

Voxelwise FWE-Corrected p value threshold: (0.05)

The final Results Setup Menu will be as below.

SnPM will then show the distribution of the maximum *pseudo* t-statistic (the smoothed variance t statistic), as below.

A small dialog box will appear, asking you to review the permutation distributions; choose either 'Print & Continue' (to print the histogram to the spm_date.ps file and then continue) or just 'Continue'. Click on one of the two buttons.

On the next page, SnPM will show the permutation distribution of the uncorrected P values, together with an FDR plot.

Using the small dialog box, choose either to print the page to the spm_date.ps file and then continue, or to continue directly.

On the next page, SnPM will plot a MIP of the voxels surviving the SnPM critical threshold (this value is displayed at the bottom of the image; for this data set it should be 5.33).

Observe that there are both more suprathreshold voxels and a smoother image. For example, the anterior cingulate activation (3,15,45) now comprises 356 voxels, compared with 75 for SnPM{t} and 28 for SPM{t}.

Very important!!! This is not a t image, so you cannot apply this threshold to a t image in SPM. You can, however, create overlay images with the following steps:

1. Use 'Display' to select the image you would like for a background. Via the keyboard only, you could do:

   Img = spm_get(1,'.img','select background reference');  % choose the background image
   spm_image('init',Img)                                   % display it

2. Create a filtered image with NaNs instead of zeros:

   In    = 'SnPMt_filtered';
   Out   = 'SnPMt_filteredNaN';
   f     = 'i1.*(i1./i1)';              % x./x is 1 where x~=0 and NaN where x==0
   flags = {0,0,spm_type('float')};
   spm_imcalc_ui(In,Out,f,flags);

   Ignore the division-by-zero errors; they are what convert the zeros to NaNs.

3. Overlay the filtered image:

   spm_orthviews('addimage',1,'SnPMt_filteredNaN')